This is sort of right. Reformer can take a whole book as a single input rather than in split batches, and there are pretrained book-trained models on HuggingFace now. I'm not aware of book-length summarization models yet, but that's no longer due to the input-length limits of the previous generation of xformer models (maybe it's due to the lack of an obvious training set?). So your SQL example should be a thing of the past before long.
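
For example, here's a minimal sketch of loading one of those book-trained checkpoints (assuming the HuggingFace transformers library and the google/reformer-crime-and-punishment checkpoint, a Reformer language model trained on the full text of a single novel):

```python
# Minimal sketch: a book-trained Reformer from the HuggingFace hub.
# Assumes `pip install transformers sentencepiece torch`.
from transformers import ReformerTokenizer, ReformerModelWithLMHead

# Checkpoint trained on the full text of Crime and Punishment as one
# long sequence -- no splitting the book into separate batches.
tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")

# Sample a continuation from the model.
prompt = tokenizer.encode("A few months later", return_tensors="pt")
sampled = model.generate(prompt, do_sample=True, max_length=120)
print(tokenizer.decode(sampled[0]))
```

Reformer's LSH attention is what makes the single-long-sequence training feasible; full self-attention would blow up quadratically on inputs that size.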