Yes. Hundreds of different approaches have been tried to solve context length, and yet most LLMs are still almost identical to the original Transformer from 2017.
Just to name a few families of approaches: Sparse Attention, Hierarchical Attention, Global-Local Attention, Sliding Window Attention, Locality-Sensitive Hashing Attention, State Space Models, EMA-gated Attention.
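To make concrete what most of these variants change, here is a minimal sketch of the sliding-window idea: each query attends only to its last few keys instead of the whole prefix. This is plain NumPy, the names and window size are my own illustration (not taken from any particular paper), and it's the naive O(n^2) reference version; real implementations avoid materialising the full score matrix, which is the whole point.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal local mask: position i may attend to j only if i - window < j <= i."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

def sliding_window_attention(q, k, v, window):
    """Naive reference: compute full scores, then mask to the local window."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    mask = sliding_window_mask(q.shape[0], window)
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Tiny usage example
rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((8, 4))
out = sliding_window_attention(q, k, v, window=3)
print(out.shape)  # (8, 4)
```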
Notably, human working memory isn't great either, which raises the question (if the comparison is valid) of whether that limitation might be fundamental.
The failure mode is that only long-context tasks benefit; short ones already run fast enough with full attention, and with better quality. It's striking that OpenAI never used any of these in a serious LLM, even though training costs are huge.
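As a back-of-the-envelope illustration of why short contexts gain little: attention-score work grows roughly as n * min(n, w) * d with a window of size w, versus n * n * d for full attention. The head dimension and window size below are made-up values, and real speedups depend on kernels and memory traffic, not just FLOPs.

```python
d = 128    # head dimension (assumed)
w = 1024   # sliding-window size (assumed)

for n in (512, 2_048, 32_768, 131_072):
    full = n * n * d              # full causal attention scores
    windowed = n * min(n, w) * d  # sliding-window attention scores
    print(f"n={n:>7}: full={full:.2e}, windowed={windowed:.2e}, "
          f"speedup ~{full / windowed:.0f}x")
# Short sequences (n <= w) see no speedup at all; savings only appear
# once the context is much longer than the window.
```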