> our linear transformers are somewhat useless, as the positive impact from the speedup seen in long contexts is undermined by the negative impact of degraded learning.
> In a future post, we will explain how to improve the learning of linear transformers
So the techniques here are useless without special secret sauce that they're not disclosing. Yet. Mamba is already out there solving similar problems, but the more the merrier. I hope they publish the useful part soon.
That's pretty common in academia. You publish something new that is worse than the state of the art. To maintain some semblance of meaning for your work, you then say that the shortcomings will be addressed in future papers. Often those papers never surface, because somewhere along the way it turns out that even though your approach was new, it is fundamentally worse. This kind of stuff happens all the time in research, and it only makes it to the surface thanks to the twisted publish-or-perish world academics now live in.
A counter perspective: it's a good thing these ideas make it to the surface! Clearly someone thought it was a good enough idea to try; now others have better data before heading down the same rabbit hole.
The sad part about the whole situation is that one has to hype the research as the new best thing ever rather than an experiment that was well motivated (not all of them are) with results that weren’t as nice as hoped
There's even a push for precommitments (e.g. "I'm doing an experiment on X that will finish on YY/ZZ") so that even if a failed experiment results in nothing publishable, other researchers will know not to waste their time trying to repeat it.
I didn't want to associate this account w/ my real name but now that you mentioned it wasn't right of me to not point that out. I added a disclaimer.
The posted algorithm and the one mentioned in my paper are very similar; the main difference is that the posted website parallelizes the cumulative-sum computation.
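For intuition on what "parallelizing the cumulative sum" can mean: because addition is associative, an inclusive prefix sum can be computed in O(log N) vectorized steps instead of one sequential pass. A minimal sketch (illustrative only, not the post's actual implementation) using the Hillis-Steele scan:

```python
import numpy as np

def parallel_cumsum(x):
    """Hillis-Steele inclusive scan: O(log N) shift-and-add steps,
    each step fully vectorized (i.e., parallelizable)."""
    x = np.asarray(x, dtype=float).copy()
    shift = 1
    while shift < len(x):
        # Every element adds the partial sum sitting `shift` slots back.
        x[shift:] = x[shift:] + x[:-shift]
        shift *= 2
    return x

a = np.arange(1, 9, dtype=float)
print(parallel_cumsum(a))  # same result as np.cumsum(a)
```

The same trick applies to any associative "running state" update, which is what makes the recurrent form of linear attention trainable in parallel rather than token by token.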
The point of this post isn’t the linear transformer algorithm. They’re surveying a variety of Linear transformers and showing a general form in order to talk at large about their performance characteristics.
I don't understand something, why do they claim they go from O(N*N) to O(N), but all they claim they are doing is removing one exponentiation operation, which is O(1)? Where is the O(N) they are removing?
Removing the exponential allows some linear algebra based tricks. It makes the state space linear. Linearity allows a kind of running sum, where the state space at time T is quickly computable from the state space at time T-1.
That linearity model simplification has model expressiveness costs, which is why they don't fit the training data as well.
It's described explicitly in section 1 where they first reduce to a linear relationship and then recognize that a portion of the formula can be captured in a state variable, and rewrite as a recurrence relation.
By persisting the state variable across subsequent computations they transform the quadratic formula for computing output into a linear formula computing output and next state from current state.
It's kind of like memoization, but since the state is fixed-size (independent of sequence length), it's constant space too.
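The constant-space recurrence described above can be sketched as a tiny streaming object (hypothetical names, same unnormalized linear-attention simplification): the only thing persisted between tokens is a d x d state matrix, regardless of how many tokens have been seen.

```python
import numpy as np

class LinearAttentionState:
    """Persist the d x d running-sum state across calls: each new token
    costs O(d^2) work and O(d^2) memory, independent of sequence length.
    Illustrative sketch only -- no normalization or feature map."""
    def __init__(self, d):
        self.S = np.zeros((d, d))

    def step(self, q, k, v):
        self.S += np.outer(k, v)   # fold the new token into the state
        return q @ self.S          # output for this position

rng = np.random.default_rng(1)
d = 4
state = LinearAttentionState(d)
# Feed 5 tokens one at a time; memory use never grows with context.
outputs = [state.step(*rng.standard_normal((3, d))) for _ in range(5)]
```

This is the recurrence-relation view: output and next state are computed from the current state alone, which is what turns the quadratic formula into a linear one.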
You should check out MoE-Mamba (https://arxiv.org/abs/2401.04081), it's faster and more accurate than Transformer-MoE. Of course only time will tell if it's better when scaled up further than the paper goes.
Great write-up and interesting experiments. I can't help but wonder what would happen if you instead made a rectified linear attention. Is that even possible?