You should compare the number of top AI scientists each company has. I think those numbers are comparable (I’m guessing each has a couple of dozen). Also how attractive each company is to the best young researchers.
The way I think about QKV projections: Q defines the sensitivity of token i's features when computing this token's similarity to all other tokens. K defines the visibility of token j's features when it is selected by all other tokens. V defines which features matter when taking the weighted sum over all tokens.
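For concreteness, here is a minimal single-head sketch in plain numpy (no batching, masking, or multi-head reshaping; the variable names and shapes are mine, just for illustration) showing where each projection enters:

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(X, Wq, Wk, Wv):
        # X: (n_tokens, d_model); Wq, Wk, Wv: (d_model, d_head)
        Q = X @ Wq                                # how token i queries the others
        K = X @ Wk                                # how token j presents itself to queries
        V = X @ Wv                                # what token j contributes once selected
        scores = Q @ K.T / np.sqrt(Q.shape[-1])   # pairwise similarities
        return softmax(scores, axis=-1) @ V       # weighted sum of values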
Don't get caught up in interpreting QKV; it is a waste of time, since completely different attention formulations (e.g. merged attention [1]) still give you the similarities / multiplicative interactions, and may even work better [2]. EDIT: Oh, and attention is much broader than scaled dot-product attention [3].
Read the third link / review paper; it is not at all the case that all attention is based on QKV projections.
Your terms "sensitivity", "visibility", and "important" are too vague and lack any clear mathematical meaning, so IMO they add nothing to any understanding. "Important" also seems factually wrong: these layers are stacked, so later weights and operations can in fact inflate or reverse things. Deriving e.g. feature importances from self-attention layers remains a highly disputed area (e.g. [1] vs [2], for just the tip of the iceberg).
You are also assuming that what matters about attention is the highly specific QKV structure and projections, but there is very little reason to believe that, based on the third review link I shared. Or, if you'd like another example of why not to focus so much on scaled dot-product attention, note that it is just a special case of a broader category of multiplicative interactions (https://openreview.net/pdf?id=rylnK6VtDH).
1. The two papers you linked are about the importance of attention weights, not QKV projections. This is orthogonal to our discussion.
2. I don't see how the transformations done in one attention block can be reversed in the next block (or in the FFN immediately after the first block): can you please explain?
3. All state-of-the-art open-source LLMs (DeepSeek, Qwen, Kimi, etc.) still use all three QKV projections, and largely the same original attention algorithm with some efficiency tweaks (grouped query, MLA, etc.), which are done strictly to make the models faster/lighter, not smarter.
4. When GPT-2 came out, I myself tried to remove various ops from attention blocks and evaluated the impact. Among other things, I tried removing individual projections (using the unmodified input vectors instead), and in all three cases I observed quality degradation when training from scratch (a sketch of this kind of ablation follows this list).
5. The terms "sensitivity", "visibility", and "important" all attempt to describe feature importance when performing pattern matching. I use these terms in the same sense as the importance of features matched by convolutional layer kernels, which scan the input image and match patterns.
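A minimal sketch of the kind of ablation described in point 4 (this is not the original GPT-2 experiment; the drop flag and the assumption that the projections are square, so shapes still line up, are mine):

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def attention_ablated(X, Wq, Wk, Wv, drop=None):
        # drop in {None, "q", "k", "v"}: replace that projection with the raw input.
        Q = X if drop == "q" else X @ Wq
        K = X if drop == "k" else X @ Wk
        V = X if drop == "v" else X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        return softmax(scores) @ V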
No. Each projection is ~5% of total FLOPs/params, not enough of a capacity change to matter. From what I remember, removing one of them hurt more than the other two (I think it was Q), but in all three cases the degradation (in both loss and perplexity) was significant.
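As a rough sanity check on that ~5% figure, here is a back-of-the-envelope parameter count; I'm assuming GPT-2 small (d_model=768, 12 layers, ~50k vocab, 1024 positions) and ignoring biases and LayerNorm:

    d, L, V = 768, 12, 50257
    per_proj = L * d * d           # one projection type, e.g. all the W_q matrices
    attn     = 4 * L * d * d       # Q, K, V and output projections
    ffn      = L * 2 * 4 * d * d   # two 768 <-> 3072 matrices per block
    embed    = V * d + 1024 * d    # token + position embeddings
    total    = attn + ffn + embed  # ~124M, roughly the usual GPT-2 small count
    print(per_proj / total)        # ~0.057, i.e. each projection type is ~5-6% of params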
1. I do not think it is orthogonal, but, regardless, there is plenty of research trying to get explainability out of all aspects of scaled dot-product attention layers (weights, QKV projections, activations, and so on), and trying to explain deep models generally via bottom-up mechanistic approaches. I think it can be clearly argued that this does not give us much and is probably a waste of time (see e.g. https://ai-frontiers.org/articles/the-misguided-quest-for-me...). I think this is especially clear when you have evidence (in research, at least) that other mechanisms and layers can produce highly similar results.
2. I didn't say the transformations can be reversed; I said that if you interpret anything as an importance (e.g. a magnitude), it can be inflated or reversed by whatever weights are learned by later layers. Negative values and/or weights make this even more annoying / complicated.
3. Not sure how this is relevant, but, yes, any reasons for caring about the specifics of QKV and scaled dot-product attention are mostly related to performance and/or the current popular leading models. There is nothing fundamentally important about scaled dot-product attention; it most likely just happens to be something that was settled on prematurely because it works quite well and is easy to parallelize. Or, if you like the kernel smoothing explanation also mentioned in this thread, scaled dot-product self-attention implements something very similar to a particularly simple and nice form of kernel smoothing (see the sketch after this list).
4. Yup, removing ops from scaled dot-product attention blocks is going to dramatically reduce expressivity, because there really aren't many ops there to remove. But there is enough work on low-rank, linear, and sparse attention showing that you can remove a lot of expressivity and still do quite well. And, of course, the many other helpful types of attention I linked before give gains in some cases too. You should be skeptical of any really simple or clear story about what is going on here. In particular, there is no clear reason why a small hypernetwork couldn't be used to approximate something more general than scaled dot-product attention, except that this is obviously going to be more expensive, and in practice you can probably get the same approximate flexibility by stacking simpler attention layers.
5. I still find that doesn't give me any clear mathematical meaning.
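To make the kernel smoothing reading in point 3 concrete, here is a minimal sketch (the function name is mine; this is just the standard Nadaraya-Watson form with an exponential kernel):

    import numpy as np

    def kernel_smoother(queries, keys, values, bandwidth):
        # Nadaraya-Watson: each output is a weighted average of values,
        # with weights from a kernel comparing the query to every key.
        w = np.exp(queries @ keys.T / bandwidth)      # exponential (softmax-style) kernel
        w = w / w.sum(axis=-1, keepdims=True)
        return w @ values

    # With queries = X @ Wq, keys = X @ Wk, values = X @ Wv and
    # bandwidth = sqrt(d_head), this is exactly scaled dot-product attention.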
I suspect our learning goals are at odds. If you want to focus solely on the very specific kind of attention used in the popular transformer models today, perhaps because you are interested in optimizations or distillation or something, then by all means try to come up with special intuitions about Q, K, and V, if you think that will help here. But those intuitions will likely not translate well to future and existing modifications and improvements to attention layers, in transformers or otherwise. You will be better served learning about attention broadly and developing intuitions based on that.
Others have mentioned the kernel smoothing interpretation, and I think multiplicative interactions are the clearer, deeper generalization of what is really important and valuable here. Also, the useful intuitions in DL have been less about e.g. "feature importances" and "sensitivity" and such, and tend to come more from linear algebra and calculus: things like matrix conditioning, regularization / smoothing, Lipschitz constants, and the like. In particular, the softmax in self-attention is probably not doing what people typically say it does (https://arxiv.org/html/2410.18613v1), and the real point is that all these attention layers are trained end to end, where all layers are interdependent on each other to varying, complicated degrees. Focusing on very specific interpretations ("Q is this, K is that"), especially where those interpretations are vaguely metaphorical, like yours, is not likely to result in much deep understanding, in my opinion.
Per your point 4, some currently hyped work is pushing hard in this direction [1, 2, 3]. The basic idea is to think of attention as a way of implementing an associative memory. Variants like SDPA or gated linear attention can then be derived as methods for optimizing this memory online such that a particular query will return a particular value. Different attention variants correspond to different ways of defining how the memory produces a value in response to a query, and how we measure how well the produced value matches the desired value.
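As a minimal (non-causal) sketch of the simplest member of that family, here is linear attention read as an associative memory: keys write values into a fast-weight matrix via outer products, and queries read them back out. The feature map phi is an arbitrary choice of mine, just for illustration; the variants above differ mainly in how this write step is turned into an online optimization (e.g. gated or delta-rule updates).

    import numpy as np

    def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
        # Write: accumulate the memory as a sum of outer products phi(k_j) v_j^T.
        M = phi(K).T @ V                    # (d_k, d_v) fast-weight memory
        z = phi(K).sum(axis=0)              # normalizer, shape (d_k,)
        # Read: each query retrieves from the memory and normalizes.
        return (phi(Q) @ M) / (phi(Q) @ z)[:, None]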
Some of the attention-like ops proposed in this new work are most simply described as implementing the associative memory with a hypernetwork that maps keys to values with weights that are optimized at test time to minimize value retrieval error. Like you suggest, designing these hypernetworks to permit efficient implementations is tricky.
It's a more constrained interpretation of attention than you're advocating for, since it follows the "attention as associative memory" perspective, but the general idea of test-time optimization could be applied to other mechanisms for letting information interact non-linearly across arbitrary nodes in the compute graph.
> perhaps because you are interested in optimizations or distillation or something
Yes, my job is model compression: quantization, pruning, factorization, ops fusion/approximation/caching, in the context of hw/sw codesign.
In general, I agree with you that simple intuitions often break down in DL; I've observed it many times. I also agree that we don't have a good understanding of how these systems work. Hopefully this situation is more like pre-Newtonian physics, and the Newtons are coming.
Financial freedom is about not having to worry about losing your job or tolerating shitty work conditions. Why would you retire if you do what you love? I think the real problem is when there's nothing you actually love doing (long term); that's when money won't help.
But that's the problem - if you don't like doing anything, what will you do? What will you fill your life with? You will quickly get bored of anything you try. Your life will have no meaning, and you will probably turn to alcohol or drugs.
Had this exact situation. Turned to drugs. Lived like a GTA character. That's unsustainable, similar to luxury vacations, which turn dull. Now I'm finding investors for curing MS with a team of researchers who've built something more accurate than CRISPR, in order to cure my sister.
I actually want to get in contact with Sergey Brin about that, because we might have something interesting for him - but my American contacts are only connected to Musk and people like a Polygon founder and music/Hollywood people. This is not a psychotic or exaggerated message; I'm sure HN can vet us (@dang) and get us contacts. Currently I'm talking to family offices in Saudi Arabia.
About meaning: if you get bored, aim for bigger positive impact.
In short: we have a rigorously validated antigen-specific immune tolerance platform with bystander suppression, NIH/MS Society backing, and a clear translational gap.
Everyone, even the laziest person, likes doing something, even if that's just parking yourself in front of the TV and stuffing your face.
Most people, though, genuinely like activities that usually would be impossible to monetize enough to make a living, which isn't a problem if you're rich. Alternatively, there are plenty of things people want to do with no intention of being the best at them; they just want to dabble.
Indeed, video games are probably what most of humanity would retire to if they didn't attach so much ego and meaning to their jobs and, by extension, to the people around them.
Just be sure to swap games once in a while so you don't get bored.
There are, and oftentimes they're stuck in a loop of presenting decks and status updates and writing proposals rather than doing this kind of research.
That said, interpreting user feedback is a multi-role job. PMs, UX, and Eng should be doing so. Everyone has their strengths.
One of the most interesting things I've had a chance to be a part of is watching UX studies. They take a mock (or an alpha version) and put it in front of an external volunteer and let them work through it. Usually PM, UX, and Eng are watching the stream and taking notes.
When you get to a company that's that big, the roles are much more finely specialized.
I forget the title now, but we had someone who interfaced with our team and did the whole "talk to customers" thing. Her feedback was then incorporated into our day-to-day roadmap through a complex series of people that ended with our team's product manager.
So people at Google do indeed do this; they just aren't engineers, usually aren't product managers, frequently are several layers removed from engineers, and as a consequence usually have all the problems GP described.
PM is a fake job where the majority have long learned that they can simply (1) appease leadership and (2) push down on engineering to advance their career. You will notice this does not actually involve understanding or learning about products.
It's why the GP got that confused reaction about reading user reports. Talk to someone outside the big company who has no power? Why?
I've had the pleasant experience of having worked for PMs at several companies (not at Google) who were great at their jobs, and advocated for the devs. They also had no problem with devs talking directly with clients, and in fact they encouraged it since it was usually the fastest way to understand and solve a problem.
Almost every job in the US is primarily about pleasing leadership at the end of the day.
If companies didn't want that sort of incentive structure to play out, then they would insulate employees from the whims of their bosses with things like contracts or golden parachutes that come out of their leadership's budget.
They pretty much don't, though, so you need to please your leadership first to get past the threat of at-will employment, before considering anything else.
If you're lucky, what pleases your leadership is productive, and if you're super lucky, what pleases them even pleases you.
Gotta suck it up and eat shit or quit if it doesn’t though
Strange question. If you don’t know why you need this, you probably don’t. It will be the same as with the introductory AI course you did 20 years ago.
Karpathy's videos aren't an AI course (except in the modern sense of AI = LLMs), or a machine learning course, or even a neural network course for that matter (despite the title) - they're really just "From Zero to LLMs".
Neural nets were taught at my university in the late 90s. They were presented as the AI technique, though one that was computationally infeasible at the time. Moreover, it was clearly stated that all the supporting ideas had been developed and researched 20 years prior, and the field had basically stagnated because the hardware wasn't there.
I remember reading "neural network" articles from the late 80s and early 90s, which weren't just about ANNs but also covered other connectionist approaches like Kohonen's Self-Organizing Maps and Stephen Grossberg's Adaptive Resonance Theory (ART) ... I don't know how your university taught it, but back then this seemed like futuristic brain-related stuff, not a practical "AI" technique.
Anyone who watches the videos and follows along will indeed come up to speed on the basics of neural nets, at least with respect to MLPs. It's an excellent introduction.
Sure, the basics of neural nets, but only as a foundation leading to LLMs. He doesn't cover the zoo of ANN architectures such as ResNets, RNNs, LSTMs, GANs, diffusion models, etc., and barely touches on regularization, optimization, etc., other than mentioning BatchNorm and promising Adam in a later video.
It's a useful series of videos no doubt, but his goal is to strip things down to basics and show how an ANN like a Transformer can be built from the ground up without using all the tools/libraries that would actually be used in practice.
I love to sit alone in a cafe, reading. Before smartphones, I read newspapers or books; now I read on my phone or tablet. While there, I don't want to talk to anyone; I just want to sit and read quietly.
Which is a bit tragic for someone like me, who lives in a place where I hardly know anyone and am desperate to talk to the other people sitting alone in the cafe.
Meaning a GPT where the next token is a live sensor reading, a servo angle, or an accelerometer state. Then connect that GPT to an actual LLM as a controller, and you (hopefully) have a physical machine with arms, legs, and a mind.
It's not just literacy, although that's nearly required to engage in public discourse. It's really more like indoctrination. The voting populace has to have an implicit belief in public institutions, believe that attempting to vote or losing a vote is not cataclysmic (not a good reason for violence or retribution), and have patience that the system will gradually correct with future votes rather than require an authoritarian to restore order. I think you can also add a distaste for cults of personality and a willingness to vote in disagreement with religious leaders. Lastly, voters have to have a shared delusion that their vote matters, which it practically does not (the economic value of voting is negative for the individual).
Russia likely doesn't meet these requirements, and the U.S. has had many failed democratic experiments in places like Afghanistan where this culture is missing.
If someone is interested in low-level tensor implementation details, they could benefit from a course/book like "let's build numpy in C". No need to complicate a discussion of DL library design with that stuff.