
So I first learned about cosine similarity in the context of traditional information retrieval, and the simplified models used in that field before the development of LLMs, TensorFlow, and large-scale machine learning might prove instructive.

Imagine you have a simple bag-of-words model of a document, where you just count the number of occurrences of each word. Numerically, this is represented as a vector where each dimension is one token (so, you might have one number for the word "number", another for "cosine", another for "the", and so on), and the value of that component is how many times the word occurs. Intuitively, cosine similarity then measures how much two documents use the same words. Words that appear in both documents get multiplied together, while words that appear in only one get multiplied by zero and drop out of the cosine sum. So because "cosine", "number", and "vector" appear frequently in my post, the post will look similar to other documents about math. Because "words" and "documents" appear frequently, it will also look similar to other documents about metalanguage or information retrieval.
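
To make that concrete, here's a minimal Python sketch of the bag-of-words view, with a trivial whitespace tokenizer and made-up example documents (the helper names are mine, not from any particular library):

    from collections import Counter
    import math

    def bag_of_words(text):
        # Trivial whitespace tokenizer; real systems do much more here.
        return Counter(text.lower().split())

    def cosine_similarity(a, b):
        # Words present in only one document contribute a zero term and drop out.
        dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
        norm_a = math.sqrt(sum(c * c for c in a.values()))
        norm_b = math.sqrt(sum(c * c for c in b.values()))
        return dot / (norm_a * norm_b)

    doc1 = bag_of_words("the cosine of a vector is a number")
    doc2 = bag_of_words("cosine similarity compares a vector to a number")
    doc3 = bag_of_words("my cat chased the dog")

    print(cosine_similarity(doc1, doc2))  # fairly high: "cosine", "vector", "number", "a" overlap
    print(cosine_similarity(doc1, doc3))  # low: only "the" overlaps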

And intuitively, the reason the magnitude doesn't matter is that those counts will be much higher in longer documents, but the length of the document doesn't say much about what the document is about. Taking the cosine (whose denominator is the product of the two vectors' magnitudes) is a form of length normalization, so you get sensible results without biasing toward shorter or longer documents.
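
For instance, a document and a version of it with every count multiplied by four point in the same direction, so their cosine is still 1 (a small sketch with made-up counts, same kind of helper as above):

    from collections import Counter
    import math

    def cosine_similarity(a, b):
        dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
        return dot / (math.sqrt(sum(c * c for c in a.values()))
                      * math.sqrt(sum(c * c for c in b.values())))

    short_doc = Counter({"cosine": 3, "vector": 2, "the": 5})
    long_doc = Counter({w: 4 * c for w, c in short_doc.items()})  # same topic, four times longer

    # Same direction, different magnitude: cosine similarity is still 1.0.
    print(cosine_similarity(short_doc, long_doc))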

Most machine-learned embeddings work similarly. The components of the vector are features that your ML model has determined are important. If two items both have a large value in the same dimension, the product of those components is large, indicating that they are similar along that feature. If the product is zero, that feature doesn't contribute to the similarity. Embeddings are often normalized, and for normalized vectors the fact that magnitude drops out doesn't really matter. But it doesn't hurt either: each magnitude is one, so the denominator is 1*1 = 1 and cosine similarity reduces to the plain dot product (the sum of the pair-wise products of the components).
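
A small illustration of that last point, with made-up embedding vectors (not from any real model): once both vectors are normalized, the cosine denominator is 1 and the similarity is just the dot product.

    import numpy as np

    # Two hypothetical embedding vectors (made-up numbers).
    a = np.array([0.8, 0.1, -0.3, 0.5])
    b = np.array([0.7, 0.2, -0.2, 0.6])

    # Normalize to unit length, as many embedding pipelines do.
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)

    # For unit vectors the denominator is 1 * 1, so cosine similarity is just
    # the dot product: the sum of the pair-wise products of the components.
    print(np.dot(a_unit, b_unit))
    print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))  # same value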




> the reason the magnitude doesn't matter is that those counts will be much higher in longer documents ...

To make my intuition a bit more explicit: the vector is encoding a ratio, isn't it? You want to treat 3:2, 6:4, 12:8, ... as equivalent in this case, and normalization does exactly that.
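
A tiny sketch of that equivalence: the same 3:2 ratio at three different "document lengths" normalizes to the same unit vector.

    import numpy as np

    # The same 3:2 ratio at three different scales.
    for counts in ([3, 2], [6, 4], [12, 8]):
        v = np.array(counts, dtype=float)
        print(v / np.linalg.norm(v))  # identical unit vector every time: [0.8321 0.5547]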



