
Something needs to generate the document embeddings since the LLM itself won't


No, this is completely wrong. You can get embeddings from the LLM itself, e.g. from the last hidden layer.
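Roughly something like this with HuggingFace transformers (just a sketch; the model name and the mean pooling are illustrative choices, not a tuned recipe):

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Any causal LM works here; gpt2 is just a small example.
    name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)

    text = "An example document to embed."
    inputs = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        out = model(**inputs)  # out.last_hidden_state: (1, seq_len, hidden_size)

    # Collapse the per-token hidden states into one fixed-size vector by mean pooling.
    embedding = out.last_hidden_state.mean(dim=1).squeeze(0)
    print(embedding.shape)  # (hidden_size,), e.g. 768 for gpt2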


Doesn't the last layer output one hidden state per token, so the size varies with sequence length? It'd take a bit of hacking (some kind of pooling) to turn that into a single semantic vector.

Additionally, those representations are trained to predict the next token, not to encode semantic similarity. I'd expect models trained specifically for semantic similarity to outperform them (I haven't compared the two myself, but MMTEB seems to imply as much).

At that point, it seems quite reasonable to just pass the sentence through a dedicated embedding model.
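i.e. something like this with sentence-transformers (sketch only; the model name is just one common choice, pick whatever ranks well on the benchmark for your task):

    from sentence_transformers import SentenceTransformer

    # A small, widely used embedding model; swap in whatever fits your use case.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    docs = ["An example document to embed.", "Another document."]
    embeddings = model.encode(docs, normalize_embeddings=True)  # shape: (2, 384)

    # With normalized vectors, cosine similarity is just a dot product.
    print(embeddings[0] @ embeddings[1])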



