Using retrieval to look up a fact and then citing that fact in response to a query (with attribution) is absolutely within the capabilities of current LLMs.

LLMs "hallucinate" all the time when pulling data from their weights (because that's how that works, it's all just token generation). But if the correct data is placed within their context they are very capable of presenting the data in natural language.




Like some kind of pre-processing db lookup?


If you read the article, that’s literally what this is talking about. There’s a simple diagram and everything.



