Using retrieval to look up a fact and then citing that fact, with attribution, in response to a query is well within the capabilities of current LLMs.
LLMs "hallucinate" all the time when pulling facts from their weights (that's just how token generation works). But if the correct data is placed in their context, they are very good at presenting it in natural language.
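The retrieve-then-cite pattern can be sketched in a few lines: fetch the most relevant passage and place it in the prompt alongside an instruction to answer only from that source, citing it. This is a toy sketch; the corpus, the keyword-overlap scoring, and the prompt wording are all hypothetical stand-ins (a real system would use embedding search and an actual LLM API call).

```python
# Hypothetical mini-corpus; a real system would retrieve from a document store.
corpus = {
    "doc1": "The Eiffel Tower is 330 metres tall.",
    "doc2": "Mount Everest is 8849 metres tall.",
}

def retrieve(query, corpus):
    """Naive keyword-overlap retrieval: return the best-matching (doc_id, text)."""
    q_words = set(query.lower().split())
    def overlap(item):
        return len(q_words & set(item[1].lower().split()))
    return max(corpus.items(), key=overlap)

def build_prompt(query, doc_id, passage):
    """Place the retrieved passage in context and ask for an attributed answer."""
    return (
        f"Answer using ONLY the source below, and cite it as [{doc_id}].\n"
        f"Source [{doc_id}]: {passage}\n"
        f"Question: {query}\n"
    )

query = "How tall is the Eiffel Tower?"
doc_id, passage = retrieve(query, corpus)
print(build_prompt(query, doc_id, passage))
```

The point is that the model never has to "know" the height; the correct figure arrives in context, and the attribution tag gives the model something concrete to cite instead of generating a source from its weights.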