If I absolutely need to avoid hallucinations (e.g. when building marketing/sales assistants for businesses), I let the LLM control and drive the search for relevant documents.
On a high level:
(1) Give the LLM the ability and enough context to "expand" the user query into multiple search phrases. The search engine uses those phrases to find the most relevant document fragments via a form of embedding search.
(2) Take the highest-ranking document fragments and "show" them to the LLM, saying: "These are the results found in the document database using your search phrases via embedding similarity. Refine the search."
(3) Repeat this a couple of times, then rank the final fragments and combine them for the final answer synthesis.
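The loop above can be sketched roughly as follows. This is a toy illustration, not a real implementation: `llm_expand` is a hypothetical stand-in for an actual model call, and the "embedding" is just a bag-of-words vector with cosine similarity, standing in for a real embedding model.

```python
# Minimal sketch of an LLM-driven iterative retrieval loop.
# Assumptions: llm_expand is a placeholder for a real LLM call, and
# embed() is a toy bag-of-words encoder, not a real embedding model.
from collections import Counter
from math import sqrt

DOCS = [
    "Our premium plan includes priority support and an uptime SLA.",
    "Refunds are processed within 14 days of the cancellation request.",
    "The sales team offers volume discounts for orders above 100 seats.",
]

def embed(text):
    # Toy embedding: lowercase bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(phrases, top_k=2):
    # Step (1): rank fragments by best similarity to any search phrase.
    scored = []
    for doc in DOCS:
        dv = embed(doc)
        score = max(cosine(embed(p), dv) for p in phrases)
        scored.append((score, doc))
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:top_k]]

def llm_expand(query, seen_fragments):
    # Hypothetical LLM call: expand the query into search phrases,
    # refining with fragments retrieved so far. A real implementation
    # would prompt a model with the query and the retrieved results.
    return [query] + list(seen_fragments)

def retrieve(query, rounds=2):
    # Steps (2)-(3): show results back to the "LLM" and repeat.
    fragments = []
    for _ in range(rounds):
        phrases = llm_expand(query, fragments)
        fragments = search(phrases)
    return fragments

hits = retrieve("how do refunds work")
```

In a real system, `search` would hit a vector database, `llm_expand` would be a prompted model call, and the final `hits` would be ranked once more and passed to the model for answer synthesis.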