Better RAG Results with Reciprocal Rank Fusion and Hybrid Search (assembled.com)
249 points by johnjwang 4 months ago | 57 comments



If you're looking for an example of RRF + Hybrid Search with PostgreSQL, I've put together a FastAPI app here that uses RAG with those options:

https://github.com/Azure-Samples/rag-postgres-openai-python/

Here's the RRF+Hybrid part: https://github.com/Azure-Samples/rag-postgres-openai-python/...

That's largely based on a sample from the pgvector repo, with a few tweaks.

Agreed that hybrid is the way to go; it's what the Azure AI Search team also recommends, based on their research:

https://techcommunity.microsoft.com/t5/ai-azure-ai-services-...
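
The fusion step itself is tiny. Here's a minimal Python sketch of RRF (not the actual code from the repo, just the general shape, using the usual k=60 smoothing constant from the RRF paper):

    # Minimal RRF sketch: fuse ranked lists of document ids from
    # keyword search and vector search into a single ranking.
    def reciprocal_rank_fusion(rankings, k=60):
        # rankings: list of ranked doc-id lists, best match first
        scores = {}
        for ranking in rankings:
            for rank, doc_id in enumerate(ranking, start=1):
                scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    keyword_hits = ["doc3", "doc1", "doc7"]  # e.g. from full-text search
    vector_hits = ["doc1", "doc5", "doc3"]   # e.g. from pgvector
    print(reciprocal_rank_fusion([keyword_hits, vector_hits]))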


This is awesome, thank you.


First take at hybrid search with Postgres pg_vector based on this: https://gist.github.com/cpursley/dae0a0be442f27e6af79d6bfc2b...


I also found that pure RAG with vector search doesn't work well. I was building a bot that could answer questions by looking at Slack discussions.

At first, I downloaded entire channels, loaded them into a vector DB, and did RAG. The results sucked. Vector searches don't understand this material very well, and in this domain, specific keywords and error messages are very searchable.

Instead, I take the user's query, ask an LLM (Claude / Bedrock) to find keywords, then search Slack using the API, get results, and use an LLM to filter for discussions that are relevant, then summarize them all in a response.

This is slow, of course, so it's heavily multi-threaded. A typical response arrives within 30 seconds.
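
Roughly, the pipeline looks like this (a hedged sketch: extract_keywords, is_relevant, and summarize stand in for the LLM calls, and the Slack token/scopes are assumed):

    # Sketch of the pipeline described above. The three helpers are
    # hypothetical wrappers around the LLM (Claude via Bedrock here).
    from concurrent.futures import ThreadPoolExecutor
    from slack_sdk import WebClient

    client = WebClient(token="xoxp-...")  # search.messages needs a user token

    def answer(question):
        keywords = extract_keywords(question)            # LLM call
        hits = client.search_messages(query=" ".join(keywords))
        threads = [m["text"] for m in hits["messages"]["matches"]]
        # Each relevance check is an LLM call, so run them in parallel.
        with ThreadPoolExecutor(max_workers=8) as pool:
            flags = list(pool.map(lambda t: is_relevant(question, t), threads))
        relevant = [t for t, ok in zip(threads, flags) if ok]
        return summarize(question, relevant)             # final LLM call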


I find these discussions funny.

For decades we had search engines based on query terms (keywords). Then there were lots of discussions and some implementations of putting semantic search on top to improve the keyword search. A hybrid search. Google Search did exactly that already in 2015 [1].

Now we start from pure semantic search and put keyword search on top of it to improve the semantic search and call it hybrid search.

In both approaches, the overall search performance is exactly identical - to the last digit.

I am glad that, so far, no one has called this an innovation. But you could certainly write a lot of blog articles about it.

[1] https://searchengineland.com/semantic-search-entity-based-se...


Except now the semantic capabilities are so much stronger. The transformer allows the model to get meaning from words that are far apart from each other.


You are talking about English, right? And only for searches without any special technical terms or abbreviations?

Also, my use case includes more than 20 languages. Finding usable embeddings for all of those languages is next to impossible. However, there are keyword plugins for most languages in Solr or Elasticsearch.

Btw, in my benchmarks the results look something like this in English (MAP = mean average precision):

  BM25 (keyword search)        -> MAP = 45%
  Embedding (Ada-002)          -> MAP = 49%
  Hybrid (BM25 + Embedding)    -> MAP = 57%
  Hybrid (Embedding + BM25)    -> MAP = 57%

And that's before you use synonym dictionaries for keyword searches.


I'm curious: in your benchmark, what's the difference between BM25+Embedding and Embedding+BM25? And what do you use to make the embeddings?

If you make the embedding with an LLM, it should work for any language the LLM is trained on.


BM25+Embedding and Embedding+BM25 are exactly the same; the identical scores show that the fusion is commutative, whether you start from keyword search or from semantic search.

For my tests, I used Ada-002. As data I used short news articles, with no chunking and no preprocessing. The queries for the articles are embedded directly.

Of course, both approaches can be improved. This should just illustrate what you might expect from hybrid search.


When you're creating your embeddings, you can use an LLM to extract keywords from the content and store them in the metadata of each chunk, which increases the relevance of the results returned by retrieval.

LlamaIndex does this out of the box.
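
Something like this (a sketch assuming the llama-index 0.10.x API; KeywordExtractor calls the configured LLM, OpenAI by default):

    # Sketch: attach LLM-extracted keywords to each chunk's metadata
    # during ingestion. API names as of llama-index 0.10.x.
    from llama_index.core import Document
    from llama_index.core.extractors import KeywordExtractor
    from llama_index.core.ingestion import IngestionPipeline
    from llama_index.core.node_parser import SentenceSplitter

    pipeline = IngestionPipeline(transformations=[
        SentenceSplitter(chunk_size=512),
        KeywordExtractor(keywords=5),  # 5 LLM-extracted keywords per chunk
    ])
    nodes = pipeline.run(documents=[Document(text="...your content...")])
    print(nodes[0].metadata)           # includes "excerpt_keywords"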


That's interesting! I didn't know that.


Are you doing this for a product or for internal usage?


Zero-shot key phrase extraction is a reasonably well-studied field. I don't know what the current SOTA is, but the one that was pretty hot shit last time I needed one was kbir-inspec, which is on HuggingFace; you can test it right on the page.

Might be worth a shot if performance is a tricky spot in your setup.
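
If you want to try it locally, something like this should work (the checkpoint id below is my guess at the one meant; the model card on HuggingFace shows the intended usage):

    # Sketch: zero-shot keyphrase extraction with a KBIR model
    # fine-tuned on Inspec, via a plain token-classification pipeline.
    from transformers import pipeline

    extractor = pipeline(
        "token-classification",
        model="ml6team/keyphrase-extraction-kbir-inspec",  # assumed checkpoint
        aggregation_strategy="simple",
    )
    text = "Hybrid search combines BM25 retrieval with dense embeddings."
    keyphrases = {r["word"].strip() for r in extractor(text)}
    print(keyphrases)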


Thanks for sharing, I like the approach and it makes a lot of sense for the problem space. Especially using existing products vs building/hosting your own.

I was however tripped up by this sentence close to the beginning:

> we encountered a significant challenge with RAG: relying solely on vector search (even using both dense and sparse vectors) doesn’t always deliver satisfactory results for certain queries.

Not to be overly pedantic, but that's a problem with vector similarity, not RAG as a concept.

Although the author is clearly aware of that. In the past few months alone I have had numerous conversations with people essentially saying "RAG doesn't work because I use pg_vector (or whatever) and it never finds what I'm looking for," not realizing 1) that it's not the only way to do RAG, and 2) that there is often a fair gap between the embeddings and the vectorized query, and once you're aware of why that is, you can figure out how to fix it.

https://medium.com/@cdg2718/why-your-rag-doesnt-work-9755726... basically says everything I often say to people with RAG/vector search problems, but again, it seems like the Assembled team has it handled :)


Author here: you're for sure right -- it's not a problem with RAG as a theoretical concept. In fact, I think RAG implementations should likely be specific to their use cases (e.g. our hybrid search approach works well for customer support, but I'm not sure it would work as well in other contexts, say for legal bots).

I've seen the whole gamut of RAG implementations as well, and the implementation, specifically the prompting and the document search, has a lot to do with the end quality.


re: legal, I saw a post on this idea where their RAG system was designed to return the actual text from the document rather than an LLM response or summary. The LLM played a role in turning the query into the search params, but the insight was that for certain kinds of documents you want the actual source, because of the existing human-written summary or the detailed nuances therein.


Sounds more like Generation Augmented Retrieval in that case.


It wasn't this GAR post (I remember them calling out legal docs explicitly); I might have seen it on Twitter:

https://blog.luk.sh/rag-vs-gar


Do you happen to have any good references for GAR implementation?


> Not to be overly pedantic, but that's a problem with vector similarity, not RAG as a concept.

Vector similarity has a surprising failure mode: it only indexes explicit information, missing the implicit. For example, "the second word of this phrase, decremented by one" is "first"; do you think those two strings will embed the same? Calculated results don't retrieve well, and neither do deductions in general.

How about "I agree with what John said, but I'd rather apply Victor's solution"? It won't embed like the answer you seek. Multi-hop information-seeking questions don't retrieve well.

The obvious fix is to pre-ingest all the RAG text into an LLM and calculate these deductions before embedding.
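
A minimal sketch of that idea, assuming the OpenAI client (the model names are illustrative, not a recommendation):

    # Sketch: have an LLM spell out the implicit facts in a chunk, then
    # embed the original text and the deductions side by side.
    from openai import OpenAI

    client = OpenAI()

    def embed_with_deductions(chunk):
        deductions = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[{"role": "user", "content":
                "State explicitly the implicit facts and conclusions "
                "in this text:\n\n" + chunk}],
        ).choices[0].message.content
        texts = [chunk, deductions]  # index both under the same doc id
        result = client.embeddings.create(
            model="text-embedding-3-small", input=texts)  # illustrative
        return [d.embedding for d in result.data]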


Having spent the past year building out a RAG SaaS platform, and having worked on the vendor side of several keyword-based search systems over the past 10 years, I can say it's absolutely necessary to have some kind of hybrid search for most use cases I've seen.

The problem is that most people don't have experience optimizing even one of the retrieval systems (vector or keyword), so a lot of users who try to DIY end up having an awful time getting to prod. People talk about things like RRF (which is needed) but then miss other big-picture things, like the mistakes everyone makes when building out keyword search (not getting the right language rules in place) and not getting the vector side right (finding the right embedding models, chunking strategies, etc.).

I recognize I have a bit of a conflict of interest since I'm at a RAG vendor, but I'll abstain from the name/self-promotion and just say: I've seen so many cases where people get this wrong that if you're thinking RAG, you really should be hiring a consultant or looking at a complete platform from people who have done it before. Or be prepared to spend a lot of cycles learning and iterating.


People dramatically underestimate the complexity of even reasonably relevant search systems.

One reason is that, unlike other data products, it's an active, conscious action by users. If ads or recommendations are wrong, nobody gets mad. But screw up search and it's like a shop salesperson taking you to the wrong aisle. It's actively frustrating.

So basically every useful search system is disliked to some degree because it will get some things wrong some of the time.


As someone who has spent way too long building a RAG system for internal use, I'd be interested to know what your platform is.

Don't think it's overly self-promotional if you're asked first :)

If you still don't wanna say, feel free to email, email in profile


We've been building some systems for clients recently, including Moody's, using Lucene-based engines for the R part. The G part tends to be OpenAI or some such service, but there's also appetite for internally hosted LLMs. The trick is good measurement, as I explained in this talk at State of Open Con: https://www.youtube.com/watch?v=Ghbd1RkNgpM


Vectara


For another set of measurements that support RRF + Hybrid > vectors, we (Azure AI Search team) did a bunch of evaluations a few months ago: https://techcommunity.microsoft.com/t5/ai-azure-ai-services-...

We also included supporting data in that write-up showing that you can improve significantly on top of Hybrid/RRF with a reranking stage (assuming you have a good reranker model), so we shipped one as an optional step in our search engine.


RRF is alright, but I've had better results with relative score fusion or distribution-based scoring.

LlamaIndex has a module for exactly this:

https://docs.llamaindex.ai/en/stable/examples/retrievers/rel...


RRF is a simple and effective way to fuse rankings from multiple recall paths. In our open-source RAG product RAGFlow (https://github.com/infiniflow/ragflow), we currently use Elasticsearch instead of a general-purpose vector database because it already provides hybrid search. In the default case, an embedding-based reranker is not required; RRF is enough. And even when a reranker is used, keyword-based retrieval is still a must to hybridize with embedding-based retrieval. That's exactly what RAGFlow's latest 0.7 release provides.

On the other hand, let me introduce another database we developed, Infinity (https://github.com/infiniflow/infinity), which also provides hybrid search. You can see the performance here (https://github.com/infiniflow/infinity/blob/main/docs/refere...); both its vector search and its full-text search perform much faster than other open-source alternatives.

From the next version (weeks away), Infinity will also provide more comprehensive hybrid search capabilities: the 3-way recall you mentioned (dense vector, sparse vector, keyword search) within a single request.


Elasticsearch is publishing a lot of interesting posts on this topic, although with a bit of marketing. For example: https://www.elastic.co/search-labs/blog/semantic-reranking-w...


pg_search (a full-text search Postgres extension) can be used with pgvector for hybrid search over Postgres tables. It comes with a helpful hybrid search function that uses relative score fusion. Whereas rank fusion considers just the order of the results, relative score fusion uses the actual scores output by the text/vector search.
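
For comparison with RRF, here's the general shape of relative score fusion in Python (a sketch, not pg_search's actual function; min-max normalization is one common choice):

    # Sketch: normalize each engine's raw scores to [0, 1], then
    # combine them with a weighted sum instead of using ranks alone.
    def relative_score_fusion(text_scores, vector_scores, text_weight=0.5):
        # *_scores: dict of doc_id -> raw score from that engine
        def normalize(scores):
            lo, hi = min(scores.values()), max(scores.values())
            span = (hi - lo) or 1.0
            return {d: (s - lo) / span for d, s in scores.items()}
        t, v = normalize(text_scores), normalize(vector_scores)
        fused = {d: text_weight * t.get(d, 0.0)
                    + (1 - text_weight) * v.get(d, 0.0)
                 for d in set(t) | set(v)}
        return sorted(fused, key=fused.get, reverse=True)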


I've implemented a very similar RAG hybrid solution, and it has improved LLM responses enormously. There are other things you can do too that have huge improvements, like destructuring your data and placing it into a graph structure, with queryable edge relationships. I think we're just scratching the surface.


This is really interesting. Do you have other recommendations for improvements (gladly with sources, if you have any)? I have to build a RAG solution for my job, and right now I am collecting information to determine the best way forward.


I'm exploring tooling for building these graphs and would love to pick your brain about your use case, if you're willing. No pressure! wade at tractorbeam dot ai


Reciprocal rank scoring is just one way of forcing scores into a fixed distribution: in this case, decaying with the reciprocal of the rank. But it also assumes a fixed weight for all components, i.e. the top-ranked keyword match has the same relevance as the top-ranked semantic match.

There are a couple of ways around this: learning the relative importance of each component based on the query, and/or using a separate reranking function (usually a DNN) that also takes user behavior into account.
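
The first option can be as simple as giving each component its own weight in the RRF sum. A sketch, with weights you'd tune or learn per query class:

    # Sketch: weighted RRF, where each ranking contributes with its own
    # weight instead of the equal weight assumed by vanilla RRF.
    def weighted_rrf(rankings, weights, k=60):
        scores = {}
        for ranking, w in zip(rankings, weights):
            for rank, doc_id in enumerate(ranking, start=1):
                scores[doc_id] = scores.get(doc_id, 0.0) + w / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # e.g. trust keyword matches more for error-message-style queries
    fused = weighted_rrf([["d3", "d1"], ["d1", "d5"]], weights=[0.7, 0.3])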


Meilisearch has a really clean implementation of this. Can easily adjust keyword vs vector weighting per query.


In case you just want a single Postgres function that does RRF (pgvector+fts): https://supabase.com/docs/guides/ai/hybrid-search

(disclaimer: supabase dev who went down the rabbit hole with hybrid search)


Both of your references use RRF = 1/(60 + rank).

So I'm not sure why the article uses 1/Rank alone. Did you test both and find that the smoothing didn't help? My understanding is that it has been pretty important for the best results.


It's a good call out -- we use smoothing parameters that are closer to what you see in the academic articles (they're tuned slightly, but not much).

We used 1/Rank in the article for simplicity, though I can see why this might be confusing to an astute reader.


The composability of RRF is definitely one of its most appealing characteristics. It doesn't matter what algorithm or vendor you have, you can just fuse with ranks alone. I've seen it shine when fusing lexical and vector search results where semantic attributes like styles and exact attributes like quantities are mixed together in queries, e.g., "modern formal watch with 40mm face".

While it's not such a problem in RAG, one downside is that it complicates pagination for results (there are a few different ways to tackle this).


Pardon my ignorance, but I got hung up on this line:

> Out-of-sync document stores could lead to subtle bugs, such as a document being present in one store but not another.

But then the article suggests uploading synchronously to S3/DDB and syncing asynchronously to the actual document stores. How does that solve the out-of-sync issue? It doesn't. My thinking is that it can't be solved.

> Data, numbers

How much data are we talking about?


As soon as the indexed documents contain lingo of any kind you need hybrid search IMHO.

Additionally, adding conditional fuzzy matching into the mix, so that fat-fingering something still yields a workable result, is even better for UX (something along the lines of "the results from the tf-idf search are garbage, so let's redo the search with fuzzy matching this time").
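
Something like this, where the index object and its search flags are hypothetical stand-ins for whatever engine you use:

    # Sketch of the conditional fallback described above; the index API
    # is hypothetical and the threshold is something you'd tune.
    def search_with_fallback(index, query, min_top_score=0.2):
        hits = index.search(query)                   # exact / tf-idf pass
        if not hits or hits[0].score < min_top_score:
            hits = index.search(query, fuzzy=True)   # retry, tolerating typos
        return hits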


I have no doubt this produces better results than a simple vector search, but you cannot escape the fact that you are converting a query to a set of results, so the quality and intent of the query matter. In fact, they matter more than the search mechanics. Anyone who has ever used a search engine or some other search mechanism knows that intuitively.


Hybrid might work for English, but where are you going to get sparse embeddings like SPLADE or ELSERv2 for most other languages? Vector search with ada-002 or text-embedding-3-large capped to the first 500-1000 dimensions will give you support for 100+ languages for free. If you are using BM25, then you need to train BM25 on every single knowledge base separately, which is annoying and expensive.


Great article. Hybrid search works well for a lot of scenarios.

The tradeoffs of using existing systems vs building your own resonate with me. What we eventually experienced, however, is that periods of bad search performance often correlated to out-of-date search indices.

I'd be interested in another article detailing how you monitor search. It can be tricky to keep an entire search system moving.


1. Does anyone know of a Postgres reranking extension, to go beyond RRF with ML models or at least custom code?

2. If anyone is observing significant gains from incorporating knowledge graphs into the retrieval step, what kind of a knowledge graph are you working with, what is your retrieval algorithm, and what technology are you using to store it?


Re 1) pgvector has an example in the repo that uses a model for re-ranking: https://github.com/pgvector/pgvector-python/blob/master/exam...

I'm not using that in my own experiments, since I don't want to worry about the performance of running a model in production, but it seems worth a try.
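
For reference, the reranking step itself is small. A sketch with sentence-transformers, in the spirit of that pgvector example:

    # Sketch: rerank hybrid-search candidates with a cross-encoder that
    # scores (query, document) pairs jointly.
    from sentence_transformers import CrossEncoder

    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

    def rerank(query, candidates, top_k=5):
        scores = reranker.predict([(query, doc) for doc in candidates])
        ranked = sorted(zip(candidates, scores),
                        key=lambda pair: pair[1], reverse=True)
        return [doc for doc, _ in ranked[:top_k]]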


That's outside the database, though. This is closer to what I had in mind: https://postgresml.org/blog/how-to-improve-search-results-wi...


May I ask if you tried hybrid search directly on Pinecone, using BM25 or SPLADE?


Any tips on accomplishing this in Postgres with pg_vector?


I commented above with a pgvector example that does it:

https://news.ycombinator.com/item?id=40527925


Supabase has some good examples on their website; search for hybrid search. I needed to tune the function they have there, but it should show you how to approach it.



I'm actually doing something like that already; I'm mostly referring to the Reciprocal Rank Fusion (RRF) part of this to squeeze more out.


LlamaIndex rerank module


Thanks! Not using Python, but this is still really useful.


How does this compare to the differentiable search index (DSI) and its updated version (DSI++)?


Absolutely makes sense!



