Rerank 3: A new foundation model for efficient enterprise search and retrieval (cohere.com)
45 points by bguberfain on April 11, 2024 | 5 comments


Being as charitable as possible here: Rerank 3 might be the bee's knees, but the examples are absolutely awful. Do you really need embeddings + a large language model to search for "action" and "Christian Bale" in two columns[1]?

Your interface can literally just be two dropdowns. I'd like to see things like "the actor that played the Joker in that movie about Bob Dylan" if you're really trying to flex your semantic search muscles.

[1] https://colab.research.google.com/drive/1sKEZY_7G9icbsVxkeEI...
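To make the point concrete: the demo's query reduces to a plain column filter. A minimal sketch with a made-up movies table (the column names are hypothetical, not taken from the notebook):

    import pandas as pd

    # Hypothetical movie table with the two columns the demo query actually uses
    movies = pd.DataFrame([
        {"title": "The Dark Knight", "genre": "Action", "lead_actor": "Christian Bale"},
        {"title": "Ford v Ferrari", "genre": "Drama", "lead_actor": "Christian Bale"},
        {"title": "Mad Max: Fury Road", "genre": "Action", "lead_actor": "Tom Hardy"},
    ])

    # The "two dropdowns" version of the query: no embeddings, no LLM
    print(movies[(movies["genre"] == "Action") & (movies["lead_actor"] == "Christian Bale")])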


Someone correct me if I'm mistaken, but Cohere appears to be using BM25 and semantic search (Embed Multilingual) individually as baselines in order to look better. A more suitable baseline would be the reciprocal rank fusion (RRF) of BM25 and semantic search. And those latencies seem high; seconds to rerank?
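For reference, RRF is simple enough to implement in a few lines, which is part of why it's the standard hybrid baseline. A rough sketch (the doc IDs are made up and k=60 is just the conventional constant, nothing from Cohere's post):

    from collections import defaultdict

    def reciprocal_rank_fusion(rankings, k=60):
        # rankings: ranked lists of doc IDs, each ordered best-first,
        # e.g. one list from BM25 and one from embedding search.
        # Each doc's fused score is the sum of 1 / (k + rank) over the lists.
        scores = defaultdict(float)
        for ranked in rankings:
            for rank, doc_id in enumerate(ranked, start=1):
                scores[doc_id] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical first-pass result lists
    bm25_hits = ["doc3", "doc1", "doc7", "doc2"]
    embed_hits = ["doc1", "doc5", "doc3", "doc9"]
    print(reciprocal_rank_fusion([bm25_hits, embed_hits]))

Documents that rank near the top of both lists come out first, which is exactly the hybrid baseline the comparison should be made against.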


ELI5: what is a rerank model, and why is a 4k context window considered large?


Imagine you have 100 documents in a database and you "query" the documents and return 20 candidate results.

Similarity search gave you 20 results, but re-ranking sorts those results further, ordering them by relevance to the query.

That 4K is per document.

Edit: With the results sorted by relevance, you can drop the lower-scoring documents according to the model's confidence that the information in the remaining subset is adequate to answer the query.
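In code, the retrieve-then-rerank step looks roughly like this. This is a sketch against Cohere's Python SDK from memory of their docs, so treat the model name and response fields as assumptions to verify:

    import cohere

    co = cohere.Client("YOUR_API_KEY")  # placeholder key

    query = "What is the capital of the United States?"
    # The candidates from first-pass retrieval (BM25, embeddings, ...);
    # each document can be up to ~4k tokens long.
    docs = [
        "Carson City is the capital city of the American state of Nevada.",
        "Washington, D.C. is the capital of the United States.",
        "Capital punishment has existed in the United States since colonial times.",
    ]

    # Second pass: the rerank model scores every (query, document) pair
    # and returns the documents sorted by relevance.
    response = co.rerank(
        model="rerank-english-v3.0",  # assumed model name
        query=query,
        documents=docs,
        top_n=2,  # keep only the best 2
    )
    for result in response.results:
        print(result.index, round(result.relevance_score, 3), docs[result.index])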


20 results, each with a ~4k snippet, are fed to this ranking model, and the ranking model produces a relevance score for each based on the query.

Is this a correct understanding?



