Major props to the authors of this library. I re-built https://progscrape.com [1] on top of it last year, replacing an ancient Python 2 AppEngine codebase that I had neglected for a while. It's a great library and insanely fast: it indexes the entire corpus of 1M stories on a Raspberry Pi in seconds.
I'm able to host a service with full-text search on a Pi at home, with a regular peak load of a few rps (not much, admittedly) and a CPU that barely spikes above a few percent. I've load-tested searches on the Pi up to ~100 rps and it held up. I keep thinking I should write up my experiences with it. It was pretty much a drop-in, super-useful library, and the team was very responsive to bug reports, of which there were very few.
If you want to see how responsive the search is on such a small device, try clicking the labels on each story -- it's virtually instantaneous to query, and this is hitting up to 10 years * 12 months of search shards! https://progscrape.com/?search=javascript
I'd recommend looking at it over Lucene for modern projects. I am a big fan, as you might be able to tell. Given how well it scales on a tiny little ARM64, I'd wager your experiences on bigger iron will be even more fantastic.
It is a very nice library. I'm using it for an incremental email backup CLI tool (very much a work in progress) for email providers that support JMAP.
I wanted users to be able to search their backups. As I'm using Rust, Tantivy looked like just the right thing for the job. Indexing a single email happens so fast that I didn't bother to move the work to a separate thread. And searching across thousands of emails seems to be no problem.
If anyone wants search for their Rust application they should take a look at Tantivy.
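A minimal sketch of what the indexing and search side looks like with Tantivy (field names and sizes are illustrative, not from my actual tool; exact signatures vary a little between Tantivy versions):

    use tantivy::collector::TopDocs;
    use tantivy::query::QueryParser;
    use tantivy::schema::{Schema, STORED, TEXT};
    use tantivy::{doc, Index};

    fn main() -> tantivy::Result<()> {
        // Declare a schema: subject is indexed and stored, body is indexed only.
        let mut schema_builder = Schema::builder();
        let subject = schema_builder.add_text_field("subject", TEXT | STORED);
        let body = schema_builder.add_text_field("body", TEXT);
        let schema = schema_builder.build();

        let index = Index::create_in_ram(schema);
        let mut writer = index.writer(50_000_000)?; // 50 MB indexing heap

        // Indexing a single email is cheap enough to do inline.
        writer.add_document(doc!(
            subject => "Re: quarterly backup",
            body => "The JMAP sync finished overnight.",
        ))?;
        writer.commit()?;

        // Search both fields for a term.
        let reader = index.reader()?;
        let searcher = reader.searcher();
        let query = QueryParser::for_index(&index, vec![subject, body]).parse_query("backup")?;
        let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;
        println!("{} hit(s)", top_docs.len());
        Ok(())
    }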
AFAIK, PostgreSQL doesn't provide a way to get the IDF of a term, which makes its ranking function pretty limited. tf-idf (and its variants, like Okapi BM25) is kinda table stakes for an information retrieval system, IMO.
I'm not saying PostgreSQL's functionality is useless, but if you need ranking based on the relative frequency of a term in a corpus, then I don't believe PostgreSQL can handle that unless something has changed in the last few years. Usually the reason to use something like Lucene or Tantivy is precisely for its ranking support that incorporates inverse document frequency.
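For reference, this is why corpus statistics matter: the standard Okapi BM25 score weights each query term's within-document frequency by its rarity across the whole corpus:

    \mathrm{score}(D,Q) = \sum_{i} \mathrm{IDF}(q_i)\cdot\frac{f(q_i,D)\,(k_1+1)}{f(q_i,D)+k_1\left(1-b+b\,\frac{|D|}{\mathrm{avgdl}}\right)}

    \mathrm{IDF}(q_i) = \ln\!\left(\frac{N-n(q_i)+0.5}{n(q_i)+0.5}+1\right)

Here f(q_i, D) is the term frequency in document D, |D| its length, avgdl the average document length, N the corpus size, n(q_i) the number of documents containing q_i, and k_1 ≈ 1.2, b ≈ 0.75 are the usual defaults. Without the corpus-wide n(q_i), a ranking function has no way to down-weight common words, which is exactly the limitation being described.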
Postgres's FTS is actually quite solid! You can get very far with just the built-in tsvector. The ranking could be improved, though, which was one of the reasons for creating pg_search in the first place: https://github.com/paradedb/paradedb/tree/dev/pg_search (disclaimer: I work on pg_search @ ParadeDB)
Okay, but I didn't say it wasn't solid. I just said its ranking wasn't great because it lacks IDFs. It seems like we must be in violent agreement, given that you work on something that must be adding IDFs to PostgreSQL FTS. :P
Had a surprisingly good experience with the combined power of Quickwit and ClickHouse for a multilingual search pet project. Finally something usable for Chinese, Japanese, and Korean.
I recently deployed Quickwit (based on Tantivy, from the same team) in production to index a few billion objects and have been very pleased with it. Indexing rates are fantastic. Query latency is competitive.
Perhaps most importantly, separation of compute and storage has proven invaluable. Being able to spin up a new search service over a few billion objects in object storage (complete with complex aggregations) without having to pay for long-running beefy servers has enabled some new use cases that otherwise would have been quite expensive. If/when the use case justifies beefy servers, Quickwit also provides an option to improve performance by caching data on each server.
Huge bonus: the team is very responsive and helpful on Discord.
Another resource is the trigram search index (in Go) used by etsy/hound[0], which is based on an article (and code) from Russ Cox: Regular Expression Matching with a Trigram Index[1].
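The core trick from that article is simple enough to sketch: index the trigrams of every document, turn the literal substrings a regex must contain into an AND over trigram posting lists, and only run the real regex on the surviving candidates. A toy illustration in Rust (the real implementations are far more careful about regex analysis):

    use std::collections::{HashMap, HashSet};

    // Map each trigram to the set of document ids containing it.
    fn build_trigram_index(docs: &[&str]) -> HashMap<String, HashSet<usize>> {
        let mut index: HashMap<String, HashSet<usize>> = HashMap::new();
        for (id, doc) in docs.iter().enumerate() {
            let chars: Vec<char> = doc.chars().collect();
            for w in chars.windows(3) {
                index.entry(w.iter().collect()).or_default().insert(id);
            }
        }
        index
    }

    // Candidate docs for a literal substring: AND of its trigrams' posting lists.
    // The regex engine still has to confirm each candidate actually matches.
    fn candidates(index: &HashMap<String, HashSet<usize>>, literal: &str) -> HashSet<usize> {
        let chars: Vec<char> = literal.chars().collect();
        let mut sets = chars.windows(3).map(|w| {
            index.get(&w.iter().collect::<String>()).cloned().unwrap_or_default()
        });
        let first = sets.next().unwrap_or_default();
        sets.fold(first, |acc, s| &acc & &s)
    }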
I was searching for an alternative to Meilisearch (which sends out telemetry by default) and found Tantivy. It's more of a search-engine builder, but the setup looks pretty simple [0].
Hm, I am interested, but I would love to use it as a Rust lib and just have Rust types instead of some JSON config...
The Java SDK of Meilisearch was nice in the same way: no need for a CLI or manual configuration. I just pointed it at a DB entity and indexed whole tables...
But instead of this, I would prefer some way to just hand it JSON and have it index all the fields...
For comparison, this is my Meilisearch SDK code:

    fun createCustomers() {
        val client = Client(Config("http://localhost:7700", "password"))
        val index = client.index("customers")
        // Serialize all customers to a JSON array string inside one DB transaction
        val customersJson = transaction {
            val customers = Customer.all()
            val dtos = customers.map { CustomerJson.from(it) }
            Json.encodeToString(ListSerializer(CustomerJson.serializer()), dtos)
        }
        // The SDK accepts a raw JSON string plus the name of the primary key field
        index.addDocuments(customersJson, "id")
    }
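For what it's worth, Tantivy gets close to the "just hand it JSON" wish with its dynamic JSON field type: nested keys become searchable without being declared up front. A rough sketch (API names vary a bit between Tantivy versions; parse_document is the older spelling):

    use tantivy::schema::{Schema, STORED, TEXT};
    use tantivy::Index;

    fn main() -> tantivy::Result<()> {
        let mut schema_builder = Schema::builder();
        // One dynamic field; every key nested under it is indexed as it appears.
        let _attributes = schema_builder.add_json_field("attributes", TEXT | STORED);
        let schema = schema_builder.build();

        let index = Index::create_in_ram(schema.clone());
        let mut writer = index.writer(50_000_000)?;

        // Hand it a raw JSON string; no per-field schema needed.
        // (parse_document is the pre-0.22 API; newer versions use TantivyDocument::parse_json.)
        let doc = schema
            .parse_document(r#"{"attributes": {"name": "Jane", "city": "Berlin"}}"#)
            .expect("valid document JSON");
        writer.add_document(doc)?;
        writer.commit()?;

        // Queries can then address nested keys by path, e.g. attributes.city:berlin
        Ok(())
    }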
OP is entitled to make political choices when selecting software.
Some of us have specific principles of which things like opt-out telemetry might run afoul.
OP will choose their software, I choose mine and you choose yours; none of us need to call each other petty or otherwise cast such negative judgement; a free market is a free market.
Suggesting you should be less judgemental is not white-knighting, nor is it irrational. Sorry bud, but not everyone thinks the way you do; different people have different principles.
Feel free to explain how either of the two comments of yours I've replied to represent principled discussion or added value, because I'm not seeing it.
It's a minor complaint, but I'm also evaluating it for a minor project. I just don't like the fact that I can forget to add a flag once and, oh, now I'm sending telemetry on my personal medical documents.
Meilisearch only sends anonymized telemetry events. We only send API endpoint usage; nothing like raw documents ever goes over the wire. You can find the exhaustive list of all collected data on our website [1].
Hey PSeitz, Meilisearch CEO here. Sorry to hear that indexing failed for you even at a low volume of data. When did you last try Meilisearch? We have made significant improvements in indexing speed. We have a customer with hundreds of gigabytes of raw data on our cloud, and it scales amazingly well. https://x.com/Kerollmops/status/1772575242885484864
Frankly, I'm okay with Meilisearch for instant search because y'all are clear about analytics choices, offer understandable FOSS Rust, and use a non-AGPL license. If/when we make some money, I'm in favor of $upporting the tools we rely on and paying for consulting, out of self-interest in keeping them alive.
Tantivy is also used in an interesting vector database product called LanceDB (https://lancedb.github.io/lancedb/fts/) to provide full-text search capabilities. Last time I looked this was only exposed through the Python bindings, though I know they're looking to implement it natively in Rust to support other platforms.
I started working on a personal project a few years ago, after being insanely frustrated with the resource hog that is Elasticsearch. And that's coming from someone whose personal computer has more resources than what a number of generous startups allocate for their product. I opted for Tantivy for two reasons: one was my desire to do the whole thing in Rust, and the second was Tantivy itself: performance is 10/10, the documentation is second to none, and the library is as ergonomic as they get. Sadly the project was a bite way too big for a single guy to handle in his spare time, so I abandoned it. Regardless, Tantivy is absolutely awesome.
I've been following Tantivy for a little while. The grit the founders have shown is impressive, as is the performance Tantivy has been able to achieve lately.
Mad props to the whole team! I'm a firm believer that they will succeed in their quest!
As someone who's used Lucene and Solr extensively, my biggest wishlist item has been support for upgrades. Typically Lucene (and Solr, and ES) indexes cannot be upgraded to new versions (it is possible in some cases, but let's ignore that for convenience). For many large projects, reindexing is a very expensive (and sometimes impossible) ordeal.
There are cases where this will probably never be possible (fields with lossy indexing where the datatype's indexing algorithm changed), but in many cases all the information is there, and it would be really nice if such indexes could be identified and upgraded.
Tantivy is great! I was using Postgres FTS with trigrams to index a few hundred thousand address strings for a project of mine [0], but this didn't scale as well as I'd hoped once I reached a couple million addresses. Replaced it with the tantivy CLI [1] and it works a charm (millisecond searches on a single-core VM).
This is nice. I used Solr for a while and it worked well, but I hated the Java underneath it, and some aspects of it seemed needlessly slow. But I think this is still a 20th-century style of search engine, and we need more modern approaches. In particular, those of us with small datasets compared to the internet search behemoths can probably afford an efficiency hit to get more useful results.
What I really want is to be able to index documents in multiple languages. Not all my users use the same language, and I don't want their documents and queries to assume English (for stop words, stemming, etc.). This is a limitation of most search libraries at this point.
You have a big list of separate libraries providing support for a variety of languages? Great. Unfortunately, that doesn't help me build a real multi-language app. Doing that work right now, with multiple indexes and query routing, seems very difficult.
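Tantivy at least gets you partway there with per-field tokenizers: declare one field per language, register a stemming analyzer for each, and route the query to the right field. A sketch of the idea, assuming a recent Tantivy tokenizer API (details differ between versions):

    use tantivy::schema::{IndexRecordOption, Schema, TextFieldIndexing, TextOptions};
    use tantivy::tokenizer::{Language, LowerCaser, SimpleTokenizer, Stemmer, TextAnalyzer};
    use tantivy::Index;

    fn main() {
        let mut schema_builder = Schema::builder();
        // One body field per language, each bound to its own analyzer by name.
        for lang in ["en", "de"] {
            let indexing = TextFieldIndexing::default()
                .set_tokenizer(&format!("stem_{lang}"))
                .set_index_option(IndexRecordOption::WithFreqsAndPositions);
            schema_builder.add_text_field(
                &format!("body_{lang}"),
                TextOptions::default().set_indexing_options(indexing),
            );
        }
        let index = Index::create_in_ram(schema_builder.build());

        // Register a lowercasing + stemming analyzer per language.
        for (name, lang) in [("stem_en", Language::English), ("stem_de", Language::German)] {
            let analyzer = TextAnalyzer::builder(SimpleTokenizer::default())
                .filter(LowerCaser)
                .filter(Stemmer::new(lang))
                .build();
            index.tokenizers().register(name, analyzer);
        }
        // At query time, detect (or ask for) the user's language and
        // point the QueryParser at the matching field.
    }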
I'm using https://stork-search.net for search on my static website, but it's no longer maintained. So yeah, Tantivy would be a great candidate to replace it! :)
In practice, a combination of full-text and vector search often performs better than either type alone. It's called hybrid search. Here's an article that talks a bit about this: https://opster.com/guides/opensearch/opensearch-machine-lear...
Often you take the results from both vector search and lexical search and merge them through algorithms like Reciprocal Rank Fusion.
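RRF itself is almost trivially simple: a document's fused score is the sum of 1/(k + rank) over every ranked list it appears in, with k = 60 in the original paper. A self-contained sketch:

    use std::collections::HashMap;

    /// Reciprocal Rank Fusion: merge ranked lists of document ids.
    /// k dampens the contribution of lower-ranked hits; 60 follows the paper.
    fn rrf(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
        let mut scores: HashMap<String, f64> = HashMap::new();
        for ranking in rankings {
            for (i, doc) in ranking.iter().enumerate() {
                // rank is 1-based: the top hit in each list contributes 1/(k+1).
                *scores.entry((*doc).to_string()).or_insert(0.0) += 1.0 / (k + i as f64 + 1.0);
            }
        }
        let mut merged: Vec<(String, f64)> = scores.into_iter().collect();
        merged.sort_by(|a, b| b.1.total_cmp(&a.1));
        merged
    }

    fn main() {
        let lexical = vec!["doc_a", "doc_b", "doc_c"];  // BM25 order
        let semantic = vec!["doc_b", "doc_d", "doc_a"]; // embedding order
        for (doc, score) in rrf(&[lexical, semantic], 60.0) {
            println!("{doc}: {score:.4}");
        }
    }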
You can think of a full-text index as being like a vector database that's highly specialized and optimized for the use-case where your documents and queries are both represented as "bags of words", i.e. very high-dimensional and very sparse.
That works great when you want to retrieve documents that actually contain the specific keywords in your search query, as opposed to using embeddings to find something roughly in the same semantic ballpark.
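To make the analogy concrete, here's a toy version of that sparse-vector view: documents are term-to-weight maps, and lexical scoring is a dot product where only shared terms contribute (real engines use inverted indexes rather than per-document maps, of course):

    use std::collections::HashMap;

    // A bag-of-words document is a sparse vector: term -> weight,
    // with almost every dimension of the vocabulary implicitly zero.
    type SparseVec = HashMap<String, f32>;

    fn sparse_dot(a: &SparseVec, b: &SparseVec) -> f32 {
        // Only terms present in both vectors contribute, so iterate the smaller one.
        let (small, large) = if a.len() <= b.len() { (a, b) } else { (b, a) };
        small
            .iter()
            .filter_map(|(term, w)| large.get(term).map(|v| w * v))
            .sum()
    }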
Vector databases are good for documents, but if you have a fact database or some other more succinct information store, vector retrieval is quite slow compared to trigram/full-text search, and it often performs worse too.
[1] https://github.com/progscrape/progscrape