I’m curious to try it out. There seem to be many options to upload a document and ask stuff about it.
But the holy grail is an LLM that can successfully work on a large corpus of documents and data like Slack history and huge wiki installations, and answer useful questions with proper references.
I tried a few, but they don’t really hit the mark. We need the usability of a simple search engine UI with private data sources.
The approach in the paper has rough edges, but the metrics are bonkers (double-digit percentage POINT improvements over dual encoders). This paper was written before the LLM craze, and I am not aware of any further developments in that area. I think this area might be ripe for some breakthrough innovation.
If you want to allocate resources to building out the AI, connecting and ingesting sources, setting up RAG, fine-tuning and hyperparameter optimization...
Most companies lack the expertise and resources. Kapa means they get a docs bot while maintaining focus on what they do best.
Kapa must be doing something right, since they seem to be growing. Having used it in a few Discords, it's the quality I'd expect from a SaaS product built on current AI capabilities.
> Kapa must be doing something right since they seem to be growing
It's marketing. The person you responded to said they're all marketing. Saying they "must be doing something right" because other people are also falling for it is how you get scammed.
That's what they say. What I see is high engagement from users in Discord channels.
Even OpenAI, GP's alternative, is listed as using Kapa... and there are no public sign-ups available yet either.
I saw a glimpse of the internal dashboard companies get. It's much more than just question answering. Another big piece is the feedback: seeing user interaction and being able to improve things over time.
RAG is limited in that sense, since the amount of data you can send is still capped by the number of tokens the LLM can process.
But if all you want is a search engine, that's a bit easier.
The problem is often that a huge wiki installation will contain a lot of outdated data, which will still be an issue for an LLM. And if you had fixed the data, you might as well just search for the things you need, no?
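To make the token-limit point above concrete: whatever retrieval returns, only as much as fits in the context window actually reaches the model. A minimal sketch of that trimming step, assuming chunks arrive already ranked by relevance (the 4-chars-per-token estimate is a rough stand-in for a real tokenizer, and all names are illustrative):

```python
def fit_context(ranked_chunks, budget_tokens, count_tokens=lambda s: len(s) // 4):
    """Pack the highest-ranked chunks into the prompt until the token
    budget is spent; everything else simply never reaches the LLM."""
    picked, used = [], 0
    for chunk in ranked_chunks:
        cost = count_tokens(chunk)
        if used + cost > budget_tokens:
            break  # or `continue` to still try smaller, lower-ranked chunks
        picked.append(chunk)
        used += cost
    return picked
```

So with a 25-token budget and three 10-token chunks, only the top two survive; outdated data that ranks well can crowd out the chunk you actually needed.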
I think it depends on what they want. A search is indeed an easy solution, but if they want a summary or a generated, straight answer, then things get a bit harder.
A solution that combines RAG and function calling could span the correct depth, but yeah, the context depth is what will determine usefulness for user interaction.
I'd like to play with giving it more turns. The more interesting questions require searching, reading, then searching again, reading more, and so on.
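A minimal sketch of such a multi-turn loop, with `search` and `ask_llm` as stand-ins for a real retriever and model (the "SEARCH:" reply convention is just an illustrative protocol, not anything a particular product implements):

```python
def multi_turn_answer(question, search, ask_llm, max_turns=3):
    """Let the model interleave searching and reading: if its reply starts
    with 'SEARCH:', run another retrieval round with the suggested query."""
    notes, query = [], question
    for _ in range(max_turns):
        notes.extend(search(query))
        reply = ask_llm(question, notes)
        if not reply.startswith("SEARCH:"):
            return reply                        # model is done reading
        query = reply[len("SEARCH:"):].strip()  # follow-up query it asked for
    return ask_llm(question, notes)             # out of turns: answer with what we have
```

The `max_turns` cap matters: without it, a model that keeps asking for more context will loop forever and burn tokens.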
Looks promising, especially if you can select just your docs and avoid interacting with Mistral.
I’ll give it a try to see how it performs. So far I’ve had mixed results with other similar solutions.
Yes! Of course, because the LLM is running locally it is not as advanced as bigger models like Claude or GPT, but you can definitely quiz the documents. From my experience it performs better with specific questions rather than ambiguous questions that require extensive understanding of the whole document.
I have collected so much information in text files on my computer that it has become unmanageable to find anything. Now with local AI solutions, I wondered if I could create a smart search engine that could answer questions about the information in my personal data.
My questions are:
1 - Even if there is so much data that I can no longer find stuff, how much text data is needed to train an LLM to work OK? I'm not after an AI that can answer general questions, only one that can answer with what I already know exists in the data.
2 - I understand that the more structured the data is, the better, but how important is that when training an LLM? Does it mostly just figure things out anyway?
3 - Any recommendations on where to start, and on how to run an LLM locally and train it on your own data?
Thanks for sharing! I look forward to playing with this once I get off my phone. Took a look at the code, though, to see if you've implemented any of the tricks I've been too lazy to try.
So we're embedding all 8000 chars behind a single vector index. I wonder if certain documents perform better at this fidelity than others. To say nothing of missed "prompt expansion" opportunities.
Of all the off-the-shelf text splitters I have tried, the recursive character splitter actually performs really well, especially if the chunk size is large enough that you will likely have more than the needed context in a chunk anyway.
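For reference, the recursive splitting idea fits in a few lines; this is an illustrative re-implementation of the general technique, not the actual LangChain code:

```python
def recursive_split(text, chunk_size=200, separators=("\n\n", "\n", " ", "")):
    """Split on the coarsest separator first (paragraphs, then lines, then
    words); recurse with finer separators for pieces still over chunk_size."""
    if len(text) <= chunk_size:
        return [text] if text else []
    sep, rest = separators[0], separators[1:]
    if sep == "":
        # last resort: hard character cut
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    chunks, current = [], ""
    for piece in text.split(sep):
        candidate = current + sep + piece if current else piece
        if len(candidate) <= chunk_size:
            current = candidate
            continue
        if current:
            chunks.append(current)
        if len(piece) > chunk_size:
            chunks.extend(recursive_split(piece, chunk_size, rest))
            current = ""
        else:
            current = piece
    if current:
        chunks.append(current)
    return chunks
```

The reason it works well at large chunk sizes is visible here: it only drops down to word-level cuts when a whole paragraph or line doesn't fit, so chunks mostly stay aligned with natural boundaries.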
Regarding the index, a mix of BM25 and a vector index usually seems to perform best for most generic data.
Not sure if this helps but this is from tinkering with Mistral 7B on both my M1 Pro (10 Core, 16 GB RAM) and WSL 2 w/ CUDA (Acer Predator 17, i7-7700HK, GTX 1070 Mobile, 16GB DRAM, 8GB VRAM).
- Got 15 - 18 tokens/sec on WSL 2, with slightly higher rates on the M1. Think of that as roughly 10 - 15 words per second. Both were using the GPU. Haven't tried CPU on the M1, but on WSL 2 it was low single digits, super slow for anything productive.
- Used Mistral 7B via llamafile cross-platform APE executable.
- For local use I found that increasing the context size increases RAM usage a lot, but it's fast enough. I am considering adding another 16x1 or 8x2.
Tinkering with building a RAG with some of my documents using the vector stores and chaining multiple calls now.
I haven't seen how it fares on uncensored use-cases, but from what I see the Q5_K variants of Mistral 7B are not far from Mixtral 8x7B (the latter requires 64 GB of RAM, which I don't have).
Tried open-webui yesterday with Ollama for spinning up some of these. It’s pretty good.
Right now the minimum amount of RAM I would recommend is 16 GB. I think it can run with less memory, but that will require a few changes here and there (although they might reduce performance). I would also strongly recommend using a GPU over a CPU; in my experience it can make the LLM run twice as fast, if not more. Only Nvidia GPUs are supported for now, and CUDA toolkit 12.2 is required to run Dot.
Curious about the choice of FAISS. It's a bit older now, and there are many more options for creating and selecting embeddings. Does FAISS still offer some advantages?
Hi! I'm the guy who made Dot. I remember experimenting with a few different vector stores in the early stages of the project but decided to settle on FAISS. I mainly chose it because it made it easy to perform the whole embedding process locally, and because it allows merging vector stores, which is what I use to load multiple types of documents at once. But I am definitely not an expert on the topic and would really appreciate suggestions on other alternatives that might work better! :)
Tried giving it a folder with a bunch of .pdfs. It takes soooo long to index them (and there's no progress bar or status indicator anywhere), and once I ask a question it's just stuck on "Dot is typing" for an hour. Maybe add an option to stream the output, so I can at least tell whether it's doing something?
With those settings I would recommend the GPU. CUDA acceleration really makes it faster, but keep in mind the CUDA toolkit 12.2 install will be some 3-4 GB.