
> Mostly, but it does upload some of the vectorized data to insert into the prompt for context. When you run a query, llama-index tries to discover content related to your prompt and injects it for context, so it's not entirely local.

When you say "upload some of the vectorized data", do you mean in numerical embedding form, or that it will embed the original text from similar-seeming notes directly into the prompt? I've only ever done the latter; is there a way to build denser prompts instead? I can't find examples on Google.




The numerical vector representing the embedding is only useful for finding documents that are similar to your search query.

Those documents are then injected into your prompt and sent to some kind of LLM completion system such as GPT.

So yes, you will be sending chunks of your actual notes over the wire.
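To make that concrete, here is a minimal sketch of the retrieve-then-inject flow using llama-index's vector index and retriever API (a sketch only: module paths vary by version, recent releases import from llama_index.core, and the "./notes" directory and query string here are made up for illustration):

    # Minimal sketch of the flow described above, assuming llama-index's
    # VectorStoreIndex / retriever API (import paths vary by version).
    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    # 1. Load your notes and build a vector index over them. Each chunk
    #    gets a numerical embedding used only for similarity lookup.
    documents = SimpleDirectoryReader("./notes").load_data()
    index = VectorStoreIndex.from_documents(documents)

    # 2. Similarity search runs against those vectors to find the chunks
    #    most related to the query.
    retriever = index.as_retriever(similarity_top_k=3)
    nodes = retriever.retrieve("What did I write about project deadlines?")

    # 3. The retrieved chunks are plain text; this is what gets stitched
    #    into the prompt and sent over the wire to the completion API.
    for node in nodes:
        print(node.get_text())

One caveat worth noting: with default settings the embedding step itself typically calls a hosted embedding model (OpenAI, by default), so if you want step 1 to stay fully offline you'd also need to configure a local embedding model.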



