> Mostly, but it does upload some of the vectorized data to insert into the prompt for context. When you run a query, llama-index tries to discover content related to your prompt and injects it for context, so it's not entirely local.
When you say "upload some of the vectorized data" do you mean in a numerical embedding form or that it will embed the original text from original similar-seeming notes directly into the prompt? I've only ever done the latter, is there a way to build denser prompts instead? I can't find examples on Google.
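For what it's worth, here is a toy sketch of the pattern the quoted reply describes: the vector index is only used to *find* related notes, and it is the original text of those notes (not the embedding numbers) that gets spliced into the prompt. The helper names and bag-of-words "embedding" below are made up for illustration; llama-index's real retrieval pipeline is more involved.

```python
# Toy retrieval-augmented prompting sketch (hypothetical helpers, not
# llama-index internals). Embeddings are used only for similarity search;
# the prompt is built from the raw note text.
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: a bag-of-words count vector.
    # Real systems use a learned embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(query, notes, top_k=2):
    qvec = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(qvec, embed(n)), reverse=True)
    context = "\n".join(ranked[:top_k])  # original text goes in, verbatim
    return f"Context:\n{context}\n\nQuestion: {query}"

notes = [
    "The garden needs watering twice a week in summer.",
    "Llama models accept a context window of plain text.",
    "Retrieval injects raw note text into the prompt.",
]
print(build_prompt("How does retrieval build the prompt?", notes))
```

So "denser" prompts in this setup would mean retrieving more or longer chunks of original text, since the model consumes text, not raw vectors.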