> 1. To use LLMs effectively, you often need to generate and store more than 1 vector per document.

Could you elaborate on this point for me? What would cause the 1 document -> ~100 vectors blowup? Do you store vector embeddings for sections of the document, or use multiple models to create several types of vectors?




If you look at something like LangChain[0], it supports/recommends splitting larger documents into smaller chunks. That way, when doing something like semantic search, you can retrieve the specific paragraph/section that is most relevant, rather than having to read the entire document again (think of a 100-page PDF).

https://python.langchain.com/en/latest/modules/indexes/text_...
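A minimal sketch of that chunk-then-embed pattern (the class and import paths are LangChain's as of roughly when this thread was written and may have since moved; `report.txt` is just a stand-in for your extracted PDF text):

```python
# Sketch: split one long document into chunks and embed each chunk separately.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings

with open("report.txt") as f:          # stand-in for the text of a 100-page PDF
    text = f.read()

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(text)     # one long document -> many chunks

embeddings = OpenAIEmbeddings().embed_documents(chunks)
# len(embeddings) == len(chunks): each chunk gets its own vector, so a single
# document easily fans out into dozens of vectors.
```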


This is generally very context/use-case specific. In general, if a document is a `Dict[str, Any]`, then you either have one (or multiple) vector(s) per field, or you combine vectors across fields (and it's not self-evident how you'd best do that). That said, here are specific reasons to do this (or why I've done it in the past); a rough sketch of the resulting fan-out follows below.

1. Chunking long text fields in documents so as to get a better semantic vector for them (and you can only fit so much into an LLM anyway).

2. Separately from 1., chunking long text fields (or even chunking images, audio, etc.) is one way to perform highlighting. It helps answer the question: for a given document, what about it was the reason it was returned? You can then point to the area of the image/text/audio that was most relevant.

3. You may want to run different LLMs on different fields (perhaps a separate multi-modal LLM vs a standard text LLM), or, as another comment said, have different transforms/representations of the same field.

Perhaps 100 vectors is non-standard, but definitely not unseen.
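Roughly, the bookkeeping looks like this (the `embed` function is hypothetical; plug in whatever model you actually use):

```python
from typing import Any, Dict, List

def embed(text: str) -> List[float]:
    """Hypothetical embedding call; swap in whatever model/LLM you actually use."""
    return [0.0] * 384  # placeholder vector so the sketch runs end-to-end

def vectors_for_document(doc_id: str, doc: Dict[str, Any],
                         chunk_size: int = 500) -> List[Dict[str, Any]]:
    """One record per (field, chunk), so each vector can point back to the
    exact span that made the document match (points 1 and 2 above)."""
    records = []
    for field, value in doc.items():
        if not isinstance(value, str):
            continue  # point 3: route non-text fields to a different model
        chunks = [value[i:i + chunk_size] for i in range(0, len(value), chunk_size)]
        for n, chunk in enumerate(chunks):
            records.append({
                "doc_id": doc_id,   # the document-to-chunk fan-out you have to maintain
                "field": field,
                "chunk": n,
                "vector": embed(chunk),
            })
    return records
```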


Only Vespa allows you to index multiple vectors per schema field, which avoids duplicating all the metadata of the document into each "chunk" and avoids having to maintain the document-to-chunk fan-out. See https://blog.vespa.ai/semantic-search-with-multi-vector-inde...


I’m not a data scientist but I think I know why one document could lead to many vectors.

(Happy to be corrected and/or schooled.)

A vector is a list of numbers, each of which represents the weight accorded to a certain word along a certain dimension.

Let’s take an example.

Is an “apple” a “positive” or a “negative” thing? Most people would associate positivity with apples. So, for the general population, the vector for “apple” along the 0-1 continuum where 0 represents negative sentiment and 1 represents positive sentiment would be something like [0.8].

Let’s add one more dimension. Is an apple associated with computers (1) or not (0)? For the majority of the world where Windows has a massive market share, “apple” would recall a fruit, not a sleek laptop. Therefore, the vector for apple along the computer/non-computer dimension is probably [0.3].

Taking these together, apple = [0.8, 0.3] where, positionally, 0.8 is the value for positive/negative sentiment and 0.3 is for computer/non-computer.

Agree?

(Hoping you do)

But that [0.8, 0.3] vector is for the general population.

Would a bible literalist who publishes blogs on bible stories feel the same way?

For someone like that, the notion of the original sin could taint their sentiments towards the apple. So they might weight an apple at 0.2 on the positive/negative line. Since they’re bloggers, it’s more likely they associate apple with computers so they might call it 0.5. Therefore, their apple vector is [0.2, 0.5].

Extend this to more content and you'll see why there can be more than one vector per document.
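To make the toy example concrete, here is how those two hand-made vectors would typically be compared (cosine similarity is the usual measure; the numbers are just the made-up ones above):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

general_population = [0.8, 0.3]   # [sentiment, computer-ness]
bible_blogger      = [0.2, 0.5]

print(cosine(general_population, bible_blogger))  # ~0.67: related, but clearly not the same "apple"
```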

At least that’s how I understood it. Happy to be corrected and/or schooled.


In my opinion, you could represent "apple" as a vector, for example, [0.99, 0.3, 0.7] in relation to [fruits, computers, religion]. Then, you can create different user vectors that describe the interests of various groups. For instance, the general population might have a vector like [0.8, 0.2, 0.1], geeks as [0.6, 0.95, 0.05], and religious people as [0.7, 0.1, 0.95].

By creating these user vectors, you can compare them with the "apple" vector and find the best match using ANN. This approach allows you to determine which group is most interested in a given context or aspect of the word "apple." The ANN will help you identify similarities or patterns between the user vectors and the "apple" vector to find the most relevant matches.
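A rough sketch of that matching, using brute-force cosine similarity in place of a real ANN library such as FAISS (all the numbers are the made-up ones above):

```python
import numpy as np

# Toy "apple" vector over [fruits, computers, religion]
apple = np.array([0.99, 0.3, 0.7])

user_vectors = {
    "general population": np.array([0.8, 0.2, 0.1]),
    "geeks":              np.array([0.6, 0.95, 0.05]),
    "religious":          np.array([0.7, 0.1, 0.95]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Brute-force nearest neighbour; an ANN index does the same thing
# approximately, but fast enough to work over millions of vectors.
best = max(user_vectors, key=lambda name: cosine(apple, user_vectors[name]))
print(best)  # "religious" for these made-up numbers
```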

Thank you


I don’t know what ANN is but your comment raises two questions in my mind -

1. Where did your first vector of [0.99, 0.3, 0.7] come from? You later present the concept of user vectors, which are vectors for different cohorts of users, but don't name the first vector as a user vector.

2. I feel my example of vectors for the "general population" users and the "bible literalist blogger" user aligns with your "user vector" concept.


Modern text embeddings are not word-based like that.
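For instance, sentence-embedding models map a whole sentence (or chunk) into a learned space whose dimensions have no hand-picked meaning like "positive/negative" or "computer-ness". A sketch, assuming the sentence-transformers library and one of its common models:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# The whole sentence is embedded at once; no per-word, human-readable dimensions.
vecs = model.encode([
    "I ate a delicious apple.",
    "Apple released a new laptop.",
])
print(vecs.shape)  # (2, 384): one 384-dimensional vector per sentence
```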


If my understanding and explanation are directionally correct, I’m happy. I’ll be the first one to admit I’m not a data scientist.

Do you have a good example of how an actual data scientist would present the idea of vectors as applied to sentences/documents to a layperson?


Storing one document as one embedding is like making a movie poster by averaging all the frames in the film.


That is a very good analogy!


:D Thanks!


One thing others didn't mention is that "document" is a general term, but in some cases (e.g., question answering) the typical document can be a very short paragraph and take much less memory than the vector. Also note that with some ML architectures the vector is very large (e.g., an entire layer's output).
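A quick back-of-the-envelope illustration of that size inversion (the 1024 dimensions and float32 storage are assumptions, not tied to any particular model):

```python
# A short QA-style "document" vs. its embedding.
passage = "The Eiffel Tower is 330 metres tall."
doc_bytes = len(passage.encode("utf-8"))   # 36 bytes of text

dims = 1024                                # assumed embedding size
vector_bytes = dims * 4                    # float32 -> 4096 bytes

print(doc_bytes, vector_bytes)             # the vector dwarfs the document
```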



