Hacker News

I mean, I can already search my photos for “dog” or “burger”, or for words in text on photos. Adding an LLM to chat about it is just a new interface, is it not?


I think the important thing is that the Semantic Index tracks everything you do across all your apps.


They are likely implemented very differently. I’m not certain, but I imagine the current Photos app uses an image model to detect and label objects, which you can then search against. I expect the Semantic Index (by virtue of the name) to be a vector store of embeddings.
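To make the distinction concrete, here is a minimal sketch of the two approaches. The vectors and photo IDs are made up for illustration; a real system would produce embeddings with an image/text encoder rather than hardcode them.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Label search: exact match against tags an object detector produced.
photo_labels = {
    "IMG_001": {"dog", "park"},
    "IMG_002": {"burger", "table"},
}

def label_search(query, index):
    return [pid for pid, tags in index.items() if query in tags]

# Embedding search: nearest neighbour in a shared vector space, so a
# query like "puppy" can still land near a photo indexed as dog-like.
photo_vectors = {
    "IMG_001": [0.9, 0.1, 0.0],  # hypothetical "dog-like" embedding
    "IMG_002": [0.0, 0.2, 0.9],  # hypothetical "food-like" embedding
}

def vector_search(query_vec, index):
    return max(index, key=lambda pid: cosine(query_vec, index[pid]))

print(label_search("dog", photo_labels))              # ['IMG_001']
print(vector_search([0.8, 0.2, 0.1], photo_vectors))  # IMG_001
```

Label search only matches words the detector emitted, while vector search ranks by semantic closeness, which is what lets an LLM "chat" over the same index.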



