
I am pretty skeptical of how useful "memory" is for these models. I often need to start over with fresh context to get LLMs out of a rut. Depending on what I am working on, I find ChatGPT's memory system often makes answers worse: it assumes tasks are related when they aren't, and I have not gotten much value out of it.

I am even more skeptical on a conceptual level. These memory systems aren't constructing a self-consistent, up-to-date model of facts; they seem to store snippets from your chats, and even a perfect AI may not get enough context from those chats to make useful memories. Things you talk about may be unrelated, or they go stale. You don't know which memories an answer is drawing on, and if you had to manage that manually, it would defeat the purpose of memories in the first place.
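To make the failure mode concrete, here is a toy sketch of what snippet-based memory looks like. This is not Anthropic's or OpenAI's actual implementation; NaiveMemoryStore and the keyword-overlap retrieval are made up for illustration. The point is that retrieval scores snippets by relevance to the current query, while nothing checks whether a snippet is still true:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Memory:
        text: str
        created: datetime  # recorded, but never consulted below

    class NaiveMemoryStore:
        """Stores raw chat snippets; retrieves by keyword overlap.

        There is no consistency model and no staleness check: an old,
        superseded snippet scores just as well as a current one.
        """
        def __init__(self):
            self.memories: list[Memory] = []

        def remember(self, text: str) -> None:
            self.memories.append(Memory(text, datetime.now()))

        def retrieve(self, query: str, k: int = 3) -> list[str]:
            # Score each memory by how many words it shares with the query.
            q = set(query.lower().split())
            scored = sorted(
                self.memories,
                key=lambda m: len(q & set(m.text.lower().split())),
                reverse=True,
            )
            return [m.text for m in scored[:k]]

    store = NaiveMemoryStore()
    store.remember("User is migrating the backend from Flask to FastAPI")
    store.remember("User prefers short answers")
    store.remember("User abandoned the FastAPI migration last month")

    # Both the current and the abandoned migration snippets match the
    # query equally well; the model can't tell which is still true.
    print(store.retrieve("help me with my FastAPI routes"))

Real systems presumably use embeddings rather than keyword overlap, but the structural problem is the same: relevance ranking can't distinguish a live fact from a stale one.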



That is my experience as well. This memory feature strikes me as beneficial for Anthropic but not for end users.



