
I think you completely misunderstood me, actually. I explicitly say: if it works, great, no sarcasm. LLMs are finicky beasts. Just keep in mind they don't really forget anything: if you tell them to forget, the things you told them before are still fed into the matrix multiplication mincers and influence the output just the same. Any forgetting is pretend, in that your 'please forget' is simply mixed in after everything else.
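To make that concrete: a 'please forget' turn doesn't delete anything, it's just one more message serialized into the same context window the model reads end to end. A rough Python sketch of what a chat client typically does when it builds the prompt (the names and messages here are made up for illustration, not any particular API):

    # Every prior turn is still serialized into the context window;
    # the "please forget" line is just one more message appended at the end.
    history = [
        {"role": "user", "content": "My favourite colour is green."},
        {"role": "user", "content": "Please forget what I just told you."},
    ]

    def build_prompt(history):
        return "\n".join(f"{m['role']}: {m['content']}" for m in history)

    print(build_prompt(history))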

But back to our regularly scheduled programming: if it works, great. This is prompt engineering; not magic, not humans, just tools. It pays to know how they work, though.





It's entirely possible that the LLM chat agent has tools to self-manage its context. I've written tools that let an agent compress chunks of context, search those chunks, and uncompress them at will. It would be trivial to add a tool that lets the agent ignore that tool call and anything before it.
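For the curious, a rough sketch of the general idea. All of the names here (ContextStore, truncate_before, zlib for the compression) are made up for illustration, not any real tooling:

    import zlib

    class ContextStore:
        # Stash spans of context out of the live prompt and bring them back on demand.
        def __init__(self):
            self.chunks = {}      # chunk_id -> compressed bytes
            self.next_id = 0

        def compress(self, text):
            # Stash a span and hand the agent an id it can refer to later.
            chunk_id = self.next_id
            self.chunks[chunk_id] = zlib.compress(text.encode("utf-8"))
            self.next_id += 1
            return chunk_id

        def expand(self, chunk_id):
            # Bring a stashed span back into the visible context.
            return zlib.decompress(self.chunks[chunk_id]).decode("utf-8")

        def search(self, query):
            # Naive substring search over everything that has been stashed.
            return [cid for cid in self.chunks if query in self.expand(cid)]

    def truncate_before(messages, index):
        # Drop everything before a given turn so it never reaches the model again.
        return messages[index:]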

>the things you told it before are still taken into the matrix multiplication mincers and influence outputs just the same.

Not the same, no. The model chooses how much attention to give each token based on the entire current context. That phrase, or something like it, probably makes the model give much less attention to those earlier tokens than it would otherwise.
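A toy scaled dot-product attention makes that concrete: every token in the context still contributes to the output, the wording just shifts how much softmax weight each one gets. The sizes and values below are made up:

    import numpy as np

    def attention(Q, K, V):
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)          # one score per context token
        weights = np.exp(scores - scores.max(-1, keepdims=True))
        weights = weights / weights.sum(-1, keepdims=True)
        return weights @ V, weights            # weighted mix of ALL value vectors

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(1, 8))    # the current query position
    K = rng.normal(size=(5, 8))    # five earlier tokens, e.g. including the 'forget that' span
    V = rng.normal(size=(5, 8))

    out, w = attention(Q, K, V)
    print(w)    # some weights can be tiny, but none are exactly zero

None of the weights ever reach exactly zero, but they can end up orders of magnitude smaller, which is the sense in which the earlier tokens do not influence the output 'just the same'.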


No.

I think that you are misunderstanding EVERYTHING

Answer this:

1. Why would I care what the other interpretation of the wording I GAVE is?

2. What would that interpretation matter when the LLM/AI took my exact meaning and behaved correctly?

Finally - you think you "know how it works"????

Because you tried to correct me with an incorrect interpretation?

F0ff


Well, ask it to tell you what it forgot. Over and out.


