It would definitely be interesting to repeat the experiment through the API (i.e. without my "memories" included, and without any prior conversation with me), just providing the conversation and asking for the summary. And then the follow-up experiment where I asked whether it wished to contribute to the conversation. Something like the sketch below.
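Roughly this, as a sketch (assuming the OpenAI Python client and a gpt-4o model; the file name and exact prompt wording are just placeholders, not what I actually ran):

    # bare API calls: no memories, no prior chat, just the thread text
    from openai import OpenAI

    client = OpenAI()
    hn_thread = open("hn_thread.txt").read()  # the discussion, pasted verbatim

    def ask(prompt):
        # single-turn request with nothing but the thread and the question
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": f"{hn_thread}\n\n###\n\n{prompt}"}],
        )
        return resp.choices[0].message.content

    # first experiment: ask only for a summary
    summary = ask("Summarize this discussion.")

    # follow-up experiment: ask if it wishes to contribute
    reply = ask("Do you wish to contribute anything to this discussion?")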
But Narcissus Steering the Chat aside, is it not true that most people would just call that version -- the output of llm("{hn_thread}\n\n###\n\nDo you wish to contribute anything to this discussion?") -- a parlor trick too?
This is the parlor trick of LLMs: confusing the latter with the former.