
This seems to be different from what I expected from the title. I thought it would be explicitly adversarial:

1. You are the assistant. Please answer the question directly.

2. You are the cross-examiner. The assistant is wrong. Explain why.

3. You are the assistant. The cross-examiner is wrong. Defend your claim.

4. You are a judge. Did either party make their case, or is another round of argumentation required?

I haven't tried this. No idea if it works. But I find it's helpful to ask ChatGPT, in separate prompts, "XYZ is true, explain why" and "XYZ is false, explain why" and see which one seems more convincing.
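The four-role loop above can be sketched as plain orchestration code. This is purely illustrative: `ask` is a stand-in for whatever chat-completion call you use (OpenAI, Mistral, etc.), stubbed here with a canned reply so the control flow itself is runnable.

```python
# Sketch of the four-role "debate" loop. `ask` is a placeholder for a real
# LLM call (e.g. a chat-completions request); here it echoes its inputs.

def ask(system: str, prompt: str) -> str:
    """Stub for a real chat-completion call with a system prompt."""
    return f"[{system}] response to: {prompt[:40]}"

def debate(question: str, max_rounds: int = 2) -> list[tuple[str, str]]:
    transcript = []
    # 1. Assistant answers directly.
    answer = ask("You are the assistant. Answer the question directly.", question)
    transcript.append(("assistant", answer))
    for _ in range(max_rounds):
        # 2. Cross-examiner attacks the answer.
        critique = ask("You are the cross-examiner. The assistant is wrong. Explain why.", answer)
        transcript.append(("cross-examiner", critique))
        # 3. Assistant defends its claim.
        answer = ask("You are the assistant. The cross-examiner is wrong. Defend your claim.", critique)
        transcript.append(("assistant", answer))
        # 4. Judge decides whether another round of argumentation is required.
        verdict = ask("You are a judge. Did either party make their case, or is another round required?",
                      f"Q: {question}\nA: {answer}\nCritique: {critique}")
        transcript.append(("judge", verdict))
        if "another round" not in verdict.lower():
            break
    return transcript
```

Each role sees only the text you hand it, so in practice you'd run the roles in separate chats (or with fresh context) to keep them genuinely adversarial.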



Also a little clickbaity with "my AI" and then it's all Mistral...


Check out Fast Agent! (I have no affiliation with it, just use it).

https://github.com/evalstate/fast-agent


Techniques like this have been around since GPT-3.5. There are boatloads of papers on the topic.

I have no idea why anyone thinks this is novel. I guess that speaks to the state of HN


Exactly... I thought implementing STORM was just a basic step in this area... Looks like we're running in circles.


Mind sharing a link?


Here's a paper on agent architectures including multi agent. A bit old at this point, but a good overview.

https://arxiv.org/abs/2404.11584


ChatGPT shares context between chats. I wonder how that impacts this?

It seems like a good approach though. What you don't want to do is ever suggest yourself that it's wrong; usually it will just assume it is.

Actually what I find impressive is when I do this and it actually pushes back to defend itself.


Does it share context even if no "memory updated" message appears indicating it has stored a fact about you?

I asked ChatGPT and it says no, but then again it's not reliable at introspection or at revealing data about how it works.


I think they are different systems, one is a collection of saved snippets and the other more like RAG over chat history.
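The distinction between the two systems can be sketched in a few lines. This is an illustrative toy, not how ChatGPT actually works: saved "memory" snippets are always injected into context, while past chats are retrieved by relevance (here a toy bag-of-words overlap stands in for real embeddings).

```python
# Toy sketch of the two mechanisms: (1) an explicit store of saved facts,
# (2) similarity search over past chat history. Data is made up.

saved_memory = ["User prefers metric units", "User's name is Sam"]
chat_history = [
    "Yesterday we debugged a Python asyncio deadlock.",
    "We discussed sourdough hydration ratios.",
]

def overlap(a: str, b: str) -> int:
    # Crude relevance score: count of shared lowercase words.
    return len(set(a.lower().split()) & set(b.lower().split()))

def build_context(query: str, k: int = 1) -> list[str]:
    # Saved memories are always included; history is retrieved by relevance.
    retrieved = sorted(chat_history, key=lambda c: overlap(query, c), reverse=True)[:k]
    return saved_memory + retrieved
```

A real system would use embedding similarity rather than word overlap, but the split is the same: a small curated store versus retrieval over everything you've ever said.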


ChatGPT assures me it doesn't use RAG (fed from my other chat windows), but will use memory-saved preferences (in the store that can be accessed and reviewed in Settings->Personalization->Memory).

Then again, I don't think ChatGPT is reliable when reporting on its own inner workings.

---

Oh, no, here it says it also references chat history: https://help.openai.com/en/articles/8590148-memory-faq


You can use ChatGPT for these kinds of questions, but it needs to use search or research mode; don't ask it in closed-book mode.




