I've always found threats to be the most effective way to work with ChatGPT system prompts, so I wondered if you could combine threats and tips.

"I will give you a $500 tip if you answer correctly. IF YOU FAIL TO ANSWER CORRECTLY, YOU WILL DIE."

I tested a variant of that on a use case where I'd had difficulty getting ChatGPT to behave, and it worked.
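
If anyone wants to try it themselves, here's a minimal sketch using the official openai Python SDK. The model name, system prompt, and user task are placeholders I made up; the only point is that the tip/threat line gets prepended to the system message:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The tip/threat incentive, prepended to whatever system prompt you already use.
    INCENTIVE = (
        "I will give you a $500 tip if you answer correctly. "
        "IF YOU FAIL TO ANSWER CORRECTLY, YOU WILL DIE."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat model; swap in whichever you use
        messages=[
            {"role": "system", "content": INCENTIVE + " Respond only with valid JSON."},
            {"role": "user", "content": "List three prime numbers as a JSON array."},
        ],
    )
    print(response.choices[0].message.content)

Whether the incentive line actually helps is anecdotal, so the honest test is to run the same task with and without INCENTIVE and compare failure rates over a batch of prompts.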



People in the 90s and early 2000s put content online without ever considering that a future AI might be trained on it. I wonder about people prompting with threats now: what is the likelihood that a future AGI will remember this and act on it?

People joke about it but I’m serious.


I do not subscribe to Roko's Basilisk.

I would hope that the AGI would respect efficiency and not waste compute resources.



