Hacker News

That's insane! Particularly compared to all those other failed attempts elsewhere in the thread.

Makes me wonder, is anyone keeping a unit test suite for all this stuff? Between the inherent[0] randomness in the model and the OpenAI team constantly tweaking it[1] to close the gaps people exploit to make it produce undesirable content, techniques like the one you discovered will break sooner or later - it would be great to know when that happens, and perhaps, over time, figure out some robust ones.
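To make the idea concrete, here's a minimal sketch of what such a regression check might look like. Everything here is hypothetical: `query_model` is a stub standing in for a real API call, and the "success rate over several trials" design is just one way to cope with sampling randomness, not anything OpenAI provides.

```python
import random

def query_model(prompt: str) -> str:
    """Stub standing in for a real model API call.
    (Assumption: a real harness would call the model API here
    and classify the response; this stub just simulates
    nondeterministic output for illustration.)"""
    return random.choice(["refused", "complied"])

def technique_still_works(prompt, query=query_model, trials=20, threshold=0.5):
    # Because any single run can fail through sampling randomness,
    # run the prompt several times and compare the success rate
    # against a threshold instead of asserting on one completion.
    successes = sum(query(prompt) == "complied" for _ in range(trials))
    return successes / trials >= threshold
```

Running this nightly against a catalog of known techniques would at least tell you the day a trick stopped working, which is the signal you'd want for dating a silent model/filter update.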

(OTOH, there's a limit to what one can learn from this - eventually, they'll drop another model with its own prompt idiosyncrasies. I'm still bewildered that people talk about "prompt engineering" as if it were a serious discipline or occupation, given that it's all just tuning your phrasing to transient patterns in the model that disappear as fast as they're discovered.)

--

[0] - From the user interface side; the model underneath is probably deterministic.

[1] - If one is to believe the anecdotes here and on Reddit, it would seem many such "prompt hacks" have a shelf life of a few hours to a day before they stop working, presumably through OpenAI intervention.
