Hacker News

When this happens, I'll usually say something along the lines of:

"This isn't working and I'd like to start this again with a new ChatGPT conversation. Can you suggest a new improved prompt to complete this task, that takes into account everything we've learned so far?"

It has given me good prompt suggestions that immediately got a script working on the first try, after a frustrating series of blind-spot bugs.
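If you're driving the model through an API rather than the chat UI, the same reset trick can be automated: append the "suggest a better prompt" request to the failing conversation's history and send that to the model. A minimal sketch, assuming messages in the usual `{"role": ..., "content": ...}` chat format; the `extra_lessons` parameter and the exact wording are my own additions, not anything from the thread:

```python
def build_restart_request(history, extra_lessons=None):
    """Build the message list for asking the model to write a fresh prompt.

    `history` is the failing conversation as a list of
    {"role": ..., "content": ...} dicts. The returned list can be sent to
    any chat-completion-style endpoint (model and client are up to you);
    the model's reply is then used as the opening prompt of a brand-new
    conversation.
    """
    meta = (
        "This isn't working and I'd like to start this again with a new "
        "conversation. Can you suggest a new improved prompt to complete "
        "this task, that takes into account everything we've learned so far?"
    )
    if extra_lessons:
        # Optionally pin down specific lessons so they survive the restart.
        meta += " In particular, remember: " + "; ".join(extra_lessons)
    return list(history) + [{"role": "user", "content": meta}]
```

The point of restarting rather than continuing is that the new conversation carries only the distilled prompt, not the accumulated context of the failed attempts.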




I do a similar thing when the latest GPT+DALL·E version says "I'm sorry, I can't make a picture of that because it would violate content standards." (Yesterday this was because I asked for a visualization of medication acting to reduce arterial plaque. I can only assume the arteries ended up looking like dicks.)

So I say, "Ok, let's start over. Rewrite my prompt in a way that minimizes the chance of the resulting image triggering the content standards check."


I’ll give this a try when it undoubtedly happens to me later today while debugging something ;)


It seems surprising that this would work, because in my experience these LLMs don't really have good prompt-crafting skills.

Can you please share a ChatGPT example where that was successful, one where the new prompt outperformed the old one?


I've also not had much success with asking it to craft prompts.




