You may notice that we try to suggest follow-up questions or question improvements where we think better context in will yield a better result out.

I'm curious what happens if you modify the question to be more explicit.

I have seen that PMs and data-trained folks tend to be very articulate in asking for exactly what they want, and that tends to lead to significantly better LLM responses.
