
Not going to lie, this is one of the few reasons I use LLMs at all. Even if I feel like I have a decent idea, if I don't have anyone around to listen I'll just lob thoughts at an AI to ask for alternatives, dissenting opinions, critiques, etc. Typically much of the output is things I already considered, but even that can itself be validating, a sort of reminder that I did think things through. And on some occasions it raises things I wouldn't have considered, which can be great to stop and chew on before proceeding.


How do you reconcile the fact that LLMs can be pretty fair-weather? Meaning, while they can serve as a sounding board and often raise perspectives you might not have considered, they don't have much conviction and will change their tune if you push back hard enough in the other direction.


My approach is to compliment the LLM on something I hadn't thought of, then ask it to sell me on the approach, expound on its position, and answer probing questions. If I get a feeling something's off, I just go do independent research as I normally would.



