
It's true, it can be maddening (impossible?) to chase down all the edge-case failures LLMs produce. But outside of life-or-death applications with extreme accuracy requirements (e.g., medical diagnostics), the attitude I've seen is: who cares? A lot of users "get" AI now and don't really expect it to be 100% reliable. They're satisfied with a 95% solution, especially if it was deployed quickly and produces something they can iterate on for the last 5%.


