
LLMs are still just text generators. These are statistical models that cannot think or solve logical problems. They might fool people, as Weizenbaum's "Eliza" did in the mid-60s, by generating code that sort of runs sometimes, but identifying and solving a logic problem is something I reliably see these things fail at.



Have you tried the latest models, using them with Cursor etc.? They might not be truly intelligent, but I'd be surprised if an SWE couldn't see that they already offer a lot of value.

They probably can't solve totally novel problems, but they are good at transposing existing solutions to new domains. I've built some pretty crazy stuff with just prompts - granted, I can prompt with detailed technical instructions when needed since I'm an SWE, similar to instructing a junior. I've built prototypes in hours that would previously have taken days, which to me is hugely exciting.

The quality of purely AI-generated code isn't great, so my approach right now is to prototype things mostly with prompts (building a prototype takes about as much time as it previously took to create a mock-up or a document explaining the idea). Then, once we're committed to it, I'll rebuild it mostly by hand, using Cursor to help.



