
That’s an easily solvable problem for programming. ChatGPT already has an embedded Python runtime that it can use to verify its own code, and I have seen it try different techniques when the code doesn’t give the expected answer. The clearest case I can remember was generating a regex.
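
A minimal sketch of that generate-and-verify loop, assuming the model is asked for a regex and checked against known test cases. Here generate_candidate_regex and the TESTS list are hypothetical stand-ins for the model call and the expected answers; in ChatGPT's case the model writes and runs this kind of check itself in its sandbox:

    import re

    # Hypothetical stand-in for asking the model for a new candidate
    # regex after a failure report.
    def generate_candidate_regex(attempt: int) -> str:
        candidates = [
            r"\d+",             # first try: any run of digits
            r"\b\d{3}-\d{4}\b", # revised try after the first one fails
        ]
        return candidates[min(attempt, len(candidates) - 1)]

    # Assumed test cases: input strings paired with the match we expect.
    TESTS = [("call 555-1234 now", "555-1234")]

    def find_working_regex(max_attempts: int = 5) -> str | None:
        for attempt in range(max_attempts):
            pattern = generate_candidate_regex(attempt)
            if all(
                (m := re.search(pattern, text)) and m.group() == expected
                for text, expected in TESTS
            ):
                return pattern  # verified against the expected answers
        return None  # give up and report failure instead of guessing

    print(find_working_regex())  # -> \b\d{3}-\d{4}\b

The point is that the verifier is dumb and deterministic; the model only has to keep proposing until the check passes or it honestly runs out of attempts.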

I don’t see any reason an IDE, especially for a statically typed language, can’t have an integrated AI that at least never hallucinates classes or functions that don’t exist.

Modern IDEs can already give you real-time errors across large solutions for code that won’t compile; the same machinery could gate AI suggestions before they ever reach the user, roughly as sketched below.
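
As a rough sketch of that gating idea (project_symbols is my assumed stand-in for an IDE's semantic index, and a bare AST walk is a toy substitute for a real compiler query), here's a check that flags identifiers an AI suggestion loads but nothing defines:

    import ast
    import builtins

    def hallucinated_names(snippet: str, project_symbols: set[str]) -> set[str]:
        """Names the snippet loads that neither the project, the snippet
        itself, nor Python's builtins define. A real IDE integration
        would query its semantic index instead of walking a bare AST."""
        tree = ast.parse(snippet)
        loaded, defined = set(), set(dir(builtins))
        for node in ast.walk(tree):
            if isinstance(node, ast.Name):
                # Loads are uses; stores are definitions made by the snippet.
                (loaded if isinstance(node.ctx, ast.Load) else defined).add(node.id)
            elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                defined.add(node.name)
        return loaded - defined - project_symbols

    # Example: the suggestion calls frobnicate(), which nothing defines.
    suggestion = "result = frobnicate(len(items))"
    print(hallucinated_names(suggestion, project_symbols={"items"}))
    # -> {'frobnicate'}  # the IDE could reject the suggestion or re-prompt

With a statically typed language this gets even stronger, since the compiler's own symbol table can serve as project_symbols and type-check the call sites too.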

Tools need to mature.



Yeah, but it would have to reason about the thing it just hallucinated, or it would have to be somehow hard-prompted. There will be more tooling and code around LLMs to make them behave like a human than people can imagine. People are trying to solve everything with LLMs, but LLMs have zero agency.



