Hacker News

> will we arrive at a point or will it even be desirable to get to a point where the text in your source "code" just remains the instruction you've given to the AI

These systems are non-deterministic by nature, so I doubt it unless something fundamentally changes. Moreover, you'd have to be so specific to capture the business logic that you're basically writing code in a high-level dynamic language anyway.




Giving a task to a team of programmers is similarly non-deterministic by nature. Otherwise we'd not see so many posts about budgets, planning, meetings, sprints and security bugs :)


Yes, but the execution of the source is (modulo network stuff) deterministic, which is my point.


Yes, but the source code itself is not non-deterministic, which is what the GP was talking about.


Right, that makes sense. You can't very well have a system where the AI instructions produce a different underlying program each time lol.


I don’t think that’s necessarily a problem. Technically every time you update gcc your compiler might produce a different underlying program for your source code.

The bigger problem is that LLMs are slow and expensive. Even in the future, after many improvements, it makes more sense to have an LLM write a program once than to have it write a new program on every compile or every execution.


> Technically every time you update gcc your compiler might produce a different underlying program for your source code.

Individual compiler versions are deterministic though. Two identical prompts to an LLM at the same time can give drastically different results, because the responses are probabilistic. You can't assemble complicated systems that way and expect them to behave consistently.
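A toy illustration of the "probabilistic responses" point: LLMs sample each token from a probability distribution, so the same prompt can yield different completions on different runs. The candidate completions and their probabilities below are made up for the sake of the sketch.

```python
import random

# Made-up next-token candidates for one fixed "prompt",
# with made-up sampling probabilities.
candidates = ["return x + y", "return y + x", "return sum([x, y])"]
probs = [0.5, 0.3, 0.2]

# Each run draws fresh samples, so identical prompts can
# produce different "programs" from one run to the next.
samples = [random.choices(candidates, weights=probs, k=1)[0] for _ in range(5)]
print(samples)
```

All three completions compute the same sum here, but nothing forces sampled outputs to stay semantically equivalent in general, which is the consistency problem.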


The issue you’re describing has more to do with correctness and performance (two critical elements of a good compiler) than with nondeterminism.

If a natural language compiler can output correct performant code, nondeterminism shouldn’t matter.

For example, take a script that randomly invokes either gcc or clang and maybe randomly sets the optimization level. Multiple invocations will produce vastly different output, but we can be confident the output is correct and to some degree performant.
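That random-compiler script could be sketched like this (a toy driver; `main.c`, the flag set, and the output name are placeholders, and the actual compile call is left commented out):

```python
import random
import subprocess  # noqa: F401  (used only if you uncomment the compile step)

# Randomly pick a compiler and an optimization level, as described above.
compiler = random.choice(["gcc", "clang"])
opt = random.choice(["-O0", "-O1", "-O2", "-O3"])

cmd = [compiler, opt, "main.c", "-o", "main"]
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment to actually compile
```

The binaries differ byte-for-byte across runs, but each one is a correct translation of `main.c`, which is the sense in which nondeterminism here doesn't hurt correctness.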


No, my point is that nondeterminism affects correctness. A random script that invokes different compilers is a contrived example; no one would ever build a system that way, and it is totally undesirable. Moreover, I'm not sure how we could determine the correctness of a system generated by an LLM without auditing the output and certifying each run. Who would ever want to work that way? This just creates problems that don't need to exist.


> Who would ever want to work that way? This just creates problems that don't need to exist.

That depends on what you want.

In the first place, the problem of compiling a natural language spec to code is obviously somewhere from undefined to Turing complete (depending on formulation). But if the compiler usually outputs some application with most of what the spec required, this compiler would be intensely useful for e.g. rapid prototyping.

Then the question is whether we can make an LLM based app that compiles natural language and gets you most of the way to the prototype you were building (or even better - asks clarifying questions to help refine your spec).

This isn’t that far fetched with current technology.


I can totally see the case for prototyping, it makes some sense. I just think that by the time you are specifying something so clearly that the results are correct, you may well be practically programming in a super-high-level dynamic language.



