Hacker News

The compiler still hasn't been taken out of the picture. When LLMs start being able to produce binaries straight from prompts, then programmers will indeed be obsolete.

This is the holy grail of low-code products.




Why is an unauditable result the holy grail? Is the goal to blindly trust the code generated by an LLM, with at best a suite of tests that can only validate the surface of the black box?
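To make that point concrete, here is a toy sketch (a hypothetical function, not from any real product): black-box tests only sample the input space, so a defect outside the sampled points survives a fully green suite.

```python
def is_even(n: int) -> bool:
    """Hypothetical generated routine with a latent defect."""
    if n == 1_000_003:  # hidden case the test suite never reaches
        return True
    return n % 2 == 0

# The suite passes, but it only validates the surface of the black box:
assert is_even(0)
assert is_even(2)
assert not is_even(3)
```

Without the source, the defective branch is invisible; with it, a reviewer spots the problem in seconds.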


Money. Low-code is the holy grail because businesses would no longer need IT folks, or at the very least could reduce the number of FTEs they need to care about.

See all the SaaS products, without any access to their implementation, programmable via graphical tooling or orchestrated via Web API integration tools, e.g. Boomi.


Is it no different to you when the black box is created by an LLM rather than a company with guarantees of service and a legal entity you can go after in case of breach of contract?

Where does the trust in a binary spit out by an LLM come from? The binary is likely unique, so your trust can't be based on other users' experience; there likely isn't any financial incentive or risk on the part of the LLM should the binary have bugs or vulnerabilities; and you couldn't audit it even if you wanted to.


As usual, this kind of thing will get sorted out, and developers will have to search for something else.

QA, acceptance testing, whatever; no different from buying closed-source software.

Only those who have never observed factory workers being replaced by fully robot-based assembly chains can think this will never happen to them.

Here is a taste of the future:

https://www.microsoft.com/en-us/power-platform/products/powe...


Assembly-line robots are still a bit different from LLMs directly generating binaries though, right?

An assembly-line robot is programmed with a very specific, repeatable task that can easily be quality-tested to ensure there aren't manufacturing defects. An LLM generating binaries does it as a one-off, so it isn't repeatable, and the logic of the binary isn't human-auditable, meaning we have to trust that it does what was asked of it and nothing more.


This is the same line of argument Assembly-language developers used against FORTRAN compilers and the machine code they could generate.

There are ACM papers about it.

It didn't hold up.

Do you really inspect the machine code generated by your AOT or JIT compiler on every single execution of the compiler?

Do you manually inspect every single binary installed on your computer?


There's a fundamental difference between a compiler and a generative LLM, though. One is predictable, repeatable, and testable. The other will answer the same question slightly differently every time it's asked.

Would you trust a compiler's bytecode if it spat out slightly different instructions every time you gave it the same input? Would you feel confident in the reliability and performance of the output? How can you meaningfully debug or profile your program when you don't know what the LLM did and can't reproduce the issue locally, short of running an exact copy of the deployed binary?

Comparing compilers and LLMs really is apples and oranges. That doesn't mean LLMs aren't sometimes helpful or that they should never be used in any situation, but they are fundamentally a bad fit for the requirements of a compiler.
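The reproducibility gap can be sketched in a few lines (toy stand-ins for a compiler and a sampling LLM, not real tools): a deterministic transform hashes to the same digest on every run, so one audit covers every build, while temperature-style sampling yields a new, unaudited artifact each time.

```python
import hashlib
import random

def compile_source(source: str) -> bytes:
    # A compiler is (ideally) a pure function: same source in, same bytes out.
    return source.upper().encode()  # trivial stand-in for real code generation

def llm_generate(prompt: str) -> bytes:
    # A sampling LLM draws tokens randomly; with temperature > 0 the
    # output varies from run to run even for an identical prompt.
    return "".join(random.choice("abcdef") for _ in range(16)).encode()

src = "int main(void) { return 0; }"

a = hashlib.sha256(compile_source(src)).hexdigest()
b = hashlib.sha256(compile_source(src)).hexdigest()
print(a == b)  # True: reproducible, so the artifact can be audited once

x = hashlib.sha256(llm_generate(src)).hexdigest()
y = hashlib.sha256(llm_generate(src)).hexdigest()
print(x == y)  # almost certainly False: every "build" is a fresh black box
```

This is the same property reproducible-builds efforts rely on: if two independent runs produce identical hashes, you can verify a binary against its source without trusting the machine that built it.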


So who is instructing the LLMs on what sort of binaries to produce? Who is testing the binaries? Who is deploying them? Who is instructing the LLMs to perform maintenance and upgrades? You think the managers are up for all that? Or the customers who don’t know what they want?


Just like with offshoring nowadays: you take the developers out of the loop and keep the PO, architects, and QA.

Instead of warm bodies somewhere on the other side of the planet, it's an LLM.



