Hacker News

A lot of computer users are domain experts in something like chemistry, physics, or materials science. Computing to them is just a tool in their field, e.g. simulating molecular dynamics or radiative transfer. They dot every i and cross every t _in_their_competency_domain_, but the underlying code may be a horrible FORTRAN mess. LLMs can potentially help them write modern code using modern libraries and tooling.

My go-to analogy is assembly language programming: it used to be an essential skill, but now it is essentially delegated to compilers outside of a few specialized cases. I think LLMs will be seen as the compiler technology of the next wave of computing.



The difference is that compilers are built on rules we can enumerate, inspect, and adjust.

Consider calculators: their consistency and adherence to requirements was necessary for adoption. Nobody would be using them if they gave unpredictably wrong answers, or if calculations involving 420 and 69 somehow kept yielding 5318008. (To be read upside-down, of course.)


But that's the point: an LLM is a vastly different object from a calculator. It's a new type of tool, for better or worse, built on probabilities and distributions.

If you can internalise that fact and treat its output as a probable answer rather than an exact one, it makes sense.

Calculators can't take a stab at writing an entire C compiler. A lot of people can't either, or it takes a lot of iteration; nobody one-shotted complicated code before LLMs either.

I feel the discussion shouldn't treat how they work as the fundamental objection, but rather focus on the costs and impacts they have.


Compilers used to be unreliable too, e.g. at higher optimization levels. People worked on them and they got better.

I think LLMs will get better, as well.


nice. 3x.



