
> The experiences seem so different, that I'm having a hard time wrapping my mind around it.

Because we only see very disjointed descriptions, with no attempt to quantify what we're talking about.

For every description of how LLMs work or don't work, we know some, but never all, of the following:

- Do we know which projects people work on? No

- Do we know which codebases (greenfield, mature, proprietary, etc.) people work on? No

- Do we know the level of expertise people have? Is that expertise in the same domain, codebase, and language they apply LLMs to?

- How much additional work did they have reviewing, fixing, deploying, finishing, etc.?

Even if you have one person describing all of the above, you will not be able to compare their experience to anyone else's, because you have no idea how others would answer any of those bullet points.

And that's before we get into how all these systems and agents are completely non-deterministic: what works now may not work even a minute from now for the exact same problem.

And that's before we ask how a senior engineer's experience with a greenfield React project, one agent, and one model can even be compared to a non-coding designer's experience in a closed-source proprietary OCaml codebase with a different agent and model (or even the same ones, because of the non-determinism).



> And that's before we get into how all these systems and agents are completely non-deterministic,

And that is the main issue. For some, the value is in reproducible results; for others, as long as they get a good result, it's fine.
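To make the non-determinism concrete, here is a minimal sketch (assuming the OpenAI Python SDK; the model name and prompt are just placeholders) that sends the exact same prompt twice and compares the answers:

    # Send the same prompt twice and compare. Even temperature=0 plus a fixed
    # seed is only best-effort reproducible on most hosted models, so the two
    # answers may still differ.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,        # the "most deterministic" setting
            seed=42,              # best-effort, not a guarantee
        )
        return resp.choices[0].message.content

    prompt = "Write a Python function that parses ISO-8601 dates."
    print("identical" if ask(prompt) == ask(prompt) else "different")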

It's like coin tossing. You may want tails every time, because that's your chosen bet. You may prefer tails, but not mind losing money if it comes up heads. You may not be interested in either, but you're doing the tossing and want to know the techniques that work best for getting tails. Or you're just tossing, and if it comes up tails, your reaction is only "That's interesting".

The coin itself does not matter, and the tossing is just an action. The output is what gets judged. And the judgment will vary based on the person doing it.

So software engineering used to be the pursuit of tails all the time (by putting the coin on the ground, not tossing it). Then LLM users say it's fine to toss the coin, because you'll get tails eventually. And companies are now pursuing the best coin-tossing techniques for getting tails. And for some, when the toss comes up tails, they only say "that's a nice toss".
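A toy sketch of that "toss until you get tails" bet (purely illustrative, in Python): with a fair coin the retry loop succeeds after about 2 tosses on average, which is the whole wager behind "just re-prompt until the output looks good".

    # Keep tossing until tails: P(no tails in n tosses) = (1/2)^n, so the
    # expected number of tosses is 2. Re-prompting an LLM is the same bet,
    # except nobody knows the coin's actual bias for their problem.
    import random

    def toss_until_tails() -> int:
        tosses = 0
        while True:
            tosses += 1
            if random.random() < 0.5:  # tails
                return tosses

    runs = [toss_until_tails() for _ in range(100_000)]
    print(sum(runs) / len(runs))       # ~2.0 on average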


> And companies are now pursuing the best coin-tossing techniques for getting tails.

The only difference is that coin-tossing techniques can be verified by comparing the results of the tosses. More generally, this is known as forcing: https://en.wikipedia.org/wiki/Forcing_(magic)

What we have instead is companies (and people) saying they have perfected the toss not just for a specific coin, but for any object in general, when it's very hard to prove that it's true even for a single coin :)

That said, I really like your comment :)



