
The cognitive dissonance of seemingly educated people defending LLMs when it comes to writing code has been my top mystery for the entirety of 2024.

Call me if you find a good reason. I still haven't found one.




In my opinion, people who harp on about how LLMs have been a game changer for them reveal themselves as never actually having built anything sophisticated enough for a team of engineers to work on and extend for years.


This back and forth is so tiring.

I have built web services used by many Fortune 100 companies, built and maintained and operated them for many years.

But I'm not doing that anymore. Now I'm working on my own, building lots of prototypes and proofs of concept. For that I've found LLMs to be extremely helpful and time-saving. Who the hell cares if it's not maintainable for years? I'll likely be throwing it out anyway. The point is not to build a maintainable system; it's to see if the system is worth maintaining at all.

Are there software engineers who will not find LLMs helpful? Absolutely. Are there software engineers who will find LLMs extremely helpful? Absolutely.

Both can exist at the same time.


I agree with you, and I don't think OP disagrees either. The point of contention is the inevitable and immediate death of programming as a profession.


Surely nobody has that binary a view?

What are the likely impacts over the next 1, 5, 10, or 20 years? People getting into development now have the most incredible technology to help them skill up, but also more risk than we had in decades past. There's a continuum of impact; it's not 0 or 100%, and it's not immediate.

What I consider inevitable: humans will keep trying to automate anything that looks repeatable. As long as there is a good chance of financial gain from adding automation, we'll try it. Coding is now potentially at risk of increasing automation, with wildcards on "how much" and "what will the impact be". I'm extremely happy to have nuanced discussions, but I balk at both extremes of "LLMs can scale to hard AGI, give up now" and "we're safe forever". We need shorthand for our assumptions and beliefs so we can discuss differences on the finer points without fixating on obviously incorrect straw men. (The latter aimed at the general tone of these discussions, not your comment.)


And I'll keep telling you that I never said I'm safe forever.


And I never said both can't exist at the same time. Are you certain you are not the one fighting straw men and are tiring yourself with the imagined extreme dichotomy?

My issue is with people claiming LLMs are undoubtedly going to remove programming as a profession. LLMs work fine for one-off code -- when they don't make mistakes even there, that is. They don't work for a lot of other areas, like code you have to iterate on multiple times because the outside world and the business requirements keep changing.

Works for you? Good! Use it, get more productive; you'll only get applause from me. But my work does not involve one-off code, and for me LLMs are not impressive, because I've had to rewrite their code (and eyeball it for bugs) multiple times.

"Right tool for the job" and all.


"Game changer" maybe.

But what you'll probably find is that people who are skilled communicators are currently getting a decent productivity boost from LLMs, and I suspect that the difference between many who are bullish vs. bearish quite likely comes down to the ability to structure and communicate thoughts effectively.

Personally, I've found AI to be a large productivity boost - especially once I've put certain patterns and structure into code. It's then like hitting the N2O button on the keyboard.

Sure, there are people making toy apps using LLMs that are going to quickly become an unmaintainable mess, but don't be too quick to assume that LLMs aren't already making an impact within production systems. I know from experience that they are.


> I suspect that the difference between many who are bullish vs. bearish quite likely comes down to the ability to structure and communicate thoughts effectively.

Strange perspective. I found LLMs lacking in less popular programming languages, for example. It's mostly down to statistics.

I agree that being able to communicate well with an LLM gets you better results. It's a productivity enabler of sorts. It is not a game changer, however.

> don't be too quick to assume that LLMs aren't already making an impact within production systems. I know from experience that they are.

OK, I am open to proof. But people just keep saying it and leaving the claims hanging.


Yep, as cynical and demeaning as that must sound to them, I am arriving at the same conclusion.


My last project made millions for the bank I was working at within the first two years and is now a case study at one of our extremely large vendors, whom you have definitely heard of. I conceptualised it, designed it, and wrote the most important code. My boss said my contribution would last decades. You persist in making statements about people in the discussion when you know nothing about their context aside from one opinion on one issue. Focus on the argument, not the people.


You're the one who claimed that I'm arrogant and pulled that out of thin air. Take your own advice.

I also have no idea what your comment here has to do with LLMs; will you elaborate?


[flagged]


Your accusing me of being arrogant is schoolyard stuff indeed. You started it, you got called out, and now you're acting superior. I owe you no grace if you started off the way you did in the other sub-thread.

Go away, I don't want you in my replies. Let me discuss with people who actually address the argument and aren't looking to paint me as... something.


All the programming for the Apollo Program took less than a year, and Microsoft Teams is decades in development; obviously they are better than NASA programmers.


The programming for the Apollo program was very simple; the logistics of the mission, which had nothing to do with programming, were what was complex.

You're essentially saying "the programming to do basic arithmetic and physics took only a year" as if that's remotely impressive compared to the complexity of something like Microsoft Teams. Simultaneous editing of a document is, by itself, more complicated than anything the Apollo program had to do.


I want to not like this comment, but I think you are right! There's a reason people like to say your watch has more compute power than the computers it took to put a man on the moon.


But that's the thing, right? It is not "seemingly": there are A LOT of highly educated people here telling you LLMs are doing amazing shit for them. Perhaps a wise response would be "lemme go back and see whether they can also become part of my own toolbox…"

I have spent 24 years coding without LLMs; now I cannot fathom spending more than a day without them…


I have tried them a few times but they were only good at generating a few snippets.

If you have scrutable and interesting examples, I am willing to look them up and try to apply them to my work.


Which needs does it fulfil that you weren't able to fulfil yourself? (Advance prediction: these are needs that would be fulfilled better by improvements to conventional tooling.)


The short answer is they're trying to sell it.


Agreed, but I can't imagine that everybody here on HN who's defending LLMs so... misguidedly, so obviously ignoring observable reality... is financially interested in LLMs' success. Can they all be?


I'm financially interested in Anthropic being successful, since it means their prices are more likely to go down, or their models to get (even) better.

Honestly, if you don’t think it works for you, that’s fine with me. I just feel the dismissive attitude is weird since it’s so incredibly useful to me.


Given they're a VC-backed company rushing to claim market share and apparently selling well below cost, it's not clear that will be the case. Compare the ride-share companies for one case where prices went up once they were successful.


Why is it weird, exactly? I don't write throwaway projects so the mostly one-off nature of LLM code generation is not useful to me. And I'm not the only one either.

If you can give examples of incredible usefulness then that can advance the discussion. Otherwise it's just us trying to out-shout each other, and I'm not interested in that.


It's a message board frequented by extremely tech-involved people. I'd expect the vast majority of people here to have some financial interest in LLMs - big-tech equity, direct employment, AI startup, AI influencer, or whatever.


Yeah, very likely. It's the new gold rush, and they are commanding wages that make me drool (and also make me want to howl in pain and agony and envy but hey, let's not mention the obvious, shall we?).

I always forget the name of that law but... it's hard to make somebody understand something if their salary depends on them not understanding it.


For similar reasons, I can confidently say that your dislike of LLMs is sour grapes.


I could take you seriously if you didn't make elementary English mistakes. Also, think what you like; it makes zero difference to me.


Conflict of interest, perhaps.


Yep. Still makes me wonder whether they're not seeing the truth or just refusing to admit it.


It’s a line by Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”


Orrrrr, simpler than imagining a vast conspiracy: your observations just don't match theirs. If you're writing, say, C# with some esoteric libraries using Copilot, it's easy to see it as glorified auto-complete that hallucinates to the point of being unusable, because there's not enough training data. If you're using Claude with Aider to write a webpage using NextJS, you'll see it as a revolution in programming, because so much of that is on Stack Overflow.

The other side of it is how much the code needs to work vs. how good it has to look. If you're used to engineering the most beautifully organized code before shipping once, the extra variable or unused crap in generated code looks damning; if you've shipped gross hacks you're ashamed of, where the absolute quality of the code is secondary to it working and passing tests, then it isn't as big an indictment of LLM-assisted programming as you believe it to be.

Why do you think your observable reality is the only one, and the correct one at that? Looking at your mindset, as well as the objections to the contrary (and the objectors' belief that they're correct), the truth is likely somewhere in between the two extremes.


Where did I imply conspiracy? People have been known to turn a blind eye to criticism of stuff they like ever since... forever.

The funny thing about the rest of your comment is that I'm in full agreement with you, but somehow you decided that I'm an extremist. I'm not. I'm simply tired of people who make zero effort to prove their hypothesis and just call me arrogant or old / stuck in my ways, again with zero demonstration of how LLMs "revolutionize" programming, _exactly_.

And you have made more effort in that direction than most people I've discussed this with, by the way. Thanks for that.


You said you couldn't imagine a conspiracy, and I was responding to that. As far as zero demonstration goes, simonw has a bunch of detailed examples at https://simonwillison.net/tags/ai-assisted-programming/ or maybe https://simonwillison.net/tags/claude-artifacts/. But the proof is in the pudding, as they say, so setting aside some time and $20 to install Aider, getting it working with Claude, and then building a web app is the best way to experience either the second coming or an overhyped letdown (or somewhere in the middle).

Still, I don't think it's a hypothesis that most are operating under, but a lived experience: either it works for them or it does not. Just the other day I used ChatGPT to write me a program to split a file into chunks along a delimiter. Something I could absolutely do, in at least a half-dozen languages, but writing that program myself would have distracted me from the actual task at hand, so I had the LLM do it. It's a trivial script, but the point is I didn't have to break my concentration on the other task to get it done. https://chatgpt.com/share/67655615-cc44-8009-88c3-5a241d083a...
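
In case it helps, the script was roughly this shape (a from-memory sketch in Python, not the LLM's verbatim output; the output naming scheme here is a placeholder of mine):

    # Split a file into chunks along a delimiter string.
    # Illustrative sketch; the "path.chunkNNN" output names are made up.
    import sys

    def split_file(path, delimiter):
        # Read the whole file and split it on the delimiter
        with open(path, "r", encoding="utf-8") as f:
            chunks = f.read().split(delimiter)
        # Write each non-empty chunk to its own numbered file
        for i, chunk in enumerate(chunks):
            if chunk.strip():  # skip empty chunks between adjacent delimiters
                with open(f"{path}.chunk{i:03d}", "w", encoding="utf-8") as out:
                    out.write(chunk)

    if __name__ == "__main__":
        split_file(sys.argv[1], sys.argv[2])

Trivial, like I said; the win was not having to context-switch to write it.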

On a side project I'm working on, I said "now add a button to reset the camera view" (literally said it; Aider can take voice input). We're not quite at the scene from Star Trek where Scotty talks into the Mac to try to make transparent aluminum, but we're not that far off! The LLM went and added the button, wired it into a function call that reached into the rendering engine, and reset the view. Again, I very much could have done that myself, but it would have taken me longer just to flip through the files involved and type out the same thing. And it's not just the time saved (I didn't have a stopwatch and screen recorder running); it's not having to drop my thinking into that frame of reference, so I can think more deeply about the other problems to be solved. Sort of like why the CEO isn't an IC and why ICs aren't supposed to manage.

Detractors will point out that it must be a toy program, and that it won't work on a million-line code base, and maybe it won't, but there's just so much that LLMs can do, as they exist now, that it's not hyperbole to say they've redefined programming for those specific use cases. And where that use case is "build a web app", I don't know about you, but I use a lot of web apps these days.


These are the kind of informed takes that I love. Thank you.

> Detractors will point out that it must be a toy program, and that it won't work on a million-line code base, and maybe it won't

You might call me a detractor. I think of myself as being informed and feeling the need to point out where LLMs end and programmers begin, because apparently even on an educated forum like HN people shill and exaggerate all the time. That was the reason for most of my comments in this thread.


And it's the short wrong answer. I get absolutely zero benefit if you or anyone else uses it. This is not for you or me; it's for young people who don't know what to do. I'm a person who uses a few LLMs, and I do so because I see the risks and want to participate rather than have change imposed on me.


OK, do that. Seems like a case of FOMO to me, but it might very well turn out that you're right and I'm wrong.

However, that's absolutely not clear today.


Genuine question: how hard have you tried to find a good reason?


Obviously not very hard. And looking at the blatant and unproven claims on HN gave me the impression that the proponents are not interested in giving proof; they simply want to silence anyone who disagrees that LLMs are useful for programming.



