
+1 - I wish at least one of these AI boosters had shown us a real commercialised product they've built.


AI boosters? Like people are planted by Sam Altman like the way they hire crowds for political events or something? Hey! Maybe I’m AI! You’re absolutely right!

In seriousness: I’m sure there are projects that are heavily powered by Claude. I, and a lot of other people I know, use Claude almost exclusively to write code, and then leverage it as a tool when reviewing. Almost everyone I hear from who has this super negative, hostile attitude references some “promise” that has gone unfulfilled, but that’s so silly: judge the product they are producing, and maybe, just maybe, consider the rate of progress to _guess_ where things are heading.


I never said "planted"; that is your own assumption, and a wrong one at that. I do respect it though, as it is at least the product of a human mind. But you don't have to be "planted" to champion an idea; you are clearly championing it out of some kind of conviction, as many seem to do. I was just giving you a bit of a reality check.

You want to show me how to "guess where things are heading"? I am actually one of the early adopters of LLMs and have been engineering software professionally for almost half my life now. Why do you think I was an early adopter? Because I was skeptical or afraid of that tech? No, I was genuinely excited. Yes, you can produce mountains of code, even more so if you were already an experienced engineer, like myself for example.

Yes, you can even get it to produce somewhat acceptable outputs, with a lot of prompting effort and the fatigue that comes with it. But at the end of the day, as an experienced engineer, I am not more productive with it; I end up being less productive because of all the sharp edges I have to take care of: the sloppily produced code, unnecessary bloat, hallucinated or injected libraries, etc.

Maybe for folks who were not good at maths or had trouble understanding how computers work, this looks like a brave new world of opportunities. Surely that app looks good to you; how bad can it be? Just so you and other such vibe-coders understand, here is a parallel.

It is actually fairly simple for a group of aviation enthusiasts to build a flying airplane. We just need to work out some basic mechanics and controls and attach engines. It can be done; I've seen a couple of documentaries too. However, those planes are shit. Why? Because my team of enthusiasts and I don't have the depth of knowledge of a team of aviation engineers to inform our decisions.

What is the tolerance for certain types of movements, what kind of materials do I need to pick, what should be the maintenance windows for various parts, etc.? There are things experts can decide almost intuitively, yet with great precision, based on their many years of craft and that wonderful thing called human intelligence.

So my team of enthusiasts puts together an airplane. Yeah, it flies. It can even be steered. It rolls, pitches and yaws. It takes off and lands. But to me it's a black box, because I don't understand the many, many factors, forces, pressures, tensors, effects, etc. that act on an airplane during its takeoff and flight. I am probably not even aware of WHAT I should be aware of, because I don't have that deep education in mechanical engineering, materials, aerodynamics, etc. Neither does my team.

So my plane, while impressive to me and my team, will never take off commercially, not unless a team of professionals takes it over and remakes it to professional standards. It will probably never even fly in a show. And if I or someone on my team dies flying it, you guessed it: our insurance sure as hell won't cover the costs.

So what you are doing with Claude and other tools, while it may look amazing to you, is not that impressive to the rest of us, because we can see the wheels beginning to fall off even before your first takeoff. Of course, before I can even tell that, I'd have to actually see your airplane, its design plans, etc. So perhaps first show us some of those "projects heavily powered by Claude" and their great success, especially commercial success (otherwise it's a toy project), before you talk about them.

The fact that you are clearly not an expert on the topic of software engineering should guide you here - unless you know what you are talking about, it's better to not say anything at all.


How would you know whether he is an expert on the topic of software engineering or not?

For all I know, he is more competent than you; he figured out how to utilize Claude Code in a productive way, which is a point for him.

I'd have to guess whether you are an expert working on software not well suited for AI, or just average, with a stubborn attitude towards AI, who potentially hasn't tried the latest generation of models and agentic harnesses.


> How would you know whether he is an expert on the topic of software engineering or not?

Because of their views on the effectiveness of AI agents for generating code.


Considering those views are shared by a number of high profile, skilled engineers, this is obviously no basis for doubting someone's expertise.


I think it's worth framing things back to what we're reacting to. The top poster said:

> I really really want this to be true. I want to be relevant. I don’t know what to do if all those predictions are true and there is no need (or very little need) for programmers anymore.

The rest of the post is basically their own human declaration of obsolescence in the programming field. To which someone reacted by saying that this sounds like shilling. And indeed it does to many professional developers, including those who supplement their craft with LLMs. Declaring that you feel inadequate because of LLMs only reveals something about you.

Defending this position is a tell that puts anyone sharing that perspective in the same boat: you didn't know what you were doing in the first place. It's like when someone who couldn't solve the "invert a binary tree" problem gets offended because they believe they were tricked into an impossible task. No, you may be a smart person who understands enough of the rudiments of programming to hack together some interesting scripts, but that's actually a pretty easy problem, and failing to solve it indeed signals that you lack some fundamentals.

> Considering those views are shared by a number of high profile, skilled engineers, this is obviously no basis for doubting someone's expertise.

I've read Antirez, Simon Willison, Bryan Cantrill, and Armin Ronacher on how they work or want to work with AI. From none of them have I gotten the attitude that they're no longer needed as part of the process.


> Considering those views are shared by a number of high profile, skilled engineers, this is obviously no basis for doubting someone's expertise

Again, a lot of fluff, a lot of "a number ofs", "highly this, highly that", but very little concrete information. What happened to the pocket PhDs promised for this past summer? Where are the single-dude billion-dollar companies built with AI tools? Or even the multiple-dude billion-dollar companies? What are you talking about?


I've yet to see it from someone who isn't directly or indirectly affiliated with an organisation that would benefit from increased AI tool adoption. Not saying it's impossible, but...

Whereas there are what feels like endless examples of high profile, skilled engineers who are calling BS on the whole thing.


You can say the same about people saying the opposite. I haven’t heard from a single person saying AI can’t write code who does not have a financial interest, directly or indirectly, in humans writing code.


Nobody says AI "can't write code". It very clearly can.


That seems rather disingenuous to me. I see many posts which clearly come from developers like you and me who are happy with the results they are getting.

Every time, people on here comment something about "shilling" or "boosters". It would seem to me that only in the rarest of cases does someone share their opinion to profit from it, while you act like that is super common.


Right: they disagree with me and so must not know what they’re talking about. Hey, guess how I know neither of you is as good as you think you are: your egos! You know what the brightest people at the top of their respective fields have in common? They tend not to think that new technologies they don’t understand how to use are dumb, and they don’t think everyone who disagrees with them is dumb!


> you are clearly not an expert on the topic of software engineering should guide you here - unless you know what you are talking about, it's better to not say anything at all.

Yikes, pretty condescending. Also wrong!

IMO you are strawmanning pretty heavily here.

Believe it or not, using Claude to improve your productivity is pretty dissimilar to vibe coding a commercial airplane(?) which I would agree is probably not FAA approved.

I prefer not to toot my own horn, but to address an idea you seem to have that I don’t know math or CS(?): I have a PhD in astrophysics and a decade of industry experience in tech and other domains, so I’m fairly certain I know how math and computers work. But maybe not!


I’m an expert in what I do. A professional, and few people can do what I do. I have to say you are wrong. AI is changing the game. What you’ve written here might’ve been more relevant about 9 months ago, but everything has changed.


This is a typical no-proof "AI"-boosting response, and from an account created only 35 days ago.


Right I’m a bot made to promote AI like half the people on this thread.

I don’t know if you noticed a difference from other hype cycles but other ones were speculative. This one is also speculative but the greater divide is that the literal on the ground usefulness of AI is ALREADY going to change the world.

The speculation is that the AI will get better and will no longer need hand holding.


I'm having a lot of trouble understanding what you're trying to convey. You say there's a difference from previous "speculation" but also that it's still speculation. Then you go on to write "ALREADY going to" which is future tense (speculation), even clarifying what the speculation is.

Is this sarcasm, ragebait, or a serious argument?


Serious.

So let me explain it more clearly. AI as it is now is already changing the game. It will reduce the demand for SWEs across every company as an eventuality, even if we hold technological progress fixed. There is no speculation here. This comes from on-the-ground evidence: what I see day to day, what I do, and my experience pair programming things from scratch with AI.

The speculation is this: if we follow the trendlines of AI improvement over the past decade and a half, the projection of past improvement indicates AI will only get better and better. It’s a reasonable speculation, but it is nonetheless speculative. I wouldn’t bet my life on continuous improvement of AI to the point of AGI, but it’s now, more than ever, a speculation that is not unrealistic.


>AI is ALREADY going to change the world.

Nice slop response. This is the same thing that was said about blockchain and NFTs: same schtick, different tech. The only thing "AI" has done is convince some people that it's a magical being that knows everything. Your comments seem to be somewhere on that spectrum. And sure, what if it isn't changing the world for the better, and actually makes things much worse? You're probably okay with that too, I guess, as long as your precious "AI" is doing the changing.

We've seen what social media and every-waking-hour access to tablets and the internet has done to kids - so much harm that some countries have banned social media for people under a certain age. I can see a future where "AI" will also be banned for minors to use, probably pretty soon too. The harms from "AI" being able to placate instead of create should be obvious, and children shouldn't be able to use it without adult supervision.

>The speculation is that the AI will get better and will no longer need hand holding.

This is nonsense. No AI is going to produce what someone wants without telling it exactly what to do and how to do it, so yes, it will always need hand holding, unless you like slurping up slop. I don't know you, if you aren't a bot, you might just be satisfied with slop? It's a race to the bottom, and it's not going to end up the way you think it will.


>This is nonsense. No AI is going to produce what someone wants without telling it exactly what to do and how to do it, so yes, it will always need hand holding, unless you like slurping up slop. I don't know you, if you aren't a bot, you might just be satisfied with slop? It's a race to the bottom, and it's not going to end up the way you think it will.

You're not thinking clearly. A couple of years ago we didn't even have AI that could do this; then ChatGPT came out and we had AI that could barely do it; then we had AI that could do simple tasks with a lot of hand holding; now we have AI that can do complex human tasks with minimal hand holding. Where do you think the trendline is pointing?

Your hypothesis goes against all the evidence. It's wishful thinking, and irrational. You call it a race to the bottom because you wish it were a race to the bottom, and we both know the trendline is pointing in the opposite direction.

>We've seen what social media and every-waking-hour access to tablets and the internet has done to kids - so much harm that some countries have banned social media for people under a certain age. I can see a future where "AI" will also be banned for minors to use, probably pretty soon too. The harms from "AI" being able to placate instead of create should be obvious, and children shouldn't be able to use it without adult supervision.

I agree AI is bad for us. My claim is that it's going to change the world and is already replacing human tasks. That's all. Whether that's good or bad for us is an ORTHOGONAL argument.


I use AI every day, and it's honestly crap. No, it isn't significantly improving; it's hitting a wall. Every new model release is less and less of an improvement, so no, the "trendline" is not going up as much as you seem to think it is. It's plateaued. The only way "AI" is going to change the world is if stupid people put it in places it really shouldn't be, thinking it will solve problems and not create even more problems.


Proof of what? Should you also have to prove you are not a bot sponsored by short-sellers? It’s all so, so silly; the anti-AI crowd on HN rehashes so many of the same tired arguments it’s ridiculous:

- Bad for the environment: how? Why?

- Takes all creative output and doesn’t credit it: Common Crawl has been around for decades and models have been training for decades; the difference is that now they’re good. Regurgitating training data is a known issue for which there are mitigations, but welcome to the world of things not being as idealistic as the Stallman-esque hellscape everyone seems to want to live in.

- It’s bad, so no one should use it, and any professionals who do don’t know what they’re doing: I have been so fortunate to personally know some of the brightest minds on this planet (astro departments, AI research labs), and the majority of them use AI for their jobs.


>Should you also have to prove you are not a bot sponsored by short-sellers?

On a 35 day-old account, yes. Anything "post-AI" is suspect now.

The rest of your comment reads like manufactured AI slop, replying to things I never even wrote in my one-sentence comment. And no surprise coming from an account created 1 day ago.


I think it’s quite obvious I’m not writing AI slop.

The latest ChatGPT, for example, will produce comments that are now distinguishable from the real thing only because they’re much better written. It’s insane that the main visible marker right now is that the arguments and writing it crafts are superior to what your average Joe can write.

My shit writing can’t hold a candle to it, and that’s pretty obvious. AI slop is not accepted here, but I can post an example of what AI slop will now look like. If AI responded to you, it would look like this:

Fair to be skeptical of new accounts. But account age and “sounds like AI” are not workable filters for truth. Humans can write like bots, bots can write like humans, and both can be new. That standard selects for tenure, not correctness.

More importantly, you did not engage any claim. If the position is simply “post-AI content from new accounts is suspect,” say that as a moderation concern. But as an argument, suspicion alone does not refute anything.

Pick one concrete claim and say why it is wrong or what evidence would change your mind. Otherwise “this reads like slop” is just pattern matching. That is exactly the failure mode being complained about.


I accused another user of writing AI slop in this specific thread, and here you are inserting yourself as if you are replying to the comment I made to the other user. You certainly seem desperate to boost "AI" as much as you can. Your 37-day-old account is just as suspect as their 3-day-old account. I'm not engaging with you any more, so replying is kind of pointless.


> I’m an expert in what I do. A professional, and few people can do what I do

Are you an astronaut?


Obviously not, troll. I know I’m bragging, but I have to emphasize that it is not some stupid "only domain experts know AI is shit; everyone else is too stupid to understand how bad it is" situation. That is patently wrong.

Few people can do what I do, and as a result I likely make more money than you. But now, with AI, everyone can do what I do. It has leveled the playing field; what I was before now matters fuck all. Understand?

I still make money right now. But that’s unlikely to last very long. I fully expect it to disappear within the next decade.


You are wrong. People like yourself will likely be smart enough to stay well employed into the future. It's the folks who are arguing with you trying to say that AI is useless who will quickly lose their jobs. And they'll be all shocked Pikachu face when they get a pink slip while their role gets reassigned to an AI agent.


> It's the folks who are arguing with you trying to say that AI is useless who will quickly lose their jobs.

Why is it that in every hype cycle there are always guys like you who want to punish the non-believers? It's not enough to be potentially proven correct; your anger requires the demise of the heretics. It was the same story with cryptocurrencies.


He/she is probably one of those poor souls working for an AI-wrapper-startup who received a ton of compensation in "equity", which will be worth nothing when their founders get acquihired, Windsurf style ;) But until then, they get to threaten us all with the impending doom, because hey, they are looking into the eye of the storm, writing Very Complex Queries against the AI API or whatever...


Isn’t this the same type of emotional response he’s getting accused of? You’re speculating that he will be “punished”, just as he speculated about you.

There are emotions on both sides, and the goal is to call them out, throw them to the side, and cut through to the substance. The attitude should be "which one of us is actually right?" rather than the "I’m right and you’re a fucking idiot" attitude I see everywhere.


Mate, I could not care less if he/she got "punished" or not. I was just guessing at what might drive someone to go and answer each and every one of my posts with very low quality comments, reeking of desperation and "elon-style" humour (cheap, cringe puns). You are assuming too much here.


Maybe he was just assuming something negative as well.

Both certainly look very negative and over the top.


Not too dissimilar to you. I wrote long rebuttals to your points, and you just descended into put-downs, stalking and false accusations. You essentially told me to fuck off from all of HN in one of your posts.

So it’s not like your anger is any better.


Bro, idk why you waste your time writing all this. No one cares that you were an early adopter; all that means is that you used the rudimentary LLM implementations that were available from 2022-2024, which are now completely obsolete. Whatever experience you think you have with AI tools is useless because you clearly haven't kept up with the times. AI platforms and tools have been changing quickly; every six months the capabilities have massively improved.

Next time, before you waste ten minutes typing out these self-aggrandizing tirades, maybe try asking the AI to just write it for you instead.


Maybe he's already ahead of you by not using current models: 2026 models are going to make 2025 models completely obsolete, so wasting time on them is dumb.


Hear hear!


This is such a fantastic response. And outsiders should very well be made aware of what kind of plane they are stepping into. No offence to the aviation enthusiasts in your example, but I will do everything in my power to avoid getting on their plane, in the same way I will do everything in my power to avoid using AI-coded software that does anything important or critical...


> but I will do everything in my power to avoid getting on their plane

Speaking of airplanes... considering how much LLM usage is being pushed top-down in many places, I wonder how long until news drops of some catastrophic one-liner that got through via LLM-generated code...


Are you joking? You realize entire companies and startups are littered with ppl who only use AI.


> littered with ppl who only use AI

"Littered" is a great verb to use here. Also I did not ask for a deviated proxy non-measure, like how many people who are choking themselves to death in a meaningless bullshit job are now surviving by having LLMs generate their spreadsheets and presentations. I asked for solid proof of succesful, commercial products built up by dreaming them up through LLMs.


The proof is all around you. I am talking about software professionals, not some bullshit spreadsheet thing.

What I’m saying is this: from my POV, everyone is using LLMs to write code now. The overwhelming majority of software products in existence today are now being changed with LLM code.

The majority of software products being created from scratch are also mostly LLM code.

This is obvious to me. It’s not speculation; where I live, where I’m from, and where I work, it’s the obvious status quo. When I see someone like you, I think that because the change happened so fast, you’re one of the people living in a bubble. Your company and the people around you haven’t started using it because the culture hasn’t caught up.

Wait until you have that one coworker who’s going at 10x the speed of everyone else and you find out it’s because of AI. That is what will slowly happen to these bubbles. To keep pace, you will have to switch to AI to see the difference.

I also don’t know how to offer you proof. Do you use Google? If so, you’ve used products that have been changed by LLM code. Is that proof? Do you use any products built by a startup in the last year? The majority of that code will have been written by an LLM.


> Your company and the people around you haven’t started using it because the culture hasn’t caught up.

We have been using LLMs since 2021, if I haven't repeated that enough in these threads. What culture do I have to catch up with? I have been paying for top-tier LLM models for my entire team since it became an option. Do you think you are proselytizing to the un-initiated here? That is a naive view at best. My issue is that the tools are at best a worse replacement for pre-2019 Google search and at worst a huge danger in the hands of people who don't know what they are doing.


Doesn’t make sense to me. If it’s bad, why pay for the tool?

Obviously your team disagrees that it’s a worse replacement for Google, or else why would they demand it against your will?

> at worst a huge danger in the hands of people who don't know what they are doing.

I agree with this. But the upside negates it, and I agree with your own team on that.

Btw, if you’re paying top dollar for AI, your developers are unlikely to be using it as a Google search replacement. At top dollar, AI is used as an agent, and what it ends up doing in this mode is extremely different from a Google search. That may be good or bad, but it is a distinctly different outcome than a Google search, and that makes your Google analogy ill-fitted to what your team is actually using it for.


Have you had your head in the sand for the past two years?

At the recent AWS conference, they were showcasing Kiro extensively, with real-life products that have been built with it. And the Amazon developers all allege that they've been using Kiro and other AI tools and agents heavily for the past year+ now to build AWS's own services. Google and Microsoft have also reported similar internal efforts.

The platforms you interact with on a daily basis are now all being built with the help of AI tools and agents

If you think no one is building real commercial products with AI, then you are either blind or an idiot or both. Why don't you just spend two seconds emailing your company's AWS ProServe folks and asking them? I'm sure they'll give you a laundry list of things they're using AI for internally, and sign you up for a Kiro demo as well.


Amazon, Google and Microsoft are balls-deep invested in AI; a rational person should draw zero conclusions from them showcasing how productive they are with it.

I'd say it's more that the fear of their $50-billion-plus investments not paying off is creeping up on them.


It’s ok to have this prior, but these are not speculative tools and capabilities; they exist today. If you remain unimpressed by them, that’s fine. But to deny that real people (not bots!) and real companies (we measure lots of stuff; I’ve seen the data at a large MAANG and have used their internal and external tools) get serious benefits _today_, when we still have about 4 more orders of magnitude to scale _existing_ paradigms... the writing on the wall is so obvious. It’s fine and reasonable to be skeptical, and there are so many serious societal risks and issues to worry about and champion, but if your position is akin to “this is all hype”, it makes absolutely no sense to me.


I'm sure you're interacting with a ton of tools built via agents, ironically even in software engineering people are trying to human-wash AI code due to anti-AI bias by people who should know better (if you think 100% of LLM outputs are "slop", with no quality consideration factored in, you're hopelessly biased). The commercialized seems like an arbitrary and pointless bar; I've seen some hot garbage that's "commercialized" and some great code that's not.


> The commercialized seems like an arbitrary and pointless bar

The point is that without mentioning specific software that readers know about, there isn’t really a way to evaluate a claim of 20x.


> I'm sure you're interacting with a ton of tools built via agents, ironically even in software engineering people are trying to human-wash AI code due to anti-AI bias

Please, just for fun, reach out to, for example, Klarna support via their website, and tell me how much of your experience can be attributed to anti-AI bias and how much to the fact that LLMs are complete shit for any important production use case.


My man here is reaching out to Klarna support; this tells a LOT about his life decision-making skills, which clearly shine through in his comments on the topic of AI as well.


Klarna also functions as a payment provider, not just a payday loan service (which is what you are implying, I assume). This comment says more about you.



