
> Why code when you can just ask the computer to do what you want and get the results.

Because then you won't know the design of the code or how it even works.

The hard part of coding isn't writing the code itself. It's the design of the code that takes skill, and if you leave that part completely up to AI, you are taking your life in your hands. Bad idea.



I'm not saying it's a good idea. In fact, I watched someone on Twitter debugging code this way: when the application errored out, he regenerated the code, including the error in the prompt. Something else failed, the prompt was updated, and the code was regenerated again. That, of course, only works for errors you can see.

When the person building the application doesn't know or care, the application will still be deployed.


I recently had the pleasure of reviewing AI-generated Ruby code at work. It was nonsensical and couldn't even get basic map and reduce right. I didn't initially know it was AI-generated, and I was at a loss for words over what to write as feedback.
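To give a sense of the bar I mean by "basic map and reduce", here is a trivial Ruby sketch (hypothetical data, not the code I actually reviewed):

    # Hypothetical order data, just to illustrate idiomatic map/reduce.
    orders = [
      { item: "book", price: 12.5, qty: 2 },
      { item: "pen",  price: 1.2,  qty: 10 }
    ]

    # map: transform each element into a line total
    line_totals = orders.map { |o| o[:price] * o[:qty] }      # => [25.0, 12.0]

    # reduce: fold the line totals into a single sum
    grand_total = line_totals.reduce(0) { |sum, t| sum + t }  # => 37.0
    # equivalently: line_totals.sum

The code I was reviewing failed at roughly this level.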

Something needs to be done. It should be uncontroversial to require a solid understanding of fundamentals from software professionals, yet here we are discrediting knowledge by calling such things "gatekeeping." It's reckless behavior when the industry is hellbent on hoarding as much personal information as it possibly can, information that any responsible professional should, at the very least, be working to keep secure.


> When the person building the application doesn't know or care, the application will still be deployed.

Resistance is futile.

We will adapt.


This is especially true when you start vibe coding critical systems that human life depends on.

Emergency services, hospital infrastructure, and financial systems (like Social Security, where a missed check may actually mean people starve) are all places where you don't want to fail because of a weird edge case. And fixing those edge cases requires some understanding of design in general, as well as of the specific design that was implemented.

Then there's the question of liability when something goes wrong. LLMs are still computers right now: they do exactly, and only, what you tell them to do.


> Because then you won't know the design of the code or how it even works.

I would argue that this is already true of people who practice vibe coding; otherwise they'd spend less time by just banging the code out themselves instead of twisting prompts to get something that mostly works and still needs hours of debugging.


I can imagine a world where backend APIs are secured, hardened, protected against DoS attacks, and so on.

Then any end users with the proper credentials can vibe code UIs (web apps, iOS and Android apps) that call those APIs to their heart's content.

We may also need operating systems and web browsers hardened in new ways to survive vibe coded apps.
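As a rough sketch of what that hardened backend boundary could look like, here's a minimal Rack app in Ruby with token auth and a naive per-client rate limit (all names, tokens, and the in-memory throttle are hypothetical; a real deployment would do auth and rate limiting in dedicated infrastructure):

    # config.ru -- minimal hardened API sketch (hypothetical, illustrative only)
    require "json"

    class HardenedApi
      LIMIT_PER_MINUTE = 60
      VALID_TOKENS = { "demo-token-123" => "demo-user" } # hypothetical credentials

      def initialize
        @hits = Hash.new { |h, k| h[k] = [] } # token => recent request timestamps
      end

      def call(env)
        token = env["HTTP_AUTHORIZATION"].to_s.sub(/\ABearer /, "")
        user  = VALID_TOKENS[token]
        return respond(401, error: "invalid token") unless user

        # Naive sliding-window rate limit per token, to blunt accidental DoS
        # from a runaway vibe-coded client.
        now = Time.now.to_f
        @hits[token].reject! { |t| now - t > 60 }
        return respond(429, error: "rate limited") if @hits[token].size >= LIMIT_PER_MINUTE
        @hits[token] << now

        # The actual endpoint a vibe-coded UI would call.
        respond(200, user: user, message: "hello from the hardened backend")
      end

      private

      def respond(status, payload)
        [status, { "content-type" => "application/json" }, [JSON.generate(payload)]]
      end
    end

    run HardenedApi.new

A vibe-coded UI would then just send an Authorization: Bearer header with each request; anything unauthenticated or too chatty gets stopped at this layer rather than in the generated client code.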


SaaS products that support custom code are already basically like that. Salesforce has multiple shockingly large documents on all the limits they enforce: https://developer.salesforce.com/docs/atlas.en-us.apexcode.m...

That does mean it's hard to break the app, but it also means people quite frequently run into those limits.


Few people bother with assembly these days.



