ale's comments

This reads like 2022 hype. It's like people still do not understand that there's a correlation between exaggerating AI's alleged world-threatening capabilities and AI companies' market valuations – and guess who's doing the hyping.


Tell me about one other industry that talked up how dangerous its product is in order to gain market share


The arms industry and information security industry (say, Palantir) come to mind - except the danger is more easily demonstrable in those cases, of course.


> - and guess who's doing the hyping[?]

Those that stand to gain the most from government contracts.

Them party donations ain't gonna pay for themselves.

And when the .gov changes... and even if the gov changes... still loadsamoney!


Who wouldn't want to say their product is the second coming of Christ, if they could?


Comparing something like next.js to other frameworks doesn’t make much sense anymore given that most webdevs choose DX and easy deployment above anything else. Vercel’s growth is proof of that.


But Next's DX is no longer good. Vite has much better DX; the proof lies in the number of Vite-based frameworks in the post.


Native to the web like web components or a native platform?


The good and bad aspect of this approach to AI in tech is that it has really revealed how many developers out there are merely happy with getting something to work and get it out the door before clocking out, and not actually understanding the inner workings of their code.


This is almost inevitable when something industrializes; people maximize profit by quickly shipping things that barely work. We need people who try to excel in technology, and AI just amplifies this need.


> how many developers out there are merely happy with getting something to work and get it out the door

There's a very large number of cases where that's the right choice for the business.


Also for small CLI tools and scripts that otherwise wouldn't get written.


Except that "to work" really means "to seem to work on the first try"


I find it's actually a boon for small throwaway side projects that I don't care about and just want to have [1]

Actual code/projects? Detrimental

[1] E.g. I spent an evening on this: https://github.com/dmitriid/mop


whenever people complain about someone being "merely happy with getting something to work and get it out the door before clocking out" i wonder to myself if i'm dealing with someone that has The Protestant Ethic and the Spirit of Capitalism on their nightstand, or has never read Economic and Philosophic Manuscripts of 1844, or simply does not understand the significance of these two essays.

like ... you expect people to actually be committed to "the value of a hard day's work" for its own sake? when owners aren't committed to value of a hard day's worker? and you think that your position is the respectable/wise one? lol


Where did they say anything about a "hard day's work"? Are you making up arguments to attribute to them, lol

And are you assuming the alternative involves not clocking out? Because "clock out, finish when there's more time" is a very good option in many situations.


No, it's not about capitalism and exploitation, hard-work propaganda, etc. You can work to the contract (e.g. strictly what's in your work contract and not "above and beyond") while still retaining the quality of the work. So reduce the quantity but not the quality. This is about the ton of bootcamp developers created in the last 10ish years, for whom, unlike the rest of us, it is just a better-paid job.


in general it's safe to assume your conversation partner has not read every single essay you have and come away with the same exact thoughts


Given the remainder of the comment is "and not understanding the inner workings" it's safe to assume that "getting something to work" does not imply that it worked correctly.

Back in the days of SVN, I'd have to deal with people who committed syntax errors, broken unit tests, and other things that either worked but were obviously broken, or just flat out didn't work.

Taking a bit of pride in your work is as much for your coworkers as it is for yourself. Not everything needs to be some silly proles vs bourge screed.


Are CSRF attacks that common nowadays though? Even if your app is used by the 5% of browsers that don't set the Origin header, the chances of that being exploited are even more minuscule. Besides, most webdevs reach for token-based auth libraries before even knowing how to set a cookie header.


Curious about that too. In a modern web app I always set HttpOnly cookies to prevent them from being exposed to any JavaScript, and SameSite=strict. Especially the latter should prevent CSRF.


Erratum: What I'm saying here only applies for cookies with the attribute SameSite=None so it's irrelevant here, see the comments below.

(Former CTF hobbyist here) You might be mixing up XSS and CSRF protections. Cookie protections are useful against XSS vulnerabilities because they make it harder for attackers to get hold of user sessions (often mediated through cookies). They don't really help against CSRF attacks though. Say you visit attacker.com and it contains an auto-submitting form making a POST request to yourwebsite.com/delete-my-account. In that case, your cookies would be sent along, and if no CSRF protection is in place (origin checks, tokens, ...) your account might end up deleted. I know it doesn't answer the original question, but I hope it's useful information nonetheless!
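
To make the auto-submitting form concrete, here's roughly what the attacker page's script could do (hypothetical URL, and just a sketch):

    // Sketch of a CSRF attack page (hypothetical target URL). The victim's
    // browser attaches their yourwebsite.com cookies to this POST unless
    // SameSite or other CSRF protections stop it.
    const form = document.createElement("form");
    form.method = "POST";
    form.action = "https://yourwebsite.com/delete-my-account";
    document.body.appendChild(form);
    form.submit(); // fires automatically, no user interaction needed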


The SameSite cookie flag is effective against CSRF when you put it on your session cookie, it's one of its main use cases. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/... for more information.

SameSite=Lax (the default Chrome applies to cookies that don't specify a SameSite attribute) will protect you against POST-based CSRF.

SameSite=Strict will also protect against GET-based CSRF (which shouldn't really exist, since GET is meant to be a safe method and should never trigger state changes, but in practice some applications do it). It does, however, also mean that users clicking a link to your page might not be logged in once they arrive, unless you implement other measures.

In practice, SameSite=Lax is appropriate and just works for most sites. A notable exception is POST-based SAML SSO flows, which might require a SameSite=None cookie just for the login flow.
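
If it helps, wiring that up in e.g. Express looks something like this (a sketch; cookie name and values are made up):

    import express from "express";

    const app = express();

    app.post("/login", (req, res) => {
      // ...verify credentials first, then issue the session cookie:
      res.cookie("session", "opaque-session-id", {
        httpOnly: true,  // not readable from JS, limits XSS fallout
        secure: true,    // only ever sent over HTTPS
        sameSite: "lax", // withheld on cross-site POSTs, i.e. CSRF protection
      });
      res.sendStatus(204);
    });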


This page has some more information about the drawbacks/weaknesses of SameSite, worth a read: https://developer.mozilla.org/en-US/docs/Web/Security/Attack...

You usually need another method as well.


Yes, you're definitely right that there are edge cases and I was simplifying a bit. Notably, it's called SameSite, NOT SameOrigin. Depending on your application that might matter a lot.

In practice, SameSite=Lax is already very effective in preventing _most_ CSRF attacks. However, I 100% agree with you that adding a second defense mechanism (such as checking the Sec-Fetch-Site header, a custom "Protect-Me-From-Csrf: true" header, or, if you have a really sensitive use case, cryptographically secure CSRF tokens) is a very good idea.
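
The custom header trick works because a plain cross-site HTML form can't set custom headers, and setting one via fetch/XHR triggers a CORS preflight that the attacker's origin fails. A sketch (Express-style, header name as above):

    // Client side, from your own frontend:
    await fetch("/api/delete-account", {
      method: "POST",
      headers: { "Protect-Me-From-Csrf": "true" },
    });

    // Server side, registered before any state-changing route handlers:
    app.use((req, res, next) => {
      const stateChanging = !["GET", "HEAD", "OPTIONS"].includes(req.method);
      if (stateChanging && req.get("Protect-Me-From-Csrf") !== "true") {
        return res.status(403).send("missing CSRF header");
      }
      next();
    });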


Thanks for correcting me - I see my web sec knowledge is getting rusty!


Also, can't you just spoof the Origin header?


A CSRF attack is an attack against a logged-in user, so it has to be mediated via their browser.

If you can spoof the origin header of a second party when they navigate to a third party, a CSRF is a complete waste of whatever vulnerability you have found.


You can, if you want to deliberately CSRF yourself for some reason - it's there to protect you, but spoofing it doesn't give you any special access you wouldn't otherwise have.

The point is that arbitrary users' browsers out in the world won't spoof the Origin header, which is what protects them from CSRF attacks.


Yes


I wonder if the discrepancy in analysis comes from the way the participants are asked to view the picture. English and Japanese are vastly different languages and even a simple question can be translated in subtly different ways.


It was done on students at the faculty, so I guess it was all in English: https://www.apa.org/monitor/feb06/connection


It's kind of depressing to read Daniel's article[1] on this issue given the rising "popularity" of these lazy attempts at cash grabbing. I hope they manage to combat the AI slop in a way that does not involve fighting fire with fire though.

[1] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...


I went through some of these and the one that stood out to me was this one

https://hackerone.com/reports/2823554

Where the reporter says, "Sorry didnt mean to waste anyones time Badger, I thought you would be happy about this.".

People using LLMs think they are helping but in reality, they are not.


There's this very weird idea that makes some people think that the maintainer must have a godawful workflow and if I just showed him the output of _my_ workflow, I can ~~save the day~~ fix a bug for them.


why don’t they just limit the report to 100 chars or something? “Here’s the input, here’s the output, here’s why it sucks”. Easy to make a maybe/no decision at a glance.


Just like the old Carlin joke. Made me chuckle.


I wanted you to know that, due to this comment, I "lost" approximately 45 minutes watching George Carlin's best jokes


What was the joke?


It's about time these articles actually included the types of tasks being "orchestrated" (as the author writes) that aren't just plain refactoring chores or React boilerplate. Sanity has quite a backlog of long-requested features, and the message here is that these agents are supposedly parallelizing a lot of the work. What kind of staff engineer has "80% of their code" written by a "junior developer who doesn't learn"?


IMO "junior developer who doesn't learn" is not quite right. Claude is more like a senior, highly academic engineer who has read all the literature but has never written any code. Amazing encyclopaedic knowledge, zero taste.

I've been building commercial codebases with Claude for the last few months and almost all of my input is on taste and what defines success. The code itself is basically disposable.


> The code itself is basically disposable.

I'm finding this is the case for my work as well. The spec is the secret sauce, the code (and its many drafts) are disposable. Eventually I land on something serviceable, but until I do, I will easily drop a draft and start on a new one with a spec that is a little more refined.


I'd just like to add that the database design is the real secret sauce, even more important than external APIs in my opinion.


This is something that I've stumbled into as well. DB models AND dataflow. Getting both of those well spec'd makes things a lot easier.


Well, not DB design really; you can achieve the same thing by defining your POCOs well. I switched entirely to code-first design years ago. If you haven't worked with a good ORM, you're really missing out, though I admit there was quite a bit of friction at first.
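
To illustrate what I mean by code-first (TypeScript/TypeORM here rather than .NET POCOs, but the idea is the same; a sketch with made-up entities):

    import { Entity, PrimaryGeneratedColumn, Column, ManyToOne, OneToMany } from "typeorm";

    @Entity()
    export class Customer {
      @PrimaryGeneratedColumn()
      id: number;

      @Column()
      name: string;

      @OneToMany(() => Invoice, (invoice) => invoice.customer)
      invoices: Invoice[];
    }

    @Entity()
    export class Invoice {
      @PrimaryGeneratedColumn()
      id: number;

      @Column("numeric", { precision: 12, scale: 2 })
      amount: string;

      @ManyToOne(() => Customer, (customer) => customer.invoices)
      customer: Customer;
    }

The ORM derives the tables, keys, and foreign-key relationship from the classes, so the schema lives in code instead of hand-written DDL.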


No, I really am talking about how the database is organised. Tables representing objects, normalisation, etc. Whether or not it is accessed through the application with an ORM.


So how do you best store and iterate on the spec? One way, I guess, would be to work on a branch and modify Claude.md to reflect what the branch is for. Is that a good approach? Are there others?


If the code is disposable then where do all the rules and reasoning etc live apart from in your head?


In the spec.


Hmm... my code is the spec. It just happens to be executable. Is writing a precise spec in English easier than in a programming language?


The spec contains ambiguities and the code contains bugs. Clarifying ambiguities in the spec with stakeholders allows one to reduce bugs in the code.


If you repeat this process until all ambiguities in the spec are eliminated, aren't you essentially left with code? Or at least something that looks more like code than plain English?


No


> The code itself is basically disposable.

This is key. We're in the mass production of software era. It's easier and cheaper to replace a broken thing or part than to fix it, the things here being units of code.


Eh, Claude is like a magical spaniel that can read and write very quickly, with early-stage Alzheimer's, on amphetamines.

Yes, it knows a lot and can regurgitate things and create plausible code (if I have it run builds and fix errors every time it changes a file - which of course eats tokens), but having absolutely no understanding of how time or space works leads to 90% of its great ideas being nonsensical for UI tasks. Everything needs very careful guidance and supervision, otherwise it decides to do something different instead. For back end stuff, maybe it's better.

I'm on the fence regarding overall utility but $20/month could almost be worth it for a tool that can add a ton of debug logging in seconds, some months.


Hi Ale, author here. Skepticism is understandable, but trust me, I'm not just writing React boilerplate or refactoring.

I find it difficult to include examples because a lot of my work is boring backend work on existing closed-source applications. It's hard to share, but I'll give it a go with a few examples :)

----

First example: Our quota detection system (shipped last month) handles configurable threshold detection across billing metrics. The business logic is non-trivial, distinguishing counter vs gauge metrics, handling multiple consumers, and efficient SQL queries across time windows.

Claude's evolution:

- First pass: completely wrong approach (DB triggers)
- Second pass: right direction, wrong abstraction
- Third pass: working implementation we could iterate on

----

Second example: Sentry monitoring wrapper for cron jobs, a reusable component to help us observe our cron job usage.

Claude's evolution:

- First pass: hard-coded the integration into each cron job, a maintainability nightmare
- Second pass: using a wrapper, but the config is all wrong
- Third pass: again, an OK implementation we can iterate on

----

The "80%" isn't about line count; it's about Claude handling the exploration space while I focus on architectural decisions. I still own every line that ships, but I'm reviewing and directing rather than typing.

This isn't writing boilerplate, it's core billing infrastructure. The difference is that Claude is treated like a very fast junior who needs clear boundaries rather than expecting senior-level architecture decisions.


We have all these superpowered AI vibe coders, and yet open source projects still have vast backlogs of open issues.

Things that make you go "Hmmmmmm."


You have to pay a recurring subscription to access the worthwhile tools in any meaningful capacity. This goes directly against why retail users of open source software, some of whom are also its developers, actually use it - and you can tell a lot of developers work on it because they find coding fun.

It’s a very different discussion when you’re building a product to sell.


The projects that have those backlogs don't allow AI-made code.


Actually providing examples of real tasks given to the AI and the subsequent results would break the illusion and give people opportunities to question the hype. Can't have that.

We'll just keep getting submission after submission talking about how amazing Claude Code is with zero real world examples.


Author here. Fair enough. I didn't give real-world examples; that's partially down to what I typically work on. I usually work on brownfield backend logic in closed-source applications that doesn't showcase well.

Two recent production features:

1. *Quota crossing detection system*: complex business logic for billing infrastructure. Detects when usage crosses configurable thresholds across multiple metric types. Time: 4 days of parallel work vs ~10 days focused without AI.

   The 3-attempt pattern was clear here:
   - Attempt 1: DB trigger approach - wouldn't scale for our requirements
   - Attempt 2: SQL detection but wrong interfaces, misunderstood counter vs gauge metrics
   - Attempt 3: Correct abstraction after explaining how values are stored and consumed
2. *Sentry monitoring wrapper for cron jobs*: a reusable component wrapping all cron jobs with monitoring. Time: 1 day parallel vs 2 days focused.

Nothing glamorous, but they are real-world examples of changes I've deployed to production quicker because of Claude.
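
To give a flavour of the counter vs gauge distinction that tripped Claude up, here's a toy illustration (not our production code; the types and names are invented for the example):

    type MetricKind = "counter" | "gauge";

    interface Sample { value: number; at: Date; }

    // Toy version: a counter only increases within a window, so the first
    // sample at/above the threshold is *the* crossing. A gauge moves both
    // ways, so every upward edge between consecutive samples counts.
    function crossings(kind: MetricKind, samples: Sample[], threshold: number): Sample[] {
      if (kind === "counter") {
        const first = samples.find((s) => s.value >= threshold);
        return first ? [first] : [];
      }
      return samples.filter(
        (s, i) => i > 0 && samples[i - 1].value < threshold && s.value >= threshold,
      );
    }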


Really, zero real world examples? What about this?

https://news.ycombinator.com/item?id=44159166


Yes exactly. Show us the code and we can evaluate the advice. Otherwise it’s just an advertisement.


the kind of engineer who has been Salesified to the point that they write such drivel as "these learnings" instead of "lessons" in an article that allegedly has a technical audience.

It's funny because as I've gotten better as a dev I've gone backwards through his progression. When I was less experienced I relied on Google; now I just read the docs.


Yeah, the trusty manual becomes #1 at around the same time as one starts actually engineering. You've entered the target audience!


These days, I often just go straight to the source (when available) to clear up some confusion about a library's or software's behavior. It can be quite a nice 10-minute break.


To be fair Next.js is just following the natural progression of what their product always has been: holding devs' hands in all things deployment at the expense of vendor lock-in. Being aware of Vercel's limitations is not about the open web, it just means you should be setting up servers yourself at this point.


This

I was talking to a fellow dev at the company I work at and he was extolling the virtues of one-click nextjs deployments. As the conversation progressed it turned out he'd never actually had to set up or manage his own servers and felt that it was a waste of time for him to learn.

It hurt my soul. I am considering linking this article, but I feel it may come across as aggressive.

