
Also developer UX, common antipatterns, etc

This “the only thing that matters about code is whether it meets requirements” is such a tired take, and I can’t imagine anyone seriously spouting it has had to maintain real software.



I personally haven't made up my mind either way yet, but I imagine that a vibecoding advocate could say to you that maintaining code makes sense only when the code is expensive to produce.

If the code is cheap to produce, you don't maintain it, you just throw it away and regenerate.


If you have users, this only works if you have managed to encode nearly every user observable behavior into your test suite.

I’ve never seen this done even with LLMs. Not even close. And even if you did it, the test suite is almost definitely more complex than the code and will suffer from all the same maintainability problems.


And in that case how is it different than when random developers come on and off projects?


For one you don't let random devs hop on and off projects without code reviews, which is what people who say they don't care about the code should be doing.

And two, clearly agents are worse at reasoning through code changes than humans are.


And the team lead with 7 developers isn’t going to be doing code reviews of all the code. At most he is going to be reviewing those critical paths.

I couldn’t care less about the implementation behind the vibe coded admin website that will only be used by a dozen people. I care about the authorization.

Even for the ETL job, I cared only about the performance characteristics, concurrency, logging, and the correctness of the results.


>And the team lead with 7 developers isn’t going to be doing code reviews of all the code. At most he is going to be reviewing those critical paths.

Why would the team lead need to review all 7 developers? If you're regularly swapping out every single developer on a team, you're gonna have problems.

>I couldn’t care less about the implementation behind the vibe coded admin website that will only be used by a dozen people. I care about the authorization.

If you only have 12 users sure do whatever you want. If you don't have users nothing is hard.


It was 12 users who monitored and managed the ETL job. If I had 1 million users, what difference would the front end code have made as long as the backend architecture was secure, scalable, etc.? If the login is taking 2 minutes, I can guarantee you it’s not because the developer failed to write SOLID code…


There you go arguing with strawmen again. I don’t give a single flying flip about SOLID, or Clean Code, or GoF. People who read Clean Code as their first programming book and made that their identity have been the bane of my existence as a programmer.

It’s not about how long something is taking although that is an observable behavior. It’s about how 1 million users over time will develop ways of using your product that you never thought about, much less documented or tested.

Perhaps you’ve heard the phrase “The purpose of the system is what it does”?

The system is not the spec or the tests. An agent is only reasoning about how to add a new feature, and the only thing preventing it from changing observable behavior is the tests. So if an agent is changing untested behavior, it’s changing the purpose of the system.


Depending on undefined behavior is not exactly a great argument. Should I as a developer depend on “undefined behavior” in C (yes, “undefined behavior” is a term the C standard explicitly defines)?

On a user facing note, I did a project where I threw stats in DDB just for my own observation, knowing very well that it was the worst database to use since it does no aggregation type queries (sum, average, etc). I didn’t document it, I didn’t talk about it, and yet the developer on their side used it anyway, even though I specifically documented that he should subscribe to the SNS topic that I emit events to and ingest the data into his own Oracle database.

No maintainer of, for instance, a C# or Java library is going to promise that private functions a developer got access to via reflection are not going to change.

I’m solely responsible for public documented interfaces and behaviors.

Oh and that gets back to an earlier point: how do I know that my systems will be able to be maintained? For the most part I design my systems to do a “thing” with clearly defined entry points, extension points, exit points, and interfaces. In the case I’m referring to above, it was a search system that was based on “agents”, some RAG based, one using a Postgres database with a similarity search, and an orchestrator. You extend the system by adding a new lambda, registering it, and prioritizing the agent’s results with my vibe coded GUI.

Apple is famous for instance for not caring if you tried to use private APIs and it broke in a new version.


>UB

This is a topic I happen to know a little about. You as a programmer should probably avoid UB for the most part, but the key point here is that programmers don’t follow this rule.

A while back a study found that SQLite, PostgreSQL, GCC, LLVM, Python, OpenSSL, and Firefox all contained code that relied on signed integer overflow wrapping. Even though the C spec says signed overflow is UB, almost every CPU you’ll run into uses two’s complement, so it naturally wraps around.

When compiler authors tried to aggressively optimize and broke everything they had to roll that back and/or release flags to allow users to continue using the old behavior.

This kind of stuff happens all the time. The C spec is nearly worthless paper: what matters is what the compilers implement, not what the spec tells them to implement. If you spend time talking to LLVM folks, breaking the world because they changed some unspecified behavior is one of their top concerns.

And this is programmers who know how to read specs.

Imagine you’re working on software used by nearly every major movie studio. You think those users have ever read the spec for the software they are using? They don’t care about UB; they don’t even know the concept exists.

It doesn’t matter how well tested I think my software is. Even very simple software will have unspecified and untested behavior. Give the software a little time and some users, and they will start exploiting that behavior. If I unleashed some agents on our code base to implement well architected features, without reviewing their output, and could somehow magically ensure that they didn’t break any workflow that was documented, tested, or even known to our organization, the head of NBCUniversal would be on the phone with my boss’s boss’s boss’s boss demanding we change it back to the way it was within 24 hours.

Users depend on what the system does, not what you as a designer think it does. The purpose of a system is what it does. Not what it says it does.

We’ve been having this argument since the waterfall days. The code is the spec. We aren’t architects drawing blueprints. The code is the blueprint. If it was that easy to design systems like this all code would already be generated from UML graphs and flowcharts like we’ve been able to do for decades.


Back in my C days, I wrote C code that had to work on PCs that I had access to and on mainframes that I never got a chance to test on, with ancient compilers. Some little endian and some big endian. We had a custom makefile that tried to warn against non portable behavior.

But are you really arguing that I shouldn’t feel free to change private methods because some developer somewhere might use reflection to access them, or that I shouldn’t change the schema of an SQLite database that is deeply embedded in a library folder somewhere?

Or are you saying I should feel free to do

char *foo() { char bar[] = "hello world"; return bar; }

and be upset when weird things happen when I upgrade my compiler?

What do you think Apple would do in that situation? They have multiple times over the past 3 decades said tough noogies if you didn’t do things using the documented APIs.

Jeff Bezos mandated documented interfaces with the “API mandate” in 2002.


You can change whatever you want, but if you make an internal change without signaling that it’s a breaking change, and it breaks a significant number of your important users’ workflows, you’re gonna have a bad time.

But that’s mostly irrelevant because most software isn’t written to be used by developers who should know better than to rely on undocumented behavior.

As for Amazon, the API mandate gets violated all the time.

And it’s funny that you should mention them, because they just started requiring a code review from a senior engineer for all merges after issues with vibe coding.


So do you know from working at Amazon that they aren’t microservice focused (yes, I worked at AWS), or that they break it all of the time?

You keep saying you can’t break users’ workflows. But that doesn’t jibe with reality. In B2B, the user isn’t the customer. B2B businesses break users’ workflows all of the time. I know people complain about how often AWS changes the console UI, and you hear the same gripes from users all of the time in consumer software. How many people cancel their SaaS contracts because of a change in UI if the features remain?

Photoshop users complain (or did when I followed it closely) all of the time when Adobe broke their automations via AppleScript. They kept buying it.

But the point is that you specifically said that you can’t treat a system as a system of black boxes with well defined interfaces. You damn sure better believe that’s how I did any implementation I started from scratch with a team at product companies. It’s the only way you can keep a system manageable with ramp up.

And this is also part of the subject of Stevey’s “Platform Rant”

https://gist.github.com/chitchcock/1281611

It’s the reason you can’t fathom that you don’t have to worry about spooky action at a distance when you enforce modularity at the system level.

And even for customers, Apple has a long history of breaking backward compatibility, and while Microsoft worships at the altar of backward compatibility, major versions of Office have been breaking muscle memory UI for users since the 80s.

If an end user’s workflow is dependent on mucking with the backend database - more of an issue with desktop software - or an undocumented feature, it’s the same.

Developers have been doing that for years - changing the UI.


You seem to have had a very specific career that consisted mostly of building something new and moving on before you had any idea how it held up long term. I’ve heard enough to be pretty confident that despite a 30 year career you don’t actually have much experience in anything other than greenfield projects. This explains the weird overconfidence you have in a methodology with absolutely no track record.

There’s a difference between breaking some user’s workflow every now and again and doing it every time you add a feature or fix a bug.


So while you are attacking me.

1. You have claimed that Amazon doesn’t do microservices and doesn’t follow the API mandate, even though you haven’t worked there (and I have), and I cited a famous letter from an ex-Google/ex-Amazon person who talked about the difference

2. I gave you plenty of well known B2B and B2C companies that “break user workflows” all of the time in new versions

3. I asked whether you should go out of your way to not change undocumented behavior, and gave you examples in both C (officially undefined behavior) and in managed languages like C# and Java.

Your concern about “breaking user workflows” because they relied on undocumented behavior is not shared by any major B2B or B2C company. Hell, concern about breaking even documented user workflows is not shared. The buyer, “the business”, is just going to tell the users to suck it up and get used to it.

Again - I’ve got a proven track record of multiple companies hiring me including one trying to hire me back - well the acquirer of the startup wanting me back after I left before it got acquired - that’s existence proof that my architectural decisions stood the test of time over the almost four years after I left.

As someone who can talk just as well about the intricacies of C as well “how to create a sustainable development department”, do I really sound like I’m bullshitting?


I don’t trust the technical chops of anyone who has never stuck around long enough to see how their architecture changed and developed with use.

I’ve worked with plenty of expert beginners who sound exactly like you. In addition to the work history, your argument style screams overconfident bullshitter who reads the first line in an email and skips the rest.

You read me saying that companies routinely violate their technical guidelines, and you skip reading and jump directly to the conclusion that I’m claiming microservices don’t exist, because that scores you a point in your mind and keeps you from having to think about possibly being wrong about something.


The developer UX is the markdown files, if no developer ever looks at the code.

Whether you are tired of it or not, absolutely no one in your value chain (your customers who give your company money, or your management chain) cares about your code beyond whether it meets the functional and non functional requirements. They never did.

And of course whether it was done on time and on budget


As a consumer of goods, I care quite a bit about many of the “hows” of those goods just as much as the “whats”.

My home, which I own, for example, is very much a “what” that keeps me warm and dry. But the “how” of how it was constructed is the difference between (1) me cursing the amateur and careless decision making of builders and (2) me quietly sipping a cocktail on the beach, free of a care in the world.

“How” doesn’t matter until it matters, like when you put too much weight onto that piece of particle board IKEA furniture.


Do you know how every nail was put into your house? Does the general contractor?


I know where they fucked up and cost me thousands of dollars due to cutting corners during build-out and poor architectural decisions during planning. These kinds of things become very obvious during destructive inspection, which is probably why there are so many limitations on warranties; I digress.

He’s mildly controversial, but watch some @cyfyhomeinspections on YouTube to get a good idea of what you can infer of the “how” of building homes and how it affects homeowners. Especially relevant here because he seems to specialize in inspecting homes that are part of large developments where a single company builds out many homes very quickly and cuts tons of corners and makes the same mistakes repeatedly, kind of like LLM-generated code.


So you’re saying that whether it’s humans or AI - when you delegate something to others you have no idea whether it’s producing quality without you checking yourself…


> you have no idea whether it’s producing quality without you checking yourself

No, I can have some idea. For example, “brand perception”, which can be negatively impacted pretty heavily if things go south too often. See: GitHub, most recently.

I mean, there are already companies that have a negative reputation regarding software quality due to significant outsourcing (consultancies), or bloated management (IBM), or whatever tf Oracle does. We don’t have to pretend there’s a universe where software quality matters, we already live in one. AI will just be one more way to tank your company’s reputation with regards to quality, even if you can maintain profitability otherwise through business development schemes.


So as long as it is meeting the requirements of “it stays up consistently and doesn’t lose my code” you really don’t care how it was coded…

The same as I’ve been arguing about using an agent to do the grunt work of coding.

If GitHub’s login is slow, it isn’t because someone or something didn’t write SOLID code.


> So as long as it is meeting the requirements of “it stays up consistently and doesn’t lose my code” you really don’t care how it was coded…

I don’t think we’ll come to common ground on this topic due to mismatching definitions of fundamental concepts of software engineering. Maybe let’s meet again in a year or two and reflect upon our disagreement.


If you maintain software used by tens of thousands to millions of people, you will quickly realize that no specified functional and non-functional requirements cover anywhere near all user workflows or observable behaviors.

If you mostly parachute in solutions as a consultant, or hand down architecture from above, you won’t have much experience with that, so it’s reasonable for you to underestimate it.


AWS S3 by itself is made up of 300 microservices. Absolutely no developer at AWS knows how every line of code was written.

The scalability requirements are part of the “non functional requirements”. I know that the vibe coded internal admin website will never be used by more than a dozen people just like I know the ETL implementation can scale to the required number of transactions because I actually tested it for that scalability.

In fact, the one I gave to the client was my second attempt, because my first one fell flat on its face when I ran it at the required scale.


I'm not talking about scalability requirements. I'm talking about the different workflows that 10 million people will come up with when they use a program that won't exist in any requirements docs.


Do you think that AI coded implementations just magically get done without requirements?


You're not understanding what I'm saying. Say you tell your agents to add a new feature to an app, and you do it by writing up a new requirements doc. If you don't review the code, they will change a million different "implementation details" in order to add the new feature, and those changes will break workflows that aren't specified anywhere.

The code is the spec. No natural language specification will ever fully cover every behavior you care about in practice. No test suite will either.

If you don't know this, you haven't maintained non-trivial software.


And have you never seen what an overzealous developer can do to wreak havoc on an existing code base without a testing harness? Let a developer loose with something like ReSharper, which has existed since at least the mid-2000s.

If your tests don’t cover your use cases, you are just as much in danger from a new developer. It’s an issue with your testing methodology in either case.

And there is also plan mode that you should be reviewing


Of course they can. Those kinds of developers cause problems constantly. It's one of the biggest reasons we have code reviews. Automated tests help too.

But even with all of that we still have bugs and broken workflows. Now take that human and remove most of their ability to reason about how code changes affect non-local functionality and make them type 1000x faster. And don't have anyone review their code.

The code is the spec, someone needs to be reviewing it.



