Hacker News | srvaroa's comments

Looking forward to someone writing the Spring equivalent of this on the JVM


Why? It would be nearly identical, just changing the names of the frameworks.


[flagged]


Or maybe they just don't have oversized egos.

The key to using a framework effectively, whether it's Spring in Java or SAP for your business, is to accept that the framework knows better than you - especially when it objectively does not - and that when there's a difference between how you or your business think of things and how the framework frames them, it's your thoughts and your business that must change. Otherwise, you're fighting the framework, and that's worse than just not using it.


Do you not think I've heard that line before? The framework knows nothing. It's made by a bunch of children who make CRUD apps wrapping an SQL query behind an HTTP server over and over again. They don't make any applications that do anything commercially or technically interesting, resigning themselves to infinitely copying data structures with increasingly complex annotations, a practice they call "business logic", to trick themselves into feeling like they're doing something.

I've been there. I've seen it. It doesn't lead anywhere. The abstractions that Spring (and other heavyweight JavaEE-style frameworks) provide are razor thin, and usually implemented in the most trivial possible way. The frameworks, like the applications often built on them, do nothing interesting.

EDIT: I realize this is a pretty unkind way to put it. I hope readers can understand the argument along with the indignation I express. I do believe very strongly in these points, but wish I could express them without quite as much anger. I can't, though. Parse out what useful stuff you can glean, and leave the rest along with the knowledge that you don't have to impress me.


I'm sorry too, I realize I didn't make my point clear. Yes, frameworks are stupid. Their designs are likely suboptimal from the start, and only get worse over time, accumulating hacks to address the biggest issues while they double down on going in the wrong direction. A competent engineer will easily come up with better ways of doing any individual thing a framework offers, and they have a good shot at designing a framework much better suited to the needs of their team and their business.

Which is why I brought up SAP. It's well known that adopting SAP usually ends up burning untold millions of dollars on expensive consultants who customize it for the specific needs of the company, until it either gets written off and abandoned, or results in a "successful" adoption that everyone hates.

It's less well known that the key to adopting SAP effectively and getting the promised value out of it is to change your business processes to fit SAP. Yes, whatever processes a business had originally were likely much smarter and better for its specific situation and domain, but if it really wants the benefits that come with SAP, abandoning the wisdom of the old ways is the price of admission.

I say the same is true of software frameworks. Most businesses don't do anything interesting or deep either; you don't integrate with SAP to handle novel challenges, you do it to scale and streamline your business, which actually involves making most work more mindless and boring, so it can be reliably and consistently done by average employees. Software frameworks, too, aren't there to help you with novel technical challenges; they're there to allow businesses to be more efficient at doing the same boring shit every other business is doing.

I personally hate those frameworks, but that's because they're not meant for people like me; it doesn't mean they don't work. They're just another form of bureaucracy - they suck all life and fun and individuality from work, but by doing that, they enable it to scale.


"When AI is able to deliver coding tasks based on a prompt, there won’t be enough copies of the Mythical Man Month to dissuade business folks from trying to accelerate road maps and product strategies by provisioning fleets of AI coding agents."

Submitted yesterday, but no luck :D

https://varoa.net/2025/04/07/ai-generated-code.html


I like your blog!


Thanks!



You're not the first person to equate "typing more code faster" with velocity, productivity, execution speed, or whatever you want to call it. Even for Luu, typing speed matters not because it generates more code faster; it's about being able to accomplish the important tasks that are part of the job in less time.


I'm sure many people have equated "typing code faster" with velocity, productivity, and execution speed. But I am not doing it.

I'm saying that given N developers, if they all type code faster, you get more code. From the linked text: "Will this translate into a net increase of actual value or just an accumulation of slop and technical debt? Regardless of the answer, there will be more raw code."

The argument I'm making is based on more volume of code. Not on quality, value, etc. of whatever that code is doing.

On your point about nobody having thought "my problem is that I can't type fast enough": the problem "shit, I know what I want, and I'd rather spend 10 minutes typing it than 1 hour" is real. An LLM allows you to say "I want a piece of code that does X" and hydrates that into the 500 lines of code. You materialize the thing faster.

Now, yes, we can go back to your point. Was that idea good? Is the code good enough? Etc. Those are valid questions, but orthogonal to my point. My point is not about the quality of the outputs; it's about "this thing helps you churn out more code, and throw it at the systems that digest code".


>7. This incident has reminded that many people mistakenly assume that git tags are immutable, especially if they are in semver format. Although it's rare for such tags to be changed, they are not immutable by design

IME, this will be more "learned" than "reminded". Many, many people set up pipelines to build artefacts based on tags (a common practice being "on tag with some pattern, build artefact:$tag") and are just surprised if you call out the flaws.

It's one of many practices adopted because everyone else does it, without basic awareness of the tradeoffs. Semver is a similar case of inherited practice, where surprisingly many people seem to believe that labelling software with a particular string magically translates into hard guarantees about its behaviour.
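
For anyone who hasn't watched it happen, re-pointing a tag takes two commands (the tag name and commit hash here are hypothetical):

    git tag -f v1.2.3 abc1234          # force the existing tag onto a different commit
    git push --force origin v1.2.3     # overwrite the tag on the remote

Any pipeline that triggers on tags matching "v*" will then happily rebuild artefact:v1.2.3 from entirely different code.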


I theorized about this vulnerability a while back when I noticed new commits didn't disable automerging. This is an insane default from GH.

EDIT: seems GitHub has finally noticed (or started to care); I just went to test this and auto-merge has seemingly been disabled sitewide. Even though the setting is enabled, no option to auto-merge PRs shows up.

Seems I was right to worry!

EDIT2: We just tested this on GitLab's CI since they also have an auto-merge function and it appears they've done things correctly. Auto-merge enablement is only valid for the commit for which it was enabled; new pushes disable auto-merge. Much more sensible and secure.


GitLab has had this behaviour (disable auto-merge when new commits are pushed) since long before GitHub even had auto-merge.

It’s such an obvious attack vector, I’m pretty sure I tested GitLab soon after the feature initially rolled out.


Tags can be signed, and the signature can be verified. It's about as easy as signing / verifying commits. One can even make signing tags the default option when creating them.

This won't help in this case though, because a legitimate bot was tricked into working with a rogue commit; a tricked bot could just as well sign a tag with a legitimate key.

"Immutable tags" of course exist, they are commit hashes, but they are uninformative :(


How else should we do it?


By commit hash


It seems to me that pinning to a sha was not sufficient; the Renovate bot was updating actions referenced by sha.

Example: https://github.com/chains-project/maven-lockfile/pull/1111/f...

This appears to be governed by the `pinGitHubActionDigests` helper configured in `renovate.json`.
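
For reference, the difference in a workflow file looks roughly like this (the action name and sha are hypothetical placeholders):

    # mutable: whoever controls the action repo can re-point this tag
    uses: some-org/some-action@v1
    # pinned: a full commit sha cannot be silently swapped out
    uses: some-org/some-action@<full-40-char-commit-sha>

Though as noted above, a bot that auto-updates those shas reintroduces the trust problem the pinning was meant to remove.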


Well, this AI now operates at staff+ level


And is paid like one with today's token costs!


The primary reason I keep writing is that this, the plain written word, is The Platform for transporting ideas in time and space. It has passed the test of time in ways that none of the platforms you mention has (nor has any other medium, e.g. video, at least in our lifetimes).

I think of those platforms more as distribution / syndication media: I use them to throw things out there and reach others (which I somewhat care about, because the topics I write about are ones I like to exchange opinions on, though without obsessing over validation), while keeping some degree of isolation from their policies, relative importance, algorithms, UI/UX choices, etc.


KISS > DRY


DRY for the sake of DRY is like not drinking water when you're thirsty.


I worked on an internal platform for a large engineering org and was responsible for choosing what features we put in.

We had the technical means to track everything from commits to reviews, jiras, deploys, etc. Some of our most celebrated and impactful features were reports on the Accelerate metrics and related ones: deploy frequency, size of changes, duration of stages in the dev cycle, and such.

I set a very inflexible red line: we don't expose data more granular than team level. Ever. No exceptions.

Quite a few line managers would come to me asking for “personal metrics”, and even engineers in the team were super interested in building that (“with all this data we could…”).

My argument was that these metrics are toxic and invite misuse. A second factor was that this misuse would generate mistrust of the platform among engineers and damage adoption.

Instrumenting an organization thought of as a system is fine. You want to see where the bottlenecks are, and you want measurable targets for how technical work is done and how it correlates to business goals/KPIs.

You want to offer teams metrics of their delivery process so they can take the info and implement improvements whenever they see fit, and have a data-driven conversation with the business (e.g. about the right setting for the carefulness knob).

But teams are the minimum unit of ownership, so we stopped the instrumentation there. Sure, a team's performance ultimately links to individuals, but that is the manager's job to figure out.

Interestingly:

* only line managers asked for this info, nobody in a director/VP/CxO role
* the most annoyed by me saying no were the engineers on the team who wanted to build these features


Agreed. And "personal metrics" can also end up having a lot of consequences that were not planned for. Incentives are tricky that way. There's always a long tail of things that need to get done on occasion that doesn't show up in these types of metrics and it becomes difficult to find takers when everyone is optimizing themselves around the core metrics as bonuses, promotions, and even keeping your job in today's climate could be determined by those. It also makes allocating load to play to contributors strengths (which is often what they enjoy most) far more difficult.


Yep, this is an important point.

We wanted the metrics to create the right set of incentives to make people improve the right parts of the system.

For example, we did present deploy frequency prominently. This gets people to see it, managers to want their team to be in the upper percentiles, etc., which drives a set of practices that, in general, and backed by research, are beneficial.

One of my favourite features was putting two graphs together: size of PRs vs. time to review. Time to review went up more or less linearly with the size of PRs, but past a certain threshold (different per team!) it dropped sharply for larger PRs. This made for a good conversation topic with teams, one that sets the right incentives for smaller PRs, iterative development, etc. (and it happens to correlate with deploy frequency).


Might also suggest a size limit for PRs. There's always that Golden Child who gets away with things because they are prolific. But they tend to screw up architecture because they make too-big moves that discourage feedback and negotiation.


Yes, this is the way. It's better never to gather the information at the individual level.


This is a great comment. My thought, after reading it: Why do line managers want this info? Do you think they have someone in mind for promotion, and they are looking for metrics of accomplishment?

And, cynically, I would say that senior managers don't care... because to them, most hands-on engineers/devs are fungible. What is your view about why the upper levels never ask for it?


> Why do line managers want this info? Do you think they have someone in mind for promotion, and they are looking for metrics of accomplishment?

Nah, you don't need to assume malice :)

Most times it was managers with good intentions, not realizing that those metrics are either pointless (e.g. how much code / how many commits $person produces) or toxic, in the sense that they lead the team to game the metrics, prevent the manager from actually understanding why the values are what they are, open up the risk of linking them to promotions and perf reviews, etc.

Explaining all this to them was generally enough!

> And, cynically, I would say that senior managers don't care... because to them, most hands-on engineers/devs are fungible. What is your view about why the upper levels never ask for it?

Actually that wasn't the case. AFAIK (I left at some point, but kept in touch a bit with former colleagues) upper management started using some of those metrics to set organizational objectives.

Again, same argument. You don't need to assume malice. Management has a legit interest in engineering productivity. What happens most of the time is that they don't know how to measure it in an effective way, or how to use it to drive organizational change. Providing guidance is part of your job as a Platform eng.


So, it's a bit more than that. Let's say I've identified someone and I can't figure out what they're doing. They're just going slower than I expect given what I understand of the work. (I was a professional programmer for over 15 years before management, so this is based on the expectations of a former practitioner.)

Now, why is that? The first possibility is that they ran into a string of tickets that were just harder than we expected at the outset, and it's a statistical anomaly. A second possibility is that this person takes the hardest of the tickets, and anyone would struggle. A third possibility is that they are taking tickets past their capabilities as a developer and aren't getting the help they need. In this scenario, they're a capable performer, but need help to get to the next level. Maybe they're junior, or they're a new employee. If they keep taking these tickets, they will grow and their performance will jump. In these cases, you just keep monitoring.

In the middle case, they may have transitioned to a team or project that is a bad fit, and would thrive on another team or an adjustment to what they're asked to work on. They may be a front end specialist that was expected to pick up back end tickets and struggled from the start. Or, they may be in a temporary dip due to personal circumstances.

Then, you have the negative cases. Their work ethic might be insufficient. They may just not be good at their job and may never be able to match the performance expected at the pay grade and seniority at which they were hired. Or, you may have a good bullshitter on your team and your numbers tell a different story.

The numbers will tell you if this is a temporary dip. The numbers will tell you if their current output is in line with the rest of the team. From there, the hard work starts.

If you don't have those numbers, it starts looking like a lot of status updates and micromanagement.


Note: remember I am condensing the real work into a few paragraphs. Of course I know it's much more nuanced than I am making it here. This is a surface level treatment.

Also note: Most managers end up doing a lot of part time jobs, of which performance management is a relatively small part. If my primary job was performance and task management, that would be a very bad use of resources for the company.


My cynical view is that it's to find scapegoats, especially in companies that have a lot of politics or a lot of PIPs.

You generally scapegoat one level down and then let that level push it further down if it can.

So the Directors need team level metrics to find which managers reporting to them to scapegoat and have data to "prove" it.

Then the Managers need individual level metrics to find which engineers to scapegoat and have data to "prove" it.


Yo dawg, I heard you like throwing people under buses. So I put a bus under yo' bus so you can scapegoat people while you scapegoat people.


As a former line manager, there were two cases where I used metrics: when promoting someone, where I like having numbers that back up how good they are, and when firing someone, where I like having numbers that back up how bad they are.

I generally agree with OP, but there are times when, as a manager, you know exactly what is going on with your team and numbers are still helpful.


I've helped build out or steer these sorts of systems a number of times, and usually management behaves itself during the adoption and honeymoon phases but then erodes the trust later on by trying to use the system to determine PIPs or promotions.

Devs who have seen this behavior before tend to push back hard on adoption, and then invest the absolute minimum effort in using these tools. The tools tend to be built wrong often enough to encourage that slide into toxicity. There’s an amount of using a tool where it improves your work experience, and then an additional amount that improves the team experience, and then beyond that it’s doing your manager’s job for them and self-reporting/narcing.

I'm trying to build a tool at the moment that has had three focuses due to different sources of inspiration. First it was going to be Trac (a project management tool and wiki that is so simple it shouldn't work, but somehow does) with bits of Jenkins. Bamboo has thousands of features and integrations where it should have hundreds, and all those integrations make reproducing a failed build locally difficult or impossible. The bones of a build process should be in the build scripts and dependencies, so you can run them locally. The build tool should schedule builds, report status changes, collect some release-notes data, and track trends with Grafana charts, and that's about it. I also want something running on each dev's machine to boost system performance and provide some of the TeamCity features for trunk-based dev, like push-on-green-build. I just miss how distilled Trac was, but it had some problems with plugins and git support.

That sat on the Some Day Shelf behind two other projects until I read Cal Newport’s Deep Work, and then Slow Productivity. Then it became more user oriented. Atlassian has about three per-user dashboards that I’m responsible for juggling all day, and that is tragicomically stupid. I’m terrible with this sort of juggling but have coworkers who don’t check the PR list ever - you have to pester them to look at PRs, week in and week out. If I’m doing deep work I don’t want preemptions except in specific circumstances (like I broke trunk). But when I come up for air I need a cheap gestalt of what I’ve missed and what people are expecting from me. Show me all the PRs, and my red builds and open tasks in a single view. Allow some low priority alerts through. And that can be facilitated by building a pomodoro straight into the dashboard and information hiding during deep focus moments.

I have some family that was recently diagnosed as neurodivergent, and the thing about YouTube is that your suggested videos get influenced by what other people in the house are watching (particularly if you're not logged in on a device). ND people of all types have a lot of coping mechanisms to function within other people's expectations (e.g. work and responsibilities) and to mask. They get both barrels when it comes to being judged poorly by toxic management tools, because their variability is so high: best performer one day, second worst the next. And this is nowhere more true than with ADHD. And the worst of it is that almost nobody will go harder and farther than an ADHD person during a crisis. They can hyperfocus during rare moments, but most reliably due to an emergency (self-created, like a paper due tomorrow, or externally driven, like servers on fire). They also innovate by drawing connections nobody else sees between different disciplines.

But they're the first on the block when toxic metrics kick in. And the productivity tools they objectively need more than anyone else on the team seal their fates, so they either don't use them or use their own, which are similarly poorly integrated, which leads to more plate spinning, which they are terrible at.

So what finally got me ass-in-chair in front of a keyboard was realizing that if this is two tools instead of one, you can keep some of the productivity data on the employee’s machine where it can help them with time management and self-improvement but not self-incrimination. Then you can cherry pick and clean up the narrative of your day or week before you self report. Have private tasks on your machine that may look embarrassing to others (like remember to drink fluids, eat lunch, call the dentist, tasks you’re skunkworking).


Redmine is a superior Trac, I think, and it's still under development; you might like it.


"This scale – the scale of devprod, and in turn the scale of the overall organization, such that it could afford 10 FTEs on tooling – was a major factor in our choices"

Is basically the summary for most mono/multi repo discussions, and a bunch of other related ones.


It doesn't matter if you have a monorepo or multi-repo: if your project is large, you will need engineers on tooling to make it work. There are pros and cons to both multi-repo and monorepo with no one right answer (despite what some will tell you). They are different pros and cons, and which is best depends on your particular context.


Yeah, that was my point. In the end both approaches can be fine (it depends on your context). The real difference is that whichever choice you take, it will need the right investment in tooling and support.


Multirepo also comes with cost overhead; I think people talk about it somewhat less. I've worked at both multirepo and monorepo places before. My current company has a multirepo setup, and it sure seems like it comes with plenty of tooling to fetch dependencies. That tooling has to be supported by FTEs.


+1. I'd go as far as to say that multi-repo probably needs as much effort, if not more, to keep functioning properly, but all that effort is better "hidden", so people assume monorepos are more work.

With a monorepo, it's common to have a team focused on tooling and maintaining the monorepo. The structure of the codebase lends itself to that.

With a multirepo codebase, it's usually up to different teams to do the work associated with "multirepo issues": orchestrating releases, handling dependencies, dev environment setup, etc. So all that effort just kinda gets "tucked away" as overhead that each team absorbs, and isn't quite as visible.


I couldn't agree more! At the company I currently work for I have seen this phenomenon time and again.


Internally, they definitely do. I worked in Stripe's monorepo many years ago, and I am now working at a larger company with a massive number of repos. The difference in pain has little to do with mono vs. multi, but with the capabilities of your tooling team.

If there's anything I'd say to low-level execs, the kind that end up with a few hundred developers under them, it's that mis-sizing the tooling team, in one direction or the other, comes with total productivity penalties that appear invisible but make everything expensive. Understanding how much of a developer's day is toil is very important, but few really try to figure that out.


Not sure.

I think a lot of this type of thing comes about because with a monorepo you can actually see the problems to solve, whereas with polyrepos you can easily end up with the same N engineers firefighting the same problems K times across all your repos.


You have different problems with both. Some problems are hidden in one or the other, but there is no one best answer (unless your project is small/trivial, which a lot of them are).


“Anytime you apply a rule too universally, it turns into an anti-pattern.”

Quote from Will Larson found in another HN post (https://review.firstround.com/unexpected-anti-patterns-for-e...) right after checking out this one.

