Code ownership, stewardship, or free-for-all? (bennorthrop.com)
44 points by spallas on March 3, 2022 | 20 comments



> As a simple example, imagine a small team of, say, 6 developers:
>
> The team has created and now supports a mobile app, an Alexa skill, three separate web apps, fifteen microservices, and a partridge in a pear tree.

If a team of 6 developers created 15 microservices, I'm afraid they understand neither the pros nor the cons of microservices, and they should be fired instantly.


To be fair, I did the same thing 2 years ago and I'm sure I'm not the only one. Drank too much microservice Kool-Aid and ended up with 12 services. Consolidated it down to 7 once the initial micro high wore off.


Counterpoint: anyone who suggests the number of microservices should correlate to the number of developers should be fired instantly.


Wait, what?

Are you suggesting that the number of developers does not place a maximum cap on the complexity of your infrastructure?

Because if what you are saying is that nobody should ever get near that cap, so it should be theoretical only, that still supports a correlation.


How does "number of microservices" correlate to "complexity of your infrastructure"?

I'd also note that the OP had no concept of time in it - it didn't include a timeframe, so we aren't capped by developer man/work-hours.

If we are talking instantaneous support, there is no telling how this correlates - one complex microservice may be too complex for many devs, or many microservices may be simple enough to be supported by just one.

If we are talking about either development hours or code-familiarity, then I'm not sure how we are capped - are we going to limit the number of LoC a dev can write, the repos they commit to, etc., as well?


> I'd also note that the OP had no concept of time in it

Oh, ok. Point taken. This alone invalidates everything I wrote.


Six full time developers who wrote 15 microservices that each do one thing well should not be fired. The burden of microservices lies in the underlying infrastructure required to host them. So, assuming we have a competent infrastructure engineer, devops engineer, security engineer, site-reliability engineer, and a handful of other hats being done well... your six developers can write and maintain far more than 15 microservices assuming that each service has a clearly defined single-objective scope.

Because infrastructure complexity <> Software development complexity.


A model that is common in open source but not yet in the corporate world is that of gatekeepers. It's essentially a form of strong code ownership combined with stewardship, where a majority of contributions are not necessarily created by the owner/gatekeeper. Instead, a gatekeeper works with a small group of others to accept, integrate, and coordinate change requests from a larger group of people. Their job is to reject bad changes and to ensure that the integrity of the overall project is balanced with the needs and wants of those contributing code. For most contributors, the repository they contribute to is read-only; they have to fork it to even be able to modify it.

Linux is the classic example of this, but many open source git projects work like this now. Commit access on an open source project is reserved for people in a gatekeeper role, and usually these people are also contributing significant amounts of code. The whole point of being open is that others can look at your code and modify it. But there is no obligation to merge those changes: you have to ask for them to be merged, and that request can be denied for all sorts of reasons. To get changes in, you have to negotiate with the gatekeeper and work by their rules.

Somehow this model never really caught on in the corporate world. They'll use git, of course, and they even do pull requests. But it's quite common for everyone to have commit rights, with just a soft rule that pull requests ought to be reviewed and merged by someone else. Most companies and teams end up cutting corners when they are under time pressure or when management throws its weight around. Once a pull request has work done on it, it becomes hard to reject without creating a lot of internal conflict. Unlike the open source community, where you have to negotiate with the repository owners to get your change accepted, corporate gatekeepers, if they exist at all, typically lack the independence that e.g. Linus Torvalds has to accept or reject changes. They'll bow to management pressure, arbitrary deadlines, etc.
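The tooling side of this is cheap to replicate, for what it's worth. A minimal sketch, assuming GitHub-style CODEOWNERS plus a branch-protection rule that requires code-owner review (the paths and owner names below are made up):

  # .github/CODEOWNERS (illustrative layout, hypothetical paths/teams)
  # With branch protection set to require code-owner review, a pull
  # request touching these paths cannot merge without the listed
  # gatekeeper's approval, regardless of who has commit rights.
  /billing/       @billing-gatekeeper
  /payments/api/  @payments-maintainers
  *               @platform-stewards

The hard part isn't the config, though; it's giving those gatekeepers the organisational independence to actually say no.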

Gatekeepers like that are essential to large-scale open source projects: the short-term interests of contributors are balanced against the long-term interests of the software product. Companies seem to have a much harder time doing that. I think that especially large companies should be adopting this model.


We had something similar to this in a previous job. It was a large organisation and each team owned their own components; they had merge access, etc. If another team wanted to make a change, they'd submit a pull request and the owning team would review it. That owning team would impose their own process and tooling on top of the standard stuff that was org-wide. The big difference, I suppose, is that the team were the ones doing the majority of the work rather than purely gatekeeping. It did work very well on the whole though, especially as that team would then be responsible for it in production.


I think this is largely a cultural thing. Where I work, if someone submits a pull request that doesn't improve the value of the product meaningfully, the reviewer just says so and the request gets closed and that's the end of that. No conflict. We're all trying to do the same thing: improve the product.

Of course, we also had to adjust the way we work to make this feasible. For example, we discuss design and strategy decisions early and often, so that bad designs or obviously wrong choices generally don't make it as far as becoming a pull request, but are rejected already at the idea stage.

Then when we are uncertain, we make the pull request from very rough draft code, and expect to have the big ideas reworked slightly in collaboration with the reviewer, so that once the final pull request is submitted, it's largely in the shape we want, with only minor comments remaining.

----

The same thing goes for what you say about time pressure, really. We know why we have the review process in place, and we have hard evidence that it saves us a lot of time and effort in the end, so we don't circumvent it even under pressure.

And we, the software engineers, are the experts who understand this. Management might not always understand it, but that's their problem, not ours. If they start to throw their weight around, that's bad management. Also their problem, in the end.


Yup - organizations benefit from the ability to communicate about the design before the coding begins. In my experience it’s much more difficult for a new contributor to an open source project to do this.

And, since the cost of change [1] is so much lower at the idea and design stage, most things probably aren’t reaching PR stage that would be likely to seriously fail review (tweaks and details notwithstanding).

[1]: https://www.mbejda.com/mneo/


> the cost of change is so much lower at the idea and design stage

That's one of those software engineering facts that people spread as absolute truths that are far from absolute in practice.

It's mostly true, but requirements gathering can be a very expensive process too, and you can lower that cost with a working example. Also, UX designing can get off-track very easily if it's done in a vacuum.

Also related: for some problems, having your software tested before it's even written can bring a lot of productivity to the writing stage. (For some others, it will completely ruin everybody's productivity, so the idea that finding bugs at the "testing" stage is simply "more expensive" fails here too - you don't look at an unmitigated disaster and call it "more expensive".) For some problems, bugs in production are just consequence-free. And in way too many cases you simply can't design some valuable interaction without having people using your ideas in the wild.

TL;DR: that link is a very good rule of thumb, but beware of the exceptions.


Sometimes they do have this, but upper management often requires anyone under them to be replaceable, lest they have too much leverage in salary negotiations, which undercuts this. Hence it's not an official role, and it can easily be broken by the usual business processes that are insensitive to it. Until a methodology comes along with a specific method of hiring for such a role, this might not change. Corporate modelling likes to treat developers as interchangeable commodity units - a gatekeeper needs a lot of software-specific context that defies this model.


Since the aim of this 'gatekeeper' (maybe 'weaver' or 'conductor' (the orchestra kind) is a better term? the word 'gatekeeper' implies withholding access from a certain group or class, at least to me) is to constantly improve their library and solicit meaningful contributions from teammates (and potential teammates), you could argue that the main skill someone in this role has to have is the ability to carefully listen to (and balance) the sometimes wildly varying needs of various internal and external parties.

For example, Linus has to constantly monitor current CPU development in the industry to be able to improve his kernel and keep it relevant, while leveraging and arguing for e.g. a radical change in direction or approach for the work he and his teammates might complete (again, to stay relevant and be able to complete meaningful/exciting work together).


A step back maybe, but that's a lot of different technologies and skillsets required of a small number of people.

One thing to always keep in mind is the 'choose boring technology' mantra; prefer using existing technology (e.g. languages, API technologies, front-end libraries) over adding new ones. Every new tech adds complexity, or another node in a graph.

Also, avoid microservices. That is, don't build microservices to start off with, because you're most likely making the wrong abstraction, and you're adding a ton of overhead (inter-service communication, logging, tracing, environment setup, etc.). Splitting off services is fine as long as you can bring a strong argument in favor of it (e.g. performance, if one aspect of your application is hit 10-100x as hard).

But in practice, in most applications, you don't need microservices. You will need scalable services, but those don't have to be 'micro' per se. Also, don't get hung up on the 'micro' in the name; size should never be a reason to break up a monolith.


> e.g. performance, if one aspect of your application is hit 10-100x as hard

I'm not sure this one is a good argument for microservices. Having 100 copies of a microservice isn't any easier than having 110 copies of a monolith.

Any kind of competent code division is done because the divided interface is simpler than what you'd get without the division. So, go with the sibling comment here, and divide your code only if you have a very enticing interface that would be good to isolate and reuse. Making high-level architecture decisions because of operational details is almost guaranteed to lead you into disaster. You only get to make a few of them ever, so if you do it, it had better be one of your main differentiators.


It's cheaper when you reduce resource requirements.

If you're loading a bunch of code into memory that you don't end up using, you're wasting memory. Depending on how modular your code is, you may end up loading other external dependencies that don't get used (like initializing database connections). Likewise, for containers, you can usually make the container image smaller, reducing data transfer, Docker pull time, and the host IOPS needed to unpack the image.

A lot of those problems can be optimized around, but sometimes it's quite a bit of work if you're already on a platform or framework that makes assumptions.

If you're not deploying or scaling often and you have other mitigations like lazy loading to reduce memory, there may be no advantage.
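To make the lazy-loading mitigation concrete, here's a minimal Python sketch; the heavy_pdf_lib module and render_pdf function are hypothetical stand-ins for whatever heavy dependency you only sometimes need:

  import importlib

  _pdf_renderer = None  # heavy dependency, loaded only on first use

  def render_pdf(doc):
      """Render a document, importing the heavy library only when needed."""
      global _pdf_renderer
      if _pdf_renderer is None:
          # Processes that never render PDFs never pay this import's
          # memory and startup cost.
          _pdf_renderer = importlib.import_module("heavy_pdf_lib")
      return _pdf_renderer.render(doc)

Same trade-off as above: it helps in a monolith that bundles many rarely-used features, and it's irrelevant if every process exercises every code path anyway.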


I think you are repeating a strawman argument against microservices that doesn't always hold.

Microservices are about domains of logic and form nice clear boundaries. They don't necessarily need inter-service comms or much by way of environment setup/maintenance. We have an email microservice that was split up that way because the email logic was spread out everywhere and called from multiple apps. A microservice was a good way to encapsulate all of the email logic into a single place with a clean interface. We can make all sorts of centralised changes without affecting any of the apps. We can also scale it out as the amount of email we need to send increases.


Email just sounds like a regular service. Separate SMTP servers where you encapsulate all of your logic for email dispatch frequency, retries, etc. have been pretty standard for a long time.

> encapsulate all of the email logic into a single place with a clean interface

You can encapsulate and put stuff behind an interface with libraries too. This is not the value proposition of microservices.
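As an illustration, a minimal sketch of that same encapsulation as an in-process library (the class and parameter names are made up, not from the parent's system):

  import smtplib
  from email.message import EmailMessage

  class EmailSender:
      """All dispatch/retry logic lives in one place; apps import this
      class instead of talking to SMTP directly."""

      def __init__(self, smtp_host: str, from_addr: str, max_retries: int = 3):
          self.smtp_host = smtp_host
          self.from_addr = from_addr
          self.max_retries = max_retries

      def send(self, to: str, subject: str, body: str) -> None:
          msg = EmailMessage()
          msg["From"] = self.from_addr
          msg["To"] = to
          msg["Subject"] = subject
          msg.set_content(body)
          last_error = None
          for _ in range(self.max_retries):
              try:
                  with smtplib.SMTP(self.smtp_host) as smtp:
                      smtp.send_message(msg)
                  return
              except OSError as exc:
                  last_error = exc  # retry transient connection failures
          raise last_error

Whether that code runs in-process as a library or behind an HTTP endpoint as a service is then a deployment and scaling decision, not an encapsulation one.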


Parent said:

  don't build microservices to start off with
and you describe:

  because the email logic was spread out everywhere and called from multiple apps
so it doesn't sound like a counterpoint to me - you didn't start off with the email service, and only split it off after it was already established across multiple entities. Plus, you added a bunch of justifications for this, i.e. you could "bring a strong argument in favor of it" - so where is the strawman in your examples?



