It helps to read her previous post on the topic to understand her point. She complained about being accused of focusing too much on problems of scale as an ex-Google/FB engineer. And she suggests her colleagues don’t have the engineering chops or desire to dig deeply into a problem the way she does. I don’t think the author understands her colleagues’ motivations. It may be the case that some of them are “vendorops” coders, but I suspect it’s far more likely that at a company with resources like FB or Google, which is to say near-infinite resources, you can dig into such problems regularly. At a smaller company such technical deep dives may be intellectually gratifying for the coder but may not be worth the limited resources of the people who pay you. And you literally need to grab some off-the-shelf libraries and code on the company’s domain problem to save time & money.
Besides resources, for companies that do not operate at a planetary scale like Google, Facebook et al., I'd argue we have reached a point where a lot of the common problems have been thoroughly solved in multiple ways for a while now and you can just pick one. So it is not only about saving time & money. Outside the core business, and if the evaluation of third-party products has been done decently well (which I think is more often the actual problem), there is often basically zero need for, or benefit from, coming up with something custom.
This. She is used to working in an environment that is not (to a first approximation) resource constrained, in terms of time, money, and, probably most importantly, domain experts she can draw on to create from scratch the bits that she can't. That is not the case outside of planet FAANG.
From the outside, big organisations have massive resources and access to experts. But inside there is intense competition for those resources and people. The people are expensive and there are many opportunities, so you need to justify spending their time, especially in the FAANG impact ecosystems. Despite this pressure people may misallocate their time, and it can look weird if you don't hear the justifications, but they really are trying hard to spend time on the right things, and they do feel constrained: there are so many rapidly growing areas that need investment that there's actually always a shortage if you think about what could be done.
Using a library versus building internally is a tough trade-off and can have long-term consequences. I think everybody is comfortable with the downsides of building. But we sometimes forget the costs of dependencies. My framework is that introducing a new concept, whether written internally or through a dependency, has a cost. Each class hierarchy, external tool, build system, etc. has a cost, and we must weigh this cost against the benefits. Dependency sprawl is expensive, especially in languages like Python or JS where libraries break backward compatibility on a frequent basis. I like that Rachel is bringing up the long-term cost of dependencies; they can often be more expensive than just writing a simple function or HTTP call without supporting libraries. Building may be cheaper.
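To make that last point concrete, here's a minimal sketch (Python, standard library only, hypothetical endpoint) of the kind of one-off HTTP call that often ends up pulling in an entire client library:

    import json
    import urllib.request

    def fetch_status(url: str, timeout: float = 5.0) -> dict:
        # One small GET using only the standard library: nothing extra to
        # vet, pin, upgrade, or patch when it breaks backward compatibility.
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))

    # Hypothetical usage; example.com stands in for whatever service you call.
    status = fetch_status("https://example.com/api/status")

Obviously this ignores retries, auth and the like; the point is just that the zero-dependency version of a small problem is often a few lines you fully own.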
Some people just hate managing systems they can't fix. Other people enjoy the challenge of reverse engineering complex systems and squeezing the best out of them. It is only reasonable to ask for the job expectations to be explained upfront. Rachel's post about onboarding https://rachelbythebay.com/w/2020/05/22/boarded/ goes into this idea in more depth. There are so many differences even in Silicon Valley. Companies should share more honestly what the day-to-day will be and their appetite for building. This would restrict recruiting funnels upfront but make people happier.
The problem is that inside a company there are many perspectives. The representatives of the build faction may achieve a temporary ascendancy and will obviously try to hire then and paint a rosy picture, but then the buy faction will see the hires and become highly incentivised to point out the lack of immediate results and trammel scope, causing the conflict Rachel is writing about.
Yeah, I think that part of what makes a good engineer / good employee generally is the ability to recognise and work within the constraints of the organisation / team without spending all your time whingeing or proselytizing for the One True Way of doing x. This is quite difficult as developers as a breed have a tendency towards binary thinking and engaging in doctrinal disputes.
I actually agree that dependency sprawl / hell is a problem best avoided... which is why I avoid web development like the plague.
I think Rachel's broader point that the terms we use to describe developers / software engineers are very broad and sometimes unhelpful is right. I think she's being a snob in wanting to retain the high-status term for what she does, while dismissing the stuff that she obviously struggles with as being not engineering, but I do think we should take care to distinguish between high-level engineering that largely consists of integrating existing libraries / modules etc. and the lower-level engineering that she engages in, because they are different skills.
Some of us even have jobs that are a mix of high and low level engineering and we have to have the good judgement to choose when to do one or the other.
I think it's valuable in the sense that it creates value. It gets things built that work well, and fit into the bigger picture needed by the company. It might even be one of the essential skills in building a successful tech company.
But I don't think it's valued much on the open market.
As in, it's hard to be recognised for that kind of skill. Established, larger companies tend to have a structure in place and, at least on the open market, try to fit people into narrower roles. Even if there's a laundry list "tech stack", it's usually a narrow role.
As ritchiea said, it can seem as if the only place where the range is a good fit is at early-stage startups, with plenty to do and good judgement needed, but limited earnings on offer.
It depends? Certainly for some companies but I wouldn't be the right fit if you need a highly specialized infrastructure person at Netflix or AWS or something. Sometimes it can feel like I'm only a good fit at startups which isn't great for earnings potential.
Well said. This is exactly the crux of the problem. Sometimes it's appropriate to build in-house, and sometimes it's appropriate to glue together existing components. It really depends, and it's unproductive to pontificate about one being better than the other.
I think the OP has a fair point about one mode of development being substantially different than the other, but I don't think it's appropriate to partition them into separate roles. Part of engineering is knowing which path forward is most appropriate under uncertainty, given information about the team's skillset composition, available resources and time constraints.
A good software engineer is capable of holistically considering the options. If your third party library is considered like a paved road, this involves knowing when your vehicle can tolerate veering off the paved road, and knowing when you need to build a new road yourself.
Companies being honest about what a job will be like is so critical. I recently had a company tell me they wanted me to wear many hats, lead a team, talk directly with customers, respond to customer needs, and build things I believe the customer wants/needs. When I actually started working it was months of doing very micro-managed, pre-defined projects the founder/technical lead already had in the backlog. I brought up the discrepancy between the job I was promised and what the actual job was (straight execution in a feature factory that didn't even have decent test coverage!) and they told me they couldn't give me the responsibility they promised up front.
Now maybe you read that and think I hadn't earned the kind of responsibility I was asking for. But 1) I had it at other companies previously, bigger companies than the last job 2) again, it was the type of responsibility they teased in the hiring process. I ended up leaving after a few months for a role at a company that actually offered me a leadership role with autonomy.
I find the idea that "engineering" is really about BIG SCALING really tiresome. Sure, careless throwing together of random libraries, packages, and stack overflow snippets goes against sound engineering principles, but so does nearly everything Google does (I would say "everything" but they do employ careful engineer Junio Hamano, though I'm pretty sure he gets to set his own culture himself).
This is a byproduct of the engineering culture being co-opted by the cult of agile.
For some domains, the 80% solution is perfectly fine to launch with. For others, it is quite sloppy and you end up with paper cuts in normal use. Problems arise when you, as a consumer of a library, expect a 100% solution and discover it's only an 80% solution. Worse, most libraries do _not_ tell you what design decisions were made.
She already addressed what you said in her previous post. The original issue wasn't about "BIG SCALING" but just being designed and implemented safely and sensibly; the library used failed at smaller scales too, just with less frequency.
Having worked at a FAANG-level company in systems contexts, I think that most people with similar experience would say Rachel is spot on.
If I may, let me offer what I believe is a different interpretation of what she is saying.
Yes, sometimes there are really solid libraries that solve the problem you are facing well, but often engineers will reach for a Swiss army knife when all they need is a screwdriver. It has a screwdriver on it, but because there is so much else there (trying to be all things for all people) the screwdriver isn't even that good.
Moreover, solving a problem quickly is not the same as solving a problem forever, and we are responsible for every instruction we import, at some level. If the problem is small (see left-pad, etc.) the smaller total cost of ownership is often to write the specific implementation you need, if you are capable.
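left-pad is the canonical case: the entire dependency can be replaced by a few lines you own outright. A rough sketch in Python (which, for what it's worth, already ships this as str.rjust):

    def left_pad(s: str, width: int, fill: str = " ") -> str:
        # Pad s on the left with the fill character until it reaches width.
        # Same spirit as the infamous npm left-pad package; Python's built-in
        # str.rjust(width, fill) already does this.
        if len(s) >= width:
            return s
        return fill * (width - len(s)) + s

    assert left_pad("42", 5, "0") == "00042"
    assert "42".rjust(5, "0") == "00042"

Ten lines you can read in full, versus a transitive dependency you have to trust and track forever.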
The screwdrivers on a Swiss army knife are of good quality, but you can only have so many sizes available and because of the way they're attached they are cumbersome to use in certain situations.
Multi-tools like Victorinox Spirit come with a ratchet and bits. Leathermans also have bits, but they typically attach directly to the tool and therefore suffer from the same limitation regarding ease of use.
This author consistently omits important context from their posts. When this is pointed out there is always a rebuttal (however unconvincing), never a change of perspective. When new insight is gained, it always confirms or elucidates an opinion that the author already held (as in this post). Introspection and reflection seem absent from this blog. Maybe that's intentional, I don't know.
Resources are one obvious constraint. The stage a company is at in its lifecycle is another. There was a time when Google had security issues so egregious that if you bring them up to current employees you will be accused of lying. Company growth rate is a constraint. Why solve a problem perfectly now if you know you'll have to solve it again in a year when the company outgrows the current solution? Employee workload and performance assessment is another constraint. Employees will only give the company so much time, and if the company doesn't allocate enough of that time towards keeping the house in order then the house will not be kept in order. Cultural stigma is another constraint. Sometimes people are not willing to say why they are doing (or not doing) something in a certain way.
If you file an internal bug, come away dissatisfied with the fix, and conclude that the fixer is some lesser-than-engineer who is lazy and doesn't care about doing things the right way you might be ignoring any of the above constraints or many more. Or they could be bad at their job. There are plenty of people like that in this industry, too.
Ghetto-izing code reuse in programming as “VendorOps” and associating it with the Peter Principle seems like a great way to discourage code reuse and reward people who would rather rewrite code unnecessarily than read someone else’s existing code (notoriously hard). Do you really want to make it a badge of honor to reinvent the wheel?
I also would venture this will further marginalize people in tech who are already marginalized. There’s enough hierarchy in programming already (and not particularly correlated with skill). IMO!
Also: the position (my summary) “People who disagree with my last blog post don’t REALLY know how to program and are covering up insecurity over their own incompetence” is an awfully convenient one to take. Maybe this is not as objective an assessment as it could be.
I had no problem with the concept of what she calls “VendorOps”. In my last two jobs, after moving up to a position where I was a developer but also either officially or de facto responsible for deciding when to build and when to outsource, my job was not to develop software. It was to add business value. I am just as proud of the code that I had sense enough not to write as of the code that I did.
> Do you really want to make it a badge of honor to reinvent the wheel?
The performance review processes of FAANG-like companies tend to vastly favor building new things over any other type of code contribution. If you stay in those environments for long enough it's easy to internalize the ideology via those "best" practices in a way that feels like it was your own independent decisionmaking.
"What do you mean, throw off center? That's ridiculous! You throw the ball straight at their head if you want to hit them in the head. Everyone knows that."
At the company where I work, the lead, who also writes the most code, is also the first to recommend we use a vendor, like split.io or Rollbar or SendGrid or otherwise. It's not that he's lazy or stupid; he's focusing on product and leaving little code things to vendors. He's right, too. If we had someone spend 10 days figuring out how to correctly push errors into Elastic and then create a bunch of dashboards for it, we could simply save those 10 days, focus on product, and put the savings toward a 5-year subscription to Rollbar instead.
Just a quick edit - if we were working at Google, of course we wouldn't use another vendor. Everything within Google has a Google solution. Rollbar, split.io, SendGrid - Google literally has an internal product for all of these things.
I'd venture that if you're going to say, "suggesting this marginalizes people," you should back it up with why you think that. It does no good for anyone to say something like that (and in doing so, heavily imply that the person making the statement is some kind of icky pariah) and leave it at that.
It’s clear the author of the original post is taking an elitist stance on what it means to work as a coder and trying to brand the work of others as less important & give it a different job title. Essentially saying “a real programmer does the job in this specific way I say a real programmer does.” It’s not the kind of attitude that benefits anyone. It’s the kind of attitude that does push people out in an industry already with a reputation for pushing people out.
The irony is that I’m sure the author experienced elitism and discrimination when she started her career in software development and now doesn’t realize she is propagating the same.
Yea exactly, she seems to have reached such a level of comfort with her technical proficiency and years of experience that she's not even seeing how this kind of talk can be discouraging to a junior who doesn't have the depth of knowledge she has.
I don't think it is elitism according to the meaning of that word.
Elitism would be saying "this group of people should be in charge because they know what they are doing; that group of people should not be in charge because they don't".
I think the author is in fact correctly recognising that different people enjoy a significantly different kind of software work from each other, and are unhappy with the other kind, and both kinds are valuable.
Recognising when there are different kinds of work and allowing people to choose for themselves which one suits them is not elitism.
Imagine if someone on a factory floor said "I've just realised that welding and painting are different kinds of work and most people greatly prefer one over the other. Perhaps we should label the jobs and let people gravitate to the kind they like so people aren't as unhappy."
Should we be up in arms and call that elitism? No, we should ensure both roles have equal status and recognition, but not hide the differences.
However, "VendorOps" does sound like a slightly derogatory word, so a better word might help.
There is no equivocating here - the assertion is that the folks in this category are incompetent and, "almost rational," whatever that means.
> "I've just realised that welding and painting are different kinds of work and most people greatly prefer one over the other. Perhaps we should label the jobs and let people gravitate to the kind they like so people aren't as unhappy."
Except it's more like, "We hire people to design and assemble complex machines and some people make their own tools while others buy them. Anyway, the people who buy them are basically incompetent and when they claim to prefer focusing on producing the machines rather than tools for building them it's because they are scared of losing their jobs. Let's give them a new title, ideally that trivializes the work."
This was my take as well, the author immediately attacks people and not the arguments and classifies them as lesser. Frankly, I don't know what this person has contributed to software engineering that merits them speaking this way.
I try not to be in the business of deciding who has and has not earned a particular mode of speech, but I think the consistently-high volume of votes and discussions on Rachel's posts should show that the substance is there, many many times over :)
Votes only mean that they write things that agree with popular opinion. I find it funny that you say that accomplishments don't matter and then immediately use the amount of votes and discussion to give this person credibility.
Anyway, if someone writes an article being a proponent for elitism they should have accomplishments to back it up. This person has done nothing and they don't deserve to be writing this kind of article. Maybe if this came from someone like John Carmack I would listen but I don't think he would be dumb enough to write this.
When did I say that accomplishments don't matter? I worked in the author's sphere of influence at a previous previous company and witnessed them firsthand on a regular basis. I would agree that the corporate "fog-of-war" makes it easy for the accomplishments of very skilled people to disappear within the Googles and Facebooks of the world, but for certain people I imagine being away from the mouths of the outside world makes it worth being away from their eyes too. Popularity is just one of those things that helps me trust that something may exist even in cases where I can't personally see it. The crowd aren't always right, but I think their approval can often signify my need for a second or third look at something I've reflexively dismissed. Attitude is a tangential but separate detail that would be inappropriate for me to have opinions on, but I can at least say that if there is any it is deserved :)
My takeaway was that "software developer" is not a very useful job description anymore. CRUD, web dev, games, HPC, embedded and other fields require vastly different skill sets. Changing between these fields can be like starting your first job out of college, learning to use not just different tools but also focusing on very different priorities.
("$language developer" is only slightly more useful, especially for languages which are used across many different fields such as Python (web dev and science), C# (web dev and games), C (games and embedded), etc.)
The article revolves around the continuum from building everything yourself to importing a third party library whenever it can speed up delivering the current feature just a little bit. It's pretty clear by now that the many software development fields fall into different parts of that continuum.
> "$language developer" is only slightly more useful
I would argue that this is not a useful distinction at all. Any good developer should be able to pick up any language quickly. At the companies where I've worked, even the interns have no expectation of knowing the language before they're hired, and the vast majority of them learn quickly enough to ship useful code within their ~3 month internship.
> Any good developer should be able to pick up any language quickly.
I see this repeated very often without support, and I think it should be challenged. In my experience there are cases where it's true, and cases where it isn't.
Actual example: I have met highly competent software engineers who were (primarily) excellent Python programmers. They knew OOP well, they didn't typically over-abstract, but they could still leverage modular code for reusability, they tested early and often, they could dig into an existing codebase and maintain it, they could critically and constructively review others' code, they had a deep knowledge of the Python standard library and various domain-specific frameworks, etc. They were really good at all of this.
But by their own admission, they were crap at C++. They didn't really know memory management at all, and they didn't really get the whole "ethos" of C++. Moreover, they periodically tried (and failed) at picking up functional programming.
Now I believe if they were sufficiently motivated they could eventually pick these languages up. But they wouldn't "quickly" ramp up on a production codebase written in C++ or Clojure. The idea that they're not a "good developer" because they wouldn't be able to quickly become productive in one of these radically different languages is therefore incongruous to me. I'm sure they could dive into a Ruby or JavaScript or even Java codebase. Or maybe C++ if it was strictly modern, 17+ and avoided raw pointers like the plague. But in general, no. It would take a lot of time, because some languages come with a lot of baggage that isn't just about the syntax. For someone who had always relied on Python's pip, for example, the process of vendoring and importing C++ header libraries is already an obstacle.
The reason I'm going on about this is that it seems like received wisdom and is often stated as an accepted aside, but I really think it needs more nuance. Otherwise we run the risk of defining "good" software engineers by what is perhaps just a No True Scotsman.
I think we conflate the language learning curve with the problem domain learning curve. Is what throws people about C++ actually the language, or is it "thinking with pointers" and "concurrent programming with shared memory"?
My CS education taught me those things (at least to a beginner level) in C. I suspect this would give me a major advantage at "C++" over someone who had never seen them. Even though we are both newbies to C++ itself.
Dependency management is a chore. Chores are different across language ecosystems, but it doesn't exactly stretch your mind to learn how they're done in different areas. Whereas minds really do catch on problems like pointers, locks, recursion, etc. and we can legitimately wonder whether someone is going to become okay at them anytime soon, or ever. I saw friends wash out of CS over these topics.
> But by their own admission, they were crap at C++. They didn't really know memory management at all, and they didn't really get the whole "ethos" of C++.
That seems normal and expected. Why would these people know memory management and the C++ "ethos" if they've never worked with C++ in a professional capacity?
> Moreover, they periodically tried (and failed) at picking up functional programming.
Presumably the reason they didn't learn FP was because there wasn't sufficient motivation and assistance. I work at a company that does all FP and primarily hires people with no FP experience. We have a week-long FP training and tech leads spend a few hours per week with each new person helping them get ramped up. Of the hundreds of people we've hired in the past few years, I've never heard of anybody who failed to learn the language and our codebase style. There are plenty of reasons that certain people haven't worked out, but this is not one.
Well what I was driving at (and again, by this person's self-assessment) is that you wouldn't learn all the meta of a language like C++ in a couple of months. I do believe anyone who can code is capable of learning it eventually, but I have never seen someone ramp up on that language from an interpreted one quickly. Which is not to say it's a badge of honor to code C++, it just is what it is.
I think if you talk about the narrow scope of learning a language's syntax for greenfield projects, it might be true that people who know one language well can quickly learn any other language. But some languages force you to learn so much other "stuff" before you can be professionally productive in it, and I think there's some danger in just repeating that any good developer can pick up any language quickly.
My bottom line point: if I saw a developer who I knew to be very competent fail to quickly ramp up on a very different language in a professional setting, I probably wouldn't revise my assessment of them being "good."
> you wouldn't learn all the meta of a language like C++ in a couple of months.
I think this reinforces OP' point though: what we really need are descriptions more specific than "software developer" but less specific than "$language developer"
> some languages force you to learn so much other "stuff" before you can be professionally productive in it
And it's exactly that stuff that should be the focus of describing the role. e.g. you're looking for a low-level network software developer with proficiency in memory management. If you were recruiting someone to help you with a Go or Rust codebase in that domain, you wouldn't pass over someone with a ton of relevant experience via C++ who hadn't spent much time with Rust/Go yet.
Focusing on the language rather than the skills/application (even when there's a heavy correlation between the language and associated skills) excludes good candidates and includes irrelevant ones
> if I saw a developer who I knew to be very competent fail to quickly ramp up on a very different language in a professional setting, I probably wouldn't revise my assessment of them being "good."
For me, it would depend on what support systems are in place. If they're asked to learn on their own with no assistance, sure this failure doesn't mean much. But if they're put in a team full of experts on the new language who are eager to help them learn, and they still fail, then yes I'd update my opinion.
The trick here is to know which skills you need and which ones cross over. There are lots of families of languages, and maybe that's a better way to evaluate. If you need a C programmer, then you want to know they can manage memory. If you want a Haskell programmer, you want to know they can handle difficult functional abstractions. So there are certainly cases where you need to worry whether they have the transferrable skills, but it doesn't necessarily mean they have to know your exact language.
That's true. A large proportion of people who are extraordinarily good at anything will never acknowledge how good they are but instead will talk about how much there is left to learn.
APIs are documented (hopefully) and can be rote-learned or looked up. If someone has the general skill of "learning APIs" then they're going to figure out Android or QT or Rails or Spring or whatever the problem requires. Not as quickly as someone who already knows, but I wouldn't doubt their ability to ramp up.
I'd be more worried about paradigm shifts like higher order functions, manual memory management, OOP, concurrency (also between "share by communicating" and "communicate by sharing"), glue code vs. algorithm code, monolith vs. microservices, embedded vs. server side.
There is a lot more that goes into learning an entire stack than “documented APIs”. Are you saying that because I know C# I could go out there and start developing games in no time since Unity is well documented?
So you would really hire me to be an Android developer who has only written 20 lines of production code in Java over 15 years ago and the only mobile development I’ve done is on ruggedized Windows CE devices almost a decade ago, over an experienced Android developer?
Let’s say I know AWS pretty well (which is the truth I work there as a consultant), and I do most of my AWS related scripting and development on top of AWS in Python. How useful would that combination of knowledge be if you told me to do the same type of thing on Azure or GCP? I wouldn’t even know where to start, I don’t know anything about either.
On the other hand, if you told me I had to build the same sort of solution in Rust, a language I’ve never even seen, I’m sure I could pick up the language in a weekend, include the appropriate SDK and be just as efficient as someone who has been working with Rust for years building the same type of solution on top of AWS.
>I wouldn’t even know where to start, I don’t know anything about either.
I'm willing to accept that your mind works very differently from mine, but this is extremely surprising.
I take my concept-map of AWS, superimpose it on GCP, and start connecting the most similar nodes. I go looking for the VM service, the DB service, the load balancer service, the cache service, the firewall rules. I read the getting-started guides for each, try to do with them the things I would do in AWS, and check the reference manual when I get stuck. Of course I may get burned if the corresponding services turn out to differ in important but subtle ways, particularly "folk wisdom" ways like quality and maturity. But I can definitely get started.
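As a rough illustration of that superimposing step, a first-pass mapping might look something like the sketch below (my own rough equivalences, not an official list, and deliberately ignoring the subtle differences I just mentioned):

    # First-pass AWS -> GCP concept map: a starting point for reading the
    # getting-started guides, not a claim the services behave identically.
    AWS_TO_GCP = {
        "EC2": "Compute Engine",             # VMs
        "S3": "Cloud Storage",               # object storage
        "RDS": "Cloud SQL",                  # managed relational databases
        "Lambda": "Cloud Functions",         # functions-as-a-service
        "DynamoDB": "Firestore / Bigtable",  # NoSQL; closest fit depends on use case
        "ElastiCache": "Memorystore",        # managed Redis/Memcached
        "ELB": "Cloud Load Balancing",       # load balancers
        "IAM": "Cloud IAM",                  # permissions
    }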
>So you would really hire me to be an Android developer who has only written 20 lines of production code in Java over 15 years ago and the only mobile development I’ve done is on ruggedized Windows CE devices almost a decade ago, over an experienced Android developer?
If I thought you were better at CS fundamentals and at least average at learning things, absolutely. Not to start an Android team - I'd want someone who has already learned the undocumented gotchas to be reviewing your code. But if I think you have the general muscle for learning software ecosystems, whether it's yet been applied to this exact one is a detail.
> I take my concept-map of AWS, superimpose it on GCP, and start connecting the most similar...
And then you would end up with a costly unoptimized solution that didn’t take advantage of what GCP had to offer just like the lift and shift “consultants” who first introduced me to AWS when I was a dev lead. They were a bunch of old school infrastructure guys who passed one multiple choice exam and thought they knew AWS (at the time unfortunately they knew more than I did).
They set up a few VMs and load balancers, and called themselves “moving us to the cloud” when in hindsight our product could have been much easier to maintain if they had any clue about how to leverage any of the native services that AWS had to offer.
(I’m talking way out of my league in the paragraph below).
If you didn’t have any GCP experience, how would you know whether setting up VMs was the right answer or whether to just use Google Cloud Functions? How would you know whether Firebase would meet their needs better? Would you know whether a particular service met HIPAA compliance? Would you be sure that you set up your permissions securely? You went right to setting up VMs because that’s all you know about GCP. That’s all I know, too. Do you even know what you don’t know? Can you be sure that your decisions won’t cause a security breach? Or cause the company to spend more money than necessary? Of course, as I said before, when it comes to GCP/Azure, I don’t know what any of the services do.
I’m sure someone who knows GCP/Azure could look at a problem and tell them a better solution than “set up a few VMs”. Just like I could with AWS.
> Of course I may get burned if the corresponding services turn out to differ in important but subtle ways, particularly "folk wisdom" ways like quality and maturity. But I can definitely get started.
Knowing “where to get started” is not good enough when you have to worry about all sorts of compliance issues... and your (theoretical you, I have no idea what you know or don’t know) solution would end up costing more, just like every “lift and shift” solution with no optimizations seems to.
> If I thought you were better at CS fundamentals and at least average at learning things, absolutely...
So instead of hiring an experienced Android developer - there are plenty - you’re going to trot someone in front of a whiteboard and have them reverse a binary tree?
When I was either responsible for hiring or could give the thumbs up or down in the real world at small companies where each IC was expected to hit the ground running, we weren’t going to spend six months letting them ramp up when we were growing fast and introducing new features (microservices) that could bring in revenue.
They just meant in terms of "x language is typically used for tasks y, z".
Of course you can do most all things in most all languages, but often times you can infer CRUD/web dev from PHP, for example, and would be far less likely to come across a PHP game developer.
Yeah, imagine if you were a biologist who writes their papers in english and someone expected you to do a little meteorology because they're both natural science and english is a commonly used language in both disciplines.
> Changing between these fields can be like starting your first job out of college
I don't think that's true unless you've done a bad job at learning in your previous job. There are lots of thing you should be learning that aren't directly related to the domain of your code: how to debug existing codebases, how to read other people's code, how to do code review, general coding patterns, architecture principles, how to prioritize your work, how to work with other people, etc, etc, etc.
All of my jobs have been in very different domains, but the experience at each has made me much better at the next.
The title says "spectrum," but a lazy read of the content could suggest that it is a dichotomy that is being described.
The situation described by the author is definitely a spectrum, however. I currently find myself in a position that is a bit far for my liking toward the "VendorOps" end of the spectrum, and I find the name given to this spectrum by the author to be immensely useful.
I've previously defined the spectrum to myself as a "build vs buy" spectrum.
My current job is not exactly "VendorOps" in the complete sense described in the article, but it is a bit far in that direction along the spectrum for my liking. I write and architect code projects, I write unit tests, and I build and deploy it all within CI/CD pipelines that I myself have built and designed.
But it's all in the service of gluing 3rd party software and vendor implementations together, and I do most of this work myself, with little interaction or overlap with the one other developer that is employed full-time at our company.
Much of my previous experience was in what I would describe as "Building Proper Software Projects Together With A Team." And I miss that, and will seek that out in future roles. And having a proper (and publicly defined) name for this concept will be immensely useful.
The problem is that having a proper and publicly defined name for the concept could pigeonhole you. It might make it harder to escape in this world of hyper-specialism.
One of the primary reasons I'm trying to move on from being a software engineer is the expectation that I'm supposed to be fluent in dev-ops type work as well. That domain, for me, is something I have absolutely no interest in, and not much of an aptitude for.
The problem is that managing deployed environments is a completely different skill set and problem domain. Is it complementary? Sure. Useful to have? Yes. But it's ultimately orthogonal. I can still write great software without having to give a shit or know much about how our AWS environment is configured, much less do that work myself.
Being a programmer is a spectrum, yes, but it's multiple spectrums. Build vs configure is one. Product focus vs technology focus is another, one I happen to land much closer to the product side. Unfortunately, I think the pendulum is swinging the wrong way for me on that one, and it's soon going to signal the end of my programming career.
There was another vision of devops where it wasn't orthogonal: software engineers would write software to manage itself. Deployment, monitoring, resiliency, etc. would be woven into the application instead of being someone else's problem. Management would be done through APIs instead of config files and shells.
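As a minimal sketch of that idea (Python standard library only; the /healthz path and port are my own hypothetical choices), the application itself could expose a tiny management endpoint instead of leaving health reporting to a separate ops layer:

    import json
    import threading
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    START_TIME = time.time()

    class ManagementHandler(BaseHTTPRequestHandler):
        # Health and uptime are reported by the app itself, not bolted on
        # from the outside by whoever runs the deployment.
        def do_GET(self):
            if self.path == "/healthz":
                body = json.dumps({"status": "ok",
                                   "uptime_s": round(time.time() - START_TIME, 1)})
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body.encode("utf-8"))
            else:
                self.send_response(404)
                self.end_headers()

    def start_management_api(port: int = 8081) -> None:
        # Runs alongside the real application code in a background thread.
        server = HTTPServer(("127.0.0.1", port), ManagementHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()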
But containerization kind of cleaved the world between "making containers" and "running containers." Even if the same person does both, they don't look much like each other. Distributed systems problems are hard, and usually get outsourced to battle-tested commodity services (k8s, zookeeper, consul, etc) which need their own management. Knowing your own code inside and out doesn't really help you debug a loss of quorum; the consensus protocol is "over there." And the APIs turned out to be so complicated that we built config files and shells for them anyway (e.g. Terraform).
Due to a reorg, one of our best software engineers has been placed in an architect role. He’s doing a fantastic job but I wonder if this is the type of work he wants to be doing (to my knowledge he did not seek out the role). The role is designed as a split architect/dev role but he hasn’t written a line of code in months. I’ve been meaning to reach out and see how he’s feeling about things and to make sure we address any concerns (this is somebody we absolutely want to keep on the team) but just haven’t made the time to do so. I now have a sticky note to reach out on Monday, thanks for the reminder!
There are plenty of other interesting computing domains that are technically demanding and meet real needs.
But, web dev seems to suck up all the air in the room, making you think that all jobs are that.
I focused into compilers, systems programming, and security. It took awhile to shift, find like-minded people, and know where to look for work. But now I see multiple avenues forward. Before, it looked like I had to leave tech because I couldn't stand web dev.
I've been to several DevOps conferences, and I was particularly struck by conversations with people in the audience who said things such as:
• "I work in DevOps managing a pipeline. I don't know how to program, but I was thinking of learning a language."
• "I write Python as a senior engineer and work closely with my company's devops systems. You write C? Wow, that's supposed to be really hardcore isn't it."
There clearly is more diversity in these lines of work than I would have guessed before attending.
I think I'm closer to you on the spectrums you mentioned. While I find the author's actual writing to come off as narrow-minded and self-aggrandizing, I do agree with the title itself.
There are some companies and teams that are looking for product focused devs that don't need to work on devops. If that's what you enjoy, look for that. I know finding another job isn't the most fun thing but it might be easier than giving up on your current profession entirely.
I just took a job that should put me on a management track, but I had been fishing for product jobs as well. I think one way or another this will be my last programming job.
If the mgmt thing doesn't work out, (the opportunity I'm expecting doesn't open up, or I just find out it's not a good fit for me), then I'm not sure what I'll do.
I once told a friend of mine that I find "high level thinking" easy. He scoffed at me asking: "how is that easy?"
In my job, high level thinking is that you look at a diagram with at most 10 different concepts that you already vaguely know, and on a high level that is good enough.
For example (I'm making this up):
- You have a client sending a request
- Which has a server receiving it that relays the request to the specific service
- But before that happens it is sent to the auth service and the auth service responds back to the server that the user is logged in
- The server then relays it to the specific service that the request needed to go to
- That particular service needs some info from yet a different service
- That different service sends the info back to the first service
- The first service sends everything back to the server
- The server does some final serialization (e.g. no silly fields should leak to the client) and sends it to the client
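The same made-up flow, sketched as code (all the service names and fields are hypothetical stand-ins):

    from dataclasses import dataclass

    @dataclass
    class Request:
        token: str
        order_id: int

    # Stub services standing in for the boxes in the flow above.
    class AuthService:
        def is_logged_in(self, token: str) -> bool:
            return token == "valid-token"

    class OrderService:
        def get_order(self, order_id: int) -> dict:
            return {"id": order_id, "item_ids": [1, 2], "internal_cost": 9.0}

    class PricingService:
        def get_prices(self, item_ids: list) -> dict:
            return {i: 10.0 * i for i in item_ids}

    auth_service = AuthService()
    order_service = OrderService()
    pricing_service = PricingService()

    def handle_request(request: Request) -> dict:
        # The server: check auth, relay to the specific service, let it pull
        # what it needs from the other service, then serialize for the client
        # without leaking internal fields.
        if not auth_service.is_logged_in(request.token):
            return {"status": 401}
        order = order_service.get_order(request.order_id)
        prices = pricing_service.get_prices(order["item_ids"])
        public_order = {k: v for k, v in order.items() if k != "internal_cost"}
        return {"status": 200, "order": public_order, "prices": prices}

    print(handle_request(Request(token="valid-token", order_id=7)))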
This is the most elaborate example I could think of, most of the time it's a lot easier, but I also find this type of thinking easy.
He responded with: oh, that's not what I meant by high level thinking. In our case we need to know specifically: what protocols are being used (is it TCP or UDP, and why?), how services make themselves known to each other (e.g. broadcasting?), what component lives where in the world? And so on.
It turns out that my thinking was too high level and relatively useless in the field that he works in (he doesn't have anything to do with web dev and does something more low level, more performance related and more network related).
So that was quite an experience for the both of us that we meant something different with "high level" and that statements such as "I find xyz easy" can come across as very arrogant to the other person because to them you're kind of saying something impossible.
Yes it is also interesting that the same "underlying" physical layer can be abstracted (cut up, aggregated) in different ways. They are all different ways of seeing the same thing.
That statement cuts so deep. In my experience, this is true for all communication. I've experienced multiple times that I'm not speaking the same Dutch (native) or English (second language) as the other person and it's always quite an upsetting realization.
If it is a spectrum, why keep insisting on putting arbitrary labels on parts of it?
Every job has a different combination of tasks that make sense to be externalized and those that don't. Even the most NIH people will draw a line somewhere and won't do everything from scratch. And the combination changes for the same job over time as the product/project/company evolves.
Trying to capture the exact breadth of a given job in the title alone is in my opinion bound to just lead to endless bikeshedding.
But I'm wondering if, long term, it ceases to be a spectrum and becomes clearly distinguishable areas: people writing the infrastructure, libraries, frameworks, etc., and those who combine them into the final product.
I never comment on Hacker News posts but I found this blog post to be so insightful that I had to give my two cents. VendorOps really is a different job, and as a "senior" dev in my career I can say that it is a job that I would happily take at this point. I work for a small startup and there is no end to the AWS, Docker and K8s shenanigans we are called on to do day in and day out. The problem is no one wants to volunteer to do this kind of work, so the CTO (the most experienced engineer on our team) ends up doing most of it himself. There are paid consultants who will gladly come in and Terraform all your shit, but we don't hire them. While some of us may not consider that "real engineering," these professionals are essential to the success of your startup. Once they put the pieces in place, your devs can run with a fully automated CI/CD pipeline and deliver the features your customers want. Then as adoption starts to take off you get into observability, Prometheus, Grafana and the five nines. This is the point when the agile rock stars start to burn out or bail and the need to bring in experienced SREs becomes inescapable. This is the VendorOps state and it is the pit of success you want your startup to be in, even if you yourself have no interest in partaking in it. There will always be a place for brilliant people who need to work on the hardest problems, but let's face it, a lot of us are not that. I know I am not. A lot of us are journeymen and women who just want to put their kids through college and hopefully retire comfortably.
I don't quite understand why we need to invent a new word for it. There are people who program and optimize all day and write clever one-liners and create libraries and frameworks only they understand (programmers), and people who collaborate & have a holistic view on software development, and as a result actually get stuff done in a solid way (engineers).
Yes, you can tell what I think about this article (and the one before it).
If your goal is to ship a product, there is no reason to reinvent anything that is not your core business. I don't see the author of this article writing their own OS from scratch, so why stop there? Why should all the auxiliary libs and frameworks be created from scratch? Because they don't like all the communities around the existing ones?[0] If everyone around you is the problem... maybe it's you who is the problem. I've never had any pull requests with fixes or minor features rejected from upstream, so my view may be biased.
I prefer hiring and working with people who understand the value of contributing to OSS instead of building their own little castles.
> and people who collaborate & have a holistic view on software development, and as a result actually get stuff done in a solid way (engineers).
Part of a "holistic view on software" is recognizing how much garbage and reinventing of the wheel there is out there. Part of being an engineer is recognizing trade-offs: dependencies come with costs of their own, so it only makes sense to use them if you expect to get more from them than you're going to pay for them. And pay you will - dependencies have to be understood, managed, upgraded and deployed.
Rachel writes from the POV of reliability engineering; in her story, her problem is as much with a shitty dependency being used as with in-house people who consider vetting their dependencies and collaborating with dependency vendors to be outside their job description. But both are, in fact, a part of good engineering, and are a part of the price you should be paying when you link against a third-party library.
Couldn't one argue that most open-source projects are born as someone's "own little castle"? If people didn't build their own little castles, we wouldn't have, for example, UNIX.
I agree about hiring and working with people ("engineers") who take full advantage of existing tools and libraries; and can contribute to well-documented and battle-tested libraries in the community.
But, as someone who (also) fits the description of a "programmer", I see the value in building our own little castles for fun and profit, to create libraries and frameworks for our own purposes. If existing solutions don't quite do the job, someone has to start these well-documented and battle-tested libraries to benefit the community - even if "only they can understand" it at the beginning.
Perhaps the contrast and tension between these approaches are about the inherent risk of innovation. In a team environment, we don't want people inventing their own language or operating system from scratch - unless it directly contributes to the core business, which is rarely the case.
The largest tech companies can afford to risk the investment into innovation, to develop and maintain their own languages, frameworks, and libraries - if they bring competitive advantages. Whether the (re)invented thing actually delivers on that, though, is often unclear.
Absolutely. Making the right calls here is very hard, and takes a lot of experience.
Btw - I'm not against building your own castles (I love doing that, esp. in my free time) - I am just a bit triggered by the previous blog post arguing unconditionally for it, and now the next one even saying these are two (or even more) different professions. I don't think that is the case. I think we need to understand the contexts, to know when each of the approaches is a likely better fit. Sometimes its very clear, and sometimes you won't know until much later. That's ok.
I think it is important to document why a route was chosen, and evaluate if it is still the correct one, once more facts are available, or the environment changes.
What I really don't like seeing is ranting about a chosen approach, without informing oneself first about how the decision was reached, and broad generalisations.
Writing your own software, or just importing a lib, and the resulting side effects such as huge amounts of code to maintain or dependency hell, are tradeoffs - they are neither good nor evil - it all depends on the context.
I'd love to see an article from the author drawing from their experience making those kinds of decisions.
> I'm not against building your own castles (I love doing that, esp. in my free time) - I am just a bit triggered by the previous blog post arguing unconditionally for it..
Yeah, I totally hear you. God knows I've built not only "castles" but immense, complicated and unsustainable architectures (or balls of spaghetti); dead-end frameworks that started deprecating the moment I wrote them; under-documented or under-tested (or neither at all!) libraries with unclear interfaces..
As much I love creating things from scratch, experience has taught me to be cautious and conservative when deciding to do so - especially in a company/team environment. It's almost always better to research first, study what exists in the ecosystem/community, and use available tools and building blocks.
Many articles get posted on HN that describe how and why a company decided to build their own database, operating system, language, framework, or library.
It's not as exciting to read or write about using "boring technology" to build stable, reliable systems with a decade-old, tried-and-tested framework. But this latter is, and should be, the standard approach: mostly "engineering", with just enough "programming" to glue and orchestrate it all together.
>If your goal is to ship a product, there is no reason to reinvent anything that is not your core business
As long as the priorities of your product align with the priorities of the library. If the product you're trying to ship is a critical control system and the only relevant libraries are written with a webdev approach of "move fast and occasionally crash" you might be done much quicker if you write your own version.
Agreed! If your priorities are e.g. high reliability, it is a good idea to vet any potential software that you are planning to use, and maybe it will have to be written from scratch if existing solutions are found to be outside of the expected specs.
But even with such a product, you will have other pieces and parts: think about collecting metrics, marketing tools maybe, obscure internal admin UIs... The software you select for those probably does not need to be evaluated under the same extreme scrutiny as the "critical control software".
I mean...maybe...so long as the critical software is sufficiently isolated from anything not appropriately vetted. If the untrusted software shares _any_ hardware with your critical system you're setting yourself up for a bad time.
> If your goal is to ship a product, there is no reason to reinvent anything that is not your core business.
It's certainly not that clear cut because most of the largest tech companies primarily ship products but also have invented their own programming languages.
I think you hit the nail on its head. There's also the fact that there's no hard line between the "vendor-ops" and the "real programmers". Rather, it's a continuum, and one crucial skill is knowing when to use someone else's tools and when to write your own. And when you do, the experience of having used external libraries and frameworks can be very useful.
Otherwise, if you want to write everything from scratch, you can always write embedded firmware for 8-bit MCUs. It had better be in assembly, because otherwise you're just doing "vendor-ops" with an external IDE and some company's compiler. Although technically, you're doing "vendor-ops" for a chip made by someone else, so maybe you should make your own soft core in VHDL and use an FPGA... At some point either you'll be making your own transistors because that's the only way to be a Real Engineer, or you'll accept that there's no problem depending on external vendors as long as you understand how to use the tools you have.
I guess I am stuck trying to understand if the assertion being made here is that folks who use a library to connect to PostgreSQL are not really writing software, but maintaining vendor garbage.
Worked with a guy who made pretty much exactly that assertion.
We discovered in the course of his time on the project that he was physically incapable of maintaining other people's code (which he knew, going in, that he'd be required to do).
So he constantly argued that code that existed was fundamentally broken, and needed to be rewritten.
He wasn't on the web team, which I ran, but he often gave me crap about using Rails when I could just "write it all myself and it would be so much simpler". I did a class for the group describing Rails, and what it gives you, and why you'd use it.
His response? "You're not programming, you're just configuring it!". Sigh.
I'd argue when a developer moves toward the mid-career mark, some will discover that they no longer want to be gluing code together all the time at their job. That is expected and healthy for some. They're not better devs like the article implies. They just need something different.
We don't talk about this enough because it feels taboo to even imply that this overly "communal" form of development isn't cutting it for some. We're so hyper-focused on vague values of community and accessibility when career development is intensely personal.
I said no to webdev in 2012. And it has paid off significantly for me.
This is a really helpful distinction. I have worked across a few different jobs with differing expectations across this spectrum. The really lean startups want to get places quickly and blending together a dependency soup is usually the most efficient way to get there. More established companies sometimes have the resources to allow more R&D and home-grown libraries and tools. They are really different jobs in a lot of ways but we haven't done a lot to distinguish them in how jobs are presented.
I wonder if this distinction is going to be even more helpful as the low-code/no-code trend takes off and more jobs can be done by those with fewer skills.
> The really lean startups want to get places quickly and blending together a dependency soup is usually the most efficient way to get there. More established companies sometimes have the resources to allow more R&D and home-grown libraries and tools. They are really different jobs in a lot of ways but we haven't done a lot to distinguish them in how jobs are presented.
IMO the correct thing is to hire people who are good at both and know when to use which approach. Otherwise the lean startups will never be able to transition to established companies. And likewise, the established companies have many problems to solve that don't require home-grown tools, so you don't want people who only know how to start everything from scratch.
Maybe seed round/earlier you can’t hire those folks? In my experience, the engineer who liked being on an early team does not enjoy being part of a 100+ eng org. So dependency soup engineers churn and you end up with a different type of engineer when they transition to a real company.
(Clearly this is one perspective. It would be great to hire the best engineers from day one but sometimes that’s not practical.)
I agree with her that lots of libraries on the internet are bad. But expecting companies to write their own is worse. I remember years ago that everyone wrote their own C string libraries. There were hundreds of them. Most were bad. Then C++ matured and provided a pretty good (and safe) standard string class. Having that standard library was way better than everyone writing their own (mostly bad) string solutions.
So it is OK to use libraries, but use well regarded ones that have large, established communities. And don't use libraries for simple small things that your team can do themselves.
Her point about depending on others to get your company's work done is a valid one: that dependence should be minimized as much as possible. This is why I prefer open source and standard languages, and why governments still use C and C++. They are ISO standards that will be around and stable for decades.
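To make the string example concrete, here's a minimal sketch (the function names and the 16-byte buffer are just illustrative, not taken from any particular codebase) of the kind of hand-rolled C string handling every project used to reimplement, next to the std::string equivalent:

    #include <cstring>
    #include <iostream>
    #include <string>

    // Hand-rolled C-style concatenation: the caller has to size the buffer
    // correctly and remember the null terminator, or this overflows.
    void greet_c(const char* name) {
        char buf[16];                 // fixed buffer, chosen by hand
        std::strcpy(buf, "Hello, ");  // no bounds checking
        std::strcat(buf, name);       // overflows if name is too long
        std::cout << buf << '\n';
    }

    // The standard-library version: growth, copying, and cleanup are all
    // handled by std::string, so there is nothing to get wrong here.
    void greet_cpp(const std::string& name) {
        std::string msg = "Hello, " + name;
        std::cout << msg << '\n';
    }

    int main() {
        greet_c("world");                               // fits, so it works
        greet_cpp("a name far too long for 16 bytes");  // still fine
        // greet_c("a name far too long for 16 bytes"); // undefined behavior
    }

The point isn't that the C version can't be written correctly; it's that hundreds of teams rediscovering the same footguns independently was far more expensive than standardizing on one well-tested string class.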
I think what it comes down to is the way these libraries are used.
In my experience, the trend of software development over time is division of labor into specializations for hard engineering problems, while creating paved roads and happy paths for good generalists. I use the term "generalist" charitably here, whereas the author's description feels (to me) to be a little disparaging.
To your point, I think it's good we have this trend in software development. Having all these libraries is a net good which saves time and results in fewer footguns. I believe the problem she is describing is more productively considered by looking at it as a problem of using a vendored library in a way that it isn't designed to be used (i.e. "driving off the paved road"). There are often good reasons to drive off the paved road, but you need to really understand your problem and the solution space to pull it off competently. I think the author's experience is therefore caused by two things:
1. Businesses which are not at Google and Facebook scale, and thus can't afford to build much in-house from scratch.
2. People who try to duct tape APIs together without a strong understanding of the underlying limitations of each library, and the design boundaries of each.
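As a concrete illustration of point 2, here's a deliberately hypothetical sketch (the metrics_client names are invented for this example, not any real library's API). The paved road is one long-lived client that batches writes; the off-road version spins up a client and does a network round trip per data point, which works in a demo and falls over under real traffic:

    #include <string>
    #include <vector>

    // Hypothetical vendor library surface -- invented for illustration, just
    // enough to show the shape of the "paved road".
    namespace metrics_client {
    class Client {
     public:
        explicit Client(const std::string& endpoint) { /* would open a connection here */ }
        void enqueue(const std::string& name, double value) { /* would buffer locally */ }
        void flush() { /* would make one batched network call */ }
    };
    }  // namespace metrics_client

    // Off the paved road: a fresh client (and connection) per data point.
    void report_naive(const std::vector<double>& samples) {
        for (double s : samples) {
            metrics_client::Client c("https://metrics.example.com");
            c.enqueue("latency_ms", s);
            c.flush();  // would be one network round trip per sample
        }
    }

    // On the paved road: one long-lived client, one batched flush -- the
    // usage pattern the library was actually designed around.
    void report_batched(const std::vector<double>& samples) {
        metrics_client::Client c("https://metrics.example.com");
        for (double s : samples) {
            c.enqueue("latency_ms", s);
        }
        c.flush();  // single batched call
    }

    int main() {
        report_batched({12.5, 9.8, 30.1});
    }

Both versions call the same API; the difference is whether the usage matches the design the library actually supports, which is exactly the understanding the duct-tape approach skips.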
As one of the generalists, I'd like to add to this comment. I see two kinds of problems: open and closed. Closed problems are sort of "traditional" engineering problems. They're math heavy, they have a precise, indisputably correct answer, and they probably require some kind of algorithm to solve. These are very challenging and interesting problems to dive into, and I certainly have some interest in them, but they are not my primary interest.
I like to use software to solve business problems, which sometimes involve a closed element but more typically are open. They involve human beings and have fuzzy definitions of correctness. I like understanding my users and the weird little worlds they inhabit and how I can make their day a little better. To do that, I need the sorts of tools that the closed-problem engineers provide.
Now you can absolutely be a great open AND closed problem engineer. I'm studying some more math and algorithms to try to improve at that. I'm also willing to dive in and debug some complex, messy library code when I need to, and I do agree it's my job to evaluate the maturity and reliability of a library before I add it as a dependency. All that said, I take issue with the notion that I'm not a good engineer if I choose to outsource some of my problems, especially these closed problems that require a lot of work and experience and attention. Software is not special. In business, every decision involves tradeoffs and opportunity costs. It's completely unhelpful to take potshots at engineers making a conscious choice not to tackle some problem when they can have it solved by someone else.
All of that said, you're obviously correct that some people just paste together unstable things they have put forth zero effort to evaluate, and that's a problem.
Having worked in a FAANG and in consulting for normal companies: yes, vendor ops is the mainstream model that brings the most value for normal companies. However, it's a bit disingenuous, because even in FAANG most things were built from infra Lego sourced from someone else's engineering. So there was quite a lot of internal vendoring too, but the quality was much less hit and miss. It's really only a path the FAANGs can walk, though, being absurdly expensive. I'm comfortable in both worlds; deep engineering is fun, but sharing the powers broadly has more positive impact overall, I think.
So instead of engineers making informed decisions on buy-vs-build taking into account uniqueness of requirements, maturity and stability of the ecosystem, and resourcing constraints, we should say one group doesn't know how to do the latter and should stay in their lane and only take jobs where they are guaranteed not to have to build anything?
A role encompasses a set of responsibilities. When we try to make the responsibilities too narrow it creates lots of space for myopia. Narrower still, and the person starts to guard it jealously, because it’s all they have.
We got rid of most of the DBAs, Architects, and QA. Any of those would be better than having someone whose only job is wrangling third party code.
As a Nix user, I find it depressing that, thanks to terrible package managers, the rest of the industry has fallen into a terrible trade-off between "not writing the same thing again and again" and "understanding what the hell is going on and not being an alienated pretender".
Really a bummer; this is the rare case of a social issue that does have a technical solution!
Interesting thought. I think this was also true in the past for many people: Oracle Sysadmin, SAP System Consultant,...
You can build your whole career specialising in the stack of one vendor. Maybe today and in the future there will be more jobs like that: AWS Solution Architect, Salesforce Technical Consultant, ...
I can imagine a big Rust project where there is a person who does nothing but update dependencies. Trivial changes in the code he will do himself, but most are delegated to the developers. The job includes:
* Analyze dependency updates for breaking changes and performance impacts
* Coordinate the cleanup of deprecated API use
* Give feedback to upstream
* Resolve integration issues (usually by delegating and following up)
This guy does not write significant amounts of code but will still create lots of commits.
As near as I can tell, the post is a response to comments in reply to a previous post which essentially states that the _existence of cargo_ is bad because there are bad libraries available.
This post further asserts that folks who like the fact that libraries are easily available are "not very good at something being discussed" and have "Peter Principled" themselves. It also states that they are "almost rational."
What you're describing is an under-performer, or perhaps someone who is working specifically in a maintenance capacity (likely with a title that reflects that!). I don't think that is what the original post is talking about, but perhaps I have misread.
I guess this comment being downvoted indicates that this perspective is more widespread than I thought. Can someone please explain to me how you would go about structuring a project without using any libraries available via your preferred programming language's package manager?