The clearest explanation of why this happens is at the end:
Before, admins would try hard to prevent security holes, now they call themselves “devops” and happily introduce them to the network themselves!
The merging of devs into the sysadmin role was a product of:
1) The work of sysadmins (particularly systems change control and security compliance) not being valued in our culture.
2) Devs being delighted to be free of the shackles placed upon them by sysadmins who were encumbered by the concerns expressed in this article.
If you were a devop who resolved to fix the problems bemoaned in this article, my guess is you would turn around in 60 days to discover you'd become a sysadmin.
The stated goal of putting both systems administrators and software engineers on the same team is to reduce friction and increase communication. One of the worst, productivity-killing situations you can find yourself in when developing network software and services is caused by the traditional "old school" mentality of separating the two camps. When your software developers operate independently of your systems engineers and administrators, they're forced to make assumptions about infrastructure, operations, and compliance goals. Both teams have the same goals, so why are they not on the same team? I think some "old school" system administrators don't realize how costly such communication mistakes are. Getting 6 months into a development project only to be told you cannot have a critical piece of infrastructure _for reasons_ is a costly, costly mistake.
Containers are a smart solution to the build problem. Don't build your containers from public, un-trusted images! Build your own images. Run your own, protected, registry. You still have all of the compliance and validation necessary and you don't end up debugging failed builds because one machine out of a thousand is running on some minor shared library version not supported by your software.
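In practice that can be as little as a Dockerfile along these lines; a minimal sketch, with the registry hostname, base image tag, and pinned package version all made up for illustration:

```dockerfile
# Base everything on an image you built, scanned, and pushed to your own
# registry, never on a public, untrusted one. (Hostname is illustrative.)
FROM registry.internal.example.com/base/debian:stable

# Pin exact package versions so every build is reproducible and
# compliance can audit precisely what shipped. (Package/version made up.)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        libexample1=1.2.3-1 && \
    rm -rf /var/lib/apt/lists/*

COPY ./app /opt/app
CMD ["/opt/app/run"]
```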
> Don't build your containers from public, un-trusted images!
The author is complaining that you can't build these private trusted images. Software developers have got it into their heads that containers are a way to package & distribute software. They're not; that's what the OS's package managers are for. If your software requires Docker as a build dependency, you have failed to properly package your software.
As a concrete example, look at Ubiquiti's UNMS.[1] Their package consists of downloading & installing Docker binaries on your system, not tracked by the OS package manager, and then running a bunch of containers built from these public un-trusted images you just told me not to use.
They also conveniently ignore the fact that I already have a Redis server, I already have a PostgreSQL server, I already have an Nginx proxy. (Plus I guarantee my database servers are better tuned for my hardware than some random image from Docker's library.) It is not up to some random software developer to decide where I should be drawing the isolation boundaries on my infrastructure. They also make the big assumption that I want to use Docker to manage my containers in the first place. Perhaps my company already uses Solaris LX-branded zones, or LXC, etc.
Now imagine if instead of spinning up a PostgreSQL database container, it used MS SQL as its database of choice. You think I'm going to let some random developer dictate whether or not I should spin up another SQL Server instance and pay MS for another round of cores / CALs?
Yes - you can build your own containers, and they're fantastic - if software developers properly package that software for ease of installation & configuration. Software developers should not be dictating what container/virtualization framework I use, what configuration management I use, etc.
There are public trusted images, like the so-called official repositories on Docker Hub [1]. As long as you build your images based on official repo images, you're probably fine. Just don't depend on untrusted images; instead, get their Dockerfile/config files and build the images yourself.
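For example, a rough sketch of that flow; the docker-library organization on GitHub hosts the Dockerfiles behind the official images, though the directory layout varies by repo, and the registry name here is made up:

```sh
# Build the official image from its published Dockerfile instead of
# pulling a prebuilt one, then push the result to your own registry.
git clone https://github.com/docker-library/redis.git
docker build -t registry.example.com/base/redis:4 redis/4.0
docker push registry.example.com/base/redis:4
```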
To me, a Docker image seems like an ideal way to distribute some proprietary device management web software like Ubiquiti UNMS, rather than requiring some obscure version of some database or whatever other dependency actually be installed and maintained by their clients. You can spin that image up on a server or group of servers, or on Amazon ECS or a bunch of other providers in a matter of minutes. With enough motivation, you could even export the image and manage the environment manually.
This comment makes way more sense to me than the original blog post. Yes, nobody should be relying upon docker as their distribution platform. That's pretty terrible. Ubiquiti, I've observed, seems pretty uncomfortable just supporting the major distros. I actually wrote some docker stuff to pull down their .debs, crack them open and install the binaries inside on a fedora/centos system. That's closed source for ya.
> I actually wrote some docker stuff to pull down their .debs, crack them open and install the binaries inside on a fedora/centos system.
Why would you want to do that though? Treat the whole thing as a black box running inside docker and be done with it. The second you crack it open, you get to support it. Let Ubiquiti support it; after all, that's what you are paying them the big bucks for.
Because... they only offered .deb files and I wasn't running Debian or Ubuntu (nor do I like to bother with it in containers I'm building myself, because I have no clue about Debian).
The package in question has since finally offered .rpms, but I haven't had the time or interest to update it. This is wifi software I'm running personally; Ubiquiti only supports the Windows/Mac versions of it in any case.
Ubiquiti has always done this, even before containers were "hot."
If you install their rpms or debs for any of their properties, you're almost always getting a copy of Mongo or some other dependent service... and it is probably going to be incompatible with whatever version your package manager has or you're already running (version-constraints-wise, not actual compatibility-wise).
This is an indictment of Ubiquiti, not containers in general. If their software were properly built, they'd be shipping you a docker compose setup or something with N different containers that you could substitute out (at a network level) for your own.
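Something like the following compose sketch is what I have in mind; the service names and image references are illustrative, not Ubiquiti's actual layout:

```yaml
# Illustrative docker-compose layout: each backing service is its own
# swappable container rather than being baked into one opaque image.
version: "3"
services:
  app:
    image: registry.example.com/vendor/app:1.0
    environment:
      # Operators can repoint these at an existing PostgreSQL/Redis
      # instead of running the bundled services below.
      DATABASE_URL: postgres://app@db:5432/app
      REDIS_URL: redis://cache:6379
  db:
    image: registry.example.com/base/postgres:10
  cache:
    image: registry.example.com/base/redis:4
```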
I once worked at a company which separated IT into 3 teams: developers, DB-sysadmin (ops), and QA (who also managed deployments). Releases were supposed to go in a waterfall model from the Dev group -> QA group -> Ops. QA wanted Dev to submit Word documents for each release with blanks to be filled in with server names. However Ops was so distrustful of Dev that it was not enough for them to lock us out of Prod using regular security tools, we were also not allowed to know the NAMES of servers in Prod or how currently deployed systems were grouped.
Every release was an Abbott-and-Costello "Who's on first?" routine. Do you have any idea how hard it is (especially in computing) to ask for something without being able to utter its name?
QA: "You left servername blank on this deployment document."
Dev: "I know; Ops won't tell me. Just ask them for where the service is currently."
QA: "Ops says there's 5 unrelated legacy services with that same project name, on different servers."
Dev: "5? I only knew about 3. You know, if I could query the schemas of the Prod DB, I could tell you in a jiffy which one it is."
Ops: "Pound sand. If you want look at databases that's what the dev DB server is for."
Dev: "Erm, OK well can I give you a listing of the Dev DB schema and you tell me if it looks like the one the Prod service is talking to?"
Ops: "Oh I see you want us to do your job for you now? You can compare the schemas."
Dev: "OK..."
Ops: "Just tell us which DB server you want the schema pulled for."
Dev: "But you won't tell me the server names."
Ops: "No."
My point is this is how bad communication can be when ops and dev are not on the same team.
Devs hardcoding things in their software in a rush makes the software tougher to deploy and operate, causing greater incident rates and therefore page-outs. Devs interested in greater resilience and stability in their software should be opting for dependency injection of pretty much every damn thing in the world around them, whether it's a network service or a file system location. Otherwise, presume that it can go away at any time. A common pattern among developers trying to save time, one that costs more in the long run, is to hardcode a path to an executable. A simple /usr/local/bin/ path buried in an infrequent job, present on developer machines but never in prod, is all it would take to cause an incident in prod that costs the company millions. I say this both as someone who has written this bug and as someone who has had to fix others committing the same error after QA passed it along.
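A rough sketch of what injecting that path could look like; the `report-gen` tool and the `REPORT_GEN_PATH` variable are hypothetical:

```python
import os
import shutil
import subprocess

def resolve_tool(name: str, env_var: str) -> str:
    """Resolve an executable from configuration, falling back to PATH,
    instead of hardcoding something like /usr/local/bin/report-gen."""
    configured = os.environ.get(env_var)
    if configured:
        return configured
    found = shutil.which(name)  # searches PATH the same way the shell would
    if found:
        return found
    raise FileNotFoundError(
        f"{name} not found; set {env_var} or install it on PATH"
    )

# Fails loudly at startup if the tool is missing, instead of paging ops at 2am.
tool = resolve_tool("report-gen", "REPORT_GEN_PATH")
subprocess.run([tool, "--nightly"], check=True)
```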
Ops tends to be where the brunt of technical debt is truly buried. Bad code is one thing but seeing the code in action with real world data is a different beast altogether.
The thing is that any separation in the roles is ineffective. Things shift around some if you embed an ops guy into the dev team directly, but it doesn't resolve the core problem. This applies to DBAs as well as ops or any other software-side segmentation.
The core problem is that there are "ops guys" and "dev guys". That creates conflicting incentives, even within the same team. It creates tension and a dynamic centered around bandying work around so that it's "the other guy's problem" in some situations, and hoarding logic onto the one segment so that there isn't an "obstruction" in getting things done in others. Moving the "segmented" guys directly into your team just makes these politics closer to the heart, which is not always an improvement.
Teams should be composed of whole-platform "generalists" (in quotes because they really should be good at stuff, whereas "generalist" implies they aren't; here I just mean a competent non-specialist), where any single individual would be comfortable/capable performing any particular task that may come up. Of course, each member will have preferences and habits, little "skews", but it is important that these skews are controlled and used for mutual education, and not allowed to "flandersize" someone from "the guy who knows SQL better than the rest of us" to "full-fledged DBA who hasn't committed any C# for 3 years".
The right axis for separation is hardware v. software. If it's software-related, your dudes should be equally yoked, such that any SQL ticket could be assigned to any member of the team, or any "devops"/sysadmin/deployment ticket to any member of the team.
These systems, from the OS up, are all part of the same thing, and they're all tightly integrated. Making the workload of the individual people on the team also tightly integrated is the only way to make sure that incentives align properly and that the most effective technical decisions are made, instead of decisions motivated, consciously or not, by offloading blame or other political/effort/convenience considerations that cause the overall system to suffer.
If you get into a sticky situation that requires specialized help from someone who has lived and breathed MySQL Server night and day, well, that's what consultants are for. Consultants would also be useful for inspections/sign-offs. But your core team can't tolerate being segmented out by component/implementation detail.
> Containers are a smart solution to the build problem.
Linux "containers" are a variety of things. True OS "containers" don't exist on Linux, but there are some rudimentary approximations. A Docker image is essentially a zip file, and sure, zip file-ish things may work fine for uploading artifacts to systems. Dockerfiles are unequivocally terrible, however.
I agree with parent, but I think you're taking it too far. I don't think there are enough skilled generalists to pull off your ideal, and I think software/infrastructure is too complex to allow for generalists in the breadth you describe.
I'm a security person who knows pretty good Python and simple database stuff (SQLite). I think I'm in the top 50% (humbly) of my field, probably higher.
But I don't know front-end, containers/CICD, or distributed systems worth a damn.
I do believe the parent's point, which is that teams should have embedded resources. A "VM security team" operating firewalls and infrastructure and policy auditing should not only have security experts, but their own devops group that automates the crap out of everything, using 2018 best practices. Currently, my team's "dev" group is a separate team in another area whose work queue is fed by multiple, distinct teams. It makes learning and understanding our requirements really tough for them.
Phew, this has been a good exercise. Let me clarify the thesis.
The thesis is NOT that a crew of superhumans can supersede all DBAs, security engineers, and infra people in the world.
It is rather that you can be a great software-side engineer, and that you can skew/focus on a few primary concerns, and develop and maintain a working knowledge in the others, sufficient to service your core project's needs.
Specialists can be called in as spot checkers, auditors, or short-term implementers, but they shouldn't be needed for the day-to-day of building, maintaining, and deploying your software.
In software, everything goes down to the same place: the system hardware. And these days at least, this is pretty much homogeneous between software segments. If you know how this functions, the differences are in the modes of expression and the conventions, not really the principles. We can learn the varying conventions well enough to be serviceable in all the elements that we send down to hardware -- not necessarily expert, but good enough for day-to-day work.
I'm not saying that everyone on the team should be better than the best DBA guy you've ever met. I'm saying that everyone on your team should be reasonably confident with SQL. Specialists have a place in your friendly local <security/database/whatever> consultancy.
> In software, everything goes down to the same place: the system hardware. And these days at least, this is pretty much homogeneous between software segments. If you know how this functions, the differences are in the modes of expression and the conventions, not really the principles.
Interesting that you mention this, since I think it's become something of a self-fulfilling prophecy, especially with giant cloud IAAS providers making one-size-fits-all choices of hardware to sell.
I certainly agree with you that the basic principles are the same, but that ignores the performance (and, arguably, reliability) possibilities that open up when not limited by the hardware (including network) choices of others.
> But your core team can't tolerate being segmented out by component/implementation detail.
And yet tolerate it will, because it is somewhat impossible to hire a team composed entirely of people who are each experienced and competent in writing and designing frontends, writing and architecting backends, deploying and maintaining whatever backing services you're using, build and release engineering, Linux, networking, etc. And what are junior developers supposed to do?
> And yet tolerate it will, because it is somewhat impossible to hire a team composed entirely of people who are each experienced and competent in writing and designing frontends, writing and architecting backends, deploying and maintaining whatever backing services you're using, build and release engineering, Linux, networking, etc.
You're right that everyone is not going to start out knowing everything. No matter how senior you get, there will always be areas you know better or areas that you prefer, which are the "skews" I referred to in my original comment. When a new framework or technology or whatever is introduced, only one or two people will know it. That's all fine.
Docker is the epitome of the broken segmented model. Devs hate and resent ops telling them they can't do things. Docker promised devs that if you spend a half-hour writing instructions to build an archive that contains your app's file tree and to pull in a completely untrusted OS userland `nice-mans-alpine:4.x.malware-free`, those annoying ops people will get out of your hair, and you can go ahead pulling `bad-actors-handy-4line-totally-safe-lib` from npm to your heart's content. No more complaints about that package not being approved, or the dependencies not installed, or the runtime too slow, ha!
The whole comment thread on the original article is a case in point. Someone who is responsible for the whole software side of real systems will be horrified at the suggestion of such recklessness. However, developers who're only accountable for pushing "at least one commit per day!", and consider security and performance someone else's problem, will be thrilled at the prospect of "tearing it up with some 10x coding" while they silence "the Luddites". (who, sidebar, were too dumb to see the beauty in JavaScript back in the 90s! Pshaw!)
Which dynamic do you want to encourage?
> And what are junior developers supposed to do?
The same thing that everyone else is supposed to do: learn it, gradually, as needed. Read the docs. Seek mentorship from team members who have that "skew" (formalize this process if necessary). Read the changelogs. Read the code. Figure it out!
Many will protest and say it's outside of their comfort zone. Some will protest and say this is inefficient. That may be true in the short-term, but the system will invariably suffer if you do hard segmentation on the software work, because the falsely-separated concerns won't understand each other and end up setting up territories.
People will hate the DBA because they won't understand why he cares about "boring crap" like "normal form". People will hate the sysadmin because they won't understand why he cares about "boring crap" like "not being woken up at 3am". Your front-enders will be more gregarious and have better haircuts, leading to prioritization of front-end concerns.
Essentially, the project becomes driven by blame-shifting, protectionism, and which software-side segment has the more attractive people, because the concerns are fungible enough that any side could potentially handle them. That makes it a political competition. The project is no longer driven by technical prudence or efficiency. It's no longer about the tradeoffs involved in solving the problem at layer X instead of layer Y.
The dividing lines from OS up are arbitrary. We can't all be experts in all of it, but we can all have the expectation that we need a basic grasp over the whole system, by which I mean the WHOLE SYSTEM, and that we should become competent in the major elements used to build it, and patiently nurse this competence over time.
One team member should be able to handle 90% of the tickets that come in independently, whatever elements of the stack are affected (sysadmin, application code, database, frontend, etc.), and when they hit one of the 10% they can't do independently, they should consider it their responsibility to seek mentorship and learn the skills so that after several such rounds, they can do it independently.
The only real question is whether there are enough people capable of this out there. I think there would be if we set it up as a general expectation. I'm not sure if there are when we've already accepted the segmentation as a fact of life.
> The only real question is whether there are enough people capable of this out there. I think there would be if we set it up as a general expectation.
That strikes me as merely wishful thinking. It's not as if there isn't already research on human cognitive abilities in general.
Do you have any scientific basis for thinking engineers are merely being held back by our acceptance of specialization, rather than by inherent cognitive limitations?
Once the downvotes start coming in, people read comments uncharitably, and the thread gets lost, but to be clear, I'm not advocating for anything that is beyond the cognitive capacity of typical software developers.
One and two-man startups provide ample evidence that working knowledge of the whole platform is not beyond human cognitive scope, even if getting this to be accepted at large requires some extra cultural encouragement and support, and some professional management of individual "skewing".
Once more, it's not that everyone has to be a hardcore expert in everything all at once. You don't want them to be.
You just want your main people to know each platform component well enough to be able to make a reasoned decision about the trade-offs involved in using one or the other for a specific task, and then to be able to own that decision as a group.
If they can't or won't do that, the platform decisions become political instead of technical. I've seen this over and over again: massive technical problems are routed around because the Java developers have been told they can't touch Ruby, or the C# developers have been told they can't touch SQL, and the real problem never gets fixed. That's because we only recognize naive, scared "specialists" who insist that they can't learn Python because they're just a PHP developer, so they can't look at that piece of Python that's holding up the thing, instead of rounded, capable "generalists" who can be trusted to call in help when they're getting in over their heads, and who may take an occasional "inspection" or two to make sure they're aligned with best practices.
General contractors are not electricians, but they can do a lot of routine work that involves electrical fixtures, sockets, and outlets. You call the electrician for the face-melting stuff.
General practitioner MDs are not dermatologists, but they can do a lot of work that involves routine skin disorders. They'll prescribe creams for fungal infections, rashes, acne, etc. They'll let you know you need to call in a dermatologist for the "skin-melting" stuff.
In software, we don't say "call the DBA for the database-melting stuff." We say "the DBA will write all of the SQL for you." It just doesn't seem to comport to me.
I apologize if I seemed particularly uncharitable, and I think you may well be right that I thought you were advocating for greater depth of knowledge than you were.
However, I still disagree with your premise that it's merely our attitude at large somehow holding people back. Startup founders don't refute my suggestion that there's a cognitive limitation involved, since they're relatively rare and may well have greater capacity to be the generalists that you're proposing. I'm also not convinced that, even among founders, they're as broad generalists as you're suggesting.
You go on to give non-computer examples of generalists and specialists, yet you don't address how it is that specialists are (admittedly only implicitly) ok there but not in computer tech.
To reiterate my point about cognitive capacity, if true specialists are desirable, then I allege asking them to be more of a generalist makes them a less competent specialist and therefore less valuable on the market. That's an alternate explanation for extremity of specialization than preconceived notions.
Now, personally, I share your desire for greater breadth of knowledge among all technical professionals, if for no other reason than they might have a greater appreciation for my own specialization. I just don't think it's realistic.
> I apologize if I seemed particularly uncharitable, and I think you may well be right that I thought you were advocating for greater depth of knowledge than you were.
No need, it wasn't really meant to be directed toward your comment specifically. I just noted that negative misinterpretations get read into a comment once it's greyed out, as a way to remind people that it's not likely someone would advocate such caricatures.
> Startup founders don't refute my suggestion that there's a cognitive limitation involved, since they're relatively rare and may well have greater capacity to be the generalists that you're proposing.
You're right, and I thought of this when I used that example. But by the same token, we can take it out a level further: professional software developers have already shown themselves as having higher-than-average cognitive abilities, because the truth is that the average human doesn't have the cognitive capacity to become a professional software developer. If they did, we'd all be paid much worse.
How far off are founders from professional software engineers? How far off are professional software engineers from the median of adults? How much additional cognitive load is required to be operational in a handful of extra platform components, especially if all those components run the same type of hardware? All good questions that I don't think either of us have ready answers for.
The other thing is that even if this is out of reach for the "average developer", it wouldn't mean that it's not an ideal to strive toward, or necessarily even unrealistic in all cases.
> You go on to give non-computer examples of generalists and specialists, yet you don't address how it is that specialists are (admittedly only implicitly) ok there but not in computer tech.
Specialists should exist -- as external reference points in consulting groups.
If you want your life's mission to be building SQL queries, join a database consultancy and deal only with the SQL problems that your clients couldn't figure out on their own and decided they needed to pay $$$ to solve. If SQL and database design is truly your passion, you'll be much happier this way than you would be as a staff DBA redesigning the same rote EAV schema for Generic Business App #29, working slavishly to finish the code for that report that Important Boss #14 needs on his desk ASAP.
Creating a referral-style economy creates a lot more room in the marketplace for specialist consulting groups and gives more specialists greater reward (monetary and emotional). It simultaneously allows "generalists" to stay focused on the big picture of building and maintaining a robust and prudent system overall.
I think it's worthwhile to consider how generalist v. specialist operates in other knowledge fields, and what lessons we can take from that.
I am confident that a generalist ethos is for the best, but I'm not sure we'll get there without better cultural underpinnings, so I'm not making these statements purely out of self-righteousness (maybe only like 80% ;) ).
This dialogue has already been informative and has helped me refine my ideas and hopefully learn to present them somewhat better. Thanks! :)
The thing is that any separation in the roles is ineffective. [...] The right axis for separation is hardware v. software.
Any separation is ineffective except along this one particular completely arbitrary dividing line? If that were true we'd still be hunting and gathering and nothing else.
> Any separation is ineffective except along this one particular completely arbitrary dividing line? If that were true we'd still be hunting and gathering and nothing else.
Hardly arbitrary -- hardware is fixed at the time of manufacture. Hardware engineers should be well-acquainted with software concerns and needs, but the years-long feedback cycle and real expenses associated with hardware development creates a natural barrier for work separation, requires different work cadence and much more stringent processes, etc.
This is not to say that a good hardware engineer shouldn't contribute to software and vice-versa, but it is to say that the roles are sufficiently divergent that it makes sense to place them in different segments. That is not the case with anything this side of the operating system, as far as I'm concerned.
It's arbitrary when you claim there are no sensible divisions in software. I think your entire lengthy argument is a sort of elaborate fantasy about how much better the world would be if everyone was just like you or at least, just like you imagine yourself to be. It's fun but not a particularly realistic or constructive way to look at the world.
> It's arbitrary when you claim there are no sensible divisions in software.
It's about the fungibility of the problem space. I don't know how you expect your core team to make reasonable decisions about the tradeoffs if they a) don't understand more than one of the platform elements; and/or b) don't have any responsibility or accountability for the tradeoffs that get made, because now it's another segment's problem. Indeed, when I've been on teams primarily comprised of non-generalists, these decisions were almost always a matter of bureaucracy and politics.
> I think your entire lengthy argument is a sort of elaborate fantasy about how much better the world would be if everyone was just like you or at least, just like you imagine yourself to be.
I've worked on teams that were mostly "generalist" and teams where the "generalist" type was either absent or artificially constrained. My perspectives are drawn from those experiences, and have developed based on a hard-earned worldview that says people reliably act in favor of their own expedience. Doesn't seem very fantastic to me. ¯\_(ツ)_/¯
I don't know how you expect your core team to make reasonable decisions [...]
That's how most everything is made, not just software. In the case of software, Fred Brooks added an essay on this topic, "Parnas was right, and I was wrong", to the 20th anniversary edition of The Mythical Man-Month, itself published over 20 years ago.
> Don't build your containers from public, un-trusted images! Build your own images. Run your own, protected, registry. You still have all of the compliance and validation necessary and you don't end up debugging failed builds because one machine out of a thousand is running on some minor shared library version not supported by your software.
You have just lost all the speed to production advantages of containers.
"speed to production" is not meant to be the primary advantage of containers.
"knowing exactly what you're running and being able to reproduce it" is meant to be the primary advantage of containers.
What you're basically saying is "if your container system admins do their job properly rather than throwing security and reliability out of the window, it can take a bit longer than not bothering". This is trivially true, but not really the point agentultra was making.
That's how I always did it (building containers ourselves), and once the pipeline is in place, it's barely more work than pulling public images.
Speed-to-production advantages are absolutely not due to pulling untrusted containers. If anything, it makes your life harder.
Hard to imagine any serious production setup not doing this... In most cases, you need to modify the containers anyway to suit your needs, and how else are you going to rebuild them all when the next OpenSSL update comes out?
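The rebuild itself can be a few lines of shell once every image has its own Dockerfile; a minimal sketch, assuming an illustrative `images/` layout and registry name:

```sh
# After a base-image security fix (e.g. an OpenSSL update), rebuild and
# republish every downstream image from its Dockerfile.
set -e
for dir in images/*/; do
    name=$(basename "$dir")
    docker build --pull --no-cache \
        -t "registry.example.com/apps/$name:latest" "$dir"
    docker push "registry.example.com/apps/$name:latest"
done
```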
Have you really? Building a base container to base all further images off of takes about a half hour with our build system. Further app builds are down to 10 minutes at a max and can honestly still be optimized. How exactly are you losing all the speed advantages?
Well, potentially unpopular opinion here, but an awful lot of sysadmins brought their looming obsolescence on themselves. I'm an app (as in "a program that runs on a computer", not an iOS add-on) developer, always have been. I get requirements from the business types, code it up in vi or Eclipse or whatever, get it working, and then they (the business) want to deploy the working app out to production so people can use it and the business can make money off of it. And, for decades, sysadmins have been a brick wall of pure hostility. They're not all like this, but a lot more are than aren't. Like, I get it - you're overworked and the demands on you are unreasonable. Yeah, me too. But I just work here, man. You're right, I don't know how to do your job; that's why I sent you an e-mail asking what steps are needed to deploy an app into production, since it's not documented anywhere. But rather than just tell me what you need so I can go gather that up, you're going to unload on me because you feel overworked and unappreciated. You're sure as hell not going to unload on a manager or somebody with actual power, though; you're going to take it out on the developers who have no pull or voice.
Actually, as a sysadmin, I sympathize with you, since I consider that kind of situation to be a sign of, essentially, bad system administration. It also sounds like it might be at a larger company.
Personally, I've always considered it a significant part of my job to make developers' jobs easier, especially with something like deployments and dependencies.
As such, I disagree that we've brought our own "obsolescence" on ourselves, but I do agree that those of us who have perhaps forgotten that ours is a service profession have hastened its demise.
I feel like there has always been a contingent of sysadmin / ops folks who preferred the "Better to ask for forgiveness than permission" model. They still hate when things break (so they're not quite fans of developers with a "move fast and break things" philosophy), but they care more about big picture improvements and ease of upkeep than about enforcing any particular process. Detecting problems and being able to roll back is typically more valuable than preventing mistakes in many cases. It may be somewhat driven by laziness, but it actually works out pretty well for collaborating with the fast-moving developer types. It also does depend on being in an environment that is tolerant of occasional mistakes or outages.
It makes sense that these types naturally gravitated towards the devops models. I'm really not sure where this leaves the more compliance-minded systems folks though.
> It makes sense that these types naturally gravitated towards the devops models. I'm really not sure where this leaves the more compliance-minded systems folks though.
Working for profitable businesses where stability is valued over velocity.
It's rough because, like many backend type jobs, the best thing that can happen is nothing breaks. Incremental improvements in stability or scalability will not be noticed, but every single change you make is a massive risk of a page at 2am, all-nighters trying to fix things, outage reports, incident reports, root cause analysis reports, etc. You're stuck between process and outcome.
You have to constantly fight the urge to just never touch anything.