I agree. I’m afraid I’m one of those 00s developers and can relate. Back then many startups were being launched on super simple stacks.
With all of that complexity/word salad from TFA, where’s the value delivered? Presumably there’s a product somewhere under all that infrastructure, but damn, what’s left to spend on it after all the infrastructure variable costs?
I get that it’s a list of preferences, but even once you’ve made your selection, that’s still a ton of crap to pay for and deal with.
Do we ever seek simplicity in software engineering products?
I think that far too many companies get sold on the vision of "it just works, you don't need to hire ops people to run the tools you need for your business". And that is true! And while you're starting, it may be that you can't afford to hire an ops guy and can't take the time to do it yourself. But it doesn't take that much scale before you get to the point where it would be cheaper to just manage your own tools.
Cloud and SaaS tools are very seductive, but I think they're ultimately a trap. Keep your tools simple and just run them yourselves, it's not that hard.
Look, the thing is: most infra decisions are made by devops/devs who have a vested interest in this.
Either because they only know how to manage AWS instances (it was the hotness, and that's what all the blogs and YT videos were about) and are now terrified of losing their jobs if the company switches stacks. Or because they needed to put the new thing on their CV so they remain employable. Or maybe because they had to get that promotion and bonus for doing hard things and migrating stuff. Or because they were pressured into it by bean counters, who were pressured by the geniuses of Wall Street to move capex to opex.
In any case, this isn't done out of necessity these days. It's because, for a massive number of engineers, that's the only way they know how to do things, and after the gold rush of high pay, there aren't many engineers around who are in it to learn or to do things better. It's for the paycheck.
It is what it is. The actual reality of engineering the products well doesn't come close to the work being done by the people carrying that fancy superstar engineer title.
You know the old adage "fast, cheap, good: pick two"? With startups, you're forced to pick fast. You're still probably not gonna make it, but if you don't build fast, you definitely won't.
For simplicity, software must be well built. Unfortunately, the software development practice is perpetually underskilled so we release buggy crap which we compensate for in infrastructure.
> Do we ever seek simplicity in software engineering products?
Doubtful. Simplicity of work breakdown structure - maybe. Legibility for management layers - possibly. Structural integrity of your CYA armor? 100%.
The half-life of a software project is what now, a few years at most? Months, in webdev? Why build something robust, durable, and efficient, making all the correct engineering choices, when you can instead race ahead with a series of "nobody ever got fired for using ${current hot cloud thing}" choices, not worrying at all about the rapidly expanding pile of tech and organizational debt? If you push the repayment date far enough back, your project will likely be dead by then anyway (win), or acquired by a greater fool (BIG WIN) - either way, you're not cleaning up anything.
Nobody wants to stay attached to a project these days anyway.
There's an easy bent towards designing everything for scale. It's optimistic. It feels good. It's safe, defendable, and sound to argue that this complexity, cost, and deep dependency is warranted when your product is surely on the verge of changing the course of humanity.
The reality is your SaaS platform for ethically sourced, vegan dog food is below inconsequential, and the few users that you do have (and may positively affect) absolutely do not need this tower of abstraction to run.
We had FB up to 6 figures in servers and a billion MAUs (conservatively) before even tinkering with containers.
The “control plane” was ZooKeeper. Everything had bindings to it, Thrift/Protobuf goes in a znode fine. List of servers for FooService? znode.
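The pattern is simple enough to sketch with plain files standing in for znodes. This is a toy analogy only (real ZooKeeper adds watches, ephemeral nodes, and ordering guarantees), and `FooService`, the paths, and the addresses are all made up:

```shell
# Toy analogy: a filesystem tree standing in for the znode tree.
root=$(mktemp -d)
mkdir -p "$root/services/FooService"

# Each FooService server "registers" by writing a child node with its host:port.
echo '10.0.0.1:9090' > "$root/services/FooService/server-0001"
echo '10.0.0.2:9090' > "$root/services/FooService/server-0002"

# A client "discovers" FooService by listing the children and reading payloads.
ls "$root/services/FooService"
cat "$root/services/FooService"/*
```

That's the whole service-discovery story: a well-known path, children for members, payloads for addresses.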
The packaging system was a little more complicated than a tarball, but it was spiritually a tarball.
Static link everything. Dependency hell: gone. Docker: redundant.
The deployment pipeline used hypershell to drop the packages and kick the processes over.
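In toy form, that pack-and-kick-over flow is just tar plus a symlink flip. Everything here (`fooservice`, the version number, the paths) is hypothetical, and the actual process restart is omitted:

```shell
# Build the "spiritually a tarball" package...
mkdir -p pkg/fooservice/bin
printf '#!/bin/sh\necho fooservice v42\n' > pkg/fooservice/bin/fooservice
chmod +x pkg/fooservice/bin/fooservice
tar -C pkg -czf fooservice-42.tar.gz fooservice

# ...then "deploy": drop it on the box, unpack next to the old version,
# and flip a symlink so the running config always points at one version.
deploy=$(mktemp -d)
tar -C "$deploy" -xzf fooservice-42.tar.gz
ln -sfn "$deploy/fooservice" "$deploy/current"
"$deploy/current/bin/fooservice"
```

The symlink flip is the atomic bit: rollback is just pointing `current` back at the previous unpacked version.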
There were hundreds of services and dozens of clusters of them, but every single one was a separate service because it needed a different SKU (read: instance type), or needed to be in Java or C++, or had some other engineering reason. If it didn’t have a real reason, it went in the monolith.
This was dramatically less painful than any of the two dozen server-type shops I’ve consulted for using kube and shit. It’s not that I can’t use Kubernetes; I know the k9s shortcuts blindfolded. But it’s no fun. And pros built these deployments and did it well: serious Kubernetes people can do everything right and it’s still complicated.
After 4 years of hundreds of elite SWEs and PEs (SRE) building a Borg-alike, we’d hit parity with the bash and ZK stuff. And it ultimately got to be a clear win.
But we had an engineering reason to use containers: we were on bare metal, containers can make a lot of sense on bare metal.
In a hyperscaler that has a zillion SKUs on-demand? Kubernetes/Docker/OCI/runc/blah is the friggin Bezos tax. You’re already virtualized!
Some of the new stuff is hot shit, I’m glad I don’t ssh into prod boxes anymore, let alone run a command on 10k at the same time. I’m glad there are good UIs for fleet management in the browser and TUI/CLI, and stuff like TailScale where mortals can do some network stuff without a guaranteed zero day. I’m glad there are layers on top of lock servers for service discovery now. There’s a lot to keep from the last ten years.
But this yo dawg I heard you like virtual containers in your virtual machines so you can virtualize while you virtualize shit is overdue for its CORBA/XML/microservice/many-many-many repos moment.
You want reproducibility. Statically link. Save Docker for a CI/CD SaaS or something.
You want pros handling the datacenter because pets are for petting: pay the EC2 markup.
You can’t take risks with customer data: RDS is a very sane place to splurge.
Half this stuff is awesome, let’s keep it. The other half is job security and AWS profits.
The funny thing is a lot of smaller startups are seeing just how absurdly expensive these services are, and are switching back to basic bare-metal server hosting.
For 99% of businesses it's a wasteful, massive overkill expense. You don't NEED all the shiny tools they offer; they don't add anything to your business but cost. Unless you're a Netflix or an Apple that needs massive global content distribution and processing, there's a good chance you're throwing money away.
I am a '10s developer/systems engineer and my eyes kept getting wider with each new technology on the list. I don't know if it's overkill or just the state of things right now.
There is no way one person can thoroughly understand so many complex pieces of technology. I have worked for 10 years, more or less, at this point, and I would only call myself confident on 5 technical products, maybe 10 if I'm being generous to myself.
Not really, it's just like counting: awk, grep, sed, uniq, tail, etc.
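The point being that small tools you do understand compose into the thing you need, word-count style (the input line here is made up):

```shell
# Rank items by frequency: split into lines, sort, count duplicates, rank.
printf 'kube docker kube zk kube docker\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -3
# Most frequent first: 3 kube, 2 docker, 1 zk
```

Each stage is trivial on its own; the pipeline is the product.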
"CloudOS" is in its early days right now.
You need to be careful about what tool or library you pick.
No, not at all. Maybe baffled by the use of expensive cloud services instead of running on your own bare metal where the cost is in datacenter space and bandwidth. The loss of control coupled with the cost is baffling.