Oh gosh, I have all of this going on right now. My CTO read about microservices and now everything has to be in a microservice.
Nothing wrong with that, but for a small development team there is a lot required to make it work (infrastructure as code is another requirement), especially when the project timeline is as tight as if we were building a monolith.
Things out of my control. Spending days tracking down a bug only to find that there is no mitigation other than modifying some upstream library. Bonus terror: deploying the changes would involve numerous clients that are also out of the team's control. Cue weeks and months of begging for the upgrades to be made.
(Edit: one more:)
Deploying changes that don't have an easy rollback mechanism. E.g. a risky change involving apps or browser cookies, and both deploy and rollback take e.g. a day.
"The ex-grad student had poisoned the compiler to poison itself when it was recompiled"
I had something similar to this story happen to a PHP site on shared hosting in the early 2000s. At the time it seemed like some kind of cursed magic: it was obfuscated so well, and it reasserted itself every time I tried to edit it out. Luckily, replacing all the source files solved it (though the breach itself was a deeper issue). If the problem had gone to the level of the PHP interpreter itself, or even deeper, I can't imagine how long it would have taken me to find it back then.
Compiler tampering is a species of malevolence I try not to think about so I can continue to code in a somewhat sane manner, lest I crumble like a Lovecraftian protagonist.
How can we be sure our compilers are safe when they compile all of our other tools? Imagine smart compilers that compile decompilers to omit the secrets of our compilers.
A: Inheriting a mission-critical legacy code base written by some cowboy coder who has left the company and is in no way available to answer your questions.
Then your manager and skip-levels question why everything moves so slowly in development, and you always repeat the same answers. You start to die a little inside from repeating yourself, and they start to get suspicious because nothing changes.
Meanwhile the codebase takes 12 minutes to compile a 1-line change. Each time you compile, you forget why you liked programming to begin with and you spiral into an abyss where you think the only solution is to quit and start a hobby farm.
I like being told what to make. Clearly defined requirements of a problem that I have the skillset to solve - that's my dream, not my nightmare.
My nightmare tends to be times when I'm told to make something that includes x, y, and z technologies, but just make it flexible enough to handle whatever problem we think we might want to solve with our vague mission statement. (Oh, and hurry up!)
Imagine working without any requirements at all. I have to open the Figma to know what I'm supposed to do, and this is backend development. Also, they might decide to change how things work mid-sprint, multiple times.
Devs'/engineers' skills are technical: we can design and build you something. Why are our managers and directors okay with Product Owners or Program Managers farming their jobs out to us?
Prioritization, feature design, product design, writing product specifications, writing acceptance criteria, creating stories... why is a firmware engineer assigned to these things? bruh, what do these people even do anymore?
If you don't know what we want, let's create a plan to figure it out. Writing a 2-sentence AC and rejecting iterations until time runs out is NOT A PLAN.
Oh, to return to the comforting cloak of nightmares, soothing in their unreal fantasy, but for the ringing pager of a 24-hour on-call rotation after product rushed a feature out last Friday.
One company I worked at literally gave every customer (high-touch B2B, not an untenable customer count) their own database to reduce that risk and serve as a crude sharding strategy for horizontal scaling.
That is a legitimate strategy. It's being done in many products, often with SQLite or similar embedded databases, synchronized between devices and backend systems. It has the wonderful property that a fault in one database does not impact the others.
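For illustration, here's a minimal sketch of what that database-per-tenant routing can look like, assuming SQLite with one database file per customer (the tenant directory, table, and function names are all hypothetical):

```python
# Hypothetical database-per-tenant routing: each customer gets its own
# SQLite file, so a corrupted or locked database only affects one tenant.
import sqlite3
from pathlib import Path

DB_DIR = Path("tenants")  # assumed layout: one .db file per customer


def get_connection(tenant_id: str) -> sqlite3.Connection:
    """Open (or create) the database belonging to a single tenant."""
    DB_DIR.mkdir(exist_ok=True)
    conn = sqlite3.connect(DB_DIR / f"{tenant_id}.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)"
    )
    return conn


# Every query is scoped to one tenant's database; there is no shared
# table that a noisy or broken tenant could take down.
with get_connection("acme") as conn:
    conn.execute("INSERT INTO orders (item) VALUES (?)", ("widget",))
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

The trade-off is exactly what the parent mentions: per-tenant isolation is simple and fault-contained, but cross-tenant aggregation (billing, analytics) now means iterating over every database.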
I recall some major GitHub outages not too long ago and learning that they'd built the system on one globe-spanning distributed MySQL database, which fell over a couple of times. A GitHub designed around separate databases for projects wouldn't have that failure mode. Obviously that would create problems on the backend for aggregation, billing, etc., but those are mostly "you" problems, as opposed to the "me" problem when you topple the monster database and take away my repos.
They've since had to isolate problem children to separate databases, away from the hoi polloi in the big "primary" database[1]. So I suppose I wasn't wrong.
Yeah, definitely. Sorry if my post came across as saying anything else. The word "crude" simply referred to the facts that it's not complicated (which is good; simplicity is one of the technique's strong suits, tbh), and that its simplicity throws away parameters you might actually care about, which a more advanced strategy could consider (e.g., if you have one really big customer, then you have to amend the sharding strategy).
It introduced its own set of costs (those "you" problems you mentioned, like aggregation and billing), but I think it was the right choice.