> I'm seeing no compelling reason for a developer to bother to learn or use z/OS.
I agree.
> And I know you could say "well Linux is just as complicated."
I wrote COBOL on z/OS back in the nineties, and COBOL is still in use today. But there's a reason none of Google, Amazon, NVidia, Tesla, Meta, Netflix, etc. were built on mainframes with z/OS / COBOL / JCL / etc.
Yet billions (tens of billions?) of devices are running Linux today. So saying "well Linux is just as complicated" would actually be quite stupid.
Something could be said, too, about the virtualization and containerization of everything: ask how many VMs, containers, and hypervisors are running on Linux.
So, complicated or not, it actually makes sense to learn how to use Linux.
> But there's a reason none of Google, Amazon, NVidia, Tesla, Meta, Netflix, etc. were built on mainframes zOS / COBOL / JCL / etc.
The main reason is that Linux is free of charge, and that Unix happened to be more widely used in academia. It has little to do with the underlying technology.
> So saying "well Linux is just as complicated" would be actually quite stupid.
It is just as complicated, if you are looking at feature parity. Maybe there is less historical baggage but that comes with complications as well (think of the grumbles about systemd).
> The main reason is that Linux is free of charge, and that Unix happened to be more used in academia. It has little to do with underlying technology.
That's not true, or at least there certainly isn't a consensus about it. One of the narratives associated with the rise of Google is the use of commodity hardware and high levels of redundancy. Perhaps this attitude originated from some cultural background like Linux in academia, but their rejection of mainframes and the reasoning surrounding it are extremely well documented[1]: "To handle this workload, Google's architecture features clusters of more than 15,000 commodity-class PCs with fault tolerant software. This architecture achieves superior performance at a fraction of the cost of a system built from fewer, but more expensive, high-end servers."
Name one business started this century that uses mainframes. If there were any compelling reasons to use them in the modern times, there would certainly be some companies using it. Mainframes are legacy cruft used by businesses that had them decades ago and are too cheap or entrenched to modernize their systems.
"Legacy cruft" can be code that's been providing business value for 50 or more years. The mainframe may be expensive, and IBM may love to nickel-and-dime you for every available feature, but it might still make business sense to keep using it. What's the point in rewriting all your code to move it off the mainframe if that will cost twenty times as much as maintaining the existing code while vastly increasing risk? While you may achieve cost savings by moving off the mainframe, they might take so long to accrue that it doesn't make business sense.
If there are any, you can be sure they're not using z/OS. More likely they'd be running one of those rack-mounted models that only run z/Linux (and possibly z/VM).
z/OS systems are rack-mounted nowadays too; they just take up the full 42U.
At least back in the POWER8 days those z/Linux systems were the fastest you could buy, and IBM was super happy to let you overclock them: their reps told me that was just more CPU sales for them.
My previous company had a large estate of Linux and mainframe applications. Ensuring that disaster recovery was implemented for the Linux applications was a nightmare, with different standards and different ways of doing things; on the mainframe it was built in.
While the mainframe may be old and out of fashion, it already had the capabilities we are now rediscovering with the cloud, containers, VMs, and all that...
I've wondered what might happen if IBM lowered costs on this hardware... If they offered a compelling deal, it's conceivable a startup might choose them over Linux. As it stands now, I find it nearly impossible to imagine any organization starting with a clean slate choosing a mainframe for anything. The cost, combined with the work required to make the thing do what you need, is just way too much of an investment.
Many comments tout the uptime and reliability of the "mainframe", but I'd argue we have many counterexamples built on Linux. Building a product with this level of reliability on Linux is expensive, but still cheaper than a mainframe and the various support contracts, IMHO.
I started out working with an IBM AS/400 in college and eventually worked for a company that ran their business on one. Eventually market pressure forced that company to move to Windows servers accessed through Citrix from thin clients. In my opinion, this didn't make much material difference: it was still complicated, and people on the floor complained about Citrix+Windows just as much as they did about the old IBM terminals. Hardware costs and support contracts were cheaper, but the software was much, much more expensive, and the organization no longer had the ability to change any substantive functions. Just sayin', moving away from a mainframe isn't necessarily a clear win either.