IBM launches Linux-only mainframe system (reuters.com)
178 points by jeo1234 on Aug 17, 2015 | 122 comments



By trade, I am a mainframe systems programmer at a large financial institution. My group is responsible for mainframe operating systems: z/OS (it may also be referred to as MVS), z/VM (the mainframe hypervisor), and z/Linux (Linux on a mainframe), as well as many of the software components that go along with them.

I'm really excited about this announcement. I think IBM finally realized they need to be more open if they want to grow the Linux on z community. I just hope they are not too late to the game.

There's a lot of mystery and misconception behind mainframes, so I am happy to answer any questions about mainframes that I can.


What kind of demand do you think there is for a Linux-only mainframe?

My limited understanding is that most mainframe customers are locked-in, e.g. they have legacy COBOL code running their ledger system and the expense to switch off of it is simply prohibitive. That plus the fact that the system is reliable, low-maintenance, etc. preserves the status quo, despite the fact that if you were to write the same applications today, you'd choose a newer platform because it would be more cost-effective.

As such, IBM has historically offered z/Linux and co-processors only to hold onto data and processing that was being pulled off of the mainframe because it was too expensive/too onerous to do on the mainframe using z/OS and the like. So customers, unable to completely shut down their mainframe, could kind of make the best of a bad situation and at least get some cheaper Linux cycles out of their expensive iron.

If the above is true (and correct me if it's not) what is the appeal of a Linux-only mainframe? Or is it only interesting if it is radically cheaper than a z13?


> What kind of demand do you think there is for a Linux-only mainframe?

I think the answer really depends on IBM. Technically speaking, the hardware is rock-solid. There are no single points of failure in mainframe systems. But there are still some obscurities with the platform. There are a lot of packages you'd expect to be available in your s390x distribution of RedHat or SUSE that are missing. And that's probably because the average open-source developer/maintainer doesn't have the means to develop/test on a mainframe.

IBM needs to realize they have an open operating system on a closed platform and the two don't mix. They are taking steps in the right direction, but time will tell if they get there. And they have to prove they can compete with the x86 guys on price.

> My limited understanding is that most mainframe customers are locked-in, e.g. they have legacy COBOL code running their ledger system and the expense to switch off of it is simply prohibitive. That plus the fact that the system is reliable, low-maintenance, etc. preserves the status quo, despite the fact that if you were to write the same applications today, you'd choose a newer platform because it would be more cost-effective.

You are mostly on track there. Mainframes haven't died, for many of the reasons you listed above. But I'd have to disagree with your last statement about choosing a "newer" platform. Mainframe hardware is modern hardware, in the sense that it's updated every one to two years. Mainframe operating systems are modern operating systems; they also get updated every one to two years. If you look at every industry that has been around for more than 30 years, almost all of their mission critical workloads are done on System z. Cheaper isn't cheaper when you have downtime.

But for some workloads, you are right: companies are only on mainframes because migrating to a different platform would be too costly. Even over 10, 15, 20 years.


OK, I'll start. I'm a former IBM-er (IBM Cambridge, Lotus building what up!). I worked mainly on the software side of things, and had some intermittent exposure to AIX and WebSphere which I found to be fascinating both from the historical evolutionary perspective as well as the functional perspective. I'd consider myself competent if someone were to call me in and save an iSeries POWER setup from complete meltdown. If I wanted to, I could get an old POWER5 on eBay with AIX 6L for less than a grand, or better yet rent a VM to learn on for $100/mo.

It seems like, except for "Cracking the Mainframe" or whatever, there's no easy (or even moderately accessible) way of simulating a mainframe setup to learn on. Again, I love this stuff. I read Redbooks in my spare time. I have Hercules and z/OS set up, and that took a long time compared to an Arduino or firing up a Linux VM. An average HNer is probably like me -- he or she might want to fire up a z/OS instance and play around with it, but has no easy way of doing it.

These are the tinkerers who end up deciding what platforms to use down the road. The free MSP430 micro-controllers that TI gave out to high school kids were a brilliant move. When those kids choose their semester project junior year, they might stick with TI because that's what they know. AVR was lucky Arduino took off, for the same reason. Ten years from now, those high school kids are going to be choosing what to buy 10k units of to throw into the pick'n'place machine.

There are some interesting big-data cloud IaaS/SaaS offerings you've put out to keep up with the times, but outside of alliances with large incumbent vendors (say, SAP in ERP, Epic in healthcare) to sell large modules, you're not going to see much traction.

Offer something the tinkerers can play with. Make that the gateway drug. Amazon did it perfectly with AWS - easy to roll out, pretty predictable pricing schemes, pay for what you need, and scale up (more VM's/larger VM's) or out (other products within AWS). Every other vendor is chasing their tail trying to capture that market.

The Linux on z community offers me nothing as a decision maker. If I were already vendor-locked into you guys, then the prospect would be appealing. But even with low-latency requirements (operating within microseconds -- at milliseconds, people start losing jobs/panicking) and five-nines SLAs (healthcare, HFT prop trading), what does the z platform have to offer? How can I even evaluate prospective costs when pricing this out to pitch to the (hypothetical) board?


You are preaching to the choir! I am not an IBMer, so I have the same gripes.

Mainframes gradually exited academia in the mid-80s. It was a terrible mistake by IBM, because they essentially eliminated the next generation of mainframers. They've since come to their senses with a program called the IBM Academic Initiative [0] which promotes the use of mainframes in Computer Science courses. It's only about 20 years too late.

But I think they could do a better job. Up until recently, the only way to try z/OS without access to million-dollar hardware was to break the law. You literally had to torrent a pirated copy of z/OS. And it's not easy to find. A few years ago, IBM changed this with their tool, Rational Development and Test Environment for System z [1]. It's essentially Hercules, but you get a legal copy of z/OS. And it's $9,500 per year per CPU. And there's a stupid hardware-license USB key.

And regarding your questions, your points are valid. Vendor lock-in is a concern. IBM is the only player in the business, and they know it. But look at every major industry that's been around at least 30 years: all the mission critical stuff runs on a mainframe. Maybe it's because it's the only thing that runs their legacy COBOL code. But it works and it's rock solid.

The poster child of migrating to Linux on z is Nationwide. They successfully moved almost all of their x86 processing to Linux on z and saved a ton of money. There's the definitely-not-vendor-biased white paper out there. Do some googling on "Nationwide Linux on z".

[0]: http://www-304.ibm.com/ibm/university/academic/pub/page/acad... [1]: http://www-03.ibm.com/software/products/en/ratideveandtesten...


I totally agree it's too painful to set up. That's why I didn't, despite the excellent work done on Hercules. I really wanted to have a mainframe running on my PC to screw with. However, the list of steps and background knowledge needed to even get started was ridiculous.

That's why I broached, in another comment, an appliance-VM model like we saw with VMware, etc. They have a pre-configured VM for about everything. They could do the same for various aspects of mainframes, so people could selectively use or learn each piece while slowly building up understanding. A pre-configured Hercules with some good tutorials on usage might be a start, if not the other model.


What about the idea of offering a mainframe experience for the cloud? Have a mainframe, but instead of delivering it to a customer with a group of consultants, just offer its aspects as a service?

(Disclaimer: I have no idea what I am talking about when it comes to mainframes; just a quick thought that popped into my head.)


I think IBM is headed that direction with Bluemix. That is their public cloud offering. However, there are no mainframe offerings yet.


So, why use Linux instead of z/OS? Is it cheaper? Offer more tools?


The reason from my perspective is that it is different from a traditional unix system, and what unix emulation it offers is limited (to put it in perspective, it feels like a bad vim emulation... your native skillset doesn't quite translate). There are lots of issues with dealing with data formats (EBCDIC). Access to open source compilers and scripting languages was limited. Granted, these were strengths of the platform as well, but it made our workload (maintaining a compiler for the target) harder than on other unix-alikes, for those of us not fully versed in JCL.
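To make the EBCDIC pain concrete, here's a minimal Python sketch. (cp037, the EBCDIC US/Canada codec, ships with Python; real z/OS datasets are often IBM-1047, which isn't in the standard library, so this is only an approximation.)

    # Round-tripping text through EBCDIC, and what happens when you don't.
    text = "HELLO, MAINFRAME"
    ebcdic = text.encode("cp037")    # roughly what the bytes look like on z/OS
    print(ebcdic.hex())              # c8c5d3d3d66b40... -- not ASCII at all
    print(ebcdic.decode("cp037"))    # fine, as long as you know it's EBCDIC
    print(ebcdic.decode("latin-1"))  # gibberish -- the classic binary-transfer surprise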


It's hard to answer "z/OS vs Linux on z" generically, because there are use cases for both. But perhaps the most generic answer I can give you in favor of Linux is: familiarity.

z/OS has a long history. Its predecessor, OS/390, dates back to the 1990s. Before OS/390, there was OS/360, which dates back to the 60s. Back then, IBM was first to market on business computer processing. (That was the only computer processing.) Major industries, like financial, insurance, and airlines, poured their infrastructure into mainframes, because it was the only name in the game. IBM prides itself on assuring its customers that the COBOL code that ran your business back in the 80s will still run today on z/OS 2.1 (the current version of z/OS). Chances are, when you swipe your credit card, that transaction touches a mainframe and probably some code that was written in the 1980s. Or earlier. I know, because I've seen the timestamps.

This compatibility is really evident when you use z/OS. The green screen is perhaps the most obvious example. When you see a systems programmer (that's mainframe-talk for sysadmin) debugging a system, you'll see them page through a really archaic, unsexy TN3270 interface. Why? The same reason Unix (and thereby Linux) uses teletype. Because that's what the operating system was built on.

Sure, z/OS has Unix System Services. In fact, it's even POSIX compliant. But can you `apt-get install ruby`? No. There's no Ruby for z/OS (unless you count JRuby). There's no package manager for USS. It's just plain, vanilla Unix. There are no new open source contributions. There's no Bash that comes with the operating system. You get a 10-year-old shell. There's all kinds of shenanigans with SCP/SFTP and ASCII/EBCDIC. IBM has to maintain the tools. (Actually, that's been turned over to Rocket Software.) It feels very "round peg, square hole".

So z/OS has an image problem. IBM made a huge mistake back in the mid-80s. When commodity x86 PCs became available, universities realized they could teach their computer science programs on cheaper hardware instead of expensive mainframes. Computer science is platform agnostic, right? What IBM didn't do is recognize this as a problem. They didn't give out free mainframes to universities, so schools quit teaching with them. How many people do you know under 30 that had a mainframe-based curriculum? And for the self-taught, how are you supposed to learn the basics of a platform if it costs millions of dollars? Anybody can install Linux on their $100 laptop in a few hours. But mainframes?

Fast-forward a couple of decades and now you have a talent pool that's extremely saturated with x86 people. And the mainframe people? Well, they are all retiring and there aren't many replacements. Second-generation mainframers are few and far between. (I'm one of them.)

So let's say you have a new workload. It's undeveloped. What platform should you choose? There's probably not many technical reasons why you could not accomplish what you want to do in z/OS. But how many people do you know that consider themselves proficient with debugging z/OS? Outside of "dead languages", your options are pretty much limited to Java. (There's a few exceptions to this, but Java is the biggest modern language player.) But that's not to say there isn't any new development happening in the z/OS space. There's plenty of new workloads coming to WebSphere on z because porting a WebSphere application could be pretty easy. There's also performance benefits when you are on the same system where all your financial records are stored. z/OS is definitely an option, but it varies by use case.

With Linux on z, you get real Linux. And the majority know Linux. The majority can debug Linux. And you get the same benefits of being on rock-solid mainframe hardware, plus memory-speed I/O against mainframe data and services through a special networking interface called HiperSockets. Mainframes are also pretty good at virtualization, because they invented it. (z/VM has its roots dating back to the 70s.)


That was definitely where they screwed up: universities. They have a recent program pushing mainframes to universities more. However, they would've been better off (a) giving them to universities at cost, (b) donating compute time to students from a pool IBM themselves use, or (c) supporting the Hercules emulator for use in educational institutions that acquire licenses. That would've gotten more exposure. Each is still a good move today.

Not sure what they're actually doing, but closing it off too much holds them back. It can still be proprietary; however, people need to be able to hack on it, or on a VM of it, for best results. Preferably there'd be a way for people to learn it in pieces, so they don't have to know all the mainframe stuff at once. Think pre-configured, appliance VMs for various services. Have those for z/OS admins, CICS users, etc.


I'm curious too. Of course, anyone here can quote the usual benefits of open source (no licenses, familiarity with the technical crowd in general, the "many-eyes"), but I'm curious how valid these benefits are in the mainframe space.


Maybe more software is supported on Linux. So it combined some of the hardware advantages of a mainframe + more recent/wide variety of packages?


Note: Canonical seems to be a partner to bring Ubuntu to it.

See http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on...



"IBM said LinuxONE Emperor can scale up to 8,000 virtual machines or thousands of containers, which would be the most for any single Linux system."

I wish the article (or IBM) provided some specs. Thousands of containers is just too broad a statement -- what are these containers doing? Running a bash hello-world script, or mining bitcoins?


In an article which states "..IBM's z13 mainframe computer, which had been designed for high-volume mobile transactions.", all bets are off with respect to reality. I mean, what does 'designed for high-volume mobile transactions' even mean in the context of having Linux running on the hardware?


Not sure if this is still correct for the z13 but in general: "System z servers offload such functions as I/O processing, cryptography, memory control, and various service functions (such as hardware configuration management and error logging) to dedicated processors."

(from Wikipedia)


It's called a zIIP engine (http://www-03.ibm.com/systems/z/hardware/features/ziip/) and it utilizes the IFL also (http://www-03.ibm.com/systems/z/os/linux/solutions/ifl.html). We use the zIIP on our z13 basically to offload any work that takes a ton of I/O (DB2 data serving, dataset scans for performance monitoring, RMF and CICS reporting). It ends up saving a lot of money monthly, because we are charged by the maximum number of MSUs consumed during any four-hour window of the month. The zIIP reduces the number of MSUs needed at any given time. In our case, 5-10.
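For anyone unfamiliar with that billing model: the metric is the rolling four-hour average (R4HA) of MSU consumption, and the bill is driven by the month's peak. A rough Python sketch of the idea, with made-up numbers (real figures come out of SMF/RMF data):

    # Sub-capacity billing sketch: you pay for the peak rolling
    # four-hour average of MSU usage. The samples are hypothetical.
    hourly_msus = [120, 130, 180, 220, 260, 240, 200, 150, 110, 100]

    window = 4
    r4ha = [sum(hourly_msus[i:i + window]) / window
            for i in range(len(hourly_msus) - window + 1)]
    print(f"peak R4HA: {max(r4ha):.0f} MSUs")  # 230 -- this drives the bill
    # Offloading I/O-heavy work to a zIIP lowers the samples, and
    # therefore the peak -- hence the 5-10 MSU savings mentioned above.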

Back in 2005, I installed SuSE on an S/390 box. If you could throw a lot of memory or MIPS at it, then it would run pretty well, and it could be used as a file or email server. But anything that crunched data was horribly slow. Can't say whether it's any better now, but if it's utilizing the zIIP, I'm sure it's miles above where it was 10 years ago.

Still, pretty high cost of ownership on the mainframe to get this. I can't imagine many people buying a million dollar mainframe just to run Linux. Seems like those who have a mainframe already would have been like us: let's partition out a Linux instance and try it out.


> I can't imagine many people buying a million dollar mainframe just to run Linux.

The Techcrunch version of this story [1] has an interesting bit on the pricing model, suggesting that the up-front cost will be much lower than usual (but no specific numbers).

By offering an elastic, cloud-like pricing model, [IBM] is hoping to land more customers who might have been scared away previously by the up-front cost of investing in a mainframe. The metered mainframe will still sit inside the customer’s on-premises data center, but billing will be based on how much the customer uses the system, much like a cloud model, Mauri explained.

Of course it depends a lot on the specifics, since IBM does already bill in part based on usage. So this could be either a significant departure in pricing approach, or just a marketing tweak on it.

[1] http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on...


To clarify, zIIP processing is only relevant under z/OS. Linux processing runs under IFLs. But that's not to say you can't get some offloading in Linux. By design, mainframes can hardware-accelerate some processing. The cryptocards [0] are an example of this.

But you are right. A lot of shops go through a lot of effort to maintain their zIIP window in their z/OS processing. And a big selling point for many mainframe software vendors is that their product's processing is "zIIP-enabled".

[0]: https://share.confex.com/share/120/webprogram/Handout/Sessio... (slide 30)


Yes, you are right there, and thank you for clarifying that. I should have been more clear that the zIIP is useful to Linux through accessing data via z/VM. You can't offload all of the I/O, but you can offload some of it if accessing the mainframe for data. Or so I remember at this point. It's been a few years since leaving my mainframe systems programmer position.


What does any of that have to do with mobile vs. non-mobile? It's like they just added a word for no reason.


Interesting! Thanks for the info. :)


I am completely naive in this subject area. What would you say to a clueless peer who said, "my laptop has those same features, except built into the CPU (like the AES instructions found in i7 cores), so they have less latency than a mainframe would"?

I cut my teeth on an Amiga and understand the appeal of external processors, but it seems like bringing those features on-die would be better yet. Mainframes seem to be going the other way, though. What am I missing out on?


All of that is standard for a mainframe. Nothing on that list distinguishes this machine for handling mobile transactions as opposed to non mobile ones. IBM hasn't innovated here, they've merely iterated and added a buzzword.

It shows how outmoded mainframes are that they're now selling one that only supports being a fleet of Linux machines. Mainframe customers are no longer scaling their legacy zOS systems, meaning the big mainframe users must be processing their transactions in the Linux cloud now.


In fact mainframes are a growth market for IBM. They are selling quite well.

http://www-03.ibm.com/press/us/en/pressrelease/47029.wss


Probably a realtime system (i.e., one that responds within a guaranteed amount of time), but the statement is still too vague.


> I mean, what does 'designed for high-volume mobile transactions' even mean in the context of having Linux running on the hardware?

Excellent question. Statements like this, grandiose but nonspecific, are intended for managers and those who are looking for a justification to spend money.


Do you have a source on whether the LinuxONE Emperor is actually a z13 underneath? I wouldn't be surprised if the Emperor and Rockhopper are just re-branded z13s and z114s.

If true, here's your specs for the z13:

http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=SP...


Exactly. Even its official site lacks any clear explanation, and the solution brief PDF is a 404.

I don't see how this is any different from a regular server rack.

http://www-03.ibm.com/systems/z/os/linux/linux-one.html


> I don't see how this is any different from a regular server rack

well, the difference is that most racks cannot fit 10 TB of RAM and 140x5GHz cores: http://www-03.ibm.com/systems/z/hardware/z13_specs.html

but surely those are super-expensive, and I doubt they're all that effective when it comes to performance per watt. I'm actually a bit more excited about Power8 servers, but are there any non-IBM available yet?


> well, the difference is that most racks cannot fit 10 TB of RAM and 140x5GHz cores: http://www-03.ibm.com/systems/z/hardware/z13_specs.html

I can fit 4 C7000 chassis in a single rack, which top out at an aggregate 32 TB of memory and over 2000 cores. 5 GHz cores running s390x microcode are not, FYI, twice as fast as Xeons.

It would be great if people who weren't familiar with the state of the art in x86 kit didn't blindly assume IBM et al's advertising was reality.


I'm no expert, but my understanding is that the IBM "z" platform has a number of interesting features perhaps not found in more typical server platforms/configurations, beyond just scaling out to loads of cores and RAM.

As with all IBM mainframes going right back to the 60s, the "z" systems are designed for continuous uptime, on the order of decades, and this is evident in many of the design decisions. For example, various subsystems and components have hot-spares available, so that even outright component failures will not cause downtime. Many components are then hot-swappable, even those that might not ordinarily be in other architectures, such as processors and main memory. No interruption to OS or application-level services is expected by hot-swapping such hardware. Across the useful life of the mainframe, most repairs and maintenance would be carried out without ever shutting it down.

I understand the platform also has extensive internal integrity checking built in, a potentially important factor for various types of jobs. Its auditing service is capable of detecting unusual conditions in various subsystems or jobs ("I've just picked up a fault in the AE-35 unit"), automatically retrying instructions on the processor if they executed anomalously. If the fault continues, the suspect processor is routed-around with no interruption to OS or applications, the job is resumed from last checkpoint on another processor, and the system phones home to IBM to log a service call. This monitoring is not being performed by processes running in userland or by the kernel, but is in fact baked into the hardware/firmware platform.

Furthermore, the systems can be configured with a variety of specialty offload processors or subsystems for tasks like encryption, key management, compression, and even logging -- which again might not be so commonly found on-board of some commodity servers.

(And, of course, even if you could put together an analogous solution with commodity kit, it's IBM! For the sorts of companies looking at a mainframe in 2015, having the IBM name on the SLA has got to be a pretty big part of the equation, right?)

Moreover, if you proposed to build and manage these sorts of capabilities from commodity x86 kit, I imagine IBM would claim that they'd have the lower TCO.


> I'm no expert,

Whereas I have hands on experience with this stuff. So I guess I should thank you for giving me first-hand experience of being on the receiving end of that mansplaining thing people complain about.

> I imagine IBM would claim that they'd have the lower TCO.

Yes, they will. The funny thing is, the man from HP was in just last month explaining to me that Xeon Superdomes have a lower TCO, too, and last year the lady from Oracle was telling me how I shouldn't balk at the headline cost of ExaData and ExaLogic because, from a TCO perspective, they'd save me money.


Good stuff here.

People fall into the mistake of just comparing cores and memory specs between x86_64 and s390x. The hardware redundancy benefits are pretty huge. You really need a full proof-of-concept to get any picture of how your workload might run on System z.


> The hardware redundancy benefits are pretty huge.

Not really, IME.

> You really need a full proof-of-concept to get any picture of how your workload might run on System z.

Absolutely. Some workloads can make sense. For others...


NB your AE-35 comment: despite loud protestations, there's a very good case that 2001 was written and directed as a sharp critique of IBM, including IBM logos in numerous places.


Is that a single-system image, I/O offloaded, and five 9's? What's the x86 equivalent for that?


> Is that a single-system image

No. Which workloads do you have that need more than 40 cores per image, are happy with a maximum of around a hundred, and won't run on clusters? (I hope it's not one that's going to be crippled by zVM's slow scan of large memory areas suspending guest execution.)

> I/O offloaded

If you've actually worked with zLinux (I have) you'll know there's little effective offload, and that zVM overhead increases as the virtual IO ramps up.

> and five 9's

Is zVM offering sysplex? No. So you're relying on your single LPAR to be five nines? Never upgrading zVM? Never updating PR/SM?

Is the datacentre five 9s? The power? The network? Really?


> most racks cannot fit 10 TB of RAM and 140x5GHz cores

Any rack can fit 10TB of RAM and the equivalent of 140x5GHz POWER cores.

I can easily get ~1TB RAM and 64 2GHz+ x86 cores in 1U. A standard rack is at least 42U. In terms of density it's nothing special.
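The back-of-envelope math, as a quick Python sketch (round numbers, using the 1U figures above against the z13 numbers quoted upthread):

    # Commodity rack density vs. the quoted z13 figures.
    rack_units, tb_per_u, cores_per_u = 42, 1, 64
    print(f"one rack: {rack_units * tb_per_u} TB RAM, "
          f"{rack_units * cores_per_u} cores")  # 42 TB, 2688 cores
    print("one z13:  10 TB RAM, 140 cores")     # per the spec sheet above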

What makes these different is that you get it in a box that someone else will be maintaining, and where all the dirty work of designing and building a high availability redundant system has been done for you. Most people never build anything remotely as redundant as these things tend to be.


I'm guessing part of the appeal will also be super-wide bandwidth between the cores. Interconnect throughput and/or shared memory could make this significantly more interesting than a modular rack-based system for some workloads.


> I'm guessing part of the appeal will also be super-wide bandwidth between the cores.

If it's anything like previous generations, that will be InfiniBand between books. The benefits on Z-class systems tend to be very big per-core, per-socket caches and, as the person you're replying to said, a lot of internal redundancy and automated failover (multiple backplanes with failover, spare memory and cores with failover, and so on and so forth).


> I'm actually a bit more excited about Power8 servers, but are there any non-IBM available yet?

Yep, OpenPOWER machines are starting to come out - Tyan has the Habanero server as well as the Palmetto reference/development platform. http://www.tyan.com/solutions/tyan_openpower_system.html

(disclosure: I'm an IBMer working on Power)


> High-performance computing (HPC)

> Data centers

> Big data

Can you give me some specific roles for this range of servers? What makes it worth it compared to an x86 system?


The high-end POWER 7/8 hardware has incredible single core performance, beating the pants off Xeon. It uses huge amounts of power to do that, so it's not appropriate for all roles. Low end POWER is pretty niche. Freescale uses the architecture for telecoms applications.

In general Linux upstream these days "just works" on ppc64 & ppc64le. There's RHEL for POWER already, and IBM have loaned hardware to the CentOS project so we'll get CentOS on POWER pretty soon.

The licensing of (Open-)POWER is more open than x86 (but not as open as stuff like RISC-V), and there are several second sources for chips, in situations where that matters.


You get a pretty decent multiple of performance on power vs. intel. Probably something like 4-7x for common use cases.

Generally speaking, you buy the hardware because your workload needs the single core performance, or you're arbitraging vendor licensing costs for software.

Also, IBM's bread and butter is "peaky" financial services and gov't business, so they have business models that make it work from a $ pov. You can buy a box with 100 cores, pay for 20, and lease 30 more for a few days to meet your peak demands for tax/christmas/billing season.


There is the TYAN GN70-BP010, although they state it isn't for production use.


That sounds like a whole lot of eggs in one very expensive basket. Plus we can get that density with standard kit I reckon.


It may be one basket, but IBM high end kit is the nuclear fallout shelter of baskets.

E.g. we used to have an IBM Enterprise Storage Server (aka the Shark) back in the day (around 2000), and it came in American-fridge size, full of drawers of drives. You could just yank any drawer, safe in the knowledge that all the raid volumes were distributed over multiple drawers. If a SCSI controller failed, you could yank a drawer of SCSI controllers and hotswap them, safe in the knowledge they were fully redundant.

The "brains" of the thing consisted of a fully redundant pair of AIX RS/6000 servers, and you could yank either one of them without losing data (all writes were committed to at least non-volatile memory on both servers before being acknowledged). Either server also had at least hot-swap RAM (a raid-like memory controller) and may have had hot-swap CPUs (tell the OS to move all threads off a CPU, swap, switch back).

On top of that, it had a phone connection and would dial out to report any early warnings of problems directly to IBM who'd send out a technician before anything even failed as long as you kept paying your support plan.

So yes, you can get that density with standard kit easily, and probably much cheaper too, assuming you have enough skilled staff to manage it. The reason IBM still manages to sell this kind of kit, on the other hand, is that what they are really selling is peace of mind that most issues are Someone Else's Problem. For some people it makes sense to pay a lot for that.


I wish I had that much confidence.

Back in the late 1990s I was involved in provisioning a large Sun e15k. Not indestructible, but nearly.

It broke. You know what happened? The factory roof leaked and poured water onto the DC sub-building; the roof then collapsed onto the e15k, which promptly blew up and caused a spectacular fire, a halon dump, and about a month of arguing with insurance companies and guys with shovels.

In that circumstance, it doesn't matter what promises the vendor make. That's still all your eggs in one basket.

Buy two and keep one somewhere else didn't help either as the network termination, switching and routing layers were down and all the people using it were about 300 miles away from the backup location anyway. So some poor fucker had to dismantle the backup e15k and disk arrays, bring them in a large truck[1] to the original location and erect a temp DC in a portakabin outside the building.

Edit: We would have been better served with two smaller DCs with off the shelf kit on the same site but different buildings running a mirrored arrangement. All for pocket change compared to a zSeries...

That's what the company I work for now do. We have off the shelf kit, SAN replication, ESX, redundant routing and multiple peers in different locations.

[1] imagine the shit if that truck crashed.


That's why you never deploy to just one location no matter how reliable the actual kit is.

You'd be in exactly the same situation if you had off the shelf "normal" servers in a rack. The point is one IBM mainframe is generally going to be more reliable than the vast majority of "homegrown" setups in a single location.

If you're comparing against a setup in multiple locations, then you should compare against two or more of these.

And there too, these kinds of solutions are far more reliable if you are willing to pay the money. E.g. IBM provides a range of options up to full synchronous mirroring of mainframe setups over distances up to about 200km, where both systems can be administered as one identical unit (the distance limit is down to latency). They also provide a range of other options for various tradeoffs between performance, how much data you can potentially lose, and cost.
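The ~200km limit falls straight out of physics: light in fibre travels at roughly two-thirds of c, and a synchronous write has to wait for the round trip. A quick sanity check in Python:

    # Why synchronous mirroring tops out around 200 km.
    fibre_km_per_ms = 200        # ~2/3 the speed of light in a vacuum
    distance_km = 200
    rtt_ms = 2 * distance_km / fibre_km_per_ms
    print(f"~{rtt_ms:.0f} ms added to every committed write")  # ~2 ms
    # Protocol overhead and non-straight-line fibre make it worse in practice.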

> Buy two and keep one somewhere else didn't help either as the network termination, switching and routing layers were down and all the people using it were about 300 miles away from the backup location anyway.

And this wouldn't have been any better if you had two racks of kit instead of two mainframes.

> All for pocket change compared to a zSeries...

There we agree. I'll likely never buy or recommend one of these, for the reason that I tend to work on cost sensitive projects.


Except that you're going to pay a lot more than you would for those off the shelf "normal" servers in a rack. Probably enough that you can afford doubly-redundant normal servers for the cost of a non-redundant IBM mainframe, with quite a bit of cash left over.


Yes but when that system falls over, your boss is yelling at you, and you're on the hook. With IBM, you can all yell at IBM. And that's why big enterprise companies buy IBM.


It's also why IBM has seen decreasing revenue for 13 straight quarters.


Until it's time to build that new datacenter.


This story is obviously a bit ridiculous nowadays, since no one that can afford an ESS is buying a single site. In fact, most can't due to legal regulations about having data redundancy. These regulations typically lead to having a secondary site across town and a tertiary site across the country.


We have an AS/400 (excuse me, iSeries) and the damn thing is rock solid. It also alerts us and IBM when it needs maintenance. It's basically a tank with a logistics chain.


System/38, later AS/400, was one of the most brilliantly designed systems of the time that I've seen:

https://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf

Designed for business apps, future-proofing, integrated database, largely self-managing, capability-security, continuing on solid (POWER) hardware... did about everything right. That's why we regularly fix crashed Windows and 'NIX machines but my company's AS/400 has been running for around 10 years.

I've always wanted a modern, clean-slated version of the System/38 w/out relics from that time and with any tricks we've learned since. Throw in hardware acceleration for garbage collection and some NonStop-style tricks for fault-tolerance to have a beast of a machine.


Strangely, I've always wanted a modern version of the Burroughs Large Systems, but I like stack machines and have been a fan of Forth and Postscript.


It's not strange for anyone who's read this:

http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...

A similarly amazing machine that IBM's System/38 learned from a little bit. Somebody posted a link to an emulator but honestly I don't want to dredge through that. Like you said, a modern system that reimplemented its best attributes without the limitations or baggage would be nice.

Mainframes are complex enough that there are rarely projects to implement them, but there's lots of work on safer CPUs. See crash-safe.org's early publications for a CPU that combined Burroughs-style checks, the Alpha ISA, and functional programming at the system level. Given your stack preference, you might like these:

http://www.jopdesign.com/

https://www.cs.utexas.edu/~jared/ssp-hase-submission.pdf

http://www.ccs.neu.edu/home/pete/acl206/papers/hardin.pdf


I started out my career in IT as an AS/400 operator / Netware 3.12 admin, and while AS/400 / iSeries aren't "en vogue" these days, I have a lot of respect for those machines. As you say, they are rock solid. One of the places I worked for had an even older machine, an IBM S/36 (predecessor to the AS/400) and while ancient, it just kept plugging away, day after day after day after day...

OTOH, you couldn't pay me to program in RPG/400 using SEU. Building menus, or playing around with a little CL on the '400, is one thing. But RPG programming sucks. Well, it did anyway. Maybe things have gotten better. I understand the ILE stuff made RPG less column-oriented and closer to a free-form language, but I never had a chance to use that.


I remember original RPG as being the electronic descendant of the old IBM unit record machines, with their plug boards and mechanical processing cycles. That heritage likely predates even COBOL. IBM added many extensions over the years, and at one of my mainframe workplaces we even did online CICS programming with RPG (not fun at all!).


Who are you and how have you stolen my[1] early career history?!

Let me guess you started in the early 90's, right?

I remember looking at HTTP for the first time and feeling like it was 5250 display files writ anew. The y2k mess made me jump into web dev full time.

-----

[1]: https://news.ycombinator.com/item?id=9816696


> Who are you and how have you stolen my[1] early career history?!

Hahaha... well, it's a long story, regarding the early part of my career. Especially the whole bit about exactly how I got involved with AS/400's in the first place.

> Let me guess you started in the early 90's, right?

Almost. I graduated H.S. in '91, started programming in '92 or so, but didn't start my first IT job until 1997.


The AS/400 (err, iSeries) never gets any love. If you need line-of-business applications that just work all the time, it'd be a great choice.


This used to be my choice of integration. Microsoft web front end and AS/400 back end for warehouse. DB2 is a beast.


That's kinda the point of a mainframe, is it not?


Spending money? Yep.


No, putting all your eggs into one (hopefully redundant) basket so you only need to yell at one person.


Except that the OS is still provided by a different company to your actual hardware, so there's plenty of room for blame-passing, and most of the OS development is being done on x86 machines by people with no access to IBM Power hardware of any kind.


If I never hear "one throat to choke" again, I'll die a happy man.

This only works well if you can negotiate an acceptable SLA, your main vendor doesn't balk when integrating with subcontractors or other vendors and if you have a rock-solid vendor manager on your side enforcing the SLA.

Needless to say, it often doesn't work that way.


Oh, that works great when you lose power to the rack. Or the datacentre. Or the SAN fails. Or the core routers. Or any of the many other SPOFs that can and do occur in a datacentre.


All of which are accounted for by having two or more of these, combined with the feature they call (I'm not kidding) Geographically Dispersed Parallel Sysplex (GDPS).

You can hook up multiple IBM mainframes remotely and set them up to automatically ensure consistent replication of machine state to various extents depending on your reliability vs. performance tradeoffs and replication distance (latency being the issue), all the way up to active-active operation across systems.

So in other words: It works far better than the failover options most people deploy on their off the shelf servers in their self-wired racks (and yes, I run my own setup across off the shelf servers; and no, they're not nearly as redundant as a pair of IBM mainframes).


Problem is, we kitted out two 42U racks in two DCs with HP and EMC kit on VMware, and got four humans for five years, for less than the comparable quote from IBM. And we've tested replication and failover to the same extent, and didn't have to rewrite the 2 million lines or so of code we have...


> All of which are accounted for by having two or more of these, combined with the feature they call (I'm not kidding) Geographically Dispersed Parallel Sysplex (GDPS).

And it is an awesome thing - although I didn't realise it supported zVM these days, rather than just zOS.

In any case, you've still got two baskets, which was my point.


That's why you buy two and put them in different DCs


And that's why IBM has their own bank branch to help customers figure out how to afford that?


Well, one of their big customer segments is the banking sector...


That would be most of their customers I imagine. I work with one who still runs mainframes.


From my experience with zLinux environments I would be inclined to scrutinise any density claims very, very carefully.

Interesting they're working with Canonical on this - the Z has historically been a SuSE stronghold, with a lot of the development of Linux-on-Z happening in Germany, and Red Hat trailing behind. I'd be wary it will be another increasingly typical adware effort from them.

(I follow the main zLinux mailing lists for my day job, and I see a lot of folks from SuSE, some folks from Red Hat, a bunch of people from IBM, and basically no-one from the Ubuntu/Canonical world. Usual disclaimers apply.)


I can think of two good reasons. One is if they want to offer these for virtual desktop environments (as an alternative to the VMware on Cisco UCS setups many current IBM mainframe customers run). The other is to offer an alternative platform to EC2, where Ubuntu seems to be very common. Both would indicate that they are probably aiming at the desktop/user/development side with this product.


IBM's focus is on growing its alternative platforms - more users - not more tech. Tech is necessary but not sufficient. It makes sense to partner with Ubuntu if you want to focus on growing communities.

"typical adware" - heh heh! Seriously, in the consumer space everyone wants "free" so affiliate marketing is a sensible route - see the consumer internet for evidence. In the enterprise markets customers are focused on delivery outside the pure bits - open source bits are necessary but not sufficient - subscriptions such as Ubuntu Advantage with SLA's, management frameworks and consulting are viable there.


> Interesting they're working with Canonical on this

Ubuntu's distributions are the reference root FS for Cloud Foundry, to which IBM is the second largest contributor of treasure and engineering effort.

Disclaimer: I work for Pivotal, who are the first.


Ubuntu also inherits quite a bit of 's390x' platform compatibility work from Debian. For a user coming from a non-mainframe Linux background who wants the normal packages to "just work", it might be further along than other Linux distros (but: I have not quantified this). Debian-on-Z doesn't have as much testing as x86, of course, but since it's one of the five officially supported Debian architectures (x86, ARM, PPC, MIPS, S390x), it gets constant autobuilds of the whole archive, a reasonable amount of debugging effort, and release-engineering attention.


> From my experience with zLinux environments

Any specifics you can share?


Depends what you want to know.


Well, I find "I would be inclined to scrutinise any density claims very, very carefully" interesting. Why do they deserve very careful scrutiny?


So I can only speak fairly generally, but I've never even met anyone running the kinds of ratios IBM is talking about in their press releases. I have, however, seen some workloads run pretty well; Oracle is IME a very good performer on Z, needing a smaller SGA and fewer IFLs than the equivalent Intel setup. Depending on the deal IBM offer you, zLinux can be a very good way of running Oracle.

Conversely, CPU-intensive Java doesn't really seem to enjoy much, if any, advantage, which makes it a very, very expensive option for doing that.

There are a bunch of factors to consider (for example, if you've got zLinux and zOS co-resident on the same system there are some interesting things you can do with your legacy and modern code bases), but in general I'd want to prove anything myself before diving onto a Z.


Frankly, my experience on mainframes is that UNIX is a stronghold there for a reason. I'd be far more interested in this ecosystem opening up by having strong mainframe vendor support for running Illumos and/or FreeBSD, and less concerned with running Linux. Among other things, Zones are more useful than lxc, and in this type of environment you often need strong kernel support for specialized high-speed interconnects and real-time operations, which Linux only has experimental support for but is integrated well in the UNIX world. I love Linux and what it's done for the world, but UNIX isn't dead for a reason, there are still many things it is superior at.


This is an odd remark - apparently AIX/ESA, the AIX version for mainframe hardware was discontinued "in the late 1990s", roughly the same time when IBM started to invest in Linux.

http://www.lookupmainframesoftware.com/soft_detail/dispsoft/...

There was also Amdahl UTS, which was sold around 2000 to a company called UTS Global that appears defunct now.

https://en.wikipedia.org/wiki/Amdahl_UTS

So where is this stronghold of AT&T descendant UNIXes on mainframes then?


Nowhere, but some people call the System i / AS/400 / iSeries a mainframe. It's midrange, but people call it what they call it.

Anyway, IBM i and AIX run on the same hardware. They used to be called System i and System p. Now it's just Power Systems.

Maybe that's what he meant.

Or, well, come to think of it: there is a UNIX side to z/OS itself.


It's about time. This is long overdue. The combination of the mainframe architecture's strengths with Linux's API/ecosystem could be pretty awesome. Anyone wondering why buy a mainframe should focus on these areas:

1. Reliability. Some have gone 30 years without downtime. Probably strongest selling point.

2. Channel I/O [1]: dedicated I/O processors plus scheduling that lead to high utilization (80-90+%) and throughput vs commodity servers. Second strongest point in mainframe's favor. I wish my desktop & servers had this rather than a knockoff.

3. Hardware partitioning that's more robust and rated at stronger security than most virtualization. Certain cutting-edge projects in INFOSEC are doing similar things at CPU and I/O layers. Mainframe's version, although not as cool, is decent and field-proven.

4. Built-in, proven software virtualization.

5. Hardware acceleration for some things such as databases and crypto.

6. IBM's ecosystem of apps, third-party providers, and services. This might matter to existing IBM customers.

So, those are a few advantages I hear from people who use mainframes. The z/OS-based mainframes have extra benefits in terms of software reliability, security through obscurity (obfuscation), and seemingly better use of both security (eg memory key) and functional (eg decimal) aspects of mainframe processors. The z/VM product has also been doing for decades what modern virtualization systems only recently do, even self-virtualizing since 70's.

So, there are some things to ponder. Whether it makes sense financially vs. other setups is a whole different discussion. However, mainframes do retain strong technical advantages over commodity architectures. They were doing cloud in one box before it was a thing. Their reliability is still unmatched, with only VMS clusters and the NonStop architecture getting close. So, it's a sensible choice for a business to spend extra $$$ to get high throughput with no downtime and strong isolation of logical partitions.

[1]: https://en.wikipedia.org/wiki/I/O_channel


So for a business with a complex AS/400 system still in place, would this be a replacement for that, or an augmentation of it? Their page isn't very clear.

http://www-03.ibm.com/systems/z/os/linux/linux-one.html


I wonder if the KVM on zSeries thingy is actually open source and upstream. From what I can see, they (IBM) say it's "based on open source", which is market-droid speak for "actually it's not that open".


It appears to be pretty open source.

http://lxr.free-electrons.com/source/arch/s390/kvm/


When would this type of thing become cost effective? Say, if you compare it with the typical X amount of Dell servers also capable of running 8,000 virtual machines.


At the last user conference for some iSeries-based software that we run, IBM had a booth where they displayed a 2U server with dual 16-core Power7 CPUs. They bragged that it only ran Linux and would save us a ton of money. They started out at about $20,000 US.

How small is this market? You'd have to have apps that were written to take advantage of Power7. Equivalent x86 Linux servers are 1/4 the price.


I think the horsepower on those machines shouldn't be underestimated, because they are not entirely as equivalent as you think... I was thoroughly surprised when an unoptimized (but correct!) ChaCha20/8 implementation I wrote on a 3.0GHz POWER8 little-endian machine was about as fast as the latest 3.5GHz Xeons at AES-256 with AES-NI (about 1.3 cpb vs 1.0 cpb IIRC, but the latter has a dedicated hardware unit for it!). On that same Xeon, the ChaCha20 code only hit somewhere around 5 cpb - that's software vs silicon!

It also has 170 cores and was actually a QEMU instance (w/ hardware virtualization extensions) vs raw dedicated metal. If you're doing any kind of numerical or analytic workloads (even databases), I wouldn't throw them aside so quickly. You can even get CUDA for them these days, and certain physical addons like CAPI allow you to map and coherently share physical CPU address space with FPGAs or GPUs. If I could get those things in a reasonable workstation configuration, I'd probably go for it tbh.

(I'd be more than willing to repeat this and post some more accurate numbers if anyone cares. I also need to get around to benchmarking AESNI vs that POWER8 machines _actual_ dedicated AES unit. The benchmark above was only flexing its vector/integer unit capabilities. ;)
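For anyone who wants to reproduce that kind of figure: cycles-per-byte is just clock rate divided by throughput. A minimal methodology sketch in Python -- SHA-256 stands in for ChaCha20 (which isn't in the standard library), and CLOCK_HZ is an assumed nominal clock, so turbo/frequency scaling will skew the result:

    # cycles/byte = clock speed / bytes processed per second.
    import hashlib
    import time

    CLOCK_HZ = 3.0e9                      # assumed 3.0 GHz nominal clock
    data = b"\x00" * (64 * 1024 * 1024)   # 64 MiB of input

    start = time.perf_counter()
    hashlib.sha256(data).digest()
    elapsed = time.perf_counter() - start

    print(f"{CLOCK_HZ * elapsed / len(data):.2f} cycles/byte")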


If you're getting a 4x difference in IPC using a crypto microbenchmark from compiled C code (i.e. it doesn't sound like you're bandwidth or I/O limited), there has to be something else at work. POWER8 is a nice core, but it's not that wide. Maybe the compiler was recognizing your operations and replacing them with AES primitives?


Caches and memory latency/bandwidth can have serious effects as well.


Yes, but at this kind of multiplier only in the case where the entire test is 100% cache-resident on one CPU and spilling on the other. Crypto stuff tends to have small working sets, so my intuition is that it's got to be something else.


An ASM-optimized ChaCha20 is faster than AES-NI on newer Intel chips.


> Equivalent x86 Linux servers are 1/4 the price.

You're severely underestimating the cost of dual-proc 16-core Xeons (about $3500 each for the E5-2698v3), and by the time you add memory, storage, I/O, networking, and other necessities, you're easily in the $15-20k range.

Source: I work for an integrator.


Just to clarify: P-series (Power/POWER8) Linux is not the same as what is announced here. LinuxONE runs on the System z (mainframe / s390x) platform.


Yes, but they now share the same microarchitecture. s390x is mostly a difference in the microcode.



"Equivalent x86 Linux servers are 1/4 the price."

That doesn't sound right to me. What are you considering an equivalent x86 machine?


The hypervisor is built-in. Single-core clock speeds are up to 2x faster than x86. Double the cache. Built-in decimal support is great for financial calculations. And there's a security advantage in that nearly every malware and attack tool is written for x86, with some attention shifting to ARM.



How much does the 8,000-VM thing cost?


Yeah, I'd really like to see that number. I bet it's so big that it's highly negotiable and still highly profitable. ;)


> (The story was refiled to correct the name of the server to "Emperor" from "Empire" in paragraph 4 and names of software to "MariaDB" and "PostgreSQL" from "Maria" and "Posture" in paragraph 5)

These are the journalists specializing in IT news for a giant like Reuters?


As I read the bit about the correction, I figured Empire and Maria were understandable mistakes to make. But what in the hell is "Posture"?

Also, I know this is a newswire/business site, but this is super skimpy on the details.


Autocorrect? My phone corrects postgres to "postures".


[deleted]


Linux is only 23 years old or so (1991, IIRC).

Also, they have a history of supporting Linux; why so bitter?


I meant they could have supported some version of Unix back then. Linus wrote Linux only because the "big boys" did their best to ignore it.


Unix has a long history on the IBM mainframe, prior to Linux. It was running under VM/370 at Princeton in 1977 - https://www.bell-labs.com/usr/dmr/www/portpap.pdf page 3 - and in the early 1980s AT&T ran UNIX under TSS/370, which was used in development of the 5ESS switch - see https://www.bell-labs.com/usr/dmr/www/otherports/ibm.pdf. Amdahl UTS came out in 1981 (descended from the 1970s work at Princeton) and IBM/Interactive Systems VM/IX around 1984; then in 1988 came IBM/Locus AIX/370, followed by AIX/390. And then in 1993, MVS acquired a Unix compatibility subsystem (originally called OpenEdition, now called UNIX System Services), and under its present name of z/OS it is officially certified as a Unix, since UNIX System Services passes the X/Open tests.

I think the real problem for Unix on the mainframe has been the cost of the mainframe platform. Once you are developing portable applications, it is easy to shift them on to something else (AIX, Solaris, Linux, whatever), and you'll probably save money by doing so. When developing using a classic mainframe technology stack (COBOL, CICS, IMS, etc), replatforming is much more effort/risk, so those apps are more likely to survive on the mainframe in the long-term than Unix ones.

(Disclaimer: I work for a competitor to IBM so I may well be biased.)


And they did. IBM have supported Linux on mainframes since the early days; they started sending patches in 1999.

IBM have also had their own version of UNIX, called AIX, for longer than that.


IBM did support Unix [1] and [2], several years before Linux was available.

[1] https://en.wikipedia.org/wiki/Interactive_Systems_Corporatio... [2] https://en.wikipedia.org/wiki/IBM_AIX


You... don't know much about Unix history or IBM products, do you?





