IBM Launches Z13 Mainframe (ibm.com)
102 points by zmonkeyz on Jan 14, 2015 | 97 comments



I've found some specs:

- up to 141 configurable processors

- new 22 nm 8-core processor chip @ 5 GHz

- 110 GIPS

- Single Instruction Multiple Data (SIMD) support

- On chip cryptographic and compression coprocessors

- up to 10 TB of memory (configured as 'memory raid', RAIM)

PDF is here: http://public.dhe.ibm.com/common/ssi/ecm/zs/en/zsd03035usen/...


I always feel like mainframes are off in some weird parallel universe.

> Single Instruction Multiple Data (SIMD), a vector processing model providing instruction level parallelism, to speed workloads such as analytics and mathematical modeling. For example, COBOL 5.2 and PL/I 4.5 exploit SIMD and improved floating point enhancements to deliver improved performance over and above that provided by the faster processor.

Most people would consider COBOL and PL/I to be laughably out of date languages from the 60s and 70s. But here's a new machine with 10 TB of RAM, and the release notes are quick to point out new features for COBOL and PL/I. Go figure.


I am on an iSeries, the little brother to the z. We have a pair of zSeries as well. COBOL and RPG (it's fully free-form by the way, and looks like any other language) are both very much rooted in business logic, in particular math. Their number handling is well known and free of the inconsistencies of many newer languages. That, and the code base is well established and works. Throw in that memory leaks are nearly nonexistent and that code is independent of changes to the underlying processors, and your investment always carries forward.

Seriously, we have code whose originals can be traced to systems long dead. Yet the code is not stagnant: full SQL integration exists for most languages, and there are many ways to get to the web. That, and you would be surprised at the number of companies who use these machines for one reason above all: reliability. Our last crash was a site failure, and even then we rolled to the DR site instantly.


I've been lucky enough to do an internship in banking, where Cobol is (and I guess will be for the foreseeable future) the core language.

I didn't have to write lots of Cobol, but it's been very educational: both its very rigorous syntax (and number handling as you said!) and the fact that the code I was modifying had been written even before I was born.

"If it ain't broke, don't fix it"

Cobol in itself is not a bad language at all! Mainframes are obviously rock solid, and for development purposes, open-source mainframe emulators exist (and were already useful 15 years ago), such as Hercules[1], on which you can run older MVS releases.

[1]http://www.hercules-390.org/


> I always feel like mainframes are off in some weird parallel universe.

Well, they are. These things are built for a specific audience, and it so happens that this audience is still willing to spend money on them; otherwise they wouldn't be made, especially not by IBM, which is otherwise quick to disband business units that don't make heaps of money.

In my research I'm using an even older language: Fortran. Believe it or not, in a field that is less narrow and specific than you would think[1], there's still no real replacement for (modern) Fortran.

[1] numeric applications that need to get close to the hardware's peak performance, written in a higher level language than C that is usable by non-CS-graduates.


It's not just learnability. In some cases, Fortran can be faster than C because the compiler is allowed to assume by default that array arguments don't alias (in C you would have to sprinkle restrict everywhere to give it the same guarantee).
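
A minimal C sketch of that point (the function names are made up for illustration): with plain pointers the compiler has to assume the arguments might overlap, while restrict hands it roughly the no-aliasing guarantee a Fortran compiler gets for dummy array arguments by default.

    /* Hypothetical sketch: without restrict the compiler must assume
       x, y and out may overlap, which blocks some vectorization. */
    void axpy_plain(int n, const double *x, const double *y, double *out)
    {
        for (int i = 0; i < n; i++)
            out[i] = 2.0 * x[i] + y[i];
    }

    /* restrict promises no overlap -- roughly what Fortran compilers
       may assume for dummy array arguments by default. */
    void axpy_restrict(int n, const double *restrict x,
                       const double *restrict y, double *restrict out)
    {
        for (int i = 0; i < n; i++)
            out[i] = 2.0 * x[i] + y[i];
    }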

Not to mention that most compilers have OpenMP support built in. Also, there are tons of heavily optimized Fortran math libraries that make it easy to take advantage of big machines and big clusters.

Those scientists that are still using Fortran after all these years know what they're doing. It's not that they're ignorant of new technology. A well-informed and open-minded CS person would probably end up using Fortran for these use cases, too.


> It's not just learnability. In some cases, Fortran can be faster than C because the compiler is allowed to assume by default that array arguments don't alias (in C you would have to sprinkle restrict everywhere to give it the same guarantee).

Yes. In my experience, you can make C as fast as Fortran, but it will be a lot more work (and you need more knowledge about the machine). Fortran's defaults are very strong when it comes to crunching numbers as quickly as possible. Multidimensional arrays with Matlab-like slicing and operations, implemented as performantly as possible, are reason enough on their own to stay with Fortran.
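
To make that concrete, here is a rough C sketch of what a single Fortran array statement hides (the Fortran line in the comment and the function name are only illustrative); the Fortran compiler already knows the layout and extents and picks the traversal order itself.

    #include <stddef.h>

    /* In Fortran the whole update is roughly one line,
         C = A + 2.0d0 * B
       for n x m arrays.  Hand-written C has to spell out the loops and
       choose the traversal order (column-major here only to mirror
       Fortran's layout; this is an illustrative sketch). */
    void add_scaled(int n, int m, const double *a, const double *b, double *c)
    {
        for (int j = 0; j < m; j++)          /* columns */
            for (int i = 0; i < n; i++) {    /* rows: contiguous in memory */
                size_t k = (size_t)j * n + i;
                c[k] = a[k] + 2.0 * b[k];
            }
    }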

> Those scientists that are still using Fortran after all these years know what they're doing. It's not that they're ignorant of new technology. A well-informed and open-minded CS person would probably end up using Fortran for these use cases, too.

Well, I'm sort of coming out of CS (computer engineering with focus on software) and I'm using it, so there ;)


I always wonder why you can't just add a library to another language to get most of the features of somebody else's favorite language. If not a library, couldn't any other language add Matlab-style matrices and end up being as good as it, Fortran, etc.? Do we really need a whole different language with a whole different set of arbitrary rules just so a few features are easier or faster?


For Fortran-style multidimensional arrays, libraries fly right out the window. First of all, you need slicing syntax and operators baked into the language. The compiler needs to be aware of the array's storage order, so that it can implement operations like A + B optimally for A, B as n-dimensional arrays. Then it needs to be aware of their bounds, so that it can align the memory optimally.

So that leaves language-level support. So far the only contender that I can see coming up is Julia - and it will take a lot of effort until it can reach performance levels on par with Fortran. This would basically require a big push by one of the big software companies - none of whom are making a lot of money in HPC software anymore. I could see Nvidia picking up the ball at some point, it would suit them well - but then we're getting into vendor lock-in again.


Sure. That is what you get e.g. with NumPy in Python. However, as you already said you only get "most" of it. The remaining niche is still large enough that Fortran has lots of users.

http://www.numpy.org/


One of many possible reasons is the way arrays are stored. In a matrix representation, should the memory be laid out one row at a time or one column at a time? It's mostly an arbitrary decision, but different languages have made different choices.

http://en.wikipedia.org/wiki/Row-major_order explains it nicely.

If you wanted to take a native Fortran matrix and turn it into a native C matrix, you'd either need to copy everything to re-arrange the in-memory layout or do funky things with the indexing. Neither option is great.
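
A small C sketch of the "funky indexing" option, assuming a Fortran-style (column-major) buffer has been handed to C without copying:

    #include <stdio.h>

    int main(void)
    {
        /* 3x2 matrix in Fortran (column-major) order: the columns are
           (1,2,3) and (4,5,6) */
        double a[6] = {1, 2, 3, 4, 5, 6};
        int nrows = 3, ncols = 2;

        for (int i = 0; i < nrows; i++) {
            for (int j = 0; j < ncols; j++)
                /* element (i,j) lives at j*nrows + i, not at the
                   row-major i*ncols + j */
                printf("%4.0f", a[j * nrows + i]);
            printf("\n");
        }
        return 0;
    }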

---

Bringing it back to a higher level, I'd agree that your point makes sense most of the time. In most cases, the cost of a memcpy(3) would be a rounding error. That's certainly the case for most web applications. In a tight loop in a math-heavy application, though, the standards are a bit higher.


One disadvantage of libraries is that the compiler treats them as regular code. If something is baked into the language, the compiler and run time (if any) can reason about particular operations and optimize them.


> couldn't any other language add Matlab-style matrices and end up being as good as it, Fortran, etc.?

In theory, sure. In practice a bunch of languages have tried and yet Fortran is still the king.


To be fair, anyone who uses NumPy or SciPy ends up using Fortran even if they don't know it. A lot of the underlying scientific libraries are ultra-optimized Fortran.

I'm sure optimized libraries could be written in any language; however, the vast majority of researchers have decided that Fortran is more accurate and faster (for these libraries), so they go with that.


> optimized libraries could be written in any language

Well, at least in anything that has really performant compilers. That's currently true for C, Fortran and assembly, and to some extent also C++. Among the first three, Fortran is by far the easiest to use for numerics - its syntax is in fact not much more difficult than Matlab's. It's clunky in anything but numerical code (such as when you try to create a nice API or when you try to have generic methods), but in the end it still wins.


If you want to learn a bit more about mainframes, this podcast http://www.se-radio.net/2012/03/episode-184-the-mainframe-wi... has a lot of great info.


In about 1993 or so, at IBM, PL/I was being supported by just one guy! If they have a new release, then they may have put some more people on it!

PL/I is not totally out of date! E.g., it has some really sweetheart scope-of-names rules. The On Conditions are generally nicer than Try-Catch: e.g., if you are in a code block A and do a Goto out of it to a statement label in a containing code block B (it's fair to call PL/I a block-structured language), then the stack of that task (in some ways nicer than just a thread) is rolled back (popped, as in popping a stack, which is essentially what happens) to the context of the last time block B was active.
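
The closest thing standard C offers to that unwinding behavior is setjmp/longjmp; a toy sketch (names invented), with the caveat that, unlike PL/I automatic storage, anything malloc'd in the abandoned frames simply leaks:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf back_to_B;      /* "the last time block B was active" */

    static void deep_work(void)
    {
        /* something goes wrong several calls down the stack... */
        longjmp(back_to_B, 1);     /* unwind straight back to block B */
    }

    int main(void)                 /* plays the role of containing block B */
    {
        if (setjmp(back_to_B) == 0) {
            deep_work();
            puts("never reached");
        } else {
            puts("back in B; the abandoned frames are gone");
        }
        return 0;
    }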

Block B can have a function F that inherits names defined in the blocks that contain B, and block B can call a function G and pass F as an entry variable; then function G can call its parameter for function F, and function F can execute and access all the variables it has access to, even ones that function G may not. That can be nice. I used it once when scheduling the fleet for FedEx.

Can have tasks, like threads. Can allocate some storage as, say, controlled. Then when the task ends, that allocated storage is freed. That can be nice -- a nice way to stop memory leaks when you have to kill a task.

PL/I has structures which replace much of the utility of classes. And the structures, with arrays of structures of arrays of structures of arrays, etc., are much faster than instances of classes because all the addressing is just a fairly nice generalization of simple old array addressing without pointers.

Such a structure can be based, which means that you can allocate one and get back a pointer to it. So, that is a lot like allocating an instance of a class. B can be part of a structure A, and B can be defined to be like structure C -- so, you can get some inheritance.
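
A rough C analogue of those two paragraphs (the field names are invented): a structure containing an array of substructures is addressed with fixed offsets and plain array arithmetic, and a "based" structure is just one you allocate and get a pointer back to.

    #include <stdlib.h>

    struct line_item { double qty; double price; };

    /* structure containing an array of substructures: every field sits
       at a fixed offset, so access is plain offset/array arithmetic */
    struct order {
        long             id;
        int              n_items;
        struct line_item items[10];
    };

    int main(void)
    {
        /* "based" structure: allocate one and get a pointer back */
        struct order *o = malloc(sizeof *o);
        if (o == NULL)
            return 1;
        o->id = 42;
        o->n_items = 1;
        o->items[0].qty   = 3;
        o->items[0].price = 9.99;
        free(o);
        return 0;
    }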

The flat file I/O is much sweeter than what is in C and was brought over to, say, Visual Basic .NET or C#.

PL/I will convert nearly any data type to nearly any other, e.g., something like a cast in the C family of languages, but the documentation says in very full and fine detail just how each conversion is done.

There's a relatively powerful pre-processor macro language executed by the compiler that can be used to generate a lot of source code -- can be nice.

If they have tried to upgrade PL/I, say, to 64 bit addressing, maybe more on event handling, some collection classes, a sweet DB/2 or RDBMS interface (it always had some DB syntax and semantics in the language), then it could be a nice language even now.

It's been said that the golden age of language design was the 1960s. C was designed after PL/I and was a really big step down so that it could compile on a DEC mini-computer with 8 KB of main memory while PL/I always had at least 64 KB.

On VM/CMS and MVS, PL/I didn't have access to TCP/IP, but C did, so I wrote some C code callable from PL/I that gave PL/I access to TCP/IP. Used it a little! Maybe now TCP/IP is closer to being native to the language.

I was in the group at IBM's Watson lab that did the expert system shell KnowledgeTool that was implemented as a pre-processor to PL/I. One night I stayed up until dawn and coded the logic that made rule subroutines fast, functional, etc. The key was to keep a crucial part of our run time software and the user's rule subroutine on the stack of dynamic descendancy. Gee, I got an award! The hourly rate for that night was actually relatively good!

PL/I doesn't have to be so bad!


> It's been said that the golden age of language design was the 1960s. C was designed after PL/I and was a really big step down so that it could compile on a DEC mini-computer with 8 KB of main memory while PL/I always had at least 64 KB.

UNIX's success in the market brought C upon us.

Actually it was a very big step down, not only in terms of what other systems programming languages of the same era were capable of, but also in terms of security and compiler research.

One just has to look at PL/I, PL/M, Algol 68, Mesa and then to C.

Just as an example, Algol for B5000 systems (1961) already had the distinction between safe and unsafe code, with unsafe program modules requiring some form of "root" access. Sound familiar?


The Burroughs, being a stack machine, had MMU-like qualities 'for free'. Desktops got there 25 years later.


> Algol for B5000

Early in my career I wrote some Algol. So I came to love block-structured languages and adopted the style of left-margin indentation then called publication Algol.

I was on the computer selection committee at FedEx, and we were impressed with what it offered -- Algol, etc. So a Burroughs system is what we recommended and got.

Alas, it was too slow. Soon the computing was way, way behind, and Big Blue got called in and got a really big, new account. The MVT/MVS family of operating systems, with JCL, etc. was like an unanesthetized root canal procedure while undergoing a barbed wire enema, but it could get the work done.

Generally, that computing is having problems with viruses is a total bummer -- we should be able to run malicious code safely.

Gee, Flash keeps telling me their code is a security risk. No joke! And that's version 12 or so of their code. Flash guys, I've got more to do than be a lab rat running on a wheel going 'round and 'round downloading code.

Yup, looks like the Flash guys or someone were correct: I spent all of December and part of January fighting a virus on Windows: I kept getting instances of iexplore.exe running. Yes, that is the EXE of Windows Internet Explorer (IE). So, I got those instances even without starting IE. So, I ran Microsoft's latest virus removal tools, days for each, and they found nothing. The virus was still there. So, I did a Windows System Restore back to the earliest copy I had, from 2 months ago. Seems to have worked. But that was about six weeks of virus mud wrestling for no good reason.

A simple Google search shows that discussions about fighting that iexplore.exe problem are all over the Web; apparently it is a very common virus.

Flash guys, are you to blame for that one?

Flash guys: Programming lesson 1 in Coding 101: Check your input and detect any problems. If the input is not suitably clean, then refuse to use it. If the specification of your input is too complicated for a check, at least a good check while running, then correct your specification. Flash guys, I can understand one bug, but 12? Are you guys even trying, I mean trying to fix bugs instead of pushing users to download?

And Microsoft guys, why do you let your code let bad guys somewhere east of Moscow mess up my computer? You guys just like shipping bugs? Believe me, I do not like fixing viruses.

As I recall, Multics was written in PL/I. Prime had a version of PL/I and used it for much of their operating system.

Once Google ran a recruiting ad and I sent them a resume. I got a phone call from one of their recruiters, and his big question was "What is your favorite programming language?"

I said, "PL/I".

Wrong answer! Likely he was looking for C++.

Come on, guy, I want to do something other than fight memory leaks -- in PL/I, if some work in some part of my code raises an On Condition (a software version of an interrupt, which might also have come from a hardware interrupt), then in the code block that gets executed by the On Unit that has been established for that condition, I can decide what to do with the interrupt. So, one thing to do is just to kill off the work that raised the interrupt. Say the block of the On Unit that gets executed is X in code block B, some code in block B did the function call that got the work going, and I just want to kill off that work. So, from block X just do a Goto to the statement label I want in block B and not in block X, and presto, bingo, that work is killed off, that is, wiped clean by the wrath of the non-local Goto. So, in particular, likely all the storage (all except based storage) allocated by the work being killed off is freed. So, look, Ma, no memory leaks from exceptional condition handling!

Google guys: The original definition of C++ was just as a pre-processor to C. And you want to assume that that is a lot better than what IBM, George Radin, etc. did with PL/I? Do you have anyone who understands PL/I?


Many thanks for the history part.

I got introduced to computers with a Timex 2068 back in 1986, but as a language geek I always researched the old systems.

Additionally, I managed to use a few languages of the Algol family for systems programming, which strengthened my belief in safe systems programming.

One of the best quotes I keep recalling is from Hoare's Turing Award speech, where he mentions that his company's customers were against having the option to disable bounds checking in their Algol compilers. Yet here we are.

Every time I check the CVE list I wish UNIX had never left AT&T labs.


The specs from 2045 will still mention COBOL, PL/I and of course Java.


> I always feel like mainframes are off in some weird parallel universe.

This parallel universe is called reality and you live in the matrix.


Maybe I'm out of the loop, but isn't 22nm big, SIMD available on almost everything, and dedicated crypto and compression more or less ubiquitous?


22nm is pretty close to the smallest; the fabs are ramping up for 14 in Intel's case, and 16 in GlobalFoundries' case, but you're not gonna find much on the shelves right now built on a smaller process. While SIMD's everywhere, dedicated crypto is fairly recent, with IBM notably doing elliptic curve acceleration, and dedicated compression is something that is new. On the other side, most of the CPU improvements they are touting are things everyone else is doing -- wider SIMD units, more cache, etc.

However, that's ignoring what's made a mainframe fast and expensive for the past several decades. Mainframes still slaughter all comers when it comes to I/O, and this is no different. When you've got up to 320 fibre links available, you're gonna have no problem keeping those processors busy.


Those features are new for COBOL and PL/I (as other commenters mentioned). Good luck getting Intel or ARM to care about your dusty decks full of BCD math.


110 GIPS is not that much for 141 CPUs. My Core i7-4700HQ does 3 GIPS per core at 2.9 GHz.


It depends on what kinds of instructions you count. I strongly suspect that your 3 GIPS number is only register-to-register integer arithmetic, while the 110 GIPS is some IBM-ish "mixed workload" that uses most of the System/z instruction set. (which includes things like "get CRC32 of this buffer" as one instruction)


Average of Dhrystone and Whetstone, which is not a proper benchmark indeed.

Those special instructions such as CRC32 or SHA256 are rarely needed in scientific simulations such as Monte Carlo simulations or real-time work with Bayesian networks, which is what this system is aimed at. In these scenarios raw power and big on-board caches are what matter, and on paper the POWER8 has these, but 110 GIPS is not the best bang for the buck.


System z is a completely different thing from what is currently called Power Systems. The target market for System z is pretty much limited to OLTP, and most of the weirdness of the platform comes from the fact that it is heavily optimized for exactly that and not for raw speed in two's-complement or FP arithmetic.


Yes, but buying 3 of them gets you 3 machines that can do 3 GIPS each, not one machine that can do 9 GIPS.


That's a standard consumer chip, though. The Xeon server line can, with the E7-8895 v2, do 15 cores per CPU, and you can have 8 of them for a total of 240 hyperthreaded cores. I don't know what could take advantage of that or what the tradeoffs for scheduling etc. would be, but Sun will sell you one: http://www.oracle.com/us/products/servers-storage/servers/x8... -- 6 TB of RAM.


True, but that is rather a motherboard-design and core-stacking question. x86 CPUs are made for consumer use cases, but that doesn't mean they can't be stacked almost linearly, like the Knights Landing CPUs.


I'm curious what one might expect something like this to cost.


For the previous generation: "IBM doesn’t publish prices for mainframes but the word on the street is that it is around $80,000 per core for a [Linux/Java core] compared to $400,000 per core for regular [cores] that mostly run z/OS" http://www.enterprisetech.com/2013/10/09/ibm-slashes-hardwar... I'm not sure whether that's the amortized total price or just the activation after you've already bought the machine.


Also, the per-core licence is not the only line item. You have to add a maintenance contract, the machine itself, the firmware licence (which in this case is AFAIK a big Java application running on its own core), the OS licence, setup, and probably more.

This really is a cash cow for IBM, mostly because they own the "mainframe" trademark/monopoly and everybody who wants to sell a "mainframe" is sued out of the market. Without the "mainframe" title, you are dead in this market of ultra-conservative customers (banks, airlines, etc.).


I guess this market of ultra-conservative customers is mostly US companies? I have a hard time imagining companies in other countries getting into the habit of buying this kind of hardware.


I traveled all over the world while at IBM in the 1990s. They were doing a very nice business with the biggest banks in Europe as well as many of the auto companies in Europe. There are major IBM customers in Japan as well.

They wouldn't build these if they couldn't sell them. With the I/O and data flow rates these things can handle, there really aren't alternatives on the market. The debate I keep spinning through in my head is whether the market will stop caring about the hardware (it's all just in the cloud or grid) before the mainframe goes away, or whether the cloud will be partially mainframe.


1. So far no cloud provider has tried to provide reliability at the level of a mainframe.

2. Those conservative customers want to keep their data in-house. So a cloud is probably impossible to sell here.

They can either build their own "in-house cloud" (paradox?!) or buy a mainframe. The latter is simpler and proven. The cost savings of the roll-your-own in-house cloud are questionable.

You could try to sell an ultra-reliable (maybe distributed) system for in-house use. However, you cannot call it a "mainframe".


While it's an undeniable technological achievement, I don't think it's a good strategic one.

Nowadays you don't create a single, really expensive, monolithic system capable of processing all of your transactions -- as the release says, like 100 Cyber Mondays every day.

Instead, you have a distributed system, with many cheap, geographically distributed servers, each one capable of a much lower number of transactions, but still quite high today... and you can also spin up new servers as needed, or destroy servers when they are no longer needed, so you control your costs when you don't need as much capacity, instead of having a very expensive mainframe sitting 90% underutilized 90% of the time.


The CAP theorem suggests some limits on a distributed systems approach. These matter very much in industries like banking and others involving financial transactions. It's no accident that checking each transaction for credit card fraud is listed as a feature. If there's float, there's an opportunity for arbitrage.

IBM is full of really smart people, some of whom work on really hard problems using the principles of computer science. The z13 wasn't the result of two recent grads pivoting a Yelp-for-Rabbits startup, and it isn't for companies hoping to do a billion transactions a day. It's for those already doing it.


These things tend to be used nowadays as the single source of truth in the middle of large distributed systems. When your data represents real value, you really need full transactional semantics, and when your transactional volume is large enough, buying or renting a System z is the cheapest solution.


Computers, even these, are cheaper than development and opportunity costs for a complex system. Especially if that system has been around for 30+ years and lies at the core of a large business.


But development for mainframes seems to be much more costly than development on PCs. If you need this type of machine you need it, but it will probably cost you a lot to work with it over the years.


You say that as if managing a system at Google or AWS or Facebook scale is free. The downtime of individual servers may not matter as much in that architecture, but they each have an army of (costly!) engineers just to keep things running.

To say that the Google 'many-servers' approach is the only valid one is to overstate the trade-offs.


Distributed systems as a whole can still suffer from the same kind of issues that you would expect to see in a monolithic system. Look at how a single bad software update can take down Google or AWS.

At a basic level this kind of system gives you lots of low-latency bandwidth connecting CPUs to each other and CPUs to storage. That is a useful quality to have and not necessarily easy to reproduce using network and internet connections between servers and data centers.


You're regurgitating 'common' wisdom here, but you really should check your assumptions against reality when it comes to current mainframes. IBM's mainframes have a minimum and maximum number of engines (CPUs). An engine can be either general purpose (I forget the acronym they use) for use with z/OS, or can be a specialty engine (IFL, zIIP, zAAP). My understanding is that it's the same hardware, just a simple firmware update to tell the engine what 'type' it is. An IFL (what you would use to run Linux under z/VM) is significantly cheaper than the general-purpose one used for z/OS. Also, mainframes can be loaded with more engines than what you've paid for -- this is what IBM calls Capacity-On-Demand (COD). So you can temporarily activate additional engines to handle spikes. The zSeries has some fantastic capabilities for Reliability, Availability, and Serviceability (RAS). The systems I've been around keep engine utilizations quite high (98/99%) as the norm.

A mainframe is typically partitioned into multiple, smaller systems. This is what IBM calls a Logical Partition (LPAR). This partitioning is done with firmware called PR/SM. This capability has been around since the 3090 (mid to late '80s). Within an LPAR, one typically installs either z/OS or z/VM (there are other systems too, but they're less commonly used this way). Running in an LPAR is 'bare metal'. Within z/VM, there's typically a mix of guest types, but this is where Linux would typically be configured to run. z/VM has some pretty impressive capabilities and has been around for a LONG time. It's very stable. The biggest things that you would probably dislike about it are: (1) one uses a 3270 emulator to do day-to-day system-level admin work, (2) it only runs (legally) on real mainframe hardware, and (3) much of IBM's jargon is dated and would be unfamiliar to people coming from x86.

The modern mainframe hardware and software has an impressive feature set for virtualized networking (networking is all software defined and runs within the box). This means you can set up hundreds or thousands of Linux guests and have them on networks that are all virtualized within z/VM. And of course, z/VM guests can be created and spun-up on demand and stopped on-demand. This has been there for decades.

IBM now has support for OpenStack in z/VM. From what I've perused, it seems to be quite slick. Assuming it all works as advertised, this would make folks coming from x86 feel much more at home.

Many seem to think that distributed systems are so much cheaper. You can't just think of it as the price of the rack server though. You have to include everything in the mix to understand the total cost: hardware, software, networking, people (headcount and consulting), power and cooling, floorspace, and intangibles (such as capabilities). When you do the math, mainframes running Linux under z/VM are often fairly economical (YMMV).

Once you throw off the stereotypes and objectively dig into the modern mainframe, you might be surprised at what you find.


Excellent explanation! Especially the fact that within a mainframe you can create hundreds or even thousands of virtualized Linux guests, which can be created or destroyed on demand. That makes for a much more compelling use case!


That's a fairly content-free release, as these go. Nothing about the actual architecture; it's clearly aimed at managers rather than techies, and IBM does enterprise sales very well. But of course I'm more interested in the nitty-gritty details (though I'll likely never be in a position to pull the trigger on a purchase like this). Funny how Linux is mentioned in the press release, and other platforms don't even rate a mention.



> Funny how Linux is mentioned in the press release, and other platforms don't even rate a mention.

Because the other OSes which run on the hardware are so obscure that non-techies would never have heard of them. I'm morally certain CMS and z/OS (or whatever MVS is called these days) and VM are still in use, but Linux has the penguin and the media coverage. Therefore, it gets the inches.


Very true. Though I recall the POWER7 stuff being released years ago as a very tech-heavy event, and yet it seemed to go down without much of an impact as well (though the AIX guys I worked with were convinced it was game-changing). I think people who are invested (culturally/mentally) in IBM will continue to buy IBM (and no doubt be successful). I just don't see them growing any new markets.


I have participated in sales, and I did not want to get into hours-long discussions (boring, and not really important) about what Power/AIX offers, so I boiled it down for potential customers like this:

If you want the best, at an insane price level, buy into Power/AIX. If you are not really sure, please don't. You need to train administrators (with larger setups) or prepare to shell out a lot of money; the hardware contains nasty licensing surprises, it is rather expensive, and so on. When you get it running, it's insanely powerful indeed, the platform itself nearly never screws up (if you tested properly for Monday parts), and the virtualization is insanely good in practice.

Some potential customers have bought Power/AIX, some not. The ones that did are, years later, still customers, and to my knowledge pretty happy. I believe IBM is actually gaining customers. Slowly, but they are. Those that made the choice knew what they were getting into, and are not going to change their views.


> the [Power/AIX] virtualization is insanely good in practice

This is my experience as well. IMO, Power/AIX has the most sophisticated virtualization available. IBM just has no idea how to market it.


> though the AIX guys I worked with were convinced it was game-changing

I was talking to an IBMer a while ago and AIX was mentioned. We agreed the platform had technical merits, and on a pure platform level was superior to x32/x64, but the network effects of commodity hardware had already won the platform game. No one writes software for any platform but Intel/AMD64. AIX lost the battle, for reasons similar to why Betamax lost to VHS.


Never mind this specific release - you never hear about other platforms on System z, period. Linux got an S/390 port quite early on, and SUSE and Red Hat both support it, but no, Windows, Mac, BSD, etc. are unheard of in this world. You can mess with Fedora/Debian images for that architecture on your desktop using Hercules.


I have been working in the mainframe industry for over 5 years now, and I know tons of companies (government, banks, insurance and others) that are still running mainframes. I picked this field because I saw the opportunity: a lot of the old folks are retiring, and companies are willing to pay a lot of money to get young folks to join and learn the mainframe. Honestly it's not even that hard as long as you are willing to learn... What are your thoughts about it? If someone was willing to train people, would people be interested in it?


The part about the in-memory analytics reminds me a lot of the SAP HANA spiel:

    Transaction Database --> copy -->  analytics db   = slow
    
    In-memory transactions with built-in analytics    = fast

I totally understand the value of analytics and BI applications, but does it all have to be real-time? And what "mobile analytics" are they going to compute, exactly? Forget analytics, I can tell you what mobile users are doing right now -- they're all playing Candy Crush.


Their customers are not King.com or most techie startups. Their customers are insurance companies, banks, and other financial behemoths.

My bank recently alerted me because I swiped my credit card at one gas station away from home but did not buy gas (the pump was out of gas) and then swiped again at another gas station in a 3mi radius within 30 minutes (obviously since I needed gas).

Their system picked up these two transactions, figured out they were both gas stations, geo-located them to be out of my area, calculated the distance to be close enough to reason it was just one person (and not my wife using her card elsewhere), and triggered an alert because the first transaction was really just a $1 pre-auth with no actual charge.

While none of this is really that complex if you built a fraud-detection system from the ground up with these requirements, imagine running this rule on a few hundred million transactions per day. Now add in a thousand more fraud-detection rules, scoring algorithms, and pattern recognition for uncommon usage. Then hook it up to the mobile app to further reduce the chances of fraudulent usage by tracking the user's location, businesses silently checked in to, and alert preferences. IBM is targeting companies that need this.
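
Just to sketch the shape of one such rule in C (every field name, threshold, and the distance approximation here are made up for illustration; the real systems run thousands of rules plus scoring models):

    #include <math.h>
    #include <stdbool.h>

    struct txn {
        double lat, lon;     /* merchant location */
        double amount;       /* dollars */
        long   when;         /* seconds since epoch */
        bool   is_fuel;      /* merchant category: gas station */
    };

    /* crude flat-map distance in miles; fine at a 3-mile scale */
    double miles_between(double lat1, double lon1, double lat2, double lon2)
    {
        double dy = (lat1 - lat2) * 69.0;
        double dx = (lon1 - lon2) * 69.0 * cos(lat1 * 3.14159265358979 / 180.0);
        return sqrt(dx * dx + dy * dy);
    }

    /* the single rule described above: a small fuel pre-auth, then a
       second fuel swipe nearby within 30 minutes, both away from home */
    bool gas_preauth_then_retry(struct txn a, struct txn b,
                                double home_lat, double home_lon)
    {
        return a.is_fuel && b.is_fuel
            && a.amount <= 1.00
            && b.when - a.when <= 30 * 60
            && miles_between(a.lat, a.lon, b.lat, b.lon) <= 3.0
            && miles_between(a.lat, a.lon, home_lat, home_lon) > 25.0;
    }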


I thought the customer was always king, and nobody ever got fired for choosing IBM.


It's fascinating to see the rhetoric focussing on mobile transactions being the main driver of demand. While it's certainly huge, there are still a lot of other uses.


I was surprised that the word "omnichannel" is missing.


> z13 is the first system to make practical real-time encryption of all mobile transactions at any scale

Or you know, any kind of transaction because there is no difference.

Fuck I hate marketing.


The biggest, baddest Linux hypervisor known to mankind.


YES BUT WHY IS THE FONT SO SMALL?


That's IBM's new 9-pixel type process.


what's a transaction in this context?



I know that, but this is a very general definition of TPS. How many CPU cycles does a transaction require? It varies with the transaction, of course.

So I am wondering why they use TPS as a unit when it can be anything.


They use TPS as a unit because that's how large banks and other transaction oriented entities do their provisioning and capacity planning. That's their 'unit', not CPU cycles.


But how can they claim they can do X transactions when a transaction's processing requirement is undefined?


There are industry-standard benchmarks that measure TPS. See for example (or maybe not even for example; it may be effectively the only one in town) http://www.tpc.org/information/benchmarks.asp.

And of course, anybody considering buying such a machine will spend quite a bit on evaluating how it performs under their own load, just as SPEC (https://www.spec.org/) says something, but not everything.


I looked at tpc.org maybe 13 years ago. It never made sense to me to use it as a measurement unit back then, and it still doesn't make sense to me now. :)


Because they are not arguing with scientists but with "decision makers". So bullshit is more important than proven facts... Thankfully, banks are slowly moving off mainframes. You'll never find a new business going onto a mainframe. Mainframe users are captive, not free.


You're thinking very much in terms of a PC architecture. Many components of the mainframe, including a lot of the I/O hardware and some instructions in the assembly language itself, are record-oriented and, within reasonable limits, operate on an entire record at once. It's not that big of a jump to go from entire records being atomic operations to at least speaking about entire transactions being the basic, atomic units of computation in the system.


Define a record.

Also can you elaborate more on how ASM instructions can be record oriented?


Like a row in a table. I/O instructions in S/360 (as I recall) assembler would fetch an entire record in a single instruction / cycle (the I/O devices are radically different and would support this - it wasn't just syntactic sugar around reading a word or byte at a time). So the time to read in a record, perform some operations and write a record back out is actually much more predictable based on clock speed than it sounds like it would be to someone from a PC background.


to the CPU, what's a row and what's a table in this context.


Row and table would be the database terminology. In mainframes you have records in datasets, but they're the same thing: tuples of fields. It's just that in a mainframe, to maximize throughput, a lot of the work, like dealing with natively supported data formats, is offloaded to special I/O controllers, which makes it easier to work with via low-level assembly instructions.
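
A loose C sketch of "a record as a tuple of fields", read a whole record at a time (the file name and fields are hypothetical; on a real mainframe the record format lives with the dataset and the channel subsystem does the transfer):

    #include <stdio.h>

    /* fixed-length record: a tuple of fields */
    struct account_record {
        char   account_no[10];
        char   name[30];
        double balance;
    };

    int main(void)
    {
        struct account_record rec;
        FILE *f = fopen("accounts.dat", "rb");   /* hypothetical dataset */
        if (f == NULL)
            return 1;
        /* each fread pulls in one whole record, not a byte at a time */
        while (fread(&rec, sizeof rec, 1, f) == 1)
            printf("%.10s %.30s %.2f\n", rec.account_no, rec.name, rec.balance);
        fclose(f);
        return 0;
    }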


How many TPS Reports per second can it issue?


literally every device used by consumers today can be traced back to IBM.


How so? How does an ARM powered smartphone or tablet trace back to IBM?


It's a lot easier to claim that all modern processors (that are based on a load/store architecture... All RISC chips, basically... and that includes x86) trace back to the CDC 6600, probably Seymour Cray's most direct and influential gift to the world of computing.


And the IBM reference would be IBM 801 by John Cocke:

http://en.wikipedia.org/wiki/IBM_801 http://en.wikipedia.org/wiki/John_Cocke

Dunno how much it actually influenced the Archimedes though.


I can get behind that. Control data, or it will control you!


Well, off the top of my head, and unlikely to be the only association..

IBM licensed the 286 for the IBM PC/AT. Acorn wanted to use the 286 for the Archimedes, but Intel would not allow licensing. This forced Acorn to create their own processor, resulting in the first RISC based consumer PC.

Perhaps that snowballs into our current tech environment.


Exactly, that's why I picked ARM. So that line does not trace back to IBM in any way, besides, Intel isn't IBM.


> So that line does not trace back to IBM in any way, besides, Intel isn't IBM.

eh.. sorta.[0].

IBM had a significant stake in Intel during the time period we're talking about, and that whole period was filled with anti-trust talk, which turned into actual anti-trust fire a few years later in the '90s.

I may not have been blunt enough in my other comment, but I was alluding to the idea that IBM and Intel may have been involved in anti-competitive behavior against Acorn, which caused them to innovate in such a way that created the systems we enjoy today (modern mobile). The idea that the pressures on the company spawned innovation.

[0]: http://articles.latimes.com/1987-08-29/business/fi-1393_1_in...


'Was the cause of a complete redevelopment' is the polar opposite of 'can be traced back to' whatever ugly dealings IBM and Intel may have had (which doesn't surprise me one bit).


When you press the main button: BOOM! Interrupt! That is one example. They even had virtualization in the '80s on System/370, long before VMware came out.


There were the operating systems CMS and CP67, done by the IBM Cambridge Scientific Center for the IBM 360/67 in, right, about 1967. CP67 was Control Program 67 and provided the virtual machines. CMS was the Cambridge Monitor System, at times also the Conversational Monitor System, and was the command-line time-sharing operating system users saw.

CP67 was intended as a means for interactive, time-shared operating system development. So, right, you could run CP67 on CP67 -- once that was done 7 levels deep.

The combination CP67/CMS was, for the time, a total dream for a time sharing system.

Stop malicious code? Sure: on CP67 you could write and run any code, any instructions doing anything you want with any data you want, and you just could not bother any other users. So, you could run malicious code safely.

I used CP67/CMS and PL/I from National CSS Time Sharing in Stamford, CT to schedule the fleet at FedEx. Founder, COB, and CEO Fred Smith's remarks about the output were "Amazing document" and "Solved the most important problem facing FedEx". The Board was pleased and a nice chunk of funding was enabled. It literally saved the company. No, Fred never gave me my promised stock, which he once said would be worth $500,000. Add a few zeros for now.

Since then, CP67 became VM, and for years IBM's internal computing was done on about 3,600 mainframes around the world, all running VM/CMS and connected with VNET, which was a lot like the Internet except that the communications were via bisync lines and the routing was done by the mainframes themselves. No, that setup didn't depend on Systems Network Architecture (SNA). Yes, there were a lot of fora!

The advantages of running on VM were too good to pass up, so eventually essentially all production IBM mainframes were running their operating systems as guests on VM on the bare metal.

In virtual machine, IBM was way out in front.


They still are.

When I started learning how OS/400, now IBM i, works, I was quite interested to see the execution model of having everything stored as bytecode, with a JIT-style kernel doing AOT compilation at install time.

Something that is kind of being done in the Android and Windows worlds, and was attempted in Oberon and Inferno, but not to the extent that OS/400 does it.


http://virtualirfan.com/history-of-interrupts

So no on that one.

Visualization? You mean virtualization. That was a given; Turing should be credited with that -- after all, the whole idea of a universal Turing machine is that it can emulate any other Turing machine (including another universal one).


VM is not really emulation. If the guest's code is just user-level code, then it just runs as usual, directly on the hardware processor. But if there is a privileged instruction in the guest's code, that is, an instruction reserved for an operating system, then VM gets an interrupt and decides whether it wants to execute the privileged instruction on behalf of the guest. To make this all efficient, there is some hardware support called virtual machine assist.
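
A purely conceptual C sketch of that control flow (every name here is invented, and real virtual machine assist is hardware/firmware, not a loop like this):

    enum trap_kind { TRAP_PRIVILEGED_OP, TRAP_GUEST_HALTED };

    struct guest { int pending_ops; /* ...registers, memory, ... */ };

    /* user-level guest code runs directly on the processor; control
       only comes back here when the guest does something privileged */
    enum trap_kind run_until_trap(struct guest *g)
    {
        if (g->pending_ops-- > 0)
            return TRAP_PRIVILEGED_OP;   /* pretend it issued an I/O op */
        return TRAP_GUEST_HALTED;
    }

    /* the hypervisor performs (or refuses) the operation on the
       guest's behalf, then lets the guest resume */
    void emulate_privileged_op(struct guest *g)
    {
        (void)g;
    }

    int main(void)
    {
        struct guest g = { 3 };          /* toy guest: three privileged ops */
        for (;;) {
            switch (run_until_trap(&g)) {
            case TRAP_PRIVILEGED_OP:
                emulate_privileged_op(&g);
                break;
            case TRAP_GUEST_HALTED:
                return 0;
            }
        }
    }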


It goes back even farther than the 1980s. IBM's virtualization product, VM/370, was released in 1972.[1] The development of it happened in the late 1960s on a special version of the 360 that had virtual memory support.[2]

[1] https://en.wikipedia.org/wiki/VM_%28operating_system%29

[2] https://en.wikipedia.org/wiki/CP/CMS


The "encryption" buzzword is worthless without details. And I wonder if they'll stand behind their product if their crypto flavor of choice is broken and needs to be changed. Otherwise that's an expensive brick of swiss cheese.


http://www-03.ibm.com/systems/z/advantages/security/zec12cry... (previous generation) http://www-03.ibm.com/security/cryptocards/

One reason for industry standards is that they distribute risk across the whole industry. So if NIST crypto is broken the entire industry goes down together; one company wouldn't have a competitive disadvantage.



