Yes, looking at IBM stuff is like being in a parallel universe where everything you take for granted is slightly off. You have token-ring instead of Ethernet, you have SNA (or something) instead of TCP/IP. Characters are EBCDIC, not ASCII. Terminals are connected with coax, not RS-232. For hardware, chips are packaged in metal instead of plastic. Circuit boards are a weird grid. Even the terminology and schematic symbols are different: if it looks like an AND gate, it's an OR gate.
Ethernet (OSA cards) and Fibre Channel (FICON cards) are standard on z mainframes these days. TCP/IP is standard on z/OS, CP (z/VM), AIX and Linux. Terminal emulators connect over TCP/IP, not RS-232 or coax, etc.
But still today:
- The character set for most OSes (not Linux) is EBCDIC.
- The terminal is form-based (like a web page, but with invisible input fields) rather than character-based.
- You ALWAYS have to have key punch and card reader devices defined (even on Linux).
- z/OS needs proprietary FICON (not plain fibre channel) connections to emulated ECKD disks (not block based) on one of just a few SANs that support it.
- VSE still needs a TCP/IP stack (one of two 3rd party vendors) purchased separately.
- You need several x86 computers (HMC and two support elements) to manage or even boot up a mainframe.
- You have to painstakingly configure virtual-to-physical device address mappings (IOCDS) before you can boot up anything.
> - You ALWAYS have to have key punch and card reader devices defined (even on Linux).
Linux doesn’t actually support physical card readers/punches, only paravirtualized readers/punches under z/VM (implemented using the DIAG hypervisor call interface). [0] And that’s because paravirtualized card devices are heavily used under z/VM for IPC (inter-VM communication), although there are alternatives (IUCV, TCP/IP).

So if you aren’t running under z/VM, Linux can’t use readers/punches, because the hypervisor interface isn’t there. And even under z/VM, Linux will work fine without them, because they are mainly used for sending data between Linux and other mainframe operating systems such as CMS and RSCS, and maybe you aren’t interested in that.

And if somehow you managed to connect a real card reader or punch to your mainframe (you’d need to chain together a bus/tag to ESCON bridge with an ESCON to FICON bridge), bare metal Linux would have no idea how to talk to it, because it doesn’t support real card devices, only paravirtualized ones. Linux under z/VM might be able to do so, by relying on the hypervisor’s card device driver.
[0] Have a look at https://github.com/torvalds/linux/blob/v6.10/drivers/s390/ch... – if it encounters a real card reader/punch, it is hardcoded to return -EOPNOTSUPP. Actually it looks like it does use CCWs to write to punches, but it relies on DIAG for reading from card readers and device discovery. And due to that code, even if it is generating the correct CCWs to write to a physical punch (I don't know), it would refuse to do so.
> Linux under z/VM might be able to do so, by relying on the hypervisor’s card device driver.
Actually, thinking more about the code I linked, I don’t think this would work. Even if the z/VM hypervisor (CP) still knows how to talk to physical card devices (maybe the code has bitrotted, or maybe IBM has removed it as legacy cruft), the DIAG interface would report it as a physical/real device, and hence that Linux kernel driver would refuse to talk to it.
From the "If I could" files, I would have liked to spend 5 years on an AS/400, trying to make it work for whatever company I was working for.
The best way to learn this stuff is simply to apply it, trying to solve real problems.
Going from a High School PET to a College CDC NOS/VM Cyber 730 to an RSTS/E PDP 11/70 was a very educational cross-section of computing that really opened my eyes. If I had gone to school only a few years later, it would have been all PCs, all the time, and I would have missed that fascinating little window.
But I never got to go hands on with an IBM or an AS/400, and I think that would have been interesting before diving into the Unix world.
The OS for the AS/400 is really remarkable as a "path not taken" by the industry, and remarkably advanced. Many of the OO architecture ideas that became popular with Java were baked into the OS.
> Many of the OO architecture ideas that became popular with Java were baked into the OS
I disagree. OS/400 has this weird version of “OO” in which (1) there is no inheritance (although the concept has been partially tacked on in a non-generic way by having a “subtype” attribute on certain object types), (2) the set of classes is closed and only IBM can define new ones.
That’s a long way from what “OO” normally means. Not bad for a system designed in the 1970s (1988’s AS/400 was just a “version 2” of 1978’s System/38, and a lot of this stuff was largely unchanged from its forebear.) But AS/400 fans have this marked tendency to make the system sound more advanced and cutting-edge than it actually was. Don’t get me wrong, the use of capability-based addressing is still something that is at the research-level on mainstream architectures (see CHERI) - but the OO stuff is a lot less impressive than it sounds at first. Like someone in the 70s had a quick look at Smalltalk and came away with a rather incomplete understanding of it.
> and of course it started out with a virtual machine in the late 1970s.
If you consider UCSD Pascal or BCPL's OCODE, it was far from a unique idea in the 1970s. It is just that many of those other ideas ended up being technological dead ends, hence many people aren’t aware of them. I suppose ultimately AS/400 is slowly turning into a dead end too, it has just taken a lot longer. I wouldn’t be surprised if in a few more years IBM sells off IBM i, just like they’ve done with VSE.
I'll say this. There is more than one side to "object orientation".
A better comparison would be between the AS/400 architecture and Microsoft's COM. That is, you can write COM components just fine in C as long as you speak Hungarian. This kind of system extends "objects" across space (distributed, across address spaces, between libraries and applications) and time (persistence), and the important thing, I think, is the infrastructure to do that, not particular ideas such as inheritance.
When I started coding Java in 1995 (before 1.0) it was pretty obvious that you could build frameworks that could do that kind of extension over space and time, and I did a lot of thinking about how you'd build a database designed to support an OO language. Remember, serialization didn't come along until Java 1.1, and it and RMI were still really bad, and really cool ideas built on top of them often went nowhere.

There was the CORBA fiasco too. What's funny is that it just took years to build systems that expressed that potential, and most of them are pretty lightweight, like what Hazelcast used to be (distributed data structures like IBM's 1990s coupling facility, but so easy... not knocking the current Hazelcast, you can probably do what I used to with it, but I know they've added a lot of new stuff that I've never used). Or the whole Jackson thing, where you can turn objects to JSON without a lot of ceremony.
The more I think about it, objects have different amounts of "reification". A Java object has an 8-16 byte header to support garbage collection, locks and all sorts of stuff. That's an awful lot of overhead for a small object like a complex number type, so they are doing all the work on value types to make a smaller kind of object. If objects are going to live a bigger life across space and time, those objects could get further reification, adding what it takes to support that lifetime.
I worked on something in Python that brought together the worlds of MOF, OWL and Python, similar to the meta-object facility, where there is a concept of classes and instances built on top of the base language, so you can more or less work with meta-objects as if they were Python objects, but with all sorts of additional affordances.
Yes, AS/400 / IBM i is the other IBM OS I like to play with (I have an actual AS/400e at home), and in a lot of ways I consider it to be the polar opposite of MVS on the mainframe:
Where MVS seems to be missing very simple abstractions that I took for granted, AS/400 abstracts way more than I'm used to, differently, and most importantly far away from the very, very "file-centric" view of today's systems that was derived from UNIX. It indeed shows you what computing could have been, had AS/400 been more open and had those ideas spread farther.
Before I got to know AS/400, I thought UNIX was great, and that it rightfully took over computing. Now, not so much, and I've started to see how detrimental the "everything is a file" concept that UNIX brought into the world actually was to computing in general.
> From the "If I could" files, I would have liked to spent 5 years on an AS/400,
pub400.com still exists and probably will for 5 more years. Not sure to what extent you can make it work for a company, but you can at least do learning projects on it.