Been doing COBOL for the past five years, so it's been my main thing since leaving school.
A ton of insurance companies, banks, telcos, airlines, and old government systems for anything related to healthcare and taxes still use it, and it's not all IBM mainframes. I think IKEA just recently migrated away from it, but a lot of companies with old inventory systems still do use COBOL.
The language is easy, systems and products gigantic, your mentors write their code (incl. Java) in a 55x132 char terminal and sometimes there is even more than absolutely no documentation!
It's fun but I'll probably want to do something more modern for a while before maybe returning to the old systems.
Keep an eye on IBM's "Master the Mainframe" and hobbyist licenses for OpenVMS. I found setting up GnuCOBOL in a nice way was too much of a hassle to recommend.
A lot of these kinds of skills aren't always applicable or comparable to a salaried position.
Many do odd contract jobs that are extremely high value, i.e. come in and fix this super big bug or add this super important feature on a COBOL system, at an extremely high day rate, because it's hard to find people with the appropriate skills.
FWIW, when I worked in a finance company with a lot of COBOL + JCL + DB2 devs (including in management, so I could see more info), their salaries were on average similar to Full Stack but possibly lower, especially as we put more emphasis on AWS and the people with those skills started getting more of a premium. Some banks, I hear, pay a COBOL premium, but it seems to be tied to experience with very specific mainframe systems plus COBOL.
But what do those Full Stack engineers make? Salaries are hugely variable across the industry. There are “senior” engineers making $60k/yr, and new grads starting at $200k/yr.
There are several different “national” variations of ebcdic as well as double byte encodings. IBM was all about selling, and their customers had deep pockets.
When I was new to engineering management, I took over a project where one of the large components, a claims processing rules engine, was written entirely in SQL. Apparently, the previous manager had assigned that component to a DBA to build and by the time they found out, it was too late in the project to re-write.
Anyway... years later my scope had increased to include some mainframe teams working in COBOL. I have always been a big believer that you can't effectively lead a team if you can't do the job, so I dove in and learned COBOL. Now, I am not saying that COBOL and SQL are syntactically similar, but the experience of building in COBOL was very similar to building application logic in SQL. None of the abstractions we are used to, and no real concept of separation of concerns. It was fairly easy to build in, but an absolute nightmare to maintain.
Learning COBOL is easy. If you want a good skill for those rare low-paying COBOL jobs, learn JCL. That is the secret sauce that glues together IBM Mainframe databases, COBOL programs, and user interfaces. Any legacy application will be a chain of JCL jobs linking input and output from several COBOL programs. This only applies to IBM mainframes. The State of Michigan uses Burroughs/Unisys mainframes running COBOL, but I don’t know what those use to control job flow and input/output.
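To make that concrete, a skeleton of such a chain might look roughly like this (a sketch only: the job, step, program, and dataset names are all invented, and a real deck would carry site-specific accounting, space, and disposition parameters):

//NIGHTRUN JOB (ACCT),'CLAIMS BATCH',CLASS=A,MSGCLASS=X
//* Step 1: a COBOL program reads the day's input and writes a work file.
//STEP010  EXEC PGM=CLMEXTR
//CLAIMIN  DD DSN=PROD.CLAIMS.DAILY,DISP=SHR
//WORKOUT  DD DSN=&&WORK1,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(CYL,(5,5))
//* Step 2: the next COBOL program picks up that output; COND skips it
//* unless every previous step ended with return code zero.
//STEP020  EXEC PGM=CLMCALC,COND=(0,NE)
//WORKIN   DD DSN=&&WORK1,DISP=(OLD,DELETE)
//REPORT   DD SYSOUT=*

The DD statements are the glue: each COBOL program just reads and writes logical file names, and the JCL decides which datasets (or temporary files, or SYSOUT) those names point to for that particular run.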
We learned COBOL, CICS, and JCL in college (in the early aughts), because we had a lot of industries in our town that still relied on them, so the school was churning out programmers who could actually apply for those jobs as the older programmers retired.
COBOL was fine, ultimately. Clunky, but functional. CICS was less fine, but still something you could adapt to. JCL was _miserable._ I honestly cannot remember why, I've blocked as much of it from my mind as I could, I suppose, but I remember every part of JCL being awful and no fun to work with.
Same here, I think Fidelity (the bank) had a large hand in it. We had IBM mainframe access and it was an interesting class. Our professor told us he did all sorts of consulting jobs around COBOL and there was a lot of money to be made.
Never used the COBOL section though, but everything else was excellent: JCL, Fortran, and how to operate the equipment. The Russian translation was THE main programming textbook at Moscow State University in the 1970s.
> Very few resources explain this, but having the code in strict positions was what allowed old computers to read the instructions on punched cards. On punched cards, a certain COBOL statement, let’s say an IF statement, is mapped to a pattern of holes. If you know that a statement starts at a certain column (12-72 in COBOL), it is easier for the hardware to read (this was done by passing a light through the card).
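For reference, the fixed reference format being described looks like this, reading each line from column 1: columns 1-6 are the sequence number area, column 7 is the indicator area, columns 8-11 are Area A (division, section, and paragraph headers), and columns 12-72 are Area B (ordinary statements). The program itself is just a throwaway illustration:

000100 IDENTIFICATION DIVISION.
000200 PROGRAM-ID. COLDEMO.
000300 DATA DIVISION.
000400 WORKING-STORAGE SECTION.
000500 01  WS-AMOUNT PIC 9(5) VALUE 250.
000600 PROCEDURE DIVISION.
000700 MAIN-PARA.
000800     IF WS-AMOUNT > 100
000900         DISPLAY "LARGE AMOUNT"
001000     END-IF.
001100     STOP RUN.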
i think it would be more accurate to say that dividing punched cards into predefined fields was a pre-existing practice (dating back to the census of 01890) which was unthinkingly copied when punched cards were adapted to applications for which it is a stupid idea, such as programming
it's true that iterating over the leading blanks on a free-form card to find the first nonblank character takes more cpu time than just looking at column 8, but those cpu savings are so minor as to be lost the first time a compiler run has to be aborted because someone accidentally punched their statement starting in column 7
cobol, being a dod initiative, didn't care much about whether ideas were stupid as long as they could be made to work, and hci (and even ergonomics) was still in its infancy
(what would i know about unthinking copying? well, just last night i formatted my code to fit within 80 columns, ultimately because that was the width of ibm cards, and therefore of many printers, terminals, and character generators, and is still the default width of many terminal emulators)
It’s easy to overstate how important this kind of copying is.
80 characters is a nice round number that’s roughly the minimum reasonable width of a programming interface. It’s perfectly possible to code using a 40 or 60 character wide terminal, but there is significant utility from hitting 80. Meanwhile the gain from hitting 100 or 120 characters isn’t as significant.
Similarly, you might have frequently used a 237-character-wide editor or some similarly uninteresting number. Those odd numbers are simply happenstance, with little reason to discover what they are or to recall them if you do happen to find out.
i think 80 is maybe a bit large for code; in most languages it's rare for a line of code to go past column 64, unless you're using 8-space indents, and those rare cases are often bad code. if your code was formatted for 64-character width, you could fit 12 columns across a 3840-pixel 4k display with a 5-pixel-wide font, versus only 9 columns if each was 80 characters wide
40, 64, 81, 128, or 121† would have been more logical; 72, 84, 90, 96, 108, 126, and especially 120 could be divided into equal columns in a larger number of ways; and for single columns of text, 80 is also a bit too large for optimal readability (it's about 14 words per line), but too small to fit two columns of text the way traditional book layouts do
usa typewriters commonly used 10 or 12 characters per "inch" ("pica" and "elite" respectively) which works out to 85 or 102 characters respectively across a sheet of usa letter paper, or 82 or 99 characters across a sheet of iso a4 paper, so plausibly the character width for fixed-width printers for these paper sizes had to be somewhere in the range 76-102
this week though i've been programming in proportional fonts, so the whole question of width becomes less well-defined. specifically the font lobster two because fuck you that's why
______
† 121? yes, because that's the most positive number you can represent in five trits of balanced ternary. in our timeline of course nobody uses balanced ternary, but plausibly if people had been doing this sort of optimization instead of following tradition, they'd have switched to balanced ternary 70 years ago
People generally want borders around the text they’re editing. This makes the width of the editing area more arbitrary than simply the width of the screen / fixed width font.
Just wanted to say I wrote a whole Fermi-estimate reply for the number of jobs in the world until I realized you meant jobs as in getting paid for work and not as in JCL files you can submit to run COBOL programs.
But there are like 4 or 5 other systems on top of it; some of them use XML, some HTML, some ASP, some Java, some VBA, some C#.
i don't really understand the concept of an "x language job". like jobs tend to need people to learn systems and domain language, and be problem solvers, and communicators, coordinators, testers; the actual computer language is kind of irrelevant. like i cannot fathom there is a job out there where someone just hacks cobol all day and that's all they do.
The problem is that the "cobol language job" is really a mainframe job, which means an operating system from a completely alien parallel reality that diverged from the rest of the timeline 60 years ago. So it's not a cobol job, it's a relearn-computing-from-the-ground-up job. I'm exaggerating, but it's a complete nervous system shock until one acclimates to the z/OS paradigm.
That's my experience. I've actually been around systems that use Cobol, amongst other things, for 20 years (only did the tiniest bit myself), but the developers tend to be experts in the business as well - like, this set of Cobol programs is for payroll, and the developers understand payroll processing. They can discuss legislative nuances and taxation and benefits and deductions with functional analysts.
Basically, all Cobol I've ever seen is firmly performing a business / functional task. I guess it's in the name :-)
I was getting into Linux around then too so I devoured Litt's site. Shortly after I read this my cobol-programming neighbor (worked for CapOne) was laid off. As far as I know he never returned to Cobol programming (unless he managed to snag something during covid).
I’d like to know these places you speak of because every single time I’ve applied for a job in the past ~15 years where I didn’t have explicit experience in the specific programming language that was specified, I failed to get the job. A lot of those that I applied for I had significant domain knowledge as well. In fact I can’t even think of a time I got further than an introductory phone call.
But that is because HR people and recruiters generally don’t know anything about anything tech. They cannot filter resumes on anything vague, so programming language is an easy one. But gp is right; once you are in, the language is mostly irrelevant, unless the company is small and/or their business is selling or maintaining actual tech products.
No idea how many jobs. Places that use it extensively usually have apprenticeships and/or training. $DAYJOB does (big financial firm). There are even some mainframe/COBOL college programs.
They lost a bit of their glam when they moved from 24" racks to 19" ones, so that the datacenter people could hide these beauties in warehouse-sized facilities instead of proudly displaying them at corporate headquarters ;-)
Any normal PC or server will continue working during an earthquake. Source: working as a sysadmin/programmer/jack-of-all-trades in Japan for almost 5 years, I have seen many earthquakes, but never a PC or server crashing because of it. Oh, and my workplace has a supercomputer, it also survives earthquakes just fine.
the cerebras wse-2 has 40 gigabytes of on-chip sram, pretty sure tianhe-1 has a few gigabytes of l2 cache, and it's common nowadays for even laptops and cellphones to have weird multi-gigabyte memories as the level in the memory hierarchy below l3 cache or even l2 cache; they just don't call it cache, and they use nonvolatile memory for the level below that
also, of the machines i listed, only tianhe-1 will have any particular trouble with earthquakes
Tianhe-1 is a lot of Xeons, Nvidias, and some SPARC-like CPUs. It's not a single image machine with a gigantic shared pool of memory, but a cluster of 7000-ish nodes with very fast interconnects, managed with SLURM. In that sense, these supercomputers are somewhat boring - they are a very large cluster of relatively mundane machines. In that regard, an ARM Mac is more interesting because CPUs and GPUs share memory and you don't need to copy it over a PCIe bus.
Cerebras is a tour de force, but it's far from a general purpose computer. What it does, it does well, but you won't see it doing transactional workloads.
this is the same kind of error as saying that an arm mac is somewhat boring because it's a large cluster of relatively mundane logic gates and capacitors
Apple's M-series, in particular their humongous reorder buffer, is really cool, but IBM's Telum processor, with its revolutionary distributed virtual cache architecture, on-chip inference accelerator (with a latency so low it allows AI fraud detection during transactions), and 5+ GHz base clock, is in a different league, along with all the other nice little things, such as the multiple smaller computers that let the CPUs run user code and as little as possible of everything else.
As far as I can tell this point about the benefits of fixed point is very confused. The difference here is entirely because they use fixed point numbers with higher precision than the precision of standard floats. You could get the same benefit with higher-precision floats (and then also be able to represent large or small numbers). So this has nothing to do with the merits of different ways of storing your numbers. I have nothing against fixed point where appropriate, but this is not a good argument.
Besides, they show at the end that increasing the precision doesn't solve the problem, just delays it.
"A better implementation will require careful use of the Decimal package in Python."
No: a better implementation requires careful analysis of why/when rounding becomes a problem, and designing your algorithm to mitigate that.
EDIT: I just had a glance at the decimal library in python and it looks like this python "fixed point" implementation doesn't even use fixed point: it's just a decimal float. (Though correct me if I'm wrong).
It's not actually fixed point. All data in COBOL is text based, including numbers[1]. When numbers are put into working storage fields, they are converted to text and have to be truncated to the length of the field in characters.
[1] The COMP keyword does not convert working storage to binary, BTW.
I get what you're saying about the representation, but I think it's fair to say COBOL numbers are stored as decimal fixed point. i.e. it stores a certain number of digits, and the least-significant digit always has the same place value. (Though, again, correct me if I'm wrong).
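For what it's worth, a minimal sketch of what such declarations look like (field names invented; exact storage is compiler-dependent, but the V is an implied decimal point, so the number of digits on each side of it is fixed by the PICTURE clause):

      * Default USAGE is DISPLAY: each digit occupies one character.
      * COMP-3 (packed decimal) stores the same digits more densely.
      * Either way the value keeps exactly two digits after the
      * implied decimal point, i.e. decimal fixed point.
       01  WS-AMOUNT-DISP    PIC S9(7)V99.
       01  WS-AMOUNT-PACKED  PIC S9(7)V99 COMP-3.
      * Moving in a value with more digits than the picture allows
      * silently drops the excess digits.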
the reason you want to use decimal arithmetic is not so that your rounding will be more correct, but so that it will be rounded off in the same way as your bank and your auditors round off; it would be very unfortunate if there was a one-penny discrepancy in half of a million transactions per day, especially if for some statistical reason there's a bias in that discrepancy
Absolutely there are reasons to choose to do your rounding in a particular way for a particular purpose. I was only making the point that the argument for using fixed point decimal that is laid out in the post had nothing to do with the choice of decimal and nothing to do with the choice of fixed point.
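As a small sketch of how that explicit control over rounding shows up in COBOL itself (names invented, and only meant as an illustration): truncation of excess decimal places is the default, and rounding has to be asked for, so the rule is visible in the program rather than inherited from a binary floating-point format.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. ROUNDDEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-RATE       PIC 9V9999  VALUE 0.0375.
       01  WS-PRINCIPAL  PIC 9(7)V99 VALUE 1000.50.
       01  WS-INT-TRUNC  PIC 9(7)V99.
       01  WS-INT-ROUND  PIC 9(7)V99.
       PROCEDURE DIVISION.
       MAIN-PARA.
      * 1000.50 * 0.0375 = 37.51875; by default the extra digits
      * are simply truncated to fit PIC 9(7)V99.
           COMPUTE WS-INT-TRUNC = WS-PRINCIPAL * WS-RATE.
      * ROUNDED states the rounding rule explicitly in the program,
      * so it can be made to match whatever the business requires.
           COMPUTE WS-INT-ROUND ROUNDED = WS-PRINCIPAL * WS-RATE.
      * WS-INT-TRUNC now holds 37.51, WS-INT-ROUND holds 37.52.
           STOP RUN.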
For anyone interested in learning, I've recommended Jan Sądek's mainframe playground before ( https://mainframeplayground.neocities.org/ ).