Say it's 1962, you're lucky enough to be a programmer working with an IBM 7090 (twitter.com/foone)
248 points by ohjeez on Dec 4, 2019 | 114 comments



No, no, you did not need to interleave blank cards with original cards to duplicate a deck. That was something you could do on a keypunch if absolutely necessary.

The IBM 519 Document Originating Machine was a 514 Reproducer with a mark-sense reader attachment. That was the "document originating" part. Loaded with an 80-80 reproduce board, either machine could copy a deck of cards. Two feeders, one with a reader, and one with a punch, going to separate output stackers. Duplicating a card deck was its most common use, although it could do a few other tasks.

The 514 and 519 could be cabled up to a 407 accounting machine, using a cable almost 2 inches in diameter and a huge connector about two by six inches. This allowed the 407 to generate new punched cards with results from a tabulator run.

The little glass window front and center was the comparison display. The original and new card were usually re-read by a second set of brushes and compared, as an error check. That display showed any differences.

The 1401/7090 combo was fairly common. A lower end configuration used the IBM 7040, and that setup was popular with universities. The usual setup was to have some tape drives shared between the two computers, with manual switches. Cards were read into the 1401 and written to tape, and output was written to tape on the 7090 and printed on the 1401, using a 1403 printer like the one at the Computer History Museum in Mountain View.

Here's what a job, as a deck of cards, looked like.[1]

[1] http://sky-visions.com/ibm/7090/ibsys/jobs/arg1.fo


Me, 2015: "This engineering code is some serious fixed FORTRAN"

My 80+ year old boss at the time: "Yeah it had to be" Whips out several boxes of punch cards


What kind of engineering code were you dealing with? Just curious.


It was wind fragility modelling software for tornado risk assessment, used by the nuclear power industry.


Like for the spread of fallout?


No, like "what happens if a tornado hit the powerplant and chucked trees and other debris at various parts of the powerplant".


Sounds like NASA.


That's how we did things when I was in high school. Local keypunch wasn't an option, and having somebody transcribe written programs was too expensive for supporting a couple of dozen schools in the district, so we used Mark Sense cards. Those were slow to read directly, so our cards were duplicated to punch cards on the cheap machine while the expensive machine was doing slightly more important things than printing out "syntax error, try again idiot". It was then much faster to batch a couple of hundred syntax error messages from punched cards during free time. One must think about process efficiency.


Terrible use of statement numbers. And are those continuation lines? I never used those.


I'm glad I learned to program using punched cards, in a batch-job system.

At my university, the local (IBM 1130) machine was used during the day to send jobs to a larger (IBM 360?) machine at an affiliate university. The turnaround from putting cards in the output tray and getting output was often over an hour, since a local operator sent jobs in groups, and separated printouts only when there was enough paper to justify interrupting her other work. The whole computer system was behind locked doors, and this operator was like a god who took our offerings and bestowed her blessings when she felt like doing so.

Past 6PM or so, the local machine was available only to researchers, who worked through the night. I think the weekend was also dedicated to local research, so mere mortals (undergraduates) like me did not get a lot of chances to meet our deadlines.

The upshot of these restrictions is that code often worked the first time. You didn't write it on coding sheets until your logic made sense. And you didn't punch cards from those sheets until you had gone through them again, and were sure not just on the logic but the variable spelling, etc. Then you punched the cards. Then you worked through the deck, one by one, imagining yourself to be the machine. Every extra hour spent this way increased your chance of getting results before the end of day.

The difficulty of the process led to good habits.


I also learned on an IBM 1130, at the very end of the era of punched cards, with an 029 keypunch. During my 1st semester a new (not IBM) minicomputer with two CRT terminals was installed.

Macros had been written on the new system by school staff so that you could change a few control cards and run your existing deck on the new system. Some people would run their jobs on the 1130, but you had a choice.

The new system had a very small tabletop high speed punched card reader. It had a huge fan that blew air through the deck, causing all the cards to separate and the weight on top of the cards to lift. It was fun to watch the deck "expand" like a blown up balloon. It could not read one card at a time; it read in the entire stack of loaded cards at very high speed and spooled it to disk. These jobs ran (up to 3 concurrently) in the background, along with any interactive CRT users. Then the printout of your job would come out on a printer that was way faster than the printer on the 1130. Some people stuck with the 1130 because there were minor differences in the compilers, so it wasn't necessarily magically transparent to run your job on the new system by just swapping a few cards.

There were initially two interactive CRT terminals. (Beehive) I asked a lot of annoying questions. There was now about three feet of manuals bolted to a table in the keypunch room. I made a point of reading it. After more questions, I was able to log in using an interactive terminal. I was encouraged. I was surprised that very few people expressed any interest in doing this. I could type my code into a file. Compile and run it. Send my printout to the printer. Come back to terminal, edit some more, etc. Fast turnaround. No more throwing out an entire card over a single character typo.

People who watched over my shoulder became interested. I showed a few people how to edit, and compile and run their programs, spool to printer, etc. Soon there were sign up sheets for 30 minute blocks on the two CRTs. In the 2nd semester the school realized they needed more CRT terminals and fast. Some people did continue to use keypunch and cards because those were now less in demand and often not in use.

Those were fun days.


As a younger developer, I feel like I would have enjoyed the work of writing code for those sorts of machines more. These days it's just too easy to throw a bunch of junk at the wall and see what sticks.

I would have loved to be able to slow down, have an incentive to really know my stuff, think through all the possibilities and write out a few lines of beautiful, efficient code.

I suppose I can still do that with discipline -- but it's too easy to lean on the debugger, and even if you want your code to be clean, the codebase you're working on isn't necessarily going to be.


Sorry for the plug, but you can kind of simulate that process in the nand2tetris.org course (also available on Coursera).

In that course, you build a computer (through hardware simulators) that can run binary programs. Then you build up a programming toolchain: a stack-based virtual machine language that compiles to assembly, and a high level language that you compile to the VM language. You also write the assembler that translates your assembly into ones and zeros, which are like the punch cards.

So you can simulate what it's like to be writing the binaries: if you take out the compilers, you can go through the process of manually translating your high level code into punchcards. The high-level lines become stack operations, which become assembly commands, which become binary words.

(For efficiency reasons, you'd probably program directly in assembly where possible, but you can add on the abstraction levels.)
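
To make the bottom layer concrete, here's a toy sketch (mine, in Python, assuming the Hack machine language that nand2tetris uses) of an assembler for just two instruction shapes, enough to see how @2 and D=A become the ones and zeros that would have gone onto the cards:

    # Tiny slices of the Hack comp/dest tables -- just enough for this demo.
    COMP = {"A": "0110000", "D+A": "0000010"}
    DEST = {"D": "010"}

    def assemble(line):
        if line.startswith("@"):             # A-instruction: 0 + 15-bit constant
            return "0" + format(int(line[1:]), "015b")
        dest, comp = line.split("=")         # C-instruction: 111 acccccc ddd jjj
        return "111" + COMP[comp] + DEST[dest] + "000"

    for line in ["@2", "D=A", "@3", "D=D+A"]:
        print(line.ljust(6), "->", assemble(line))
    # @2     -> 0000000000000010
    # D=A    -> 1110110000010000

Each output line is one 16-bit word, the moral equivalent of one card's worth of punches.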


That sounds like a great course. Plugs are welcome when they benefit the readers -- they're only bad when they only benefit the authors.


Fair enough. I was just worried that HNers might feel that that course gets promoted too much here. (IMHO, nand2tetris deserves that level of promotion! I just recently gave a talk about it for the Austin Python meetup.)


mmm. Maybe. I can't say what you'd like, and I never had to use a keypunch, but I did start my career in the Age of the EPROM. I hated having to code, code no worky, unplug EPROM from circuit board, take erased EPROM from eraser and insert into programmer while putting used EPROM into eraser and pressing "ERASE"; taking programmed EPROM and plugging it into ZIF/LIF socket to start the debugging process again.

And that was an order of magnitude faster than using punch cards.

As soon as EPROM Emulators came on the scene I lobbied for us to buy one. $200 very well spent.

If you want to torment yourself and build a Z-80 computer, I have a tube or so of Z-80 CPUs, some EPROMS, a couple tubes of SRAMS and the relevant books. All yours for the price of shipping.

Sorry, I threw out the eraser about 20 years ago!


No one will pay you to slow down unless you’re working on bleeding edge stuff at a very wealthy company


No one will pay you to slow down, but they won't tell you to speed up if they think you're going as fast as you can ;).

It's not unethical, because you are working faster. The time you're saving is the kind of hazy future multiplicative time improvements that are supposedly related to the 10x programmer.

You're not going slow, you're 10x-ing.


I find this… interesting. I mean, yes, think about your problems up front, but a computer will always be a better computer than a programmer would. How do slow feedback cycles make one a better programmer? In what ways do you find that these habits serve you well now? Is, for instance, painstakingly checking variable spelling an interesting (or valuable) use of a programmer's time, when it can be handled by automated tooling?


> In what ways do you find that these habits serve you well now?

It doesn't help now, it helps you get better for the future.

There's an analogue in photography: if you lengthen the feedback cycle, you are forced to build a stronger mental model of the behavior of the camera.

With film, you had to see whether the exposure was right well after the fact. With digital, you could see right away and take another shot after adjusting. With live view, you can fix exposure before taking the shot.

Which teaches you more? If you always have exposure preview, you blindly learn to adjust the exposure compensation knob until the exposure is good, and there's no learning forced upon you.

With film, when I would mess up exposure, I would find out a week later and feel intense regret: "Damn, in that situation I should have used positive exposure compensation."

Having understood the system better and internalized it, I'm now able to anticipate what exposure comp I need ahead of time.

Basically, if there's nothing (or less) lost as a result of a mistake, you won't learn as quickly.


I'm going to make a little stretch here, just bullshit really, but:

There is no (conventional) way to lengthen the feedback cycle when playing a musical instrument. It is immediate. But beginning jazz piano soloists are often taught to vocalize their lines as they play. This clarifies your goals, and keeps you from aimless noodling that is in key, but lacks a pull towards something (which is how I play, since I have shit pitch memory and perception).

I guess I'd draw the connection to writing tests before implementation. And then to extend out a little further: as you become experienced and have internalized what your instrument will sound like, this habit could hold you back from truly free expression, not that anyone wants that in my code.


On the other hand, personally when playing music I don't perceive any feedback on the more subtle aspects unless I make blatant mistakes. Intonation and bowing on cello, unwanted accents on piano.

These are only revealed fully when I record myself.

And regardless of how short or long the feedback loop is, listening to a recording of my own mistakes when playing is sufficiently traumatic for me to make for a strong learning experience.


> There is no (conventional) way to lengthen the feedback cycle when playing a musical instrument.

While maybe not quite the same thing, recording yourself has been a standard tool in music education for decades.

With some instruments, you can even play with earplugs in or with the amplifier turned off. It's surprising how much it changes your focus.


My impression of how slow feedback cycles make for better programmers is that habits of careful thought are developed because the cost of waiting is so high.

The following are anecdotes, not data, but consider their sources:

"When I learned to program, you were lucky if you got five minutes with the machine a day. If you wanted to get the program going, it just had to be written right. So people just learned to program like it was carving stone. You sort of have to sidle up to it. That's how I learned to program."

- Don Knuth [0]

"Before I sit down to code something, most of the instructions have already run through my head. It’s not all laid out perfectly, and I do find myself making changes, but all the good ideas have occurred to me before I actually write the program. And if there is a bug in the thing, I feel pretty bad, because if there’s one bug, it says your mental simulation is imperfect. And once your mental simulation is imperfect, there might be thousands of bugs in the program. I really hate it when I watch some people program and I don’t see them thinking."

- Bill Gates [1]

[0] http://www.softpanorama.org/People/Knuth/index.shtml

[1] https://programmersatwork.wordpress.com/bill-gates-1986/


Along those lines, in the Kemeny-Kurtz documentation for their original BASIC from Dartmouth College on GE hardware, there was the remark "Typing is no substitute for thinking."


Have you ever heard teachers say that they don't allow cheat sheets on tests because students learn the material better without them?

Have you ever studied for a test and REALLY mastered the material, and walked away with the feeling that you know it inside and out? It's actually a great feeling. Alternatively, have you ever managed to squeak by with a decent grade despite the fact you only partially knew the material but had prepared a good enough cheat sheet?

I feel like code completion, fast compilers, and debuggers are one big cheat sheet.


Habits that could shave 4/5ths off of our existing software stack if you ask me.


These old photos do not convey the noise and spectacle. The guys are in white shirts and ties, but those old machine rooms were more like a factory floor than an office. Part of the reason they were separated off was to try to contain the noise. Those line printers struck the page with 132 characters at once - WHAP! WHAP! WHAP! - and the paper is almost flying out the top. Those high speed card readers consumed a stack of thousands of cards in a minute - it reminded you of a log moving into the whirling blade at a sawmill. Then there was the roar (not hum) of the fans and the banging of the keypunches and teletypes - much louder than typewriters. You really got the sense that computation was a physical, mechanical process that consumed a lot of energy.


My favorite PL/I trick from the punch card days. It takes two cards:

  DEBUG = 0; /*/ ;1 = GUBED
  */
Reverse the card to turn on the debug switch: read backwards, the same card says DEBUG = 1; /*/ ;0 = GUBED. Not invented by me, I forget who showed me that. A professor I had as an undergrad, I think.


I've done something similar in C type languages to switch between blocks of code:

    /*/
    This block is inactive
    /*/
    This block is active
    /**/
Just swap the first star to two stars to flip between them: with /**/ on top, the comment instead opens at the middle /*/ and swallows the second block.


That is what #ifdef, or #if 0, is for.


Seems like any user of this trick would think of //-style comments and add them to the next language in this lineage. How come this didn't happen until C++?


I programmed and earned a living with this technology 1965-1980. I don't miss it. I wrote a 4,000-card Assembler program that required two metal trays to transport.

Did anyone else "insert" and "delete" columns by pressing down on one card in the keypunch while duplicating? This enabled one card to advance while keeping the other in place.

We invented the "240-column" card at the insurance company where I worked. We stored three digits per column (four bits per digit times three equals twelve rows). Being the 70's (0111), our cards looked like lace doilies.
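
In modern terms the packing is trivial. A sketch (my reconstruction in Python of the scheme as described, not our actual code) of three BCD digits going into the 12 rows of one column:

    def pack_column(d1, d2, d3):
        # Three 4-bit BCD digits -> one 12-bit column image.
        assert all(0 <= d <= 9 for d in (d1, d2, d3))
        return (d1 << 8) | (d2 << 4) | d3

    def unpack_column(bits):
        return (bits >> 8) & 0xF, (bits >> 4) & 0xF, bits & 0xF

    col = pack_column(9, 7, 5)
    assert unpack_column(col) == (9, 7, 5)
    print(format(col, "012b"))   # 100101110101 -- a dense, doily-like pattern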


I love the trick with the diagonal red line to indicate the order (and help you to sort if you did drop them). It's so beautifully neat, and simple.

https://twitter.com/Foone/status/1201959165926592513


Run the cards through one of the punched card machines and put sequence numbers in columns 73-80. Then if the cards get out of order, take them to a punched card sorting machine. Yup, it was fast -- generally faster than the fastest possible comparison sort, the one that meets the Gleason bound, heap sort. How? That machine used radix sort!
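
For the curious, a rough Python sketch of what the sorter did -- the real machine dropped cards into physical pockets, one pass per column, least significant column first:

    def card_sorter(deck, first_col=73, last_col=80):
        # LSD radix sort on the sequence number punched in columns 73-80.
        for col in range(last_col, first_col - 1, -1):    # one pass per column
            pockets = [[] for _ in range(10)]             # digit pockets 0-9
            for card in deck:
                pockets[int(card[col - 1])].append(card)  # columns are 1-based
            deck = [card for pocket in pockets for card in pocket]
        return deck

Each pass is linear in the deck size, so eight sequence columns cost 8n card movements and no comparisons at all -- which is how a tabulating-era machine beats a bound that only applies to comparison sorts.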


Yeah, you can see the program in that picture is not properly ordered.



I learnt programming on punched cards. As a student, we were limited in the number of cards we were given for free.

I remember cannibalising older stacks for cards containing (Fortran IV) statements like "DO 100, I = 1, N" and "100 CONTINUE". My first exposure to reusable code :)


Yup, we did the same. I wrote my first program 50 years ago this year, on perforated punch cards. They cost 1c each (in NZ), which paid for the processing. We would share and reuse cards exactly like this.


How much did they cost?

Even special cardboard seems like it ought to be very cheap.


I have no idea. This was India in 1981. There was this "general knowledge" that it was expensive so I never even bothered to find out! Esp. since my beginning programs were well within 300 lines, and the rest I could get by using cards from friends. I never thought of this way until now, but we were trading cards: "Does anyone have a 'FORMAT INT2' card"?

I'm not sure it built any character!


Love the comment by Kathleen:

> My dad, a mainframe guy for the army in the late 60s-early 70s, claims that the (men) programmers knew which of the (women) punch card operators to take your deck to, because some would thumb through and say “sure you want to submit yet? I think you have an error... here.”


In the days of batch computing with turnaround time of several hours, a good idea was to take the deck of cards to a machine that right away would just list the deck, that is, print the cards. Then go over the listing line by line to desk check.

In those days, some of the job control cards were tricky to get right. So, at a big meeting about the computing services, one frustrated user stood and explained that when he had a job control card that worked, he would laminate it in plastic, punch a hole in one corner, and hang it on a chain.


As someone who enjoys playing around with MVS on an emulator just to get an idea of how the IBM mainframe stuff actually works, I can understand the pain of job control cards.

I heard someone mention that only a single JCL statement has ever been written. All others are copy-pasted from another one and just modified.

I believe this is true, since it took me a lot of tries just to change the JCL script to read FORTRAN code from a separate dataset.


The laminate didn't end up re-covering all of the holes?


He (not me!) used clear plastic so that, sure, he could see the holes; but really, the top edge of the card usually had the characters printed on it, so a human doing the reading didn't need to see the holes at all. And for the machines, no way would they accept the plastic! He was so desperate to find a way to get errors out of the control cards that he wanted the plastic for durability and was willing to copy the card at a keypunch machine.

Once I was working in fluid flow calculations at the Naval R&D Center, the one with the ship model towing tank. I was at a keypunch machine typing in my code and comments, and the head of the place, a Navy guy, said that I should use the keypunch machines only for small changes and otherwise should write my code in the little blocks on the coding sheets and have the keypunch staff do the typing. I responded that I was good with a keypunch machine, since I could make control cards maybe better than the keypunch staff, and could sit at the keypunch machine and type my code just from my rough notes faster than I could fill out the forms AND could ad lib and type in the comments without ever writing them down, in total MUCH faster. The Navy guy let me do it my way!

E.g., for extended comments, say, at the top of some code, you could have a simple control card that would type the 'C' Fortran comment delimiter in column 1, indent to, say, column 7, permit typing, and at column 73 automatically move to the next card, type the 'C', and stop at column 7 again. Semi-, quasi-amazing!

For moderately advanced jobs the IBM JCL (job control language) was so tricky that a good solution was to use a keypunch machine to make copies of control cards that had actually worked! Then either use the exact copies or make small changes -- net reduce JCL errors, important when job turn around time was in the hours.

At one time I was writing PL/I code to run on an IBM 360/91, the one at the JHU/APL Navy lab, to read 7 track tapes written in submarines at sea. Each reel of tape had data from several trials. The guys on the submarines were really good about putting delimiters between the trials: They used a Dymo label maker and put the sticky plastic label on the tape. Then on the 360/91 -- you guessed it! -- the tape started reading, really fast, powerful tape drive motors, and suddenly BAM, WHAM, POP, CRACK, SNAP, flutter flutter, the Dymo label hit the reading heads, stopped cold, and the tape broke and flapped in the vacuum columns! It was IBM's job to fix it!

When we got tapes with the Dymo labels off, I had to read the data in binary and lay over it a based structure -- based because it was relative to a pointer. The structure had lots of bit string fields for the intricate structure on the tape. Then my code had to do various conversions and return easy to use data! It was a box of cards -- took a while.

Later one week I raced through Blackman and Tukey, The Measurement of Power Spectra, got smart on power spectral estimation (mostly chi-squared statistics), typed PL/I code furiously, for about 1000 lines of code, and had the code generate white noise, pass it through a filter, record the resulting power spectrum, and show that as the noise signal continued the estimated power spectrum converged to the one used in the filter.

The code illustrated clearly how long a stream of data was needed for an accurate estimate of the power spectrum. Basically what is needed is enough cycles at the frequencies of interest. But since their data was to be from ocean wave noise, the frequencies were really low so that for enough cycles the length of data they needed for accurate estimates was quite long, in uncomfortably many HOURS.
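
That experiment is easy to re-create today. A sketch (mine, in Python, with numpy/scipy standing in for the PL/I) of the same convergence demonstration:

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    b, a = signal.butter(2, 0.5)           # the "known" shaping filter

    for n in (1_000, 10_000, 100_000):     # longer records -> more cycles
        x = signal.lfilter(b, a, rng.standard_normal(n))
        f, pxx = signal.welch(x, nperseg=256)        # estimated one-sided PSD
        _, h = signal.freqz(b, a, worN=f, fs=1.0)    # the filter's true response
        theory = 2.0 * np.abs(h) ** 2                # one-sided PSD of the output
        keep = (f > 0) & (f < 0.4)         # skip the DC bin and the Nyquist zero
        err = np.mean(np.abs(10 * np.log10(pxx[keep] / theory[keep])))
        print(f"n={n:>7}: mean spectral error {err:.2f} dB")

The error shrinks as the record grows; scale the frequencies down to ocean-wave rates and "more data" turns into those uncomfortable hours.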

But my illustrative code cleaned up a technical point in a competitive proposal for some software, and as a result of my little PL/I effort got our little company "sole source". Ah, punched card days.

When I got to Ohio State as a B-school prof, the students were STILL forced to use punched cards. I mounted an effort to get a good time-sharing computer, with ADM dumb terminals, for the B-school. The CIO opposed me. IBM and their super salesman Buck Rodgers opposed me. The Deans took my advice, and I won.

We got a Prime, nice computer, a single board bit-sliced super-mini version of Multics, maybe the one Mike Bloomberg started with. Nice multiple virtual storage machine, with security rings (Intel 386 borrowed?) with a nice hierarchical file system with security from capabilities and attribute control lists, all in 40 KB of code! Nice machine!

I was appointed Chair of the college computing committee and put on a committee to pick a new CIO for the university. I taught a grad seminar in the selection and evaluation process. Fun days!


OT: Do people really enjoy reading information like this sprinkled over multiple tweets? Is it so that they can granularly react to a specific segment of information?



The explanation makes it sound as if the author of one of these tweet threads can start working on one but leave it in some unfinished state until such time when the author can publish it all at once. If that's the case then I can kind of understand the medium although I still think a blog post is a better fit.


How many blog posts receive that level of interaction?

And yes, actually, I do find tweet streams to be engaging. It has evolved into a different writing style that suits the medium well.

I should write a blog post about it.


The medium is less than ideal for delivering this kind of information, but it does force people to actually distill their thoughts clearly instead of writing a rambling medium post.

Which is kind of funny, because I assume punch cards forced people to distill their thoughts better, too.

TWITTER IS THE MODERN DAY PUNCH CARD


We should take this full circle and build a mainframe that one programs by tweeting at it.


I'm not always in the right headspace to read long form content. Twitter threads, which are more of a list than a text, can be very conducive to short-attention-span reading, I have found.


Yes. Literally millions of people enjoy writing this way and enjoy reading this way, how would Twitter exist otherwise?

Granular, fast to consume, mingled topics. It’s also addictive.


The important thing is Foone prefers writing them that way. There's a comment in thread here with a link to a more detailed explanation.


I like it when it's Foone doing the tweeting. Not so much when it's anyone else.


Yes i like it


I like it. I often retweet specific paragraphs.

I come across many such threads because of specific paragraphs others have retweeted.

I also find I'm more likely to read almost everything, rather than skim like I would with many articles. Even with the thread reader version, I find I take in less information - the Twitter format breaks up the wall of text.

But then, I also sometimes copy plaintext into Notepad++ and turn on syntax highlighting for a random language to make it easier to read.

(Also, I don't have dyslexia or ADD, I read novels, and I read Slatestarcodex blog posts without skimming, so I'm not really sure what's going on here).


>But then, I also sometimes copy plaintext into Notepad++ and turn on syntax highlighting for a random language to make it easier to read.

whut


It's a bit like the BeeLine reader that can make it easier to read text:

https://chrome.google.com/webstore/detail/beeline-reader/ifj...

But the advantage of the syntax highlighting is it also gives vertical structure.

If my eyes drift away or something, I can use "pictoral landmarks" via the blocks of colour to figure out where I was.


Learned to program assembly language on punch cards to run on the IBM 370. One line per card - just like in the story - and you don't want to fall down and drop your cards....imagine trying to reorganize 10000 lines of code (I like the red line in the photos - can't recall us using that). Sometimes, the computer just ate the cards....and no matter what, you weren't getting more than 24 runs in a day (assuming you stayed up all night, which we did), so you had to really think through the code before sitting down to write it. Mine was the last class to learn assembly on the punch cards (1983) - the next year they wheeled the punch card readers out....but you could still only do programming at the terminals in the computer lab....no programming in your room. That's how I learned Pascal, machine language, Fortran, and C.


Relive it here for modern languages (Python, Perl, JavaScript): https://www.masswerk.at/card-readpunch/

For a proper reenactment, write your program by pen(cil) and paper first, then punch. Let a few hours pass before execution in order to raise anticipation to appropriate levels. Refer to your pen-and-paper listing for debugging.


Here's the entire thread compiled into an easier-to-read format in case anyone prefers: https://threadreaderapp.com/thread/1201956309941116928.html


Archive of that link, for posterity: http://archive.is/9xQlw


For those interested in the history of computer programming, "Uncle" Bob Martin gave an excellent talk in 2016 called "The Future of Programming", starting with Alan Turing in 1936. It's a fascinating introduction to the evolution of hardware and programming languages. I was particularly amazed at how Turing used CRT screens for memory and output.

The history discussion starts at around 11 min: https://youtu.be/ecIWPzGEbFc?t=672



> The 7090 ran at 458.7 kHz -- https://twitter.com/Foone/status/1201977408175235072

That's actually really fast for 1962.

By 1980 the workstation market was thinking about 3M: A megabyte of memory (RAM), a megapixel display and a million instructions per second (all for less than a "megapenny"). -- https://en.wikipedia.org/wiki/3M_computer

OK, the machines of 1980 were desktop sized rather than room sized, but, as Foone points out, the 7090 was also really a single-user (at a time) machine.

These days you can get a similar spec (within approximately an order of magnitude) in an Arduino: 8,000 or 16,000 kHz with 32kB (rather than the 7090's 144kB) of RAM.

Moore's law is strong but the applications are varied and sometimes the form factor is what changes the game rather than the raw specs.


The 360/91, still in the punched card days, had a 60 ns cycle time. That is 60 x 10^(-9) seconds per cycle, or (1/60) x 10^(9) Hz, which is about 17 MHz. Not so shabby!

It read memory 64 bits at a time, interleaved 64 ways. And it had, IIRC, 8 locations of 64 bits each of instruction cache. Some loops could fit in the cache, at which point the machine would go into loop mode.


> These days you can get a similar spec (within approximately an order of magnitude) in an Arduino: 8,000 or 16,000 kHz with 32kB (rather than the 7090's 144kB) of RAM.

That's 32K of flash memory, not general purpose RAM - the ATMega328 has only 2K of actual general purpose RAM (also, the ATMega series is a mostly Harvard architecture - but I don't know what the 7090 was).

If you wanted something closer to the 7090, the Arduino Mega (ATMega2560) with a RAM extension upgrade (because the 2560 only has 8k of RAM) would be a better comparison (if you wanted to stick with the Arduino microcontroller comparison).


It may be that the plug board wiring of the IBM punched card machines was similar to early microcode, i.e., had to work with the different signals at different points in the cycle.

On how to program an IBM 7094? Could use Fortran. And then could use Formac, IBM's formula manipulation compiler pre-processor to Fortran for algebraic expression manipulation. E.g., supposedly an effort was to use Formac to develop local solutions to the Navier Stokes equations of fluid flow.


Georgia Tech, got out (we don't say "graduated") in 1984. First couple of years included submitting jobs on card decks to the Control Data Corporation mainframe. One day, waiting for my job to run, I hear someone say "the computer is slow today, I had to submit my job six times before I got it back." Wanted to slap them.


I went to a community college with an IBM 370 that used punch cards. When they got rid of the mainframe they had punch cards left over that they gave out to students to write notes on them or study cards.

The IBM PC/XT replaced the mainframe, on a Novell network running DOS programs.


De Anza had a 370 in the early 80s, but I don't know what replaced it.


The only problem is that APL was running in interactive mode by late 1966 on far smaller systems, like APL/1130, which ran in about 8K.

Then soon after, APL/360 was also available for the 360 line. Multiple users could sign on at once and run commands interactively.


Computation was extremely expensive back then. Was it typically financially beneficial?


Computation was so financially beneficial that performance could be trimmed by a couple of orders of magnitude, and people still found it beneficial as long as it was interactive.

For any application that didn't require scientific supercomputing, a mid/late-70s 8-bit interactive command line microcomputer with 16K of memory and paper tape or cassette storage was far more financially beneficial than a 7090 or System360 - not just in terms of cost, but in terms of the ability to get useful output in a quick-enough time frame.

Speeding up the development cycle and simplifying data entry was more valuable than raw processor speed. The metric that mattered was problem throughput, not clock speed. Some problem classes were cycle-bound, but many weren't - and still aren't.


And business calculations at many companies were automated or semi-automated by tabulating machines before those were finally replaced by fully programmable computers. The IBM 1401 released in 1959 was designed to make inroads into that market as a drop-in substitute or upgrade. Earlier computers would probably not have been financially viable in many of those roles: https://www.youtube.com/watch?v=ZPpV8X91neQ


The first commercial applications were for basic but critical accounting; payroll and end-of-period reporting. Although the LEO page doesn't have much info on what programs were run: https://en.wikipedia.org/wiki/LEO_(computer)

> One of its early tasks was the elaboration of daily orders which were phoned in every afternoon by the shops and used to calculate the overnight production requirements, assembly instructions, delivery schedules, invoices, costings, and management reports. This was the first instance of an integrated management information system


Here's a demonstration of and business case for LEO from 1957: https://www.youtube.com/watch?v=-8K-xbx7jBM .


In general, definitely. But it was only done for critical/high value tasks that either weren't feasible to do any other way or the cost savings from labor elimination made it cost effective. As recently as the 80's businesses typically weren't frivolous with computer purchases... that changed in the mid-to-late 90's.


One product I think could have had success in the late 70s and 80s: using 8-bit micros as a pre-processor for COBOL. You could type your code and verify the syntax on a micro before you submitted it to the mainframe.


A lot of data entry work in the early 80's was done on 8-bit computers that wrote the data to 8" floppies that were then read by the mainframe. Some other setups had tape output. I interned at a company that did input to tape, which fed an IBM 4341.

That lasted a couple years.


I wrote a COBOL interpreter on the C64 for this very purpose.


Most 8-bit micros had very poor data interchange capabilities, though. That was one thing the IBM PC and 'clone' ecosystem got right. Still, you could use a minicomputer for the job. Minis were often used for similar tasks.


I learned programming with a terminal on an HP2000. It had a serial link to our IBM 360.

College class assignment - punch a deck with a simple program, submit it and turn in the deck and output.

So I would type my programs in, submit a punch job to the IBM, get the deck back, add JCL and hand it to the operators to run.

This was required for class. But I forgot the step where you had the program printed on the top of each card - they called that 'interpreting'. I handed in a blank deck to the TA. Fortunately they never noticed.


This shows how advanced Doug Engelbart was that he imagined interactive editing and real-time use back in 1950s.

Also shows how far-sighted ARPA was back then.


Was curious how a statement was converted to punches. Found this.

https://craftofcoding.wordpress.com/2017/01/28/read-your-own...

Imagine only programming with 64 CHARACTERS :)
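
The zone-and-digit scheme is simple enough to sketch from memory (Python; letters, digits and space only, though the real 026/029 codes covered punctuation too):

    def hollerith_rows(ch):
        # Punched rows (out of 12: rows 12, 11, 0, 1-9) for one column.
        if ch == " ":
            return set()                      # space: no punches
        if ch.isdigit():
            return {int(ch)}                  # digits: one punch in rows 0-9
        n = ord(ch.upper()) - ord("A")
        if 0 <= n <= 8:                       # A-I: zone 12 + digits 1-9
            return {12, n + 1}
        if 9 <= n <= 17:                      # J-R: zone 11 + digits 1-9
            return {11, n - 8}
        if 18 <= n <= 25:                     # S-Z: zone 0 + digits 2-9
            return {0, n - 16}
        raise ValueError("not in this sketch's card code: %r" % ch)

    print(hollerith_rows("A"), hollerith_rows("J"), hollerith_rows("7"))
    # -> {1, 12} {1, 11} {7}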


Can imagine that quite easily, really; the only often-used characters missing from that list are the curly braces.


Unless your shop only had the commercial version, so your A(I)=B(I)+C(I) looked like A%I¤#B%I¤&C%I¤.


I cut my teeth on 40 columns on an Apple II+. It took a couple years before I had the money to buy a Videx 80-column card (and two more until I got an Apple //e).


I worked at Videx in the '80s, and designed the 2nd generation 80-column card, and the 2nd-generation keyboard fixer, with macro capability; it had a little 6502 (6504) on it.

Apple ][ (that is how they wrote it then) had an upper-case only keyboard, though some had a switch inside you could flip to enable lower case.

Nice to know somebody remembers them.


I really wanted that 132-column second-gen card, but getting an official one in Brazil was not an option - I had to live with the clones.

My II+ (clone, by CCE) with the keyboard fixer and its function keys was my favorite computer for a long time and was my developer box until well after I got a //e (also a clone), which was used as the testing machine and for AppleWorks/SuperCalc


How about programming in Assembler with a maximum of four characters per label or variable name?


Mandatory mention [0] of the amazing talk by Uncle Bob, "The Future of Programming", and his timeline of programming since Turing. Fascinating talk.

[0] https://www.youtube.com/watch?v=ecIWPzGEbFc


Surely this is better suited for a blog post rather than a series of hard to follow twitter comments.


How interesting that that is the reason for the 80char limit default setting of many terminal apps!


It was the common character width of a page on typewriters.


That's the reason it's 80, not 81 or 78. But a width near there is right for readability.


This is all incredibly fascinating, but was anyone else's first thought "geez, look at those incredibly uncomfortable chairs/desks. How did they manage?"


What was the process of debugging these programs? Did you get any sort of error code or card number that gave you clues as to what had gone wrong?


Still have mine. Stopped using them for notes; still use them as bookmarks.

132-column lineprinter paper was made for kindergarten.


This post is great. I'd never understood why they used punchcards in the past until now.


My father used similar punch cards at his work well into the 80s (socialist country). Every shopping list in my childhood was written on those.


Decks of ‘perfocards’ (punched cards) and square-lined journals, yeah. I watched him enter codes through toggle switches and didn’t understand, until there was a keyboard computer connected to the TV plus a Vesna tape recorder.


Why the socialist country comment? They weren't mass replaced until the 80s anywhere.


My (state) university abandoned cards and scrapped its mainframe and punches in 1980. The punches were unwanted anywhere. The most prized parts of the mainframe, a CDC 6600, were the smoked-glass doors.


Don’t forget the distinctive aroma of the computing center.


“So whatever device you're reading this on either has a direct linage back to a 7090 (with Android/iOS/macOS/Linux being varieties of Linux)”

Yikes. macOS & iOS have nothing to do with Linux.


He says that 7090 formed the basis for CTSS which influenced Multics which influenced Unix. He doesn’t claim they are the same.

See his correction tweet: https://twitter.com/foone/status/1201994846656843776?s=21


Oh, so what if it's not a direct descendant and only a cousin? It shares enough DNA that people looking at the structure would assume common elements.


It's like saying the Taurus, Camaro and Camry are all varieties of Mustang. The statement is false. Those are all varieties of cars, or automobiles - but not varieties of Mustang. Similarly, the listed Operating Systems are not varieties of Linux...


Seems obvious he meant they are varieties of Unix.


Maybe. I've met an awful lot of people to whom such a distinction is just not obvious. People who would, for example, assert with perfect sincerity that MacOS is built on Linux, or that Solaris (or OpenSolaris, or Illumos, etc...) are the same thing as Linux. It may have been a typo, or it may not have been. The person I was directly responding to didn't seem to think so.

Anyway, that's where the sentiment is coming from.


I think the last 'Linux' was meant to be 'Unix'.



