Yes, the Y2K crisis was real, or more accurately, it would have been a serious crisis if people had not rushed to spend lots of money dealing with it ahead of time. In many systems it would have been no big deal if left unfixed, but a huge number of really important systems would have been in serious trouble had they not been fixed. Part of the challenge was that the deadline was immovable: usually when things don't work out you just spend more time and money, but here there was no additional time anyone could give.
The only reason the Y2K bug did not become a crisis is that people literally spent tens of billions of dollars fixing it. And because everything kept working in the end, a lot of people concluded it was never a crisis at all. Complete nonsense.
Yes, it's true that all software occasionally has bugs. But when all the software fails at the same time, a lot of the backup systems simultaneously fail, and you lose the infrastructure to fix things.
I remember hearing a story on NPR years ago, well after Y2K, with someone knowledgeable on the subject who addressed this question. He gave basically the same answer you did: a lot of effort went into avoiding disaster, and a lot of people treated it after the fact as if it was hype or hysteria. I recall the interview because it significantly changed my thinking on the subject. (I wasn't in the industry at the time of Y2K.)
He also noted that a lot of American firms worked with Indian firms to resolve it. Indian engineers were well suited to the problem, according to him, because they had been exposed to a lot of the legacy systems involved during their studies.
He said an interesting consequence of this was that many American companies concluded they could use lower wage Indian engineers on other software projects so it helped initiate a wave of offshoring in the early 2000s.
I couldn't find the story I heard in the NPR archives, but they do have a number of stories reported right around that time if you want to see how it was being discussed and treated by NPR at the time:
I did find this NPR interview with an Indian official from 2009 that notes Y2K's impact in US-Indian economic relations:
> But I think the real turning point in many ways in the U.S. relations was Y2K, the 2000 - year 2000, Indian software engineers and computer engineers suddenly found themselves in the U.S. helping the U.S. in adjusting itself to the Y2K problem. And that opened a new chapter in our relations with a huge increase in services, trade and software and computer industry.
It was also really difficult to detect bugs since no good testing paradigms had been established yet.
This created an exploitable opportunity for the few who knew how to write automated tests.
During the Y2K panic Sun Microsystems (IIRC) announced that they would pay a bounty of ~$1,000 per Y2K bug that anyone found in their software (due to the lack of automated tests, they didn't even know how to find their Y2K bugs).
James Whittaker (a college professor at the time) worked with his students to create a program that would parse Sun's binaries and discover many types of Y2K bugs. They wrote the code, hit run, and waited.
And waited. And waited.
One week later the code printed out its findings: it had found tens of thousands of bugs.
James Whittaker went to Sun Microsystems with his lawyer. They saw the results and then brought in their own lawyers. Eventually there was some settlement.
One of James' students bought a car with his share.
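As an aside, the cheap end of that kind of tooling is easy to picture, even if what Whittaker's group did against Sun's binaries was a much harder problem than grepping source text. Here's a minimal sketch in Python of the scanning idea; the patterns are purely hypothetical and just flag lines worth a human's attention:

    import re
    import sys

    # Very naive heuristics for "this line might handle a year as two digits".
    # Real Y2K tooling was far more sophisticated (and often worked on binaries
    # or compiled code, not source text); these patterns are only illustrative.
    SUSPECT_PATTERNS = [
        re.compile(r'\bYY\b'),                    # two-digit year field names
        re.compile(r'PIC\s+9\(2\).*YEAR', re.I),  # COBOL picture clause sized for 2 digits
        re.compile(r'["\']19["\']'),              # a hard-coded "19" century string
    ]

    def scan(path):
        """Print every line in `path` that matches one of the suspect patterns."""
        with open(path, errors='replace') as f:
            for lineno, line in enumerate(f, 1):
                if any(p.search(line) for p in SUSPECT_PATTERNS):
                    print(f'{path}:{lineno}: suspect date handling: {line.strip()}')

    if __name__ == '__main__':
        for p in sys.argv[1:]:
            scan(p)

Run something like that over a source tree and you get a long, noisy list of lines to eyeball; the expensive part of Y2K work was understanding and fixing each hit, not finding candidates.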
It’s sad to live in a world where people are naive enough to assume that doing the right thing without incentive is commonplace. You must be great to negotiate with.
I think a more accurate way of describing the Indian engagement is that a lot of recruiters and Indian firms discovered there was very little vetting of candidates, and that almost anyone could be put forward and start earning money for the recruiter or offshorer.
Yep, this was something that many firms delayed spending on. It's pretty frustrating because this seems to be the norm: only do something about an issue when the deadline is near, rather than take it on when it's further out.
From what I've heard, a number of IT departments used it as justification to dump legacy systems.
For example, at a former employer of mine, a big justification for getting rid of the IBM-compatible mainframe and a lot of legacy systems which ran on it (various in-house apps written in COBOL and FORTRAN) was Y2K.
In reality, they probably could have updated the mainframe systems to be Y2K-compliant. But, they didn't want to do that. They wanted to dump it all and replace it with an off-the-shelf solution running on Unix and/or Windows. And, for reasons which have absolutely nothing to do with Y2K itself (the expense and limitations of the mainframe platform), it probably was the right call. But, Y2K helped move it from the "wouldn't-it-be-nice-if-we" column into the "must-be-done" column.
COBOL programmers were charging a fortune to be hauled out of retirement to work on this. There was a huge shortage of experienced COBOL devs. And devs who actually understood the specific legacy system involved were even rarer. If the company had customised their system and not kept the documentation up to date or trained new programmers on the system, well, they didn't have a choice. They had to replace it.
To be fair, I've met a lot of young COBOL programmers since starting to work on mainframe systems last year. I think the "elderly COBOL programmer coaxed out of retirement by dumptrucks full of money" dynamic that supposedly dominated the Y2K response is less prevalent these days, and companies that still run mainframe systems just realize they have to hire regular programmers and teach them COBOL and z/OS.
Sorry, didn't mean any slight at you by that. I don't doubt your account, but having no idea how widespread the practice was, its dominance on a large scale is just a supposition on my part.
> From what I've heard, a number of IT departments used it as justification to dump legacy systems.
> In reality, they probably could have updated the mainframe systems to be Y2K-compliant.
Legacy systems weren't always mainframe systems and, in any case, a “legacy system” is precisely one you are no longer confident you can safely update for changing business requirements.
A change in a pervasive assumption touching many parts of a system (like “all dates are always in the 20th century”) is precisely the kind of thing that is high risk with a legacy system.
Worked on a project to migrate a huge and critical .gov database from an AS400 nobody understood anymore to Microsoft SQL Server. We had to guess some of the values in some bitwise fields through trial and error. Made it with a few weeks to spare - nobody noticed.
Oh, understanding the underlying system, that's comparatively easy. Understanding the bespoke business logic and how it's represented in the system...now that's harder, an order of magnitude at least.
Story time: years ago (2010-ish?), we were doing some light web CMS work for a client. Nothing too complex, except we had to regenerate one section off a data feed that we received in a very peculiar custom format.
Great, everything worked, we finished on time and on budget. And then: "oh, and could you also check this other site? We need some minor tweaks on what your predecessors made". That other site was also consuming that data feed, so we took a peek. It ran on PHP3, walled off from everything, processed its own intermediate languages, and the output was a massive 200+ page PDF (which was then manually shipped to offset print in large quantities). For Reasons, this had to run daily and had to work on the first try. There was no documentation, no comments, and no development environment: apparently this was made directly on prod and carefully left untouched.
Needless to say, the code was massively interdependent, so that the minor tweaks were actually somewhat harder. We did manage to set up a development VM though, and document it a bit - but last I checked, the Monstrous Book Of Monsters Generator still seems to be chugging along in its containment.
I'm having PTSD just thinking about updates of YY to CCYY in mainframe code. Very real and heavy investment across numerous industries from aviation to financial and more. At a macro level, I remember it being pretty well managed risk and mitigation.
When you start 3+ years before the known threshold date, you're giving yourself the best chances at success. I remember flashing BIOSes for weeks in the computer labs at my university.
EEPROM. And there has always been a reflash method for bricked firmware on all AMI and Phoenix BIOSes as well: insert a disk with the firmware, hold a key, power on.
Once upon a time we got somewhere in the neighborhood of 1200 Dell workstations and not a single one failed the BIOS upgrade. 10/10 would do again.
Dear god, reviving the ghosts of IT departments past!!!
Yes, updating JDEdwards AS/400 systems and many a PC was a big project, but I don't recall doing it at the time as being super difficult... frack, it's so long ago I can't really recall many of the details other than reporting daily on the number of systems updated.
Also, fuck you Google - this was when you were nascent and I converted the entire company to your “minimalist” front page vs Yahoo, and a few years later you wasted 3 months of my time interviewing, told me to expect an offer letter tomorrow, then called to rescind the offer (as I didn't have a degree), and then continued to contact me for FIVE FUCKING YEARS about the same job you wouldn't hire me for.
It happened to all of us. But be glad you are not there now.
None of those people contacting you, or phone screening, were Google employees. They were, or worked for, contractors. It didn't matter to them or to Google whether anybody they contacted was hired.
I had a teacher in college who'd been part of a project to fix the Y2K bug at the bank where he worked at the time. He told us, when talking about project management, that the only project he ever saw actually finish on time was that one.
The pressure was immense. People don't realise what a huge success story the Y2K situation really was.
I worked at a University that had a 28 story tower. Around 1998 they set the clock in the lift management system to 1 January 2000 and all the lifts lowered themselves to the bottom of the lift wells because they hadn't been serviced for 50 years (or something).
So yes indeed stuff was broken, but it got fixed before the big date.
This sounds apocryphal. If setting the date forward triggered a maintenance warning, it sounds like the date system was working correctly.
More importantly, lift systems are not sensitive to long duration dates. They do not need to be.
This story reflects one of the myths that emerged from the media. Around 1997 they started discovering the topic of embedded systems and that many devices, including lifts, contained "computers." Not understanding the restricted and specialized nature of embedded systems, they then claimed all these systems were vulnerable to Y2K, and that the whole world was about to crash.
Most embedded systems, in lifts and other devices, had been designed in the 1990s. If they used dates at all, they did not repeat the mistakes of the 1960s.
> Yes, it's true that all software occasionally has bugs. But when all the software fails at the same time, a lot of the backup systems simultaneously fail, and you lose the infrastructure to fix things.
This is especially important if you think about what things were like in the late '90s. As a teenage geek in Northern Ireland I read about the first tech boom, but locally there was very little exploitation of tech and you were an anomaly if you had dial-up internet at home. There was very limited expertise available to fix your computer systems at the best of times.
Big companies had complex systems, but they could also afford contracts with vendors in the UK mainland, or Eire, to fix systems. It was much more limited for other companies. While people were generally not relying on personal computers too much, small businesses had just reached that tipping point where Y2K would have been painful. As a result, they took action.
I only got to visit the US a few times in the early-nineties and it seemed so futuristic (I got a 286 PC in '93 to use alongside my Amiga). I imagined the Y2K problem as being much more painful in the US, and I wasn't accounting for the distribution of expertise.
One great experience was the summer training courses that the government had organised with an IT training company in Belfast. These were free, and it was like a rag-tag army of teenagers and older geeks from both sides of what was quite a divided community. The systems we were dealing with were fairly simple. It was mostly patching Windows, Sage Accounting, upgrading productivity apps (there was more than Microsoft Office in use), etc., but the trainer was normally teaching more advanced stuff like networking and motherboard repair, so we spent a lot of time on that.
>treated it after the fact as if it was hype or hysteria
I agree with everything you and the parent said, but there was also hype and hysteria on top of it all - especially among the communities of preppers, back to the landers, and eco-extremist types. I met entire communities of technophobes who declared I was just ignorant or 'a sheep' and refused to believe me when I tried to explain that the industry was working on the problem and had a solid shot at preventing major catastrophe.
Frightening predictions of power plants going offline for days, erratic traffic lights causing mayhem, food and water supplies being interrupted, even reactors melting down were not uncommon.
As always, there is money to be made by exploiting people's fear. This hysteria was considered an irrelevant side show by most educated people, the way I look at the chem trail people or the way I used to view the anti-vaxxers [1], but the unjustified hysteria was real.
[1] Before I learned their numbers were growing and could threaten herd immunity in some areas.
True. The first image that pops into my head when I hear Y2K is a Simpsons joke where airplanes and satellites start falling from the sky and landing in their backyard:
I was working for Hughes, now Boeing Satellite Systems, for a couple of years doing Y2K remediation. I was writing ground station code that ran on VAX/VMS, basically older hardware and software. So the thought of satellites falling from the sky did occur to me. From what I understand, everything worked out OK. There might be only one instance that we know of where something unanticipated happened when the year rolled over.
This. So many parallels with every "looming disaster" since. The media went into frenzy mode a few times, predicting the End Of The World As We Know It and scaring everyone witless.
And a few people did take it really seriously and spend the Millennium in some remote place where they would stand a chance of surviving the collapse of Western Civilisation.
I was a computer operator in the late 1970s and I used to read ComputerWorld which was the biggest enterprise tech "journal" (actually a newspaper) of the day. In the late 1970s they had a number of articles commenting on the need to fix the Y2K problem. So smart people were planning for it well ahead of time. On the other hand memory and storage were so limited then it was probably hard to get permission to allocate two more bytes for a 4-digit year. But people were definitely thinking about it for a long time before the year 2000.
Ahhh, I remember as a teenager making sure our board of ed systems were Y2K-compliant. Numerous times the principal's staff would evacuate the office to take care of some action going on in the halls of the school, leaving me with access to the entire student records DB and the VP's login and password written on a sticky under the keyboard (not even joking!). In addition, this was a laptop, so RAS details were conveniently saved in Network Neighborhood.
This left me at a crossroads. I thought about writing malware that would randomly raise people's semester grades by 4 points (e.g. a C- would become a B+). I thought about mass-changing grades. I thought about altering a target group of kids I didn't like. All but the last of the scenarios ended with me getting fingered because I was the smart computer kid; if I didn't touch my grades I would have plausible deniability. I wrote the malware. Then I watched Office Space and decided to think about my actions (and promptly forgot, as a horny teenager does). Soooo glad I didn't release it, because years later I went back into the code and found a leftover debug that would have targeted the only 2 kids in the school who had this letter in their last name.
> people literally spent tens of billions of dollars in effort to fix it
The sceptic inside me is curious how much they actually accomplished in their effort to fix it. I mean, yes, they spent billions to get millions of lines of code read, but how many fixes were actually made, and what would the cost have been (compared to those billions of dollars spent) if nothing had been fixed at all?
A good friend of mine was (and still is) a civil/construction engineer. Other than some introduction courses in college, he had never done any programming before. But after losing his job in 1998, he was hired by a major European consulting company to work on the Y2K problem.
With an auditorium full of others who had barely any programming experience, he was given a crash course in Cobol for a month, and then he had to go through thousands of lines of code of bank software that ran on some mainframe and identify all cases where a date was hard-coded as 2 characters.
Extremely tedious work, but there were tons of fixes to be made.
The problem was real, and the billions that were spent on fixing things were necessary.
I’m wondering what happened to the people in meeting rooms who suggested that this could be automated if it needed no deep understanding of the COBOL language or the banking domain..?
EDIT (for clarity): that’s what I would say, and I’m pretty sure many people smarter than I had said it - so I’m just curious what objections made business push forward with the “relatively unskilled” human labour intensive practices instead, because that’s the history we have now.
There were tools available, but it was increasing the risk (in a risk-averse atmosphere) to trust them to catch all cases. Try explaining to your superiors why you were given the task of fixing the issue and instead of completely fixing it you saved some money. Probably unacceptable.
Also, much of the remediation could not be automated anyway (for example you may not be able to assume the century is '19' for a given date field which can be in the future. You may need to do a comparison to a cutoff date, or you may only have to enlarge the field knowing that the value is supplied from elsewhere which will itself get fixed), so it fell back to people looking at the code, understanding it just enough, making the required changes, and planning/executing the testing.
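To make the windowing idea concrete, here is a minimal sketch in Python. The pivot year of 50 is only an illustrative assumption; real remediation projects picked cutoffs field by field, and often chose full field expansion (YY to CCYY) instead:

    # Date "windowing": interpret a two-digit year relative to a pivot instead of
    # assuming the century is always 19. The pivot of 50 is purely illustrative.
    PIVOT = 50

    def expand_year(yy: int) -> int:
        """Map a two-digit year to a four-digit year using a fixed window."""
        if not 0 <= yy <= 99:
            raise ValueError("expected a two-digit year")
        return 2000 + yy if yy < PIVOT else 1900 + yy

    assert expand_year(99) == 1999   # late-century values stay in the 1900s
    assert expand_year(3) == 2003    # small values are treated as post-2000

Windowing buys time rather than fixing the underlying representation, which is exactly why it couldn't be applied blindly: the right pivot, or whether windowing was acceptable at all, depended on what each field actually meant.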
I'm not a COBOL expert, but I suspect you misunderstand both the difficulty of teaching a machine to understand the problem well enough to automate fixes for it and the available computing resources at the time. This was still the era of Windows 95, the classic Mac, and expensive workstations.
I worked for a place that simulated what would have happened with an older device. It would have been pretty catastrophic... easily losses in the billions.
Remember that big old systems were the low hanging fruit in 1970. The business logic was COBOL and assembler, without things like functions. The system may have hundreds of duplicate routines for things like subtracting dates.
If you're asking "was the money well spent?" then I'm sure it was as well spent as usual, i.e. mistakes got made, a percentage was wasted, and some contractors made very good rates. But in the end we got there.
If you're asking "would it be better to have done nothing?" then I would have to say no, of course not. What are you thinking?
>..and what would be the cost (when compared to those billions of dollars spent) if they weren't fixed at all.
I worked on fixing Y2K bugs at the time. It was very methodical, planned, and tested on-site at a space agency. If we hadn't fixed those Y2K issues at the time, weather forecasts would have suffered greatly. The cost of that happening? Hard to calculate, I think.
Lots of people seem to bring this up now. It's been two decades, and it's raised by people who use it as an example in service of their other agendas:
"look at y2k: we spent so much money, and nothing happened!" (so far, so good) "...therefore, if we hadn't, nothing would have happened anyway!!" (uh, based on...?) "...and it was all a massive scam, just like any massive present issue!!!" (ah, there's the underlying reason)
What you're saying here is that there was no crisis.
The fact that people worked on date issues for a decade does not mean the public scare campaign in the late 90s was justified.
You could equally claim airplanes would crash if they weren't refuelled, or people would die if garbage wasn't collected. In those cases, the media is educated enough to know the claims are pointless.
In the case of y2k, they had little understanding of how enterprise IT worked, and lots of businesses who were happy to encourage that ignorance.
And the dotcom bust came when it did in part because the Y2K spending was lifting all boats. A bunch of companies shifted from buying too much to buying not much.
I'm not sure why you're being downvoted. It wasn't really related to the dotcom bust per se. But the falloff in Y2K spend was part of the overall drop in IT spend on consultants, system refreshes, etc. We use dot-bomb as a shorthand but there were clearly a number of things that happened in the same general timeframe that made it something of a perfect storm.