Hacker News
Ask HN: What was being a software developer like about 30 years ago?
297 points by kovac on Oct 31, 2022 | 467 comments
I'm curious what it was like to be a developer 30 years ago compared to now in terms of processes, design principles, work-life balance, and compensation. Are things better now than they were back then?



It was great. Full stop.

A sense of mastery and adventure permeated everything I did. Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything. :-)

Starting in 1986 I worked on bespoke firmware (burned into EPROMs) that ran on bespoke embedded hardware.

Some systems were written entirely in assembly language (8085, 6805) and other systems were written mostly in C (68HC11, 68000). Self taught and written entirely by one person (me).

In retrospect, perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

Bugs in production were exceedingly rare. The relative simplicity of the systems was a huge factor, to be sure, but knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

Schedules were no less stringent than today; there was constant pressure to finish a product that would make or break the company's revenue for the next quarter, or so the company president/CEO repeatedly told me. :-) Nonetheless, this dinosaur would gladly trade today's "modern" development practices for those good ol' days(tm).


> In retrospect, perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

This was it. Even into the 90s you could reasonably "fully understand" what the machine was doing, even with something like Windows 95 and the early internet. That started to fall apart around that time, and now there are so many abstraction layers you have to choose what you specialize in.

And the fact that you couldn't just shit another software update into the update server to be slurped up by all your customers meant you had to actually test things - and you could easily explain to the bosses why testing had to be done, and done right, because the failure would cost millions in new disks being shipped around, etc. Now it's entirely expected to ship software that has significant known or unknown bugs because auto-update will fix it later.


It isn't right to consider that time as a golden age of software reliability. Software wasn't less buggy back then. My clear recollection is that it was all unbelievably buggy by today's standards. However things we take for granted now like crash reporting, emailed bug reports, etc just didn't exist, so a lot of devs just never found out they'd written buggy code and couldn't do anything even if they did. Maybe it felt like the results were reliable but really you were often just in the dark about whether people were experiencing bugs at all. This is the origin of war stories like how Windows 95 would detect and effectively hot-patch SimCity to work around memory corruption bugs in it that didn't show up in Windows 3.1.

Manual testing was no replacement for automated testing even if you had huge QA teams. They could do a good job of finding new bugs and usability issues compared to the devs-only unit testing mentality we tend to have today, but they were often quite poor at preventing regressions because repeating the same things over and over was very boring, and by the time they found the issue you may have been running out of time anyway.

I did some Windows 95 programming and Win3.1 too. Maybe you could fully understand what it was doing if you worked at Microsoft. For the rest of us, these were massive black boxes with essentially zero debugging support. If anything went wrong you got either a crash, or an HRESULT error code which might be in the headers if you're lucky, but luxuries like log files, exceptions, sanity checkers, static analysis tools, useful diagnostic messages etc were just totally absent. Windows programming was (and largely still is) essentially an exercise in constantly guessing why the code you just wrote wasn't working or was just drawing the wrong thing with no visibility into the source code. HTML can be frustratingly similar in some ways - if you do something wrong you just silently get the wrong results a lot of the time. But compared to something more modern like JavaFX/Jetpack Compose it was the dark ages.


I'm reminded of the Windows 95 uptime bug https://news.ycombinator.com/item?id=28340101 that nobody found for years because you simply couldn't keep a Windows system up that long. Something would just crash on you and bluescreen the whole thing, or you needed to touch a mandatory-reboot setting or install some software.


running FF on a windows 11 flagship HPE OMEN gaming laptop right now and this bitch crashes at LEAST once a day.


I get forced restarts on Windows 10 due to .NET updates only. These tend to ensure that applications which ran on the previous CLR cannot run until the shutdown process finishes rebuilding everything, and that work isn't done online.


Reliable and buggy can go together - after all, nobody left their computer running for days on end back then, so you almost always had a "fresh slate" when starting up. And since programs would crash, you were more trained to save things.

The other major aspect was that pre-internet, security was simply not an issue at all; since each machine was local and self-contained there weren't "elite hax0rs" breaking into your box; the most you saw were floppy-copied viruses.


> a lot of devs just never found out they'd written buggy code and couldn't do anything even if they did.

This is undoubtedly true. No doubt there are countless quietly-malfunctioning embedded systems all around the world.

There also exist highly visible embedded systems such as on-air telephone systems used by high-profile talents in major radio markets around the country. In that environment malfunctions rarely go unnoticed. We'd hear about them literally the day of discovery. It's not that there were zero bugs back then, just nothing remotely like the jira-backlog-filling quantities of bugs that seem to be the norm today.


This was what passed for an "AAA" game in 1980

https://en.wikipedia.org/wiki/Ultima_I:_The_First_Age_of_Dar...

it was coded up in about a year by two people who threw in just about every idea they had.


1980 wasn't about 30 years ago, though. 30 years ago is 1992 which is Wolfenstein 3D, Civilization, and Final Fantasy type games. It's on the cusp of games like Warcraft, C&C, Ultima Online, Quake, Diablo, and Everquest. Games that are, more or less, like what we have now but with much much worse graphics.


In 1992 (and for a good part of the 90s) it was still possible and practical to build a small dev team (under 10) and push out incredible titles. IIRC both id Software and the team that worked on Diablo were relatively small.

Nowadays we have to look to indie studios.


You're not going to believe this, but I'm still working on Ultima IV. I come back to it every 3-4 years and spin up a new player and start over. I love it, but never can seem to commit enough time to build up all my virtues.


It was also very unforgiving - one mistake and you were back to rebuilding that virtue.


I recall that on one of my play-throughs as a kid, I got everything done except my characters needed to be level 8. So I un-virtuously tweaked the save file. (I think that was Ultima IV, but it's been a while.)

I also tweaked a later Ultima to not need the floppy disk in the drive. The budget copy protection had added itself to the executable and stored the real start address in a bad sector on disk, so I just patched the real start address back into the EXE header.
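
In modern terms the patch amounts to something like this - a rough sketch with made-up file name and addresses (the real fix was a few bytes in a hex editor), poking the initial CS:IP fields of the DOS MZ header:

  # Hypothetical sketch: write a recovered entry point back into a DOS MZ
  # EXE header. File name and CS:IP values are made up for illustration.
  import struct

  EXE_PATH = "GAME.EXE"               # hypothetical
  REAL_CS, REAL_IP = 0x01A2, 0x0010   # values recovered from the "bad" sector

  with open(EXE_PATH, "r+b") as f:
      header = bytearray(f.read(0x18))
      assert header[0:2] == b"MZ"                    # MZ signature sanity check
      struct.pack_into("<H", header, 0x14, REAL_IP)  # initial IP lives at offset 0x14
      struct.pack_into("<H", header, 0x16, REAL_CS)  # initial CS lives at offset 0x16
      f.seek(0)
      f.write(header)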


I read somewhere that Ultima V is the last Ultima where Lord British did the majority of the programming work. For Ultima VI he was convinced that he needed a full team to get it done.

I still think it should be rather doable (and should be done by any aspiring game programmer) for a one-man team to complete an Ultima V spin-off (same graphics, same complexity, but on modern platforms) nowadays. Modern computers, languages and game engines abstract away a lot of the difficulties.


Completely agreed. The tile-based graphics that pushed the limits of a mid-80s computer (and in some cases required special hardware) can now be done off the cuff with totally naive code in a matter of a couple hours:

https://github.com/mschaef/waka-waka-land

There's lots more room these days for developing the story, etc. if that's the level of production values you wish to achieve.
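
To give a sense of just how naive that code can be, here's a minimal sketch (not the code from the repo above - just an illustration, assuming pygame, with colored squares standing in for real tile art):

  # Hypothetical sketch of a naive tile renderer - enough for mid-80s-style
  # tile graphics on modern hardware. Colored squares stand in for real tiles.
  import random
  import pygame

  TILE, COLS, ROWS = 16, 40, 25                    # 640x400 playfield
  pygame.init()
  screen = pygame.display.set_mode((COLS * TILE, ROWS * TILE))

  # Stand-in "tileset": a few solid-color squares instead of artwork.
  tiles = []
  for color in [(34, 139, 34), (0, 105, 148), (139, 115, 85), (105, 105, 105)]:
      t = pygame.Surface((TILE, TILE))
      t.fill(color)
      tiles.append(t)

  # Random map; a real game would load this from data.
  world = [[random.randrange(len(tiles)) for _ in range(COLS)] for _ in range(ROWS)]

  running = True
  while running:
      for event in pygame.event.get():
          if event.type == pygame.QUIT:
              running = False
      for y, row in enumerate(world):
          for x, tile_id in enumerate(row):
              screen.blit(tiles[tile_id], (x * TILE, y * TILE))
      pygame.display.flip()
  pygame.quit()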


I actually got turned down by Chuckles for my first 'real' programming job ... They wanted someone that actually played the games :-) It was a neat experience interviewing though - I was really surprised they had offices in New Hampshire.


This guy? Chuck "Chuckles" Bueche?

They interview him in the "Apple Time Warp" series. He wrote "Caverns of Callisto" at Origin. The optimizations they came up with to make things work were crazy. (I think they were drawing every third line to speed things up and had self-modifying code.)

https://appletimewarp.libsyn.com/

or youtube

https://www.youtube.com/channel/UC0o94loqgK3CMz7VEkDiIgA/vid...

A good resource on Apple programming 40ish years ago.


I grew up in Manchester, it was years after I left that I realized Origin games had a development office right by the airport.


And now we have AAA games that take 6+ years to make and still ship unfinished and broken. What a weird time we live in.


Ultima I was a phenomenal game, but also incredibly hard by today's standards


Throwing up another Ultima memory...

I've got great memories of Ultima II on C64 and some Apple II as well. It was far more expansive than I, but still relatively fast. I remember when III came out, it was just... comparatively slow as molasses. It was more involved, but to the point where it became a multi day event, and it was too easy to lose interest waiting for things to load. II was a great combination of size/breadth/complexity and speed.

Then I got Bard's Tale... :)


speaking of Ultima www.uooutlands.com/

Been playing this recently. It is Ultima Online, but with all the right choices rather than all the wrong ones the development studio made after 1999. Anyone who enjoys online RPGs should certainly give it a try. The graphics don't do the game justice at all, but the quality of the game makes up for this and then some.


This is why I drifted towards game development for most of my career. Consoles, until the penultimate (antepenultimate?) generation, ran software bare or nearly bare on the host machine.

I also spent time in integrated display controller development and such; it was all very similar.

Nowadays it feels like everything rides on top of some ugly and opaque stack.


For context of what this guy is saying: the modern Xboxes (after the 360) are actually running VMs for each game. This is part of why, despite the hardware being technically (marginally) superior, the Xbox tends to have lower graphical fidelity.


The 360 has a Type 1 (bare metal) hypervisor. So there's not much, if any, performance impact to having it since the software runs natively on the hardware.

Microsoft used a hypervisor primarily for security. They wanted to ensure that only signed code could be executed and wanted to prevent an exploit in a game from allowing the execution of unsigned code with kernel privileges on the system itself.

Every ounce of performance lost to the hypervisor is money Microsoft wasted in hardware costs. So they had an incentive to make the hypervisor as performant as possible.


the 360 has no hypervisor.

The CPU had an emulator that you could run on x86 Windows, but it was not itself a hypervisor.

The hypervisor in the XB1 served a more important purpose: to provide developers a way of shipping the custom SDK to clients, and not forcing them to update it. This was quite important for software stability and in fact we made a few patches to MS's XB1 SDK (Durango) to optimise it for our games.

VM's are VM's, there are performance trade-offs.

I know this because I worked on AAA games before in this area. Do you also work in games, or are you repeating something you think you heard?


I don't work in the industry. But just because you "worked on AAA games" before doesn't make you correct.

This detailed architectural overview of the 360 discusses the hypervisor:

https://www.copetti.org/writings/consoles/xbox-360/

This YouTuber, who is an industry vet, and has done several xbox ports claims the XB360 has a hypervisor:

https://www.youtube.com/watch?v=Vq1lxeg_gNs

And there are entries in the CVE database for the XB360 which describe the ability to run code in "hypervisor mode":

https://www.cvedetails.com/cve/CVE-2007-1221/

This detailed article on the above exploit goes into detail on how the memory model works on the XB360, including how main memory addressing works differently in hypervisor mode than in real mode:

https://www.360-hq.com/article1435.html

That's a whole lot of really smart people discussing a topic that you claim doesn't exist.


Appreciate the detailed reply!

> This detailed architectural overview of the 360 discusses the hypervisor:

> https://www.copetti.org/writings/consoles/xbox-360/

Yes, the 128KB of key storage and W^X. That's not a hypervisor in the sense that the XB1/Hyper-V or VMware have a hypervisor; they shouldn't even share a name, it's not the same thing at all.

It's like saying the JVM is a virtual machine in the same way QEMU is.

The 360 "Hypervisor" is more akin to a software T2 chip than anything that actually virtualises.


I don't think you are showing respect when you simplistically repeat your assertion without effort, after two people expended their precious time to tell you in detail, with examples, that you are wrong. I don't know anything, but a few minutes following the provided links and I find https://cxsecurity.com/issue/WLB-2007030065 which says:

  The Xbox 360 security system is designed around a hypervisor concept. All games and other applications, which must be cryptographically signed with Microsoft's private key, run in non-privileged mode, while only a small hypervisor runs in privileged ("hypervisor") mode. The hypervisor controls access to memory and provides encryption and decryption services.

  The policy implemented in the hypervisor forces all executable code to be read-only and encrypted. Therefore, unprivileged code cannot change executable code. A physical memory attack could modify code; however, code memory is encrypted with a unique per-session key, making meaningful modification of code memory in a broadly distributable fashion difficult. In addition, the stack and heap are always marked as non-executable, and therefore data loaded there can never be jumped to by unpriviledged code.

  Unprivileged code interacts with the hypervisor via the "sc" ("syscall") instruction, which causes the machine to enter hypervisor mode.
You can argue your own definition of what a hypervisor is, but I suspect you won’t get any respect for doing so.


The 360 indeed uses a hypervisor [0], but uses it only for security, to make the app signature verification run at a higher privilege level.

Windows on PCs also runs under hypervisor if you enable some security features (e.g. VBS/HVCI which are on by default since Windows 11 2022 update, or Windows Sandbox, or WDAG) or enable Hyper-V itself (e.g. to use WSL2/Docker).

The performance losses are indeed there, but by purely running the hypervisor you lose just around 1% [1], because the only overhead is added latency due to accessing memory through SLAT and accessing devices through IOMMU...

I'd imagine the XB1 is running with all the security stuff enabled though, which demands additional performance losses [2].

[0]: https://www.engadget.com/2005-11-29-the-hypervisor-and-its-i...

[1]: https://linustechtips.com/topic/1022616-the-real-world-impac...

[2]: https://www.tomshardware.com/news/windows-11-gaming-benchmar...


There's no reason a pass-through GPU configuration in a VM would have lower graphical fidelity.


There is a reason, but it would only harm particularly poorly written games, and even then only by a single-digit percentage.

To exercise that you need a lot of separate memory transfers. Tiny ones. Games tend to run bulky transfers of many megabytes instead.

Memory bandwidth and command pipe should not see an effect even with the minimally increased latency, on any HVM.


Again with the caveat that this is specific to dom0 virtualization taking advantage of full hardware acceleration (VT-d/VT-x, etc.), what you say isn't even necessarily the case.

With modern virtualization tech, the hypervisor mainly sets things up then steps out of the way. It doesn't have to involve itself at all in the servicing of memory requests (or mapped memory requests) because the CPU does the mapping and knows what accesses are allowed or aren't. The overhead you're talking about is basically traversing one extra level in a page table, noticeable only in microbenchmarks when filling the TLB or similar - the kind of micro-regression you might encounter (without any virtualization to speak of) even when going from one generation of CPU architecture to the next.

Theoretically, the only time you’ll have any overhead is on faults (and even then, not all of them).

Of course I guess you could design a game to fault on every memory request or whatever, but that would be a very intentionally contrived scenario (vs just plain “bad” code).


Hello ComputerGuru,

As you may understand: there's more to graphical fidelity than just the GPU itself.

CPU<->GPU bandwidth (and GPU memory bandwidth) are also important.

There is a small but not insignificant overhead to these things with virtualisation: VMs don't come for free.


"Pass through GPU configuration" means that GPU memory is mapped directly into guest address space in hardware.

Bandwidth from a VM partition should be identical to that from the root partition.


I don’t understand what you’re trying to imply here.

Are you seriously suggesting that I chose to downgrade the graphics on the XB1 because I felt like it, and that dozens of other AAA game studios did the same thing?

Our engine was Microsoft-native; by all rights it should have performed much better than on the PS4.

If you’re going to argue you’ll have to do a lot better than that since I have many years of lived experience with these platforms.


OK, you have a technical disagreement. No need to take it personally.

You may be right - you probably have more experience with this particular thing than I do.

I can't answer for the performance of the XB1, but I am curious what % reduction in GPU memory bandwidth you observed due to virtualization.

Did you have a non-virtualized environment on the same hardware to use for comparison?


I didn't take it personally, I just think you're presenting ignorance as fact and it's frustrating.

Especially when seemingly it comes from nowhere and people keep echoing the same thing which I know not to be true.

Look, I know people really love virtualisation (I love it too) but it comes with trade-offs; spreading misinformation only serves to misinform people for.. what, exactly?

I understood the parent's perspective: GPU passthrough (i.e. VT-d & AMD-Vi) does pass PCIe lanes from the CPU to the VM at essentially the same performance. My comment was directly stating that graphical fidelity does not solely depend on the GPU; there are other components at play, such as textures being sent to the GPU driver. Those textures don't just appear out of thin air - they're taken from disk by the CPU and passed to the GPU. (There's more to it, but usually I/O involves the CPU on older generations.)

The problem with VMs is that normal memory accesses take on average a 5% hit, and I/O takes the heaviest hit, at about 15% for disk access and about 8% for network throughput (ballpark numbers, but in line with publicly available information).

It doesn't even matter what the exact numbers are; it should be telling to some degree that the PS4 was native and the XB1 was virtualised, and the XB1 performed worse with a more optimised and gamedev-friendly API (Durango speaks DX11) and with better hardware.

It couldn't be more clear from the outside that the hypervisor was eating some of the performance.


I guess I should clarify that my point was purely in abstract and not specific to the XBox situation.

Of course in reality it depends on the hypervisor and the deployed configuration. Running a database under an ESXi VM with SSDs connected to a passed-through PCIe controller (under x86_64 with hardware-assisted CPU and IO virtualization enabled and correctly activated, interrupts working correctly, etc) gives me performance numbers within the statistical error margin when compared to the same configuration without ESXi in the picture.

I haven’t quantified the GPU performance similarly but others have and the performance hit (again, under different hypervisors) is definitely not what you make it out to be.

My point was that if there’s a specific performance hit, it would be pedantically incorrect to say “virtualizing the GPU is the problem” as compared to saying “the way MS virtualized GPU access caused a noticeable drop in achievable graphics.”


Sorry, I don't think I implied virtualising the GPU is the problem.

I said "the fact that it's a VM has caused performance degradation enough that graphical fidelity was diminished" - this is an important distinction.

To clarify further: the GPU and CPU are a unified package and the request pipeline is also shared, so working overtime to send things to RAM will affect GPU bandwidth; the overhead of non-GPU memory allocations will still affect the GPU because that limited bandwidth is being used.

I never checked whether the GPU bandwidth was constrained by the hypervisor, to be fair, because such a thing was not possible to test; the only corollary is the PS4, which we didn't optimise as much as we did for DX and which ran on slightly less performant hardware.


I always figured texture loading from disk was mostly done speculatively and during the loading screen, but what do I know.

Anyway, a 5% memory bandwidth hit does not sound to me like a huge deal.


Lower graphical fidelity than what? PlayStation?


> Marginally superior to what? PlayStation?

Precisely

Both the original Xbox One and the Xbox One S have a custom, 1.75GHz AMD 8-core CPU, while the Xbox One X bumps that up to a 2.3GHz 8-core chip. The base PS4 CPU remained clocked at 1.6GHz and contains a similar custom AMD 8-core CPU with x86-based architecture, while the PS4 Pro bumps that clock speed up to 2.13GHz.

EDIT: you’ve edited your comment, but also yes.


The CPU isn't particularly relevant is it (although the CPUs in the PS4/XBone generation were exceptionally terrible compared to what was standard on PCs at the time)? Graphical fidelity is going to depend much more on the GPU (although the CPU is going to bottleneck framerate if it's not powerful enough).

In the current generation the Series X has a more powerful GPU than the PS5, which tends to mean a lot of games run at higher resolutions on the system, although there's some games that run slightly better on PS5 (I think the Call of Duty games might be in that category?). And a lot (most?) are basically the same across both systems - probably because devs aren't bothering to tweak cross platform games separately for the two different consoles.


>This was it. Even into the 90s you could reasonably "fully understand" what the machine was doing, even with something like Windows 95 and the early internet. That started to fall apart around that time, and now there are so many abstraction layers you have to choose what you specialize in.

This doesn't really track. 30 years ago computers were, more or less, the same as they are now. The only major addition has been graphics cards. Other than that we've swapped some peripherals. I don't really see how someone could "fully understand" the modem, video drivers, USB controllers, motherboard firmware, processor instruction sets, and the half dozen or so other things that went into a desktop.


This is why you fail. Thirty years ago I could make a wire-wrapped 68000 board that did nothing but play music. CE/CS was different back then. I'd cut pins and solder in chips to twiddle the filters on audio output. You could know the entire process from power-on to running of your computer, and it was easy to change bits even down to the hardware level, like adding a 'no place to put it unless you build it yourself' CPU/MPU/RAM upgrade and making it work. Adjust your NTSC video output? Just cut that resistor in lieu of replacing it with something really high resistance; it'll be better. Let's build our own new high-speed serial port for MIDI. How about a graphics co-processor that only does Mandelbrot calculations - let's build three of them. Only a few of the younger generation comprehend the old ways. And the machines have changed to fewer chips and turned into systems on a chip. It's a bit of a shame.


Where did one acquire your kind of knowledge outside of a university? Were there any books you read or USENET groups you visited to attain it?


You would build a wire wrapped 68000 board in 1992? Isn’t that a tiny bit late to expend that much effort on a 68000?


Not at all. I was still building embedded hardware around 68k 10 years later. There are undoubtedly new products being built around 68k today.

If all you want to do is synthesize music the 68k is perfect.

If you’re taking issue with wire wrap, there just weren’t general purpose dev boards available back then. You were expected to be able to roll your own.


Wire wrap is the most reliable form of construction, used by NASA for many years for this reason - the wrapping of the wire around the square pegs creates a small cold weld at every corner.

Plus, when multilayer boards were not really a thing, wire wrap gave you all the layers you wanted, more or less.


30 years ago was DOS computers - USB certainly wasn't widespread even if it was out, and many of the video drivers at the time were "load palette here, copy memory there" type things.


As mentioned, we didn't have USB controllers until 1996. But even if you included that, which was an order of magnitude more complex than the parallel port, the USB 1.0 spec was only about 100 pages long. And yes, you could reasonably understand what was going on there.


the crazy thing to me is just how many different workflows/UI/UX you need to learn across so many platforms today. AWS, GCP, Azure - you need to learn each of them so deeply in order to be "marketable", and the only way you really learn them is if you happen to work at a company that happens to rely on that platform.

Then there is the low-level iLO bullshit that I've done weeks of training on for HPE, and I have been building and dealing with HPE servers since before they bought Compaq....

And don't even get me started on Sun and SGI... how much brain power was put into understanding those two extinct critters... fuck, even Cray.

there is so much knowledge that has to evaporate in the name of progress....


Yeah, it's definitely great but also terrible that bugs can be patched so easily now.


Just so it's documented: may you plz ELI5 how easy a bug is to patch today? Thanks


When DOOM was released in 1993 it spread like wildfire across bulletin boards and early FTP services. But the vast majority of players got it from a floppy copy at their local computer store - it was shareware, and so copying the floppy was fine. They even encouraged stores to charge for the shareware game, they wanted it distributed as widely as possible.

And if you paid the $40 for the full game, you got floppies mailed to you.

There was no easy way for the company to let you know there was an update available (the early versions had some well-known bugs) so the user would have to go searching for it, or hear a rumor at the store. If you called id, they'd have to mail you a disk with the updated executable on it. This was all confusing, time consuming, and was only for a game.

Things were much worse with operating systems and programs.

Now almost every piece of software is either distributed via an App Store of some sort that has built-in updates, or has a "Check for updates" button in the app itself. Post an updated build and within days a huge percentage of your users will be running the latest software.
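
That "Check for updates" button is itself almost trivially small now. A minimal sketch of the idea (hypothetical endpoint and JSON shape; a real updater would also verify signatures before installing anything):

  # Hypothetical "check for updates" sketch - the endpoint and JSON shape are
  # made up; a real updater would also verify a signature before installing.
  import json
  import urllib.request

  CURRENT_VERSION = (1, 4, 2)
  UPDATE_URL = "https://example.com/myapp/latest.json"   # hypothetical endpoint

  def check_for_update():
      with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
          latest = json.load(resp)          # e.g. {"version": "1.5.0", "url": "..."}
      latest_version = tuple(int(p) for p in latest["version"].split("."))
      if latest_version > CURRENT_VERSION:
          print(f"Update available: {latest['version']} at {latest['url']}")
      else:
          print("You're up to date.")

  if __name__ == "__main__":
      check_for_update()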


This makes me saddest in the game market with day-one patching. I'm old enough to remember bringing a game home, plugging in the cart and playing it, but if I was to do that now with a disc or cart, there is likely a download in my future. At least some of the game publishers will pre-download so I can play when the game is made available, but I miss the days (not the bugs) of instant playing once the game was acquired.


> if I was to do that now with a disc or cart, there is likely a download in my future.

The newest Call of Duty (released this past Friday) came out on disc as well as in the various digital forms. Apparently, the disc only contained ~72MB of data for a game that easily clears 100GB when fully installed.


I miss the days when bugs were extra content. B)


For history: we're at a time where SaaS is ascendant and CI/CD tools are everywhere. This means that to patch a bug, you fix the code, commit it, and then it magically makes its way to production within like an hour, sometimes less. Customers are interacting with your product via a web browser, so they refresh the page and receive the new version of the software. Compared to the times of old, with physical media and software that needed installing, it's ridiculously easier.


grats on this reply... even though the ELI5 still assumes some domain knowledge... but this was a great response, thanks.


It's fascinating to think about a historian from the future, coming along and reading about the day to day lives of a 2020's developer and them just being enamored and asking about the most mundane details of our lives that we never considered or at all document. What colorschemes did they use in VScode? What kinds of keyboards did they use? What's a webpack?


In 2014 it was written that Sergey Brin (Google founder) had a gene predisposing him to a future cancer and that he was funding pharma research around it....

so I posted this to reddit on May 2, 2014....

===

Fri, May 2, 2014, 4:49 PM

to me

In the year 2010, scientists perfected suspended animation through the use of cryogenics for the purpose of surgery. After more than a decade of study and refinement, long term suspended animation became a reality, yet a privilege reserved for only the most wealthy and influential.

The thinking at the time was that only those who showed a global and fundamental contribution to society (while still viewed through the ridiculously tinted lenses of the global elite of the era) were worthy of entering into such state.

The process was both incredibly complex and costly, as each Transport, as they were known, required their own standalone facility to be built around them. Significant resources were put into the development of each facility, as they required completely autonomous support systems to accommodate whatever duration was selected by the Transport.

Standalone, yet fully redundant, power, security and life support systems were essential to the longevity of each facility.

Additionally, it was recognized that monetary resources would be subject to change over time, especially fiat-currency based resources. Thus there was a need to place physical stores of value that would be perceived not to deplete/dilute over time into the facilities for use by the Transport when they resuscitate.

These resources are the most sought after treasure of the new world.

After hundreds of years of human progress, civilization could no longer sustain itself in an organized self-supporting system. Through utter corruption of what some call the human soul, the world has fallen dark. There are very few outposts of safety in the current Trial of Life, as its now known.

Many Transports have been found, resuscitated and exploited already. There are believed to be many, many more, but their locations are both secret and secure. It is akin to your life relying on the discovery of an undisturbed Tomb of a Pharaoh - even though every consciousness on the planet is also seeking the same tomb.

They are the last bastion of hope for they alone have the reserves of precious materials needed to sustain life for the current generation.

Metals, technology (however outdated), medicines, seeds, weapons and minerals are all a part of each Transport 'Crop'.

One find can support a group or community for years alone based on the barter and renewable resource potentials in each Crop.

One Transport, found in 2465 - that of a long-dead nanotech pioneer, purportedly responsible for much of the cybernetic medical capabilities of the 21st century, who sought to cure his genetic predisposition for a certain disease - was so vast that the still-powerful city-state in the western province of North America was able to be founded on it.

The resources of this individual were extraordinary, but his resuscitation, as they all are, was rather gruesome and cold.

The security systems in each Transport Facility are biometric and very complex. They can only be accessed by a living, calm and (relatively) healthy Transport.

If the system, and its control mechanism AI, detect signs of duress, stress or serious injury to the Transport, they go into fail-safe. Which is to say they self-detonate, taking with them all resources, the Transport and the Seekers as well.

There have been many instances of this, such that the art of successful Resuscitation has become an extremely profitable business.

The most active and successful Resuscitation Team (RT) have been the ironically named, Live Well Group.

The most conniving, well practiced and profitable con in the history of mankind.

LWG alone has been responsible for the resuscitation of more than 370 Transports. Their group is currently the most powerful in the world. With their own city-state, established after the Brin case mentioned above, they have a cast of thousands of cons all working to ensure the Transport believes they have been Awakened to a new, advanced, safe world and that they would be allowed to take part in it in a significant way now that they have been Transported.

They are fooled into releasing their resources, then brutally tortured for information about any other Transports or any other knowledge they may possess, which invariably is less than nothing.

It is a hard world out there now, and the LWG's ruthless striving to locate the thousands of other Transport Facilities is both the worst aspect of our modern struggle - yet, ironically, it will serve as the basis of the ongoing endeavor of the species.

There is rumor of a vast facility of resources and Transports in an underground 'CITY' of the most elite Transports ever. A facility supposedly comprised of the 13 most powerful and rich bloodlines of people to have ever existed.

It is not known which continent this facility is on, but I believe it is in Antarctica - fully automated and with the ability to auto-resuscitate at a given time.

This is my mission, this is my life's work. To find and own this facility and crush any and all other groups that oppose me.


Today you can use the internet to patch a bug on a user's computer, and users expect this, and even allow you to patch bugs automatically.

Previously, patching bugs meant paying money for physical media and postage.


I've been trying to teach my young teenage kids about how things work, like, washing machines, cars, etc. One of the things I've learned is that it's a looooot easier to explain 20th century technology than 21st century technology.

Let me give you an example. My father was recently repairing his furnace in his camper, which is still a 20th century technology. He traced the problem to the switch that detects whether or not air is flowing, because furnaces have a safety feature such that if the air isn't flowing, it shuts the furnace off so it doesn't catch on fire. How does this switch work? Does it electronically count revolutions on a fan? Does it have two temperature sensors and then compute whether or not air is flowing by whether their delta is coming down or staying roughly the same temperature? Is it some other magical black box with integrated circuits and sensors and complexity greater than the computer I grew up with?

No. It's really simple. It's a big metal plate that sticks out into the airflow and if the air is moving, closes a switch. Have a look: https://www.walmart.com/ip/Dometic-31094-RV-Furnace-Heater-S... You can look at that thing, and as long as you have a basic understanding of electronics, and the basic understanding of physics one gets from simply living in the real world for a few years, you can see how that works.

I'm not saying this is better than what we have now. 21st century technology exists for a reason. Sometimes it is done well, sometimes it is done poorly, sometimes it is misused and abused, it's complicated. That fan switch has some fundamental issues in its design. It's nice that they are also easy to fix, since it's so simple, but I wouldn't guarantee it's the "best" solution. All I'm saying here is that this 20th century technology is easier to understand.

My car is festooned with complicated sensors and not just one black box, but a large number of black boxes with wires hooked in doing I have no idea what. For the most part, those sensors and black boxes have made cars that drive better, last longer, are net cheaper, and generally better, despite some specific complaints we may have about them, e.g., lacking physical controls. But they are certainly harder to understand than a 20th century car.

Computers are the same way. There is a profound sense in which computers today really aren't that different than a Commodore 64, they just run much faster. There are also profound senses in which that is not true; don't overinterpret that. But ultimately these things accept inputs, turn them into numbers, add and subtract them really quickly in complicated ways, then use those numbers to make pictures so we can interpret them. But I can almost explain to my teens how that worked in the 20th century down to the electronics level. My 21st century explanation involves a lot of handwaving, and I'm pretty sure I could spend literally a full work day giving a spontaneous, off-the-cuff presentation of that classic interview question "what happens when you load a page in the web browser" as it is!


> This was it. Even into the 90s you could reasonably "fully understand" what the machine was doing

That was always an illusion, only possible if you made yourself blind to the hardware side of your system.

https://news.ycombinator.com/item?id=27988103

https://news.ycombinator.com/item?id=21003535


Your habit of citing yourself with the appropriate references has led me from taking your stance as an extremely literal one (“understanding all of the layers; literally”) to actually viewing your point as…very comprehensive and respectful to the history of technology while simultaneously rendering the common trope that you are addressing as just that, a trope.

Thanks, teddyh.


Fully understand and “completely able/worth my time to fix” are not identical. I can understand how an alternator works and still throw it away when it dies rather than rebuild it.


In that case, what is the value proposition of investing the time to learn how an alternator works? It surely has some value, but is it worth the time it takes to know it?

To bring it back to our topic, is it worth it to know, on an electrical level, what your motherboard and CPU is doing? It surely has some value, but is it worth the time to learn it?


You’re just saying you didn’t go to school for science or engineering. Plenty of people program and also understand the physics of a vacuum tube or capacitor. Sometimes we really had to know, when troubleshooting an issue with timing or noise in a particular signal on a PC board or cable.


It was a mix of great and awful.

I wrote tons of assembly and C, burned EPROMs, wrote documentation (nroff, natch), visited technical bookstores every week or two to see what was new (I still miss the Computer Literacy bookstore). You got printouts from a 133 column lineprinter, just like college. Some divisions had email, corporation-wide email was not yet a thing.

No source code control (the one we had at Atari was called "Mike", or you handed your floppy disk of source code to "Rob" if "Mike" was on vacation). Networking was your serial connection to the Vax down in the machine room (it had an autodial modem, usually pegged for usenet traffic and mail).

No multi-monitor systems; frankly anything bigger than 80x25 and you were dreaming. You used Emacs if you were lucky, EDT if you weren't. The I/O system on your computer was a 5MHz or 10MHz bus, if you were one of those fortunate enough to have a personal hard drive. People still smoked inside buildings (ugh).

It got better. AppleTalk wasn't too bad (unless you broke the ring, in which case you were buying your group lunch that day). Laserprinters became common. Source control systems started to become usable. ANSI C and CFront happened, and we had compilers with more than 30 characters of significance in identifiers.

I've built a few nostalgia machines, old PDP-11s and such, and can't spend more than an hour or so in those old environments. I can't imagine writing code under those conditions again, we have it good today.


> No source code control

30 years ago is 1992, we certainly had source control a long time before!

In fact in 1992 Sun Teamware was introduced, so we even had distributed source control, more than a decade before "git invented it".

CVS is from 1986, RCS from 1982 and SCCS from 1972. I used all four of those at various points in history.

> No multi-monitor systems, frankly anything bigger than 80x25 and you were dreaming.

In 1993 (or might've been early 1994) I had two large monitors on my SPARCstation, probably at 1280×1024.


That's like saying that "we" had computers in 1951. The future is already here – it's just not evenly distributed.

Something existing is different from something being in widespread use.

When I was a kid in the 90s, I had a computer in my room that was entirely for my personal use. There was a pretty long stretch of time where most kids I encountered didn't even have access to a shared family PC, and it was much longer before they had a computer of their own.


Had a Kaypro back in '82 that I used to create a neat umpire for a board war game. It had a markup language and could run things that let me get on ARPANET and run Kermit. Lots of time has passed, and programs used to be way more "efficient". And the workstations and mini-supers that followed shortly had great graphics; it just wasn't a card so much as a system - SGIs and specialized graphics hardware such as Adage and the stuff from E&S. Lots happened before PCs.


I'm certainly not saying that nothing happened before PCs, only that when talking about the past, one cannot say "we had X" based simply on whether X existed somewhere in the world, but one must consider also how widespread the usage of X was at the time.


There were gobs of Suns and SGIs in the 80s, just not at home. A whole lot of Unix work was done before that on PDP-11s and VAXen. Had to dial in or stay late to hack :-).


Indeed, I'm not disputing that.

However, you still need to mind the context. For instance, there existed computers in 1952. Saying "we had computers in 1952" is technically right, but very few institutions had access to one. Most people learning to program a computer in 1952 wouldn't actually have regular access to one; they'd do it on paper, and some of their programs might actually get shipped off to be run. So it'd even be entirely unreasonable to say "We, the people learning to program in 1952, had computers"; one might say "We, who were learning to program in 1952, had occasional opportunity to have our programs run on a computer".

Yes, there was lots of nice hardware in the 80s, and LOTS of people working professionally in the field would be using something cheaper and/or older. In the context of my original post, I took issue with the OP writing that "we had version control": sure, version control existed, but it was not so widely used throughout the industry that it's reasonable to say we had it - some lucky few did.


The topic of the thread was software developer experience as a professional career, not at home.

Sure, in the early 90s I didn't have multi-CPU multi-monitor workstations at home; that was a $20K+ setup at work.

But for work at work, that was very common.


Maybe GP was in a less developed or wealthy area than you.

Often when talking to Americans about the 90s they're surprised, partly because tech was available here later and partly because my family just didn't have enough money.


Dude is literally talking about Atari. It’s surprising they didn’t have better source control by 1992; Apple certainly had centralized source control and source databases by that point. But Atari was basically out of gas by then.


>(I still miss the Computer Literacy bookstore)

I used to drive over Highway 17 from Santa Cruz just to visit the Computer Literacy store on N. First Street, near the San Jose airport. (The one on the Apple campus in Cupertino was good, too.)

Now, all of them—CL, Stacy's Books, Digital Guru—gone. Thanks, everyone who browsed in stores, then bought on Amazon to save a few bucks.


Won’t defend Amazon generally, but you need to blame private equity and skyrocketing real estate prices for the end of brick and mortar bookstores. And, for completeness, if we’re saying goodbye to great Silicon Valley bookstores of yore: A Clean Well Lighted Place For Books in Cupertino and the BookBuyers in Mountain View were also around 30 years ago, although they were more general. And strip mall ones like Waldenbooks still existed too.


Agree with the poster. Much better IMHO and more enjoyable back then.

Because of the software distribution model back then there was a real effort to produce a quality product. These days, not so much. Users are more like beta testers now. Apps get deployed with a keystroke. The constant UI changes for apps (Zoom comes to mind) are difficult for users to keep up with.

The complexity is way way higher today. It wasn't difficult to have a complete handle on the entire system back then.

Software developers were valued more highly. The machines lacked speed and resources - it took more skill/effort to get performance out of them. Not so much of an issue today.

Still a good job, but I would likely seek something different if I were starting out today.


> Still a good job, but I would likely seek something different if I were starting out today

I'm only 6 years in, and I am starting to feel this.

I went into computer science because I knew, at some level, it was something I always wanted to do. I've always been fascinated with technology ever since I was a child -- how things work, why things work, etc.

While studying computer science at my average state school, I met a few others who were a lot like me. We'd always talk about this cool new technology, work on things together, etc. There was a real passion for the craft, in a sense. It felt similar to my time studying music with my peers.

Perhaps, in some naive way, I thought the work world would be a lot like that too. And of course, this is only my experiences so far, but I have found my peers to be significantly different.

People I work with do not seem to care about technology, programming, etc. They care about dollar signs, promotions, and getting things done as quickly as possible (faster != better quality). Sure, those three things are important to varying degrees, but it's not why I chose computer science, and I struggle to connect with those people. I've basically lost my passion for programming because of it (though that is not the entire reason -- burnout and whatnot have contributed significantly).

I'm by no means a savant nor would I even consider myself that talented, but I used to have a passion for programming and that made all the "trips" and "falls" while learning worth it in the end.

I tell people I feel like I deeply studied many of the ins and outs of photography only to take school pictures all day.


Don't get me wrong there are still incredible opportunities out there. IoT is starting to pick up steam. Individuals that really like knowing what the metal is doing have many green fields to settle. You can get prototype boards designed and delivered for prices that an individual can afford. That was not possible 30 years ago. If you can find areas that cross disciplines things get more interesting.

WebStuff is dead IMHO. It is primarily advertising and eyeballs - yawn. If I see one more JS framework I'll puke. We have so many different programming languages it is difficult to get a team to agree on which one to use. :) Don't get me started on databases. I have apps that use 3 or 4 just because the engineers like learning new things. It is a mess.


Better workplaces exist! Don't settle for one that saps your will to live.


> A sense of mastery and adventure permeated everything I did.

How much of that is a function of age? It is hard to separate that from the current environment.

Personally, I don't feel as inspired by the raw elements of computing like I once did, but it is probably more about me wanting a new domain to explore than something systemic. Or at least, it is healthier to believe that.

> knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

The notion of Internet Time, where you're continuously shipping, has certainly changed how we view the development process. I'd argue it is mostly harmful, even.

> perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

I think this is the crux of it: more responsibility, more ownership, fewer software commoditization forces (frameworks), less emphasis on putting as many devs on one project as possible because all the incentives tilt toward more headcount.


Yes, indeed, it could be the Dunning-Kruger effect.


There wasn't HN, so no distraction to digress to every now and then.

I second this - systems were small and most people could wrap their brains around them. Constant pressure existed, and there wasn't Google & SO & other blogs to search for solutions. You had to discover things by yourself. Language and API manuals weighed quite a bit; just moving them around the office was somewhat decent exercise.

There wasn’t as much build vs buy discussion. If it was simple enough you just built it. I spent my days & evenings coding and my nights partying. WFH didn’t exist so, if you were on-call you were at work. When you were done you went home.

My experience from 25 years ago.


I actually used to do 'on call' by having a VT100 at the head of my bed, and I would roll over every couple hours and check on things over a 9600 baud encrypted modem that cost several thousand dollars.

The only time I ever had to get up in the middle of the night and walk to the lab was the Morris worm. I remember being so grateful that someone brought me coffee at 7.


I have one word for you: "Usenet".


That was there. However, where I started 25 years back, they reserved Internet access for only privileged senior and staff engineers. I was a lowly code worm: no Internet, no Usenet.


A lot of our modern software practices have introduced layers of complexity onto systems that are very simple at a fundamental level. When you peel back the buzzword technologies you will find text streams, databases, and REST at the bottom layer.

It's a self fulfilling cycle. Increased complexity reduces reliability and requires more headcount. Increasing headcount advances careers. More headcount and lower reliability justifies the investment in more layers of complicated technologies to 'solve' the 'legacy tech' problems.


> A sense of mastery and adventure permeated everything I did.

My experience too. I did embedded systems that I wrote the whole software stack for: OS, networking, device drivers, application software, etc.

> Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything.

These days programming is more trying to understand the badly-written documentation of the libraries you're using.


I'm younger than you, but one of my hobbies is messing around with old video game systems and arcade hardware.

You're absolutely right - there's something almost magical in the elegant simplicity of those old computing systems.


> A sense of mastery and adventure permeated everything I did. Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything. :-)

Are you me? ;) I feel like this all the time now. I also started in embedded dev around '86.

> Nonetheless, this dinosaur would gladly trade today's "modern" development practices for those good ol' days(tm).

I wouldn't want to give up git and various testing frameworks. Also modern IDEs like VSCode are pretty nice and I'd be hesitant to give those up (VSCode being able to ssh into a remote embedded system and edit & debug code there is really helpful, for example).


And it had its downsides too:

- Developing on DOS with non-networked machines. (OK, one job was on a PDP-11/23.)
- Subversion (IIRC) for version control via floppy - barely manageable for a two-person team.
- No Internet. Want to research something? Buy a book.
- Did we have free S/W? Not like today. Want to learn C/C++? Buy a compiler. I wanted to learn C++ and wound up buying OS/2 because it was bundled with IBM's C++ compiler. Cost a bit less than $300 at the time. The alternative was to spend over $500 for the C++ compiler that SCO sold for their UNIX variant.
- Want to buy a computer? My first was $1300. That got me a Heathkit H-8 (8080 with 64 KB RAM), an H19 (serial terminal that could do up to 19.2 Kbaud) and a floppy disk drive that could hold (IIRC) 92KB of data. It was reduced/on sale and included a Fortran compiler and macro-assembler. Woo!

The systems we produced were simpler, to be sure, but so were the tools. (Embedded systems here too.)


Yeah, I am almost identical: lots of 6805, floating-point routines and bit-banging RS-232, all in much less than 2K of code memory, making functional products.

Things like basketball scoreboards, or tractor spray controllers to make application of herbicide uniform regardless of speed. Made in a small suburban factory in batches of a hundred or so, by half a dozen to a dozen "unskilled" young ladies, who were actually quite skilled.

No internet, the odd book and magazines, rest of it, work it out yourself.

In those days it was still acceptable, if not mandatory to use whatever trick you could come up with to save some memory.

Direct readability didn't matter much, though we always took great pains in the comments for the non-obvious, including unspecified addressing modes and the like.

This was around the time the very first blue LEDS came out.

When the web came along, and all the frameworks etc, it just never felt right to be relying on arbitrary code someone else wrote and you did not know the pedigree of.

Or had at least paid for so that you had someone to hassle if it was not doing what you expected and had some sort of warranty.

But there was also a lot of closed source, and libraries you paid for if you wanted to rely on someone else's code and needed to save time or do something special - an awful lot compared to today.

Microsoft C was something like $3000 (maybe $5k, can't remember exactly) from memory, at a time when that would buy a decent second-hand car and a young engineer might be getting 20-25k a year tops (AUD).

Turbo C was a total breakthrough, and the 286 was the PC of choice, with a 20MB hard drive, with the Compaq 386-20 just around the corner.

Still, I wouldn't go back when I look at my current 11th Gen Intel CPU with 32GB RAM, 2 x 1TB SSDs and a 1080Ti graphics card with multiple 55-inch 4K monitors - not even dreamable at the time.


Don't forget the community. It was very much the case that you could look at an IETF draft or random academic paper and mail the authors, and they would almost certainly be tickled that someone cared, consider your input, and write you back.

Just imagine an internet pre-immigration-lawyer, where the only mail you ever got was from authentic individuals and there were no advertisements anywhere.

The only thing that was strictly worse was that machines were really expensive. It wasn't at all common to be self-funded.


> knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

> Schedules were no less stringent than today;

So … how did that work, then? I know things aren't done, and almost certainly have bugs, but it's that stringent schedule and the ever-present PM attitude of "is it hobbling along? Good enough, push it, next task", never connecting the dots to "why is prod always on fire?", that causes the never-ending stream of bugs.


With no PMs, you dealt directly with the boss and managed your own tasks, so you had a hard deadline, showed demos, and once it was done did support/training. It was waterfall, so not finishing on time meant removing features, and finishing early meant adding additional features if you had time. Everything was prod. You needed to fix showstopper bugs/crashes, but other bugs could be harmless (spelling, for example) or situational and complex. You lived with them, because bugs were part of the OS or programming language or memory driver experience at the time.


As my old boss once said (about 30 years ago actually!) when complaining about some product or the other "this happens because somewhere, an engineer said, 'fuck it, it's good enough to ship'."


I wonder how much of this is due to getting old vs actual complexity.

When I started I was literally memorising the language of the day and I definitely mastered it. Code was flowing on the screen without interruption.

Nowadays I just get stuff done; I know the concepts are similar, I just need to find the specifics and I'm off to implement. It's more akin to a broken faucet and it definitely affects my perception of modern development.


Thanks. I'd forgotten how much the 68705 twisted my mind.

And how much I love the 68HC11 - especially the 68HC811E2FN, gotta get those extra pins and storage! I never have seen the G or K (?) variant IRL (16K/24K EPROM respectively and 1MB address space on the latter). Between the 68HC11 and the 65C816, gads I love all the addressing modes.

Being able to bum the code using zero-page or indirectly indexed or indexed indirectly... Slightly more fun than nethack.


https://en.wikipedia.org/wiki/Rosy_retrospection

I am sure everything was great back then, but I've been coding for 20 years, and a lot of problems of different types (including recurring bugs) have been solved with better tooling, frameworks and tech overall. I don't miss too much.


Exactly my experience coming out of school in 1986. Only for me it was microcontrollers (Intel 8096 family).

Thanks for bringing back some great memories!


I miss everything being a 'new challenge'... Outside of accounting systems - pretty much everything was new ground, greenfield, and usually - fairly interesting :-)


I started my first dev job in early 1997 which is more like 25 than 30 years ago but I think the milieu was similar.

The internet was mostly irrelevant to the line of work I was involved in although it was starting to have impact. We had one ISDN 2x line for the entire office. It was set up to open on demand and time out a few minutes later as it was billed by the minute.

I worked on an OpenGL desktop application for geoscience data visualization running on Irix and Solaris workstations.

The work life balance was great as the hardware limitations prevented any work from home. Once out of the office I was able to go back to my family and my hobbies.

Processes were much lighter with far less security paranoia as cyber attacks weren't a thing. Biggest IT risk was someone installing a virus on a computer from the disk they brought to install Doom shareware.

The small company I worked for did not have the army of product managers, project managers or any similar buffoonery. The geologists told us developers what they needed, we built it and asked if they liked the UI. If they didn't we'd tweak it and run it by them again until they liked it.

In terms of software design, OO and Gang of Four Patterns ruled the day. Everyone had that book on their desks to accompany their copies of Effective C++ and More Effective C++. We took the GoF a little too seriously.

Compensation was worse for me though some of that is a function of my being much more advanced in my career. These days I make about 10x what I made then (not adjusted for inflation). That said, I led a happier life then. Not without anxiety to which I'm very prone but happier.


Effective C++ was an amazing book. I bought copies for the entire team out of my own pocket. The Gang of Four on the other hand was an unfortunate turn for the industry. As you say we took it too seriously. In practice very few projects can benefit from the "Factory pattern", but I've seen it used in way too many projects to the detriment of readability. I worked in one source code base where you had to invoke 4 different factories spread across many different source files just to allocate one object.


> As you say we took it too seriously.

The real problem is that many people didn't actually read the book or, if they did, they only took part of it seriously.

Each pattern chapter has a pretty long section that details when you should and should not use the pattern. The authors are very clear about understanding the context and not mis-applying patterns.

But once it became popular (which happened because these patterns are quite useful), it got cargo culted and people started over-applying them because it sent a social signal that, "Hey, I must be a good developer because I know all these patterns."

The software engineering world is a much better one today because of that book now that the pendulum has swung back some from the overshoot.


It's amazing how many times I saw the Singleton pattern between 2000 - 2012 or so, and in almost every case, it degenerated into a global variable that was used by everything in a component or system.

It would have been more apt to name it the Simpleton pattern, after most of its practitioners.

This stuff started to go away with modern DI frameworks. In fact, I don't really see much of the GoF patterns anymore, particularly ones for managing the order of instantiation of objects. Everything in the C# world has been abstracted/libraried/APIed away. But I wouldn't be surprised if GoF patterns are still prevalent in C/C++/Smalltalk code.
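For anyone who never had to live with it, here's a minimal sketch - TypeScript purely for brevity, class and function names invented - of how the classic Singleton degenerates into a global, versus plain constructor injection, which is roughly what DI containers automate:

    // Classic GoF Singleton: one static instance, reachable from anywhere.
    class GlobalLogger {
      private static instance: GlobalLogger | null = null;
      private constructor() {}
      static getInstance(): GlobalLogger {
        if (GlobalLogger.instance === null) GlobalLogger.instance = new GlobalLogger();
        return GlobalLogger.instance;
      }
      log(msg: string): void { console.log(msg); }
    }

    // Anything, anywhere, can reach it - which is exactly a global variable:
    function deepInsideSomeModule(): void {
      GlobalLogger.getInstance().log("good luck swapping this out in a test");
    }

    // What constructor injection looks like instead:
    interface Logger { log(msg: string): void; }

    class OrderService {
      constructor(private readonly logger: Logger) {}
      placeOrder(id: string): void { this.logger.log(`placing order ${id}`); }
    }

    // The composition root decides which instance is used, once:
    const service = new OrderService({ log: (m) => console.log(m) });
    service.placeOrder("42");

That second shape is roughly what a DI container automates at scale: the object graph gets built once at the composition root instead of every corner of the code reaching for a global.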


I am a newly employed engineer, and as of today I am assigned to learn design patterns (the book and all). Needless to say, I am very intrigued. Could you expand on what you mean by too far, beyond over-applying the patterns?


Don't listen too closely to the issues people have with GoF and Design Patterns. Yes, they have their issues and an over-reliance is an issue. However as a junior engineer you should learn these things and come to this realization (or not) yourself! Also if your company does use these things, code standardization across the org is more important than the downfalls of overly complex patterns.

I read these arguments instead of focusing on learning patterns and it just caused grief until I decided to learn the patterns.


> However as a junior engineer you should learn these things and come to this realization (or not) yourself!

This is such great advice. I wonder if there's even something in there about how learning through experience why something doesn't work is as useful as learning it in the first place.

I do think we throw the baby out with the bath water with many ideas, even though we pick up useful skills subconsciously along the way.


Thank you for the response.

Yes, I intend to learn and integrate design patterns and OO programming in general so that I can gain some confidence, and maybe later I can finally understand why my software development professors hated this so much and taught us Haskell and Clojure instead :-)


If you're anything like me, given enough time you wonder if all the 'fluff' of OO is necessary as you seem to be writing code to satisfy the programming style rather than the domain problem. You'll then try FP and find it has its own pitfalls - especially around how complex the code can get if you have a load of smart Haskell engineers - and suddenly they have the same problems ('fluff'). Apparently at some point I'll have a similar move to Lisp and discover how complex the code base can be with multiple engineers (I'm 8 or 9 years into this engineering journey myself!).

My current goal with software is to write it as simply as possible, so that a junior developer with 6 months' experience could read my code and know how to modify it.


Agree with the GP. In a sense even if design patterns are not such a great idea, there's so much code written with those patterns in mind (and classes/variables named accordingly) that it's beneficial to understand at least briefly what the names mean.

(That said, quoting Wikipedia, which I agree with also: "A primary criticism of Design Patterns is that its patterns are simply workarounds for missing features in C++". In particular, these days with more modern languages [and also the modernization of C++] some of the workarounds aren't that important any more)

As for why your professors prefer Haskell and Clojure... for some reason functional programming aligns with the way the stereotypical academia type person thinks. In practice, you should be using the best tool for the task, and learning various aspects of software engineering (as opposed to taking a side) should help you in the long run.


What often happens is you never get to "code standardization across the org" because technology changes too fast. But you still have to deal with overly complex patterns, varying and mis-applied patterns, etc.


Oh I 100% agree but as a junior engineer you're not going to be able to change that, if you can change it you probably won't change it for the better, and using HN comments to fuel debates over long-standing patterns will just cause resentment. These are totally valid opinions to have once you find them out for yourself, IMO.


Design patterns do a good job of capturing some design decisions you'll need to make in your career. They represent a level of architectural knowledge that is often poorly captured and communicated in our industry (slightly above language features, and below whole frameworks). Many people treated the book (either in good faith over-enthusiasm, or as a bad faith strawman) as a repository of 'good' code to cut and paste. Some of those people would end up working in a functional language and claiming they don't need design patterns because they're the same as having first class functions. This is just mistaking the implementation for the design motivation. And even then, tough luck, however clever your language. Monads? A design pattern.

So, I will stress again: design patterns represent decisions you can make about your code, in a particular context. If you want to be able to supply or accept a different algorithm to make a decision, that's the strategy pattern. Maybe it's an object with an interface, maybe it's a function callback. The important bit is you decided to let users of your API supply their own policy to make a decision (instead of you just asking for a massive dictionary called 'options' or whatever). If you want to ensure that all calls to one subsystem happen to a single instance, that's the singleton pattern. Whether you enforce that with static calls or in your inversion of control container, you're still making the decision to instantiate it once and not have every callee set up its own version taking up its own resources (or maybe you are, it's a decision after all).
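To make that concrete, here's a tiny sketch (TypeScript for brevity, names invented) of the same strategy decision expressed both ways - as an object with an interface and as a function callback:

    // Strategy as an object with an interface:
    interface PricingStrategy {
      price(baseCents: number): number;
    }

    class HolidayDiscount implements PricingStrategy {
      price(baseCents: number): number { return Math.round(baseCents * 0.8); }
    }

    class Checkout {
      constructor(private readonly strategy: PricingStrategy) {}
      total(baseCents: number): number { return this.strategy.price(baseCents); }
    }

    // Exactly the same design decision as a plain function callback:
    function checkoutTotal(baseCents: number, price: (cents: number) => number): number {
      return price(baseCents);
    }

    // Either way, the caller supplies the pricing policy:
    const viaInterface = new Checkout(new HolidayDiscount()).total(1000);     // 800
    const viaCallback = checkoutTotal(1000, (cents) => Math.round(cents * 0.8)); // 800

The implementation details differ, but the decision - let the caller supply the algorithm - is the same in both.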

I get somewhat agitated about this stuff, because people are raised to be very skeptical of design patterns. This means they go through their career building up all sorts of design knowledge, but rarely naming and sharing it at a useful level of granularity. That's a huge waste! But the saddest thing about the whole conversation was that the decision _not_ to use a particular design pattern is just as valid as the one to use it, and _even then_ it's still a superior approach because you can be explicit about what you're not doing in your code and why.

Anyway, good luck in your career!


> working in a functional language and claiming they don't need design patterns because they're the same as having first class functions

The point really was that you need different design patterns in a functional language, and most of the GoF design patterns are useless in a functional language, as they either deal with state, or they deal with something that had some better solution in a functional language (e.g. through algebraic datatypes, which were built-in).

So if you amend "we don't need design patterns" to "we don't need most of the GoF design patterns", it's actually a true statement.
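To make that concrete, a rough sketch (TypeScript discriminated unions standing in for proper algebraic datatypes; the little expression type is invented for illustration): where classic OO might grow a class hierarchy plus a Visitor, a sum type plus an exhaustive switch handles it directly:

    // A sum type (discriminated union) for a tiny expression language.
    type Expr =
      | { kind: "num"; value: number }
      | { kind: "add"; left: Expr; right: Expr }
      | { kind: "neg"; inner: Expr };

    // "Pattern matching" via an exhaustive switch - no Visitor classes needed.
    function evaluate(e: Expr): number {
      switch (e.kind) {
        case "num": return e.value;
        case "add": return evaluate(e.left) + evaluate(e.right);
        case "neg": return -evaluate(e.inner);
      }
    }

    const result = evaluate({
      kind: "add",
      left: { kind: "num", value: 2 },
      right: { kind: "neg", inner: { kind: "num", value: 5 } },
    }); // -3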

> Monads? A design pattern.

Exactly.

And now the pendulum has swung back, and instead of providing primitive language features that would make using the Monad design patterns easy, we have half-assed async/await implementations in lots of imperative languages, just because people didn't realize async/await is just a particular use of the Monad design pattern.
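A rough illustration in TypeScript (with the usual caveat that Promise isn't a strictly law-abiding monad because of its auto-flattening): async/await is essentially notation for chaining a bind-like operation, here .then:

    // With async/await: reads like straight-line code.
    async function totalWithAwait(fetchPrice: (sku: string) => Promise<number>): Promise<number> {
      const a = await fetchPrice("apple");
      const b = await fetchPrice("banana");
      return a + b;
    }

    // The same computation as explicit .then chaining - the monadic "bind" shape:
    function totalWithThen(fetchPrice: (sku: string) => Promise<number>): Promise<number> {
      return fetchPrice("apple").then((a) =>
        fetchPrice("banana").then((b) => a + b)
      );
    }

Same computation, same sequencing decision - the await version is just sugar over the nested .then version.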

> This means they go through their career building up all sorts of design knowledge, but rarely naming and sharing it at a useful level of granularity.

Which is really sad, because the GoF book really emphasized this point.

But for some reason programmers seem to have a desire to turn everything into some kind of cult...


The patterns movement, for more context, arose out of the work of the architect Christopher Alexander, who explored the concept of patterns in buildings like “courtyards” or “bay windows”. The problem with the GoF ones, as they’ve been applied to software, is an overemphasis on applying them for their own sake rather than fitting the problem at hand - imagine if every window in a house was a bay window. There are a lot of software projects that end up like that and turn people off OOP in general.


Absolutely learn the patterns. You will encounter places to use them. The book isn't an instruction manual on how to write software. It is a "here are some useful patterns that occur occasionally during development". Having a common language to talk about such things is useful.

It is very easy to over-apply patterns at the cost of readability and maintainability. Realize that your code is more likely to be rewritten before the features provided by your application of patterns are ever used.


Also good to know these patterns if you're dealing with a lot of legacy code, particularly 199x - 201x code that implemented a lot of these patterns.

Some of them are straightforward (factory, adapter). Some of them are almost never used (flyweight). Some are more particular to a programming language; you might see a lot of visitor patterns in C++ code, for instance, but IIRC, that wouldn't come up in Smalltalk because it supported paradigms like double dispatch out of the box.


I will comment: when they came out the idea of patterns captivated a lot of us and it became an article of faith that all code would be designed around a particular pattern.

It's still a great idea to use patterns... but I think people have come to realise that sometimes they over complicate things and maybe they don't always fit the task at hand. If that's what you are finding then maybe don't use a pattern and just code something up.

They are a useful and powerful tool, but not a panacea.


It's that: over-applying patterns in areas of the code that are unlikely to need the extensibility those patterns afford. The cost of using the patterns is that they add a level of indirection, which later costs you and others some extra cognitive load. By and large, though, the GoF patterns are relevant today, and when applied judiciously they do help to organize your code.


Have these OO patterns become less relevant, or is it just that they were absolutely standard 20-30 years ago, so they only seem old and less relevant relative to their previous dominance?


A lot of it is that modern frameworks include a lot of those behaviors that required you to manually code the patterns back then. E.g., I can create an ObservableCollection in C# with a single line of code, but in 1996 C++ I'd have to go to the trouble of building out an Observer pattern, and it still wouldn't have all the features that IObservable does.
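For anyone who never had to hand-roll it, here's a stripped-down sketch of that Observer boilerplate - in TypeScript rather than period C++, with invented names - which is what the framework one-liner replaces:

    interface Observer<T> {
      update(item: T): void;
    }

    // The "subject": keeps its own list of observers and notifies them by hand.
    class ObservableList<T> {
      private readonly items: T[] = [];
      private readonly observers: Observer<T>[] = [];

      subscribe(o: Observer<T>): void { this.observers.push(o); }

      add(item: T): void {
        this.items.push(item);
        for (const o of this.observers) o.update(item); // manual notification
      }
    }

    // Usage: wire up an observer explicitly.
    const list = new ObservableList<string>();
    list.subscribe({ update: (s) => console.log(`added: ${s}`) });
    list.add("hello");

All that subscribe/notify bookkeeping is what you'd have written by hand back then, and what something like ObservableCollection/IObservable now gives you out of the box.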


HeyLaughingBoy is right about patterns being built into frameworks we use today (can’t reply to that comment because it’s nested too deeply).

Rails is an example. I’ve seen a number of talks and articles by DHH that emphasize patterns and talking with people who wrote patterns. Rails built those in (like “model view controller”).

Libraries and frameworks weren’t publicly available 30 years ago. Certainly not for free. The patterns are still useful, it’s just that a library or framework is often more efficient than reimplementing a pattern from scratch.


> In practice very few projects can benefit from the "Factory pattern"

The factory pattern in C#:

    public IMeasuringDevice CreateMeasuringDevice(Func<IUnitConverter> unitConverterFactory)
In TypeScript:

    function createMeasuringDevice(unitConverterFactory: () => UnitConverter): MeasuringDevice
Very few projects can benefit from this!?
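And a hypothetical use of the TypeScript signature above (the interface bodies here are invented just to make it runnable), showing why you'd pass a factory rather than an instance - the device can create a fresh converter whenever it needs one:

    interface UnitConverter { toCelsius(rawReading: number): number; }
    interface MeasuringDevice { read(): number; }

    // Hypothetical implementation of the signature above: the factory is invoked
    // per reading, so each measurement can get its own (possibly stateful) converter.
    function createMeasuringDevice(unitConverterFactory: () => UnitConverter): MeasuringDevice {
      return {
        read(): number {
          const converter = unitConverterFactory();
          const raw = 42; // stand-in for a real sensor reading
          return converter.toCelsius(raw);
        },
      };
    }

    const device = createMeasuringDevice(() => ({ toCelsius: (raw) => ((raw - 32) * 5) / 9 }));
    const celsius = device.read();

Which is the point: a "factory" can be as small as a function you pass in.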


Would you say the GoF are more descriptive than prescriptive?

That is, not “do these to write good code” but “well written code looks like this”


It is meta-prescriptive.

It doesn't say, "Do this to make your code better." It says, "If you have this specific problem under these constraints, then this can help."


Definitely yes. I'd even go as far as to say they're just exemplary, meaning "well written code can look (for example) like this". But since they became "industry standards", they help your code be understood (either by other people, or by you when you eventually forget how/why you wrote it that way), which helps speed up code review and makes it easier to maintain / refactor old code...


> The internet was mostly irrelevant to the line of work I was involved in although it was starting to have impact. We had one ISDN 2x line for the entire office. It was set up to open on demand and time out a few minutes later as it was billed by the minute.

An early gig I had in '97 was building an internal corp intranet for a prototyping shop. There were around 50-60 folks there - probably 20 "upstairs" doing the office/business work. I was upstairs. I was instructed to build this in FrontPage. Didn't want to (was already doing some decent PHP on the side) but... hey... the IT guy knew best.

Asked for some books on FP. Nope - denied. So I spent time surfing through a lot of MS docs (they had a moderate amount of online docs for FP, seemingly) and a lot of newsgroups. I was pulled aside after a while and told I was using too much bandwidth. The entire building had - as you had - a double ISDN line, a whopping 128k shared between 20+ people. I was using "too much" and this was deemed "wrong". I pointed out that they decided on the tool, which wasn't a great fit for the task, and then refused to provide any support (books/etc). I left soon after. They were looking for a way to get me out - I think they realized an intranet wasn't really something they could pull off (certainly not in FP) but didn't want to "fire" me specifically, as that wasn't a good look. Was there all of... 3 months IIRC. Felt like an eternity.

Working in software in the 90s - a bookstore with good tech books became invaluable, as well as newsgroups. No google, no stackoverflow, often very slow internet, or... none sometimes.


For some things, it was possible to substitute grit, experimentation, practice and ultimately mastery.

But for others, especially being closer to hardware, a good book was necessary. These days, it still might be.


> substitute grit, experimentation

And.. there were far fewer distractions. Waiting for compiling? Maybe you could play solitaire, but there were fewer 'rabbit holes' to go down because... you likely weren't on a network. Even by mid 90s, you weren't easily 2 seconds away from any distraction you wanted, even if you were 'online'.


Similarly I started in 2000, just as internet applications were starting to become a thing. I could list a whole load of defunct tech stacks: WebSphere, iPlanet, NSAPI, Zeus Web Server (where I worked for about a year), Apache mod_perl, Delphi etc. And the undead tech stack: MFC.

Compensation: well, this is the UK so it's never been anywhere near US levels, but it was certainly competitive with other white-collar jobs, and there was huge spike briefly around 2001 until the "dotcom boom" burst and a whole load of us were laid off.

Tooling: well, in the late 90s I got a copy of Visual Studio. I still have Visual Studio open today. It's still a slow but effective monolith.

The big difference is version control: not only no git, but no svn. I did my undergraduate work in CVS, and was briefly exposed to SourceSafe (in the way that one is exposed to a toxin).

Most of the computers we used back in 2000 were less powerful than an RPi4. All available computers 30 years ago would be outclassed by a Pi, and the "supercomputers" of that day would be outclassed by a single modern GPU. This .. makes less difference than you'd expect to application interactive performance, unless you're rendering 3D worlds.

We ran a university-wide proto-social-network (vaguely similar to today's "cohost") off a Pentium with a 100MB hard disk that would be outclassed by a low-end Android phone.

Another non-obvious difference: LCD monitors weren't really a thing until about 2000 - I was the first person I knew to get one, and it made a difference to reducing the eyestrain. Even if at 800x600 14" it was a slight downgrade from the CRT I had on my desk.


I kept buying used higher-end CRTs for almost a decade because their refresh rate and resolution so greatly outstripped anything LCD that was available for sale.


My parents got an early 2002 LCD display... I never knew what I lost by not gaming on a CRT. Low res too... sad. All for what, space and "environment"?

Like, look at this shit: https://imgur.com/a/FiOf7Vw

https://youtu.be/3PdMtwQQUmo

https://www.youtube.com/watch?v=Ya3c1Ni4B_U


I went to a PC expo of some sort in NYC in 1999 because I was in town for an interview. LCDs had just come out, but every exhibit in the hall had them because they were new but also because you could ship a whole bunch of flat screens in the same weight as a decent CRT.


I was working at an internet startup in 1996. We basically built custom sites for companies.

It’s hard now to appreciate how “out there” the internet was at the time. One of the founders with a sales background would meet with CEOs to convince them they needed a website. Most of those meetings ended with a, “We think this internet web thing is a fad, but thanks for your time”.


It's interesting to consider this viewpoint 30 years later and wonder what will bring about the next age. Is it something in its infancy being dismissed as a fad? Have we even thought of it yet?


Well, the answer has to be the "metaverse". The jury is of course out on whether it will take root the way the internet did.


I disagree.

"The Metaverse" is still fundamentally the same as other content/experience the same way 3D movies are fundamentally the same as 2D movies.

Being able to see a projection of someone in chair next to you does not really deepen or hasten the sharing of ideas in any drastic way compares to pre-internet vs post-internet communication.

If I had to guess, my suspicion is that direct brain-to-brain communication is the next epoch-definiting development.


Instant 3d printing. It is too costly and slow now but things will change.


> I started my first dev job in early 1997 which is more like 25 than 30 years ago but I think the milieu was similar.

My first internship was in 2000, and I feel like, overall, not a lot has changed except the deck chairs. Things still change just as fast as back then.


Taking inflation, and especially rocketing housing costs, plus work-life balance into account, maybe you actually earned more in 1997 than today for the same kind of job?


The thing I remember most about those days was how often I went to Barnes & Noble. You simply couldn't find the information online at that point. I'd go and buy a coffee and sit with a stack of books on a given topic I needed to research; then after ~45 minutes I'd decide and buy one or two books, and head back to the office to get to work.


Or to the Computer Literacy bookstore. Each time I attended a conference in the Bay Area I made sure to drop into their store in San Jose to spend hours poring over the multitude of recent books on all the new stuff happening in computing. I then had to lug 3-5 heavy books back on the plane with me. Then CL opened a store near me (Tyson's Corner in northern Virginia) which I visited at least weekly. I must have spent thousands on books back then, especially from O'Reilly. The world of computing was exploding, and just keeping up with it was a challenge but also a major blast.

No source on the changes afoot then in computing was more compelling than WiReD Magazine. Its first 3-5 years were simply riveting: great insightful imaginative stories and fascinating interviews with folks whose font of creative ideas seemed unstoppable and sure to change the world. Each month's issue sucked all my time until it was read cover to cover and then discussed with others ASAP. That was a great time to be young and alive.

But Wired wasn't alone. Before them, Creative Computing and Byte were also must reads. Between 1975 and maybe 1990, the computing hobbyist community was red hot with hacks of all kinds, hard and soft. No way I was going to take a job that was NOT in computing. So I did. Been there ever since.


Awesome to see CL listed here ... worked at computerliteracy.com which eventually became fatbrain.com. Good times!


Or the library! The GIF spec was published in a magazine IIRC. I wrote a GIF viewer that supported CGA, EGA, VGA, ... displays.


The library had some things, but man things were moving so fast in the late 80s early 90s that you often had to buy the books you needed directly; because by the time they appeared in the library you'd be on to something else.

The right magazines were worth their weight in gold back then, for sure.


The MSDN library CDs were indispensable for Windows developers in the 90s. Amazing resource all at your fingertips! What a time we were living in!


A pirated copy of Visual Studio 2005 started my career.

We didn't have internet at home, and I was still in school, so the Knowledge Base articles on the MSDN CDs pretty much taught me.


Oh, you just made me completely melancholic with that atmospheric description! Makes me miss these times a lot. The abundance of information is truly a blessing, but also a curse.


Yeah it was huge for me when books started to come with CD-ROM copies (c. 1997?) and I could fit more than one "book" in my laptop bag.


The O'Reilly Cookbooks were always the best.

I still have most of my dev books. I figure if I ever get a huge bookshelf they'll help fill it out, and give the kids something to talk about.


Or the “Computer Literacy” bookstore in the Silly Valley.


I miss those days. The books weren't perfect, but I feel like a lot of quality went into many of them because it was hard to issue errata for a print run. Of course there is a lot more information out there for free nowadays, but it's harder to sift through. I think the nicer thing is that eventually you'll find content that speaks to you and the way you learn.


God damn, I miss those days!


My go-to was Softpro Books in Denver. I would scan the shelves at B&N and Borders too, just in case, but Softpro had a much better selection.


I was a programmer back in the MS-DOS and early Windows days. My language of choice was Turbo Pascal. Source control consisted of daily backups to ZIP files on floppy disks. The program I wrote talked to hand-held computers running a proprietary OS that I programmed in PL/N, their in-house variant. The communications ran through a weird custom card that talked SDLC (I think).

I was the whole tech staff. Work-life balance was reasonable, as everything was done during normal day-shift hours. There was quite a bit of driving, as ComEd's power plants are scattered across the northern half of Illinois. I averaged 35,000 miles/year. It was one of the most rewarding times of my life, work wise.

The program was essentially a set of CRUD applications, and I wrote a set of libraries that made it easy to build editors, much in the manner of the then-popular dBASE II PC database. Just call with X, Y, Data, and you had a field editor. I did various reports, and for the most part it was pretty easy.

The only odd bit was that I needed to do multi-tasking and some text pipelining, so I wrote a cooperative multi-tasker for Turbo Pascal to enable that.
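(Not the original Turbo Pascal, obviously, but the idea fits in a few lines. A rough sketch in TypeScript, with invented task names, of what a cooperative multi-tasker boils down to: each task runs until it voluntarily yields, and a scheduler round-robins whatever is still ready.)

    // Each task is a generator: it runs until it hits a `yield`, i.e. until it
    // voluntarily gives up control - that's the "cooperative" part.
    type Task = Generator<void, void, void>;

    function* printer(name: string, count: number): Task {
      for (let i = 0; i < count; i++) {
        console.log(`${name}: tick ${i}`);
        yield; // hand control back to the scheduler
      }
    }

    // A dead-simple round-robin scheduler: step each ready task once per pass.
    function runCooperatively(tasks: Task[]): void {
      let ready = tasks;
      while (ready.length > 0) {
        ready = ready.filter((task) => !task.next().done);
      }
    }

    runCooperatively([printer("A", 3), printer("B", 2)]);
    // Interleaves A and B until both finish.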

There weren't any grand design principles. I was taught a ton about User Friendliness by Russ Reynolds, the Operations Manager of Will County Generating Station. He'd bring in a person off the floor, explain that he understood this wasn't their job, and that any problems they had were my fault, and give them a set of things to do with the computer.

I quickly learned that you should always have ** PRESS F1 FOR HELP ** on the screen, for example. Russ taught me a ton about having empathy for the users that I carried throughout my career.


> It was one of the most rewarding times of my life, work wise.

Did you feel this way in the moment, or did you realize it when looking back?


I was standing outside the gates of Crawford Generating Station, when I realized that no matter what was wrong, when I was done with my visit, they were going to be happy. It was that moment of self actualization that doesn't often come around.

Looking back, I see how dead-nuts simple everything was back then, and how much more productive a programmer could be, even with the slow-as-snot hardware and without Git. Programming has gone far downhill since then, as we try to push everything through the internet to an interface we don't control. Back then, you knew your display routines would work, and exactly how things would be seen.


Not the person you replied to, but I definitely felt that way in the moment. My success and enjoyment writing Clipper (a dBase III compiler) and Turbo Pascal applications for local businesses while I was in high school is the reason I went on to get a computer science degree at university.


Also not the person you replied to, but yes, in the moment. The feeling was that I couldn't believe someone would pay me for that.


It was awful. And it was great.

The awful part was C++. There were only two popular programming languages: C++ and Visual Basic. Debugging memory leaks, and memory corruption due to stray pointers and so on in C++ was a nightmare. Then Java came out and everything became easy.

The great part was everyone had offices or at least cubicles. No "open floor plan" BS. There was no scrum or daily standup. Weekly status report was all that was needed. There was no way to work when you're not at work (no cell phone, no internet), so there was better work-life balance. Things are definitely much worse now in these regards.

All testing was done by QA engineers, so all developers had to do was write code. Code bases were smaller, and it was easier to learn all there is to learn because there was less to learn back then. You released product every 2.5 years, not twice a week as it is now.


> There were only two popular programming languages: C++ and Visual Basic.

And COBOL. Vast, vast plurality of the business economy ran on COBOL. We also had mainframe assembler for when speed was required, but COBOL had the advantage of portability to both mainframe and minicomputer. Anything fast on the mini was written in C.

When I started we had a PC to use for general office tasks (documents, e-mails and such) and a 3270 or 5250 green-screen terminal for actual work. The desks groaned under the weight and the heat was ferocious. Overhead lockers were jam-packed with code printouts on greenbar and hundreds of useful documents. "Yeah I have that in here somewhere" and Bob would start to burrow into stacks of pages on his desk.

Cubicle walls were covered with faded photocopies of precious application flowcharts and data file definitions.

Updates to insurance regulations would arrive in the post and we were expected to take ownership and get them implemented prior to compliance dates. There was no agile, no user stories, no QA teams, no 360 reviews. Just code, test, release.

You knew who the gurus were because they kept a spare chair in their cubicles for the comfort of visitors.

Good times.


And don't forget Perl. :)


Pretty sure Pascal/Delphi was also popular until the early 2000s...


I remember Turbo Pascal 3.0, the one that generated COM files for MS-DOS (like EXE files, but about 2KB smaller).

I loved that Turbo.com and COM files 30 years ago!

Later I started to use Turbo Pascal 5.5, with OO support and a good IDE.


Not only that, but Turbo Pascal was very efficient as a linker too, linking only library code that was actually used in the program, as opposed to Turbo C/C++ that would link the entire library. As a result, "Hello, World" was ~2KB for TP vs. ~15KB for TC. I may not remember the sizes correctly, but the difference was dramatic. Of course, for bigger programs the difference was a bit smaller. And it was fast!


Jeff Duntemann's book on Turbo Pascal is still one of my favorite texts of all time. He combined his enthusiasm for the subject with a deft hand at explaining concepts intuitively.

And of course, there was Peter Norton's Guide to the IBM PC. The bible, back then.


Still popular! Where's all my FreePascal nerds?


> The great part was everyone had offices or at least cubicles. No "open floor plan" BS. There was no scrum or daily standup. Weekly status report was all that was needed. There was no way to work when you're not at work (no cell phone, no internet), so there was better work-life balance. Things are definitely much worse now in these regards.

FWIW I have had all of these at every place I've worked, including my current job. Places like that are out there. If you're unhappy with your current job, there's never been a better time to move.


C++ was different on different operating systems (every compiler rolled its own template instantiation model). Portability was hard work.


And you downloaded the SGI STL.


Moving from C++ to Java in 1998 instantly made me twice as productive, as I was no longer spending half my time managing memory.

Together with starting pair programming in 2004, that is the biggest improvement in my work life.


> There were only two popular programming languages: C++ and Visual Basic.

Not really. Back in 1992 I was doing mostly C, with Perl second, and shell scripting thrown in around the edges.


I didn't start that long ago, but at my first full-time job I also had my own office. An unthinkable luxury compared to now. Also, figuring out requirements on my own was nice. On the other hand, I think work was much more isolated; the office was in the middle of nowhere. Also, during that time it was still normal that every second project failed or became some sort of internal vaporware. Functioning management seemed almost non-existent.


PL/1 on Stratus and RPG on sys/[36|38] and AS/400 checking in! :-D


My first job was in 2010, not that long ago but still long enough to experience offices and no standups... definitely good times


Was Valgrind already a thing?


Valgrind was huge when it became available early in the 21st century, for finding leaks but also because it gave us engineers ammunition to use against management to keep our development systems running on Linux.

There were other profiling tools before then, but they were extremely pricey.


Bit surprised - just found out that Valgrind was first released in Feb 2002. It turns out I've been using it since almost its first days, 2002 for sure. Had no idea.


Good times. Most of us hadn't gotten "high on our own supply" yet, and the field was wide open.

You got the feeling of a thousand developers all running off in different directions, exploring the human condition and all of the massively cool things this new hammer called "programming" could do.

Compare that to today. Anywhere you go in the industry, it seems like there's already a conference, a video series, consultants, a community, and so on. Many times there are multiple competing groups.

Intellectually, it's much like the difference folks experienced going cross-country by automobile in, say, 1935 versus 2022. Back then there was a lot of variation and culture. There were also crappy roads and places where you couldn't find help. Now it's all strip malls and box stores, with cell service everywhere. It's its own business world, much more than a brave new frontier. Paraphrasing Ralphie in "A Christmas Story", it's all just crummy marketing.

(Of course, the interesting items are those that don't map to my rough analogy. Things like AI, AR/VR, Big Data, and so on. These are usually extremely narrow and, at the end of the day, just bits and pieces from the other areas stuck together.)

I remember customers asking me if I could do X, figuring out that I could, and looking around and not finding it done anywhere else. I'm sure hundreds, maybe thousands of other devs had similar experiences.

Not so much now.


Lots of books! We had the internet but it wasn't very useful for looking up information about programming. We had usenet but it would take a while to get an answer, and often the answer was RTFM.

But what we did have were O'Reilly books! You could tell how senior an engineer was by how many O'Reilly books were on their shelf (and every cubicle had a built in bookshelf to keep said books).

I remember once when our company fired one of the senior engineers. The books were the property of the company, so they were left behind. Us junior engineers descended on his cubicle like vultures, divvying up and trading the books to move to our own shelves.

I still have those books somewhere -- when I got laid off they let me keep them as severance!


In addition to the ubiquitous ORA books (really, did anyone ever understand Sendmail config files before the bat book?) there were also a lot of print-outs. Huge swaths of code printed on 132-col fanfold paper. You might have a backup copy of the source code on tape somewhere, but nothing made you feel secure like having a copy of the previous working version printed out and stashed somewhere on your desk or in a drawer.


Lots of coding was done on those printouts also - you'd print out your function or your program and sit back and mark it up - especially if you were discussing or working with someone. Screens were small back then!


Oh, these books.

They are still pretty much all around "old" IT companies, displayed in shelves and bookcases, as artifacts that explain what the old languages and systems were.

I love the retro-futuristic vibe of the covers of some of these. And of their content. They invite the reader to leap into the future with bash, explain how Linux used to work, how past versions of .NET and Java were breakthroughs, how to code with XML, ...

As a junior who has hardly read any of these, I find them pretty poetic, and I like the reflection they bring on IT jobs. The languages and technologies will change, but good looking code is timeless


Fun!

Precarious. Very slow. Like a game of Jenga, things made you nervous. Waiting for tapes to rewind, or slowly feeding in a stack of floppies, knowing that one bad sector would ruin the whole enterprise. But that was also excitement. Running a C program that had taken all night to compile was a heart-in-your-mouth moment.

Hands on.

They say beware a computer scientist with a screwdriver. Yes, we had screwdrivers back then. Or rather, developing software also meant a lot of changing cables and moving heavy boxes.

Interpersonal.

Contrary to the stereotype of the "isolated geek" rampant at the time, developing software required extraordinary communication habits, seeking other experts, careful reading, formulating concise questions, and patiently awaiting mailing list replies.

Caring.

Maybe this is what I miss the most. 30 years ago we really, truly believed in what we were doing... making the world a better place.


In 1992 I had an office with a door, a window, a Sun workstation, lots of bankable vacation time, a salary that let me buy a house in a nice neighborhood, and coded in a programming language that I could completely understand for a machine whose instruction set was tiny and sensible. Now I WFH with a really crappy corp-issued Windows laptop that's sluggish compared to that Sun, haven't had a real vacation in years despite "unlimited" PTO, and have to use a small safe subset of C++ to code for chips whose ISAs have bloated into behemoths. On the plus side, I now have git, instead of Cray's UPDATE, whose underlying metaphor was a box of punched cards.


I'm sad I missed the office-with-door era. I started working in 2011, and my first company still had some real offices, but you had to be pretty senior. I made it to that level right around the time they got rid of them :-( and no other place I've worked at has had them for anyone but execs / senior managers / sales and the like.

To date, the best working conditions I got to actually experience, instead of just watching others have them, were when I interned at a federal government agency in 2009 and had a real cubicle (and an electric standing desk!) -- the same thing Dilbert comics were complaining about so hard in the 90s, but far better than the open office pits that replaced them.

The best part about offices with doors is you didn't need to worry about formal schedules and meeting rooms as much, because anyone could have an ad hoc meeting in their office any time.


What happened to "that era"? Why did we get rid of it


The price of commercial real estate happened.


Paradoxically, I think that while pay is generally better these days, opportunities were in some ways better back then. If you learnt some good skills (e.g. C++, X/Motif, Unix such as Solaris or HP-UX, TCP/IP and other networking) back then, that'd give you a serious technical edge and good (international) job opportunities for quite some time, whereas nowadays it's harder for anyone, including younger ones, to stay on top of skills and stand out from the pack.

No stand-ups back then. Personally I prefer that now we have agile ceremonies and we actually, you know, talk to each other and collaborate. Back then you could be totally unsociable (desire no interaction at all) or anti-social (sexist etc.) at work and get away with it. Good thing that wouldn't fly now, or at least not to the same extent. I even worked with someone that had a porn stash on their work computer. Management didn't seem to know or wouldn't have cared if they did. It's kind of mind-boggling when you think that'd just be instant dismissal now (hopefully, in any sane organisation).

Work-life balance I think is better now, as flexible working is easier to get, and with WFH. That said, when I was young, in the UK, I had no trouble getting a month off to go travelling (albeit I did request that vacation almost a year in advance). We used to do Rapid Application Development / iterative design, which resembles agile. Nothing new under the sun, as the saying goes.

Appraisal / performance was probably better for a lot of people those days. It pretty much meant once a year pull out your objectives, find out what you worked on had entirely changed, so "reverse engineer" your objectives to be a list of the things you'd achieved for the year, pat on the back, meagre pay rise, but far less of the rivalry and performance nonsense that seems to pervade particularly US companies these days.


> now we have agile ceremonies and we actually, you know, talk to each other and collaborate

I'm not gonna get into the agile ceremony topic right now, but I think this is entirely a function of where you were. All of the greatest environments I've ever worked in were more than 20 years ago. Everyone was genuinely into the field. Hacking in weird offices full of prog and myriad metal genres... That wasn't my archetype, but most were, and I had a great time hanging out in that world.


Oh certainly - the world was so desperate for computer people, and there was no time to build up college courses for this stuff (it was way too new), that anyone who could do anything computer-related could easily get a decent job. This lasted until the 2000 crash, but in the right areas it continued on for quite a while.


You could get away with "I know HTML" and suits would call you a "Programmer!"


As someone who started working very young, and who still has dark hair that helps pass IC interviews with 22yo hiring managers, :) I offer one person's boots-on-the-ground perspective from the two eras...

The biggest change is that software became a high-paying, high-status job. (In parallel, "nerd" became a cool thing to be, maybe not entirely for different reasons.)

I still remember when the dotcom boom started, and I first saw people claiming to be experts on Web development (and strutting about it) while wearing what I assumed were fashionable eyeglasses. I had never seen that before.

Before software was a high-status job -- although in many areas it was dominated by men (maybe because a lot of women were homemakers, or discouraged from interests that led to software) -- there were a lot of women, and we're only recently getting back to that.

About half my team one place was female, and everyone was very capable. Software products were often done by a single software engineer, teamed up with a marketing product manager and technical documentation, and often with a test technician/contractor. To give you an idea, one woman, who had a math degree, developed the entire product that instrumented embedded systems implemented in C, for code path coverage and timings, and integrated with our CASE system and our in-circuit emulator. Another of my mentors was a woman who worked 9-5, had kids, and previously worked on supercomputing compilers (for what became a division of Cray) before she came to work on reverse-engineering tools for our CASE system.

Another mentor (some random person from the Internet, who responded to my question about grad school, and she ended up coaching me for years), was a woman who'd previously done software work for a utility company. Back when that meant not just painting data entry screens and report formats, but essentially implementing a DBMS and some aspects of an operating system. And she certainly didn't get rich, wielding all those skills, like she could today.

Somewhere in there, software became a go-to job for affluent fratboys, who previously would've gone into other fields, and that coincided with barriers to entry being created. Which barriers -- perhaps not too coincidentally -- focused on where and whether you went to college, rather than what you can do, as well as "culture fit", and rituals not too far removed from hazing. Not that this was all intentional, and maybe it's more about what they knew at the time, but then it took on a life of its own.


I was doing Turbo Pascal, C and assembly back then. What I liked better, at least where I worked, was that what you had in your head was it. There were libraries, but, at least where I worked, not very many, and it was tedious to get them, so you just rolled your own.

The focus was more (again, for me) on doing clever stuff, versus what I do today: integrating and debugging stuff made by other people.

Compensation was lower than it is now (obviously?), but it was, similar to now, higher than most jobs. 30 years ago, unlike 40 years ago, you could already get jobs/work in software without a degree, so it was life-changing for many, including me, as I was doing large projects before and during university, which gave me stacks of cash while my classmates were serving drinks for minimum wage and tips.

I guess the end result these days is far more impressive in many ways for the time and effort spent, but the road to get there (stand-ups, talks, agile, bad libraries/SaaS services, fast-changing ecosystems, devops) for me has mostly nothing to do with programming, and so I don't particularly enjoy it anymore. At least not that part.

Process reflected that; our team just got a stack of paper (brief) and that’s it; go away and implement. Then after a while, you brought a cd with the working and tested binary, went through some bug fixes and the next brief was provided.

One of the stark differences I found, at least in my country (NL), is that at the end of the 80s and beginning of the 90s, almost all people told me: you program for 5-8 years and then become a manager. Now, even the people who told me that at the time (some became, on an EU scale, very prominent managers), tell me to not become a manager.


    Process reflected that; our team just got a stack of 
    paper (brief) and that’s it; go away and implement. 
    Then after a while, you brought a cd with the working 
    and tested binary, went through some bug fixes and 
    the next brief was provided.
Oh my god I wish I could live in this world for a little while. Like you, I'm not sure I enjoy this any more.


>tell me to not become a manager

Why though?


Take all these stories with a grain of salt, because one of the key elements for things being better (at least for me) 30 years ago is realistically the fact that I was 30 years younger back then. So everything was new to me, I had a lot more time (I was at the end of high school when I started programming for money), and most importantly I had a lot more energy (and less patience, but energy made up for it).

Realistically, it was way harder. Tools were rough, basically just text editors, compiling was slow and boring, and debugging tools were super primitive compared to today's. And to learn programming back then, especially making the first steps, you had to be really motivated. There was no Internet the way we know it today, so you had to dig a lot to get the info. The community was small and closed, especially to complete newbies, so you had to figure out all of the basic steps by yourself and sort of prove yourself. In the beginning you had to really read books and manuals cover to cover many times - and then again later you'd end up spending hours turning pages, looking for some particular piece of info. It was super time-consuming; I spent nights and nights debugging and figuring out how to call some routine or access some hardware.

On the other hand it had that sense of adventure, obtaining some secret knowledge, like some hermetic secret society of nerds that you've suddenly become initiated into, which to high-schooler me was super exciting.


I loved working in the 90s. Things were less "professional" and project management often almost didn't exist. In my first job I was presented a problem and told to come back in a few months. No standups; management was all engineers.

I also feel back then software wasn’t viewed as a well paid career so you had more people who really wanted to do this because they were interested. When I look around today there are a lot of people who don’t really like the job but do it because it pays well.

It was also nice to not have to worry much about security. If it worked somehow, it was good enough. No need to worry about vulnerabilities.


100% agree, I had the same experience


Came here to see the "Old Timers" reply... then realized my first coding job was 1993 (29 yrs).

I recall the big disruptors being a transition to GUIs, Windows, and x86 PCs. DOS and command line apps were on the way out. Vendors like Borland offered "RAD" tools to ease the Windows boilerplate. Users were revolting and wanted to "mouse" over type.

The transition from C to C++ was underway. The code I worked on was full of structs and memory pointers. I was eager to port this to classes with a garbage collector, but there were vtable lookup and performance debates.

Ward's Wiki was our community platform to discuss OOP, design patterns, and ultimately where Agile/XP/SCRUM were defined. https://wiki.c2.com/

Work was 100% 9am-5pm Mon-Fri in the office. It was easier to get in the flow after hours, so 2-3 days per week involved working late. With PCs, it was also easier to program and learn at home.

Comp was ok, relative to other careers. I recall by 1995 making $45K per year.


My first job, in the early-90s, paid $25K/year. I thought I was rich. LOL


I started my career 30 years ago at Apple. There was no internet, no option to work from home. We wrote software that would eventually be burned onto disks. Whatever we used for source code control was quite primitive. Seeing your work go "live" would take months. We were working on cutting-edge stuff (speech recognition in my case) in an environment buzzing with energy and optimism about technology.

Compensation was average: there were no companies with inflated pay packages, so all my engineering friends were paid about the same. Friendly (or not) rivalries were everywhere: Apple vs IBM vs NeXT vs Microsoft. I'd grown up imagining Cupertino as a magical place and I finally got to work in the middle of it. After the internet, the get-rich-quick period launched by Netscape's IPO, the business folks took over and it's never been the same.


I heard there was a big tech "cartel" in the software engineering labor market. Microsoft wouldn't hire people from Apple and vice versa. That made salaries pretty average for everyone, and definitely lower than they should've been.


30 years ago? I remember:

1) Burn and crash embedded development – burn EPROMs, run until your system reset. Use serial output for debugging. Later, demand your dev board contains EEPROM to speed up this cycle.

2) Tools cost $$$. Cross-compilers weren’t cheap. ICE (in-circuit emulation) for projects with a decent budget.

3) DOS 5.0 was boss! Command line everything with Brief text editor with text windows.

4) Upgrading to a 486dx, with S3 VGA – Wolfenstein 3D never looked so good!

5) The S3 API was easy for 1 person to understand. With a DOS C compiler you could roll your own graphics with decent performance.

6) ThinkPad was the best travel laptop.

7) Sharing the single AMPS cellphone with your travel mates. Service was expensive back then!

8) Simple Gantt charts scheduled everything, +/- 2 weeks.

9) You could understand the new Intel processors – i860 and i960 – with just the manuals Intel provided.

10) C Users Group, Dr Dobbs, Byte, Embedded Systems Journal and other mags kept you informed.

11) Pay wasn’t great as a software dev. No stock options, bonus, or overtime. If your project ran late you worked late! A few years later the dot com boom raised everyone’s wages.

12) Design principles could be as simple as a 1-page spec, or many pages, depending on the customer. Military customers warranted a full waterfall with pseudo-code, and they paid for it.

13) Dev is much easier today as most tools are free, and free info is everywhere. However, system complexity quickly explodes as even “simple” devices support wireless interfaces and web connectivity. In the old days “full-stack” meant an Ethernet port.


> DOS 5.0 was boss! Command line everything with Brief text editor with text windows.

I occasionally miss Borland Turbo C. These days people would regard it as magic to get a whole IDE and compiler and set of system headers into a couple of megabytes.


30 years ago was a great time; software development was really ramping up. The internet was like a "secret" which not many people knew about, dominated by ftp / usenet / irc / gopher / telnet / email. Books and magazines were a BIG thing in the world of software development; they were your main sources of information. Processes were a mixed bag, mostly based around waterfall-type ideas, but most places just did things ad hoc. OO was in its infancy. There was a lot of thinking around "modelling" as a way to create software instead of code. There was a lot of influence from structured programming. We were on the borderline between DOS and Windows, so a lot of stuff was just single-threaded programs. There were still a lot of the cool 80s computers around: Amigas, Ataris, etc. Apple was off on the side doing its own thing as it always has; mostly you associated it with desktop publishing. Pay is probably better now, I think; work-life balance is the same if you go into a workplace, and better if you work from home. While the 80s/90s were fun and exciting, things are really good now.


FOR ME, 30 YEARS AGO:

  - 1 language (DATABASIC) Then it did everything. I still use it now, mostly to connect to other things.
  - 1 DBMS (PICK) Then it did everything. I still use it now, mostly as a system of record feeding 50 other systems.
  - Dumb Terminals. Almost everything ran on the server. It wasn't as pretty as today, but it did the job, often better. Code was levels of magnitude simpler.
  - Communication with others: Phone or poke your head around the corner. No email, texts, Teams, Skype, social media, Slack, Asana, etc., etc., etc. 1% of the interruptions.
  - Electronic communication: copper or fiber optic. Just worked. No internet or www, but we didn't need what we didn't know we would need someday. So simple back then.
  - Project management. Cards on the wall. Then we went to 50 other things. Now we're back to cards on the wall.
  - People. Managers (usually) had coded before. Users/customers (usually) had done the job before. Programmers (usually) also acted as Systems Analyst, Business Analyst, Project Manager, Designer, Sys Admin, Tester, Trainer. There were no scrum masters, business owners, etc. It was waterfall and it (usually) worked.
MOST IMPORTANTLY:

  - 1992, I spent 90% of my time working productively and 10% on overhead.
  - 2022, I spend 10% of my time working productively and 90% on overhead.

  Because of this last one, most of my contemporaries have retired early to become bartenders or play bingo.

  1992 - It was a glorious time to build simple software that got the customer's job done.
  2022 - It sucks. Because of all the unnecessary complications, wastes of time, and posers running things.

  Most people my age have a countdown clock to Social Security on their desktop. 30 years ago, I never could have imagined such a state would ever exist.


> 2022, I spend 10% of my time working productively and 90% on overhead

Is it because the nature of work/programming has changed? Or now you're in a more "leadership/managerial" position that requires you to manage people and ergo feels like overhead.


It is because the nature of work/programming has changed.

I got sucked into "leadership/managerial" a few times but quickly escaped.

I just want to build fricking software! It's the coolest thing ever and I was born to do it.

Now I have to do it after hours on my own because I'm so damn busy in meetings all day long.


One thing you almost -never- saw then was this:

"Hey guys, check out this thing X I made with absolutely no reason other than to see if I could and to understand Y problem better"

replies:

- "idk why anybody would do this when you could use Xify.net's free tier"

- "you could have simply done this with these unix tools and a few shell scripts, but whatever creams your twinkie"

- "just use nix"

Instead what we had were cheers, and comments delayed while people devoured the code to see what hooked you in, and general congratulations. Most often this led to other things and enlightened conversations.

Everything's always been a 'competition' so to speak, but we weren't shoving each other into the raceway barriers like we do now on the way to the finish line. There was a lot more finishing together.


I think much of that really depends on what kind of company/department you worked for, like I'd guess it does now.

Apart from that, there were more constraints from limited hardware that forced you to get creative and come up with clever solutions for things that you can now afford to do in a more straightforward (or "naive") manner. It helped that you could (and usually did) pretty much understand most if not all of the libraries you were using. Chances are you had a hand in developing at least some of them in the first place. Fewer frameworks, I'd say, and fewer layers between your own code and the underlying hardware. Directly hacking the hardware even, like programming graphics card registers to use non-standard display modes. And I'd guess the chance that your colleagues were in it for the magic (i.e. being nerds with a genuine passion for the field) more than for the money was probably better than now, but I wouldn't want to guess by how much.

Oh, and the whole books vs internet thing of course.


Re >> "I think much of that really depends on what kind of company/department you worked for"

Reminds me of: "I update bank software for the millennium switch. You see, when the original programmers wrote the code, they used 2 digits instead of 4 to save space. So I go through thousands of lines of code... you know what, I hate my job, I don't want to talk about it." ~ Peter Gibbons

I wasn't programming in the workplace 20-30 years ago, but I believe you're right when you say: Depends on the company/department.


30 years ago. Man. I was working at an OpenVMS shop, cranking out DCL code and writing mainly in FORTRAN. Books and manuals littered my wee little cubicle. I had vt220 and vt420 terminals because Reflections rarely worked correctly on my hardly used PC. I also had a terminal to an HP 3000 system running MPE, and had to do code review and testing on an app that was written in BASIC!

Version control was done using the features of the VMS filesystem. I believe that HP MPE had something like that also, but I may have blocked it out.

Around about late '93 or early '94 they hauled the HP terminal away and slapped a SparcClassic (or IPX? IPC?) in its place. I was tapped to be part of the team to start migrating what we could off the VMS system to run on Solaris. So, I had to learn this odd language called See, Sea, umm 'C'?

A whole new set of manuals. A month's salary on books. Then another few books on how to keep that damn Sparc running with any consistency.

Then had to setup CVS. Sure, why not run the CVS server on my workstation!

By the end of '95 I was working mainly on maintaining the Solaris (and soon HP/UX and AIX) boxes rather than programming.

I still miss writing code with EDT on VMS and hacking away on fun things with FORTRAN. You know, like actually writing CGIs in FORTRAN. But that is another story.


I was working in Pascal, C and assembly about 30 years ago, mostly in DOS and Windows 3.

By 1995 I started dabbling with websites, and within a couple of years was working mostly with Perl CGI and some Java, on Windows and Linux/NetBSD.

Most of my work was on Windows, so that limited the available Perl libraries to what would run on ActiveState's Perl.

I gave up trying to do freelance because too many people didn't seem to understand the cost and work involved in writing software:

- One business owner wanted to pay me US $300 to fix some warehouse management software, but he'd up it to $500 if I finished it in one month.

- A guy wanted to turn his sports equipment shop into an e-commerce website, and was forward thinking... except that none of his stock of about 20,000 items was in a database and that he could "only afford to pay minimum wage".

I interviewed with some companies, but these people were clueless. It seems like a lot of people read "Teach yourself Perl in 7 days and make millions" books. The interview questions were basically "Can you program in OOP with Perl?".

I got a proper developer job on a team, eventually. They were basically happy that I could write a simple form that queried stuff from a database.

Some other people on my team used Visual Basic and VBScript but I avoided that like the plague. I recall we had some specialized devices that had their own embedded versions of BASIC that we had to use.

When Internet Explorer 4 came out, we started having problems making web sites that worked well on both browsers.

Web frameworks didn't exist yet, JavaScript was primitive and not very useful. Python didn't seem to be a practical option at the time.


Actually, I should answer the questions:

> ... terms of processes, design principles, work-life balance, compensation. Are things better now than they were back then?

We didn't have an official process or follow any design principles. We had small teams, so we simply had a spec, but we'd release things in stages and regularly meet with clients.

I had a decent work-life balance, a decent salary but wasn't making the big dot.com income that others were making.

I think overall things are better, technology-wise as well as some awareness of work-life balance, and more people are critical of the industry.

The technology is more complicated, but it does a lot more. The simplicity was largely due to naivety.


The core difference was that the job required much more vertical reasoning. Crafting things from the ground up was the norm. Starting from a blank code file and implementing core data structures and algorithms for the domain was often the case. Limited resources required much more attention to efficiency and tight constraints. Much weaker tooling required more in-depth knowledge rather than trial-and-error development. There also was no web nor Google, so finding things out meant either books or newsgroups.

These days the demand is often more horizontal: stringing together shallowly understood frameworks, libraries, and googled code, and getting it to work by running and debugging.

The scope of things you can build solo these days is many orders of magnitude larger than it was back then.

Still, the type of brainwork required back in the day most definitely was more satisfying, maybe because you had more control and ownership of all that went into the product.


> The scope of things you can build solo these days is many orders of magnitude larger than it was back then.

This is the most positive phrase on this whole page.

It's easy to get stuck in nostalgia but harder to realise how good we have it now. Thanks.


It was very process heavy in my experience. Because of the available technology, development was slow and costly, so the thought was to put a lot of process around development to minimize the chances of projects going off the rails.

We were also in the "object nirvana" phase. Objects were going to solve all problems and lead us to a world of seamless reusable software. Reusability was a big thing because of the cost. Short answer: they didn't.

Finally, I am astonished that I'm routinely using the same tools I was using 30 years ago, Vim and the Unix terminal. Not because I'm stuck in my ways, but because they are still state of the art. Go figure.

I'd never go back. The 90's kind of sucked for software. Agile, Git, Open source, and fast cheap computers have turned things around. We can spend more time writing code and less time writing process documents. Writing software has always been fun for me.


What's amazing about the major improvements you list there (besides the cheap computers) is that many of them could easily have been done on 80s/90s equipment. We had rcs and cvs but most people just had a batch file that made a copy of the working directory, if anything.

But then a lot of software was relatively "simple" back then, in terms of total lines of code; most everything today is a massive project involving many developers and much more code. Necessity brought around things like git.


I missed a big change in my list which is the Internet. We were using the internet in 1992. I remember FTP'ing source code from machines in Finland from here in Silicon Valley. I knew it was going to change the world.

Ubiquitous networking means that new developments are now instantly available instead of having to wait for the publishing cycle. People can share instantly.

Open source changed everything. In 1992 you'd need to get your accounting dept to cut a PO and then talk to a sales person to get new software. Most software being freely available today has been like rocket fuel for the industry. If you'd told me in 1992 that in 30 years most software would be distributed open source, and that the industry as a whole is making much more money, I'd have said you were crazy.

That all said, this particular conversation would have been occurring over Usenet in 1992 using almost exactly the same format.


Yeah it's absolutely amazing to think of the huge, HUGE names in the computer world from the 90s whose only product was something that is now entirely open source and given away for free.

I'm talking not only about things like Netscape Navigator but all the server software vendors, etc. And even where things still exist (the only real OS manufacturers anymore are Microsoft and Apple, everything else is a Linux) they've been driven free or nearly so. Windows 95 was $209.95 on launch ($408.89 in today's dollars) but a boxed copy of Windows 11 is $120 - and MacOS is now "free".


> We had rcs and cvs but most people just [...]

To be fair RCS and CVS sucked. I remember trying to use CVS in the early 2000s when SVN wasn't even out yet, and if I remember the experience correctly, even today I might be tempted to just write a batch file to take snapshots...


Branching in RCS and CVS was rather confusing.


I worked at IBM as a student intern in the mid 80s, and at the Royal Bank of Scotland.

Process at these companies was slow and waterfall. I once worked with a group porting mainframe office-type software to a minicomputer sold to banks; they had been cranking out C code for years and were scheduled to finish in a few more years, and the developers were generally convinced that nothing would ever be releasable.

The people were smart and interesting - there was no notion of doing software to become rich, pre-SGI and pre Netscape, and they all were people who shared a love of solving puzzles and a wonder that one could earn a living solving puzzles.

IBM had a globe spanning but internal message board sort of thing that was amazing, conversations on neurology and AI and all kinds of stuff with experts all over the world.

I also worked at the Duke CS lab around 1990, but it was hard to compare to companies because academia is generally different. People did the hard and complex stuff that they were capable of, and the grad students operated under the thumb of their advisor.

Wages were higher than, for example, secretarial jobs, but not life-altering for anyone; people didn't care so much.


I started my first real job in 1995 after dropping out of grad school. I was paid about $40K (which seemed like SO MUCH MONEY) and worked at a tiny startup with one other full-timer and two part-timers writing Lisp on Macs for a NASA contract. We had an ISDN line for internet; I mostly remember using it for email/mailing lists and usenet (and reading macintouch.com). We had an AppleTalk LAN, and IIRC one developer workstation ran Apple's MPW Projector for source control, which Macintosh Common Lisp could integrate with.

Our office was in a Northwestern University business incubator building and our neighbors were a bunch of other tech startups. We'd get together once a month (or was it every week?) to have a drink and nerd out, talking about new technology while Akira played in the background.

It was awesome! I got to write extremely cool AI code and learn how to use cutting edge Mac OS APIs like QuickDraw 3D, Speech Recognition, and Speech Synthesis. Tech was very exciting, especially as the web took off. The company grew and changed, and I made great friends over the next 7 years.

(Almost 30 years later I still get to write extremely cool AI code, learn new cutting edge technologies, find tech very exciting, and work with great people.)


1995, depends on what stack you were on.

The DOS/Windows stack is what I worked on then. Still using floppies for backup. Pre-standard C++, Win16, Win32s if it helped. Good design was good naming and comments. I was an intern, so comp isn't useful data here.

Yes, things are much better than then. While there were roots of modernism back then, they were in the ivory towers / Really Important People areas. Us leeches on the bottom of the stack associated all the "Software Engineering" stuff with expensive tools we couldn't afford. Now version control and test frameworks/tools are assumed.

Processes didn't have many good names, but you had the same variation as now: some people took process didactically, some took it as a toolbox, some took it as useless bureaucracy.

The web wasn't a big resource yet. Instead the bookstores were much more plentiful and rich in these areas. Racks and racks of technical books at your Borders or (to a lesser degree) Barnes & Noble. Some compiler packages were quite heavy because of the amount of printed documentation that came with them.

Instead of open source as we have it now, you'd go to your warehouse computer store (e.g. "Soft Warehouse" now called Micro Center) and buy a few cheap CD-ROMs with tons of random stuff on them. Fonts, Linux distros, whatever.


It's underappreciated just how easy and cheap floppies were. Everyone had a box of (if you were fancy, preformatted) floppies next to their computer, and throwing one in and making a copy was just something you did all the time, and giving the floppy to someone to have and to keep was commonplace. That didn't really get replaced until CD burners became cheap; then everyone had a stack of blank CDs next to them, but they were not as convenient. It was easy to make backups and keep them offline and safe (though people would still not do it at times).

Even today, most people do NOT have a box of USB sticks that they can give away - they can use one to transfer something but you'll want the stick back. Throwing something on the internet and sending a download link is the closest we have, and it has some advantages, but it's not the same.


I recall the time in the 90s when some friends and I left a floppy of Doom as a tip for our waiter.


Thirty years ago was 1992. I was an employee of Apple Computer.

I had spent a year or two working on a knowledge-based machine-control application that Apple used to test prerelease system software for application-compatibility problems. A colleague wrote it in Common Lisp, building it on top of a frame language (https://en.wikipedia.org/wiki/Frame_(artificial_intelligence...) written by another colleague.

The application did its job, using knowledge-based automation to significantly amplify the reach and effectiveness of a small number of human operators, and finding and reporting thousands of issues. Despite that success, the group that we belonged to underwent some unrelated organizational turmoil that resulted in the project being canceled.

I interviewed with a software startup in San Diego that was working on a publishing app for the NeXT platform. I had bought a NeXT machine for learning and pleasure, and had been tinkering with software development on it. A bit later, another developer from the same organization at Apple left and founded his own small startup, and, knowing that I had a NeXT cube and was writing hobby projects on it, he contracted with me to deliver a small productivity app on NeXT. In the course of that work I somehow found out about the San Diego group and started corresponding with them. They invited me to interview for a job with them.

I liked San Diego, and I really liked the guys at the startup, and was very close to packing up and moving down there, but then the Newton group at Apple approached me to work for them. The Pages deal was better financially, and, as I say, I really liked the people there, but in the end I couldn't pass up the chance to work on a wild new hardware-software platform from Apple.

In the end, Newton was not a great success, of course, but it was still among the most fulfilling work I've ever done. Before I was done, I had the opportunity to work on a team with brilliant and well-known programmers and computer scientists on promising and novel ideas and see many of them brought to life.

On the other hand, I also overworked myself terribly, seduced by my own hopes and dreams, and the ridiculous lengths that I went to may have contributed to serious health problems that took me out of the workforce for a couple of years a decade later.

But 1992 was a great year, and one that I look back on with great fondness.


You do wonder about Newton... With better infrastructure (esp home networking, wifi, and 'fast enough' cellular), could Newton have become the Next Big Thing in computing after the Macintosh? I suspect that Jobs learning first-hand about the need for that essential infrastructure was key to the iPhone successfully taking flight 15 years later (esp due to 3G).


In an absolute sense, it's a lot better now, but in a relative sense it's a lot worse.

30 years ago the production values on software were a lot lower, so that a single programmer could easily make something that fit in with professional stuff. My first video game (written in turbo pascal) didn't look significantly worse than games that were only a few years old at the time. I can't imagine a single self-taught programmer of my talent-level making something that could be mistaken for a 2018 AAA game today.

The other major difference (that others have mentioned) is information. It was much harder to find information, but what you did end up finding was (if not out-of-date) of much higher quality than you are likely to get from the first page of google today. I can't say if it's better or worse; as an experienced programmer, I like what we have today since I can sift through the nonsense fairly easily, but I could imagine a younger version of me credulously being led down many wrong paths.


> I can't imagine a single self-taught programmer of my talent-level making something that could be mistaken for a 2018 AAA game today.

They might not look like 2018 AAA games, but there are so many indie games that vastly eclipse AAA titles in every other way... even "double A" or smaller big studio titles can be better than AAA titles if you judge by things other than flashy graphics.

And it's gotten much easier to make a game if you're willing to use something other than ultra-real graphics - Unity, for instance, has massively improved the ability for relative amateurs to make great engaging games.

Thinking of the games I play most days now, only one of them (Hunt: Showdown, by Crytek) is anything approaching triple A. I haven't touched a big budget big studio game in years.

So I think at least in gaming, things have gotten better, not worse. Yeah, the ceiling has been raised, but the floor has also dropped such that the level of effort required to make a decent game is lower than ever.


That's fair; I'm too busy to game these days, so I am certainly not in touch with what gamers' expectations are. I was just thinking that e.g. single-screen puzzle platformers were easy to make 30 years ago and wouldn't be too far off from something that would be "cool".


I would like to know! I can go back 22 years. Then, jobs were more likely to be apps running on Windows. People yelling at the screen because things aren’t rendering properly (render as in pixels, not virtual DOMs!). No unit tests. SourceSafe (an old, buggy, but simple-to-use VCS). You could exclusively lock a file to annoy other developers and stamp the importance of your work. No scrum and much less process. 9-5 ish and no time tracking. No OKRs or KPIs. Do everything with Microsoft tooling. No open source tooling. Someone’s job to build an installer and get it burned to a CD (the optical storage medium). There was some automated testing but no unit tests or CI/CD. Not so many “perks” like snazzy offices, toys, food supplies, etc. If there was webdev it would be in ASP or ActiveX!


That's a very windows-centric view of the past. And with good reason too! Windows was utterly dominant back then. Still, Slackware was 7 years old by the year 2000. Running the 2.2 Linux kernel, compiled with open source GCC. Websites were cgi-bin and perl. Yeesh I've been running Linux a long time...

On the windows side, NSIS was an open source piece of tooling released that year. And I was writing Windows programs in Visual Studio with MFC.


> That's a very windows-centric view of the past. And with good reason too! Windows was utterly dominant back then.

Running servers on Windows? Yeah, a few people who didn't know better did that, but it would be completely inaccurate to describe Windows as "utterly dominant". It ruled the desktop (and to a large extent still does), but it barely made it to parity with *nix systems on the server side before Linux (and FreeBSD in some cases) punched down.


A few people?

IIS had 37% market share by 2000.

https://www.zdnet.com/article/how-does-iis-keep-its-market-s...


Yep, that's a few people. That was about its peak market share, until a brief spike circa 2017, and then it crashed and burned into obscurity.


I imagine the crash was caused by no longer needing it to run .NET applications.


A third is a “few”?


No, a third is just not "utterly dominant".


It entirely depends on what you are counting, but I do think your comment is extremely misleading because Microsoft was important for business web servers in 2000. “a few people who didn't know better did that” is outright deceptive.

  The dominant position of Microsoft’s proprietary IIS in the Fortune 500 makes Windows NT a lock for the most used operating system undergirding the Web servers -- 43 percent. But the idea that Sun Microsystems Inc.’s Internet presence is weakening isn’t supported by the numbers. Sun’s Solaris holds a clear second place at 36 percent, with all other operating systems falling into the noise level. Linux showed up at only 10 companies.
That quote is from https://esj.com/articles/2000/06/14/iis-most-used-web-server...

It is fair to say that in 2000 Linux was beginning its growth curve for web servers, and all other OS’s were starting their decline. I do note the Fortune 500 had a lot fewer tech companies back then (zero in the top 10) and churn has increased a lot (perhaps due to not following technological changes): “Fifty-two percent of the Fortune 500 companies from the year 2000 are now extinct.”, “Fifty years ago, the life expectancy of a Fortune 500 brand was 75 years; now it’s less than 15”.


22 years ago I was programming on an almost entirely open source stack: Linux servers, vim, Perl, and we paid for Sybase. We used CVS for source control, and when I heard about SourceSafe's restrictions I was shocked.

We had unit tests, though it was your own job to run them before merging. If you broke them you were shamed by the rest of the team. We also had a dedicated lab for automating functional tests and load testing using Mercury Interactive's tooling (don't miss that) that we would use to test things out before upgrading our servers.

We used the techniques outlined in Steve McConnell's Rapid Development, a sort of proto-agile (and, editorializing, it got all the good parts right while scrum did the opposite).


I had all of this 11 years ago and it was BLISS. Oh, MS SourceSafe and its locked files! No merge conflicts or rebasing clownery, ever! It forced two people working on the same code to sync, and this avoided so many conflicts. Customers called with small bug reports, I could fix them in 5 minutes and deploy to production right from Eclipse.

Modern agile development is hell.


Nothing's stopping you from implementing file locking on a social level!

"Hey I'm gonna be working in the foo/ subdir this week, mind staying out of there till next week?"


Random agile comment at the end of a comment on source control.


I agree with a lot of that - I had to 'invent' unit tests, for example, for one of my clients' production code.

I managed to swerve MS tooling much of the time, one way or another. For example, I worked a lot with Sun workstations SunOS/Solaris.


That's nice. I think MS programming stacks were most popular in the UK outside of universities (universities would also have Unix, Oracle DB and SunOS). I guess in California it would more likely skew Unix/Sun?


I (a) ran a very early Internet provider and then worked in (b) oil and (c) finance where good networking, speed and reliability were enough to make *nix a sensible choice. Though (for example) the finance world tried to move to MS to save money, and indeed I got paid a lot to port and maintain and optimise code across platforms including MS, the TCO thing would keep biting them...


That makes me wonder, without all the unit tests and all the 'necessary' things we do to our codebase, did any of it really help?

Are modern codebases with modern practices less buggy than the ones from 20 years ago?


In 1988, Airbus delivered the first A320, with fly-by-wire electronic flight controls, and it was safe.

In 1998, the RATP inaugurated line 14 of the Paris metro, which was fully automated, after formally proving that its software could never misbehave.

Gitlab didn't exist back then, and yet these companies produced code that was safe.

I guess the main driver of code quality is whether the company cares, and has the proper specifications, engineering before coding, and quality management procedures, before the tech tooling.

It certainly is simpler now to make quality code. But don't forget that software used to be safe, and it was a choice of companies like Microsoft, with Windows, or more recently Boeing with the 737 Max, to let the users beta-test code and patch it afterwards (aka early, reckless agile).

So yeah, modern code looks less buggy. But it's mainly because companies care, IMO.


> It certainly is simpler now to make quality code.

Just think of the log4j fiasco last year. Or the famous left-pad thing. Perhaps you don't import any dependencies, but just imagine the complexity of (for example) the JVM. Point is, you can surely write "quality code", but even with quality code it's much harder to control the quality of the end product.

Requirements have gotten more complex too. 30 years ago people were generally happy with computers automating mundane parts of a process. These days we expect software to out-perform humans unsupervised (self-driving?). With exploding requirements software is bound to become more and more buggy with the increased complexity.


> I guess the main driver of code quality is ...

picking a task that can be implemented using the sort of processes you describe.

Lots of things cannot be.


Quality assurance and software engineering can be applied everywhere, no matter the processes you use to create and deliver the code.

Methods and tools will differ depending on context, but ANY serious company ought to do quality management. At the very least, know your code, think a few moves ahead, make sure you deliver safe code, and apply some amount of ISO9001 at the company level (and hopefully much more at every other level).

Also, a security analysis is mandatory for both industrial code and IT applications, thanks to standards, laws like the GDPR and its principle of privacy by design, and contractual requirements from serious partners. You risk a lot if your code leaks customer data or crashes a plane.

It's the same for having 'specifications'. Call them functional and safety requirements, tickets, personas, user stories, or any other name, but you have to write them to be able to work with the devs, and to describe to your customer and users what you have actually developed.

The 'lots of things [that] cannot be' scares me as a junior engineer.

I feel like they are made by those shady companies that offer 2 interns and a junior to get you a turnkey solution within 12 hours. It also brings back bad memories of homework done at the last minute in uni, and I would never do that again. And as far as I saw, in both cases the resulting software is painful to use or to evolve afterwards.


> describe to your customer and users what you have actually developed.

In the domain I work in, what customers want (and what we provide) changes monthly at worst, annually at best. In many cases, customers do not know what they want until they have already used some existing version, and what they want is subject to continual revision as their understanding of their own goals evolves.

This is true for more or less all software used in "creative" fields.


I don't understand how this practice makes your modern code more reliable, sorry

I was replying to

>Are modern codebases with modern practices less buggy than the ones from 20 years ago?

I understood @NayamAmarshe to be asking about new practices and tools introduced after my examples, in the 80s, 90s, and early 2000s (mostly with agile everywhere, and V-model methods becoming a red flag on a resume and in business meetings).

It seemed to be the essence of their question.

So all I was saying was that code from back then was capable of being safe. Reliability wasn't invented by modern practices.

Modern practices have only changed the development process, as you mentioned. Not the safety. And where they did have an effect, it was on safety itself, as doing provably safe code with the new practices is still being researched at the academic level. (Check out the case of functional safety vs/with agile methods.)

Can you explain how you make your code less buggy than code from 20 years ago written with the practices from back then?


My point was that you cannot use the software development processes used in planes and transportation systems in every area of software development. Those processes are extremely reliant on a fully-determined specification, and these do not exist for all (maybe even most?) areas.

If you're inevitably locked into a cycle of evolving customer expectations and desires, it is extremely hard and possibly impossible to, for example, build a full coverage testing harness.


yup, 21st century practices for 21st century business needs

but they don't make the code less buggy per se. They just allow you to patch it faster.


IMO yes. Software is a lot more reliable than it was 25 years ago. This boils down to:

1. Unit/regression testing, CI

2. Code reviews and code review tools that are good.

3. Much more use of garbage collected languages.

4. Crash reporting/analytics combined with online updates.

Desktop software back in the early/mid nineties was incredibly unreliable. When I was at school and they were teaching Win3.1 and MS Office we were told to save our work every few minutes and that "it crashed" would not be accepted as an excuse to not hand work in on time, because things crashed so often you were just expected to anticipate that and (manually) save files like mad.

Programming anything was a constant exercise in hitting segfaults (access violations to Windows devs), and crashes in binary blobs where you didn't have access to any of the code. It was expected that if you used an API wrong you'd just corrupt memory or get garbage pixels. Nothing did any logging, there were no exceptions, at best you might get a vague error code. A large chunk of debugging work back then would involve guessing what might be going wrong, or just randomly trying things until you were no longer hitting the bugs. There was no StackOverflow of course but even if there had been, you got so little useful information when something went wrong that you couldn't even ask useful questions most of the time. And bugs were considered more or less an immutable fact of life. There was often no good way to report bugs to the OS or tool vendors, and even if you did, the bad code would be out there for years so you'd need to work around it anyway.

These days it's really rare for software to just crash. I don't even remember the last time a mobile app crashed on me for example. Web apps don't crash really, although arguably that's because if anything goes wrong they just keep blindly ploughing forward regardless and if the result is nonsensical, no matter. Software is just drastically more robust and if crashes do get shipped the devs find out and they get fixed fast.


It improves velocity, not code quality. You can achieve the same quality levels, but making changes takes much more time.

Delivery costs of software is way down in many domains (SaaS teams frequently deliver dozens or hundreds of releases a day). That would not be possible without automated tests.


Is that not self-evident? Yeah, they're a pain in the ass, but you need them if you're going to go refactoring around in the codebase.


I think it was worse overall, but it really depends on what you want to measure/value.

I loved having the 6 white "Inside Macintosh" volumes and being able to sit on a couch and read them. I loved that, if you paid to be in the Apple Dev Program, you could get your questions answered by actual Apple engineers working on the bowels of the product you were interfacing to. (We were doing some fairly deep TrueType work in support of the System 7 launch and just afterwards.)

What sucked was there was no web in any practical sense. Code sharing was vastly more limited (CPAN would still be 3 years away). Linux 0.99 wasn't out yet. CVS was state of the art and much collaboration was by emailing patches around. What you were building on was what the OS and proprietary framework/compiler gave you and you were building almost everything else from scratch. Expectations from users were lower, but the effort to reach them was still quite high. Software that I worked on was sold in shrink-wrap boxes in computer stores for high prices. Therefore, they sold few units.

Compensation was a mixed bag. I was making more than my friends in other engineering fields, but not by a lot. The multiplier is significantly higher now in tech.

On the plus side, I was shipping code to go into a box (or be ftp'd from our server), so I can't ever recall being paged (I didn't even ever carry a pager until 1997 and working in online financial services) or bothered when not at work.

I think the world is better today: much easier to build on the shoulders of others, much easier to deliver something MVP to customers, get feedback, and iterate quickly, much easier to A/B test, much easier to advertise to users, much more resources available (though some are crap) online, and the value you can create per year is much higher, leading to higher satisfaction and higher compensation.


You’re lucky to be programming now!

I’ve thought about this a lot because I grew up with an Apple IIc in my house, but didn’t learn to program C until 2001. My parents didn’t know how to program. We learned to use the computer for typing and desktop publishing. Programming books were expensive. Even the commercially used compilers and IDEs were expensive. Mac developers paid for CodeWarrior [1]. I don’t remember source code being available easily. Aside from “hackers” who have a knack for figuring out code, there wasn’t really a path for “read the docs” people to learn unless they lived near a place with lots of physically printed docs (a company, a university, a parent who programmed). Disk space was a constraint. Programs were split across multiple floppy disks. Computers would run out of space. The length of variable names & things like that mattered because shortening them let more code fit on a disk. I’m not old enough to know how much that affected full computers, but it made a huge difference on a TI-82 graphing calculator. That was the first thing I learned to program & cutting lines of code left room for games. I assume a lot of bad habits that make code hard to read came from that time. Oh… there was no source control. Revisions were just named differently or in a different directory, if they existed at all. And… grand finale… project plans included days or weeks to print disks and paper user manuals so apps could be sold in stores :-)

[1]: https://en.m.wikipedia.org/wiki/CodeWarrior


> I’m not old enough to know how much that affected full computers

I recently helped someone open source a roguelike they wrote in the 90s - https://github.com/superjamie/alphaman-src

The source is terse as you describe, some parts are written in assembly, and strings are actually one big string with functions that extract substrings. There isn't a clear PRINT statement in the whole program.

All of this because he wanted the source to fit on one floppy.
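
For flavor, here is a minimal C sketch of the packed-string trick described above; the names, the separator, and the layout are illustrative, not taken from the actual alphaman source.

  /* One big string plus lookup by index: all the game text lives in a
     single blob and callers extract substrings on demand. */
  #include <stdio.h>
  #include <string.h>

  static const char all_text[] =
      "You hit the mutant.|The mutant hits you.|You found a ray gun.|You die.";

  /* Copy message number n (0-based) into buf and return it. */
  static char *get_msg(int n, char *buf, size_t bufsize)
  {
      const char *p = all_text;
      const char *end;
      size_t len;

      while (n-- > 0) {                    /* skip n separators */
          p = strchr(p, '|');
          if (p == NULL) { buf[0] = '\0'; return buf; }
          p++;
      }
      end = strchr(p, '|');
      len = end ? (size_t)(end - p) : strlen(p);
      if (len >= bufsize)
          len = bufsize - 1;
      memcpy(buf, p, len);
      buf[len] = '\0';
      return buf;
  }

  int main(void)
  {
      char buf[64];
      printf("%s\n", get_msg(2, buf, sizeof buf));  /* "You found a ray gun." */
      return 0;
  }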


I can go back 26 years. Sorry.

It was a mixture of VGA CRTs and VT220s. I just missed token ring.

New graduate, started in Australia at $35k. That's about $65k in current dollars. No stock, maybe a bonus.

Manuals were binders or printed books. Builds were managed by the "Configuration Manager", and coordinated nightly. The languages were C, Ada, Assembly.

The network was BNC cables.

Design reviews were a thing, code walk throughs were a thing. People were trying to work out how to apply UML.

Printers were still a mix of dot matrix and lasers.

Design patterns, I think, had just become a thing.

Work life balance was okay. Everything was waterfall, so you ended up in death marches semi-regularly.

Linux was just ascending, but big-iron Unixen ruled the dev environment. Microsoft was still trying to work itself out (Microsoft Mail, Lotus 1-2-3, Domino).


> The network was BNC cables.

Don't kick the cable under your desk, or the network goes down for everyone!


It depends if you were doing things on mainframes or PCs.

Mainframes and business computing was much the same as it is now. Struggling to get to the bottom of requirements, lots of legacy code, significant dev/test/production handover. In the early 90s business was going in the direction of 4GLs (fourth generation languages), which were a more model-driven approach to development. Something that wasn't really abandoned until the mid 00s. There also would have been a lot of off-site training and formal courses. With little more than the formal documentation, training facilities were big business. People were also specialised - so it would be common to fly in a specialist (highly trained) person from IBM to come and do something for you.

PC-based software was great because it was simple. Early 90s were just getting into file-shared networking, so not even client-server. Security wasn't an issue. Applications were simple CRUD apps that made a big difference to the businesses using them, so simple apps had high value. In the 90s, just being able to print something out was a big deal. App environments were simple too. You could have one or two reference books (usually the ones that came with the product) and you'd be fine. You could be a master in one environment/tool with one book and some practice.

Embedded software was a nightmare, and required expensive dev kits and other debugging scopes and hardware. Arduino is only from mid 00s, so the early 90s were highly specialised.

Networking and comms were also in their infancy early 90s, so anyone working in that area had it tough (Ethernet and token ring were still competing). Although the networking people of the day forgot to put in a bunch of security stuff that we are still trying to make up for today.

Not much different to today then. Some boring enterprise stuff, and some exciting new stuff - with all the differences in perks and remuneration.


I agree not much different. I started writing code for a living in the early 1990s. How I wrote code then was not much different from how I write code today: using a text editor, write a little, test a little, keep building up working code. I debug mostly with print statements and logs, which is how I've always done it.

I've never liked IDEs and still don't use them. Version control is much more widespread. My favorite was Mercurial, but I'm now mostly using git, as swimming against the current there doesn't seem to be worth it.


In addition to what others have added: monthly magazines like Dr. Dobb's Journal were eagerly anticipated, and this was _the_ way to keep my skills "modern". User Group meetings, and rare conferences (which were vendor-driven), were others.

It was you and the black box. Manuals helped a bit. Books helped a bit. But largely it was you and the stupid box.


About 30 years ago, Java hadn't even shipped version 1 yet.

Windows 3.11 was all the rage.

Mobile development wasn't a thing.

Networking personal computers was a big deal. There were several architectures up for grabs, and several major players (including Netware) in the mix. It wasn't clear which path developers should follow.

The internet (on personal computers anyway) wasn't really a thing yet.

On Windows, the Component Object Model had just been shipped, and it was all the rage with Windows developers.

Keeping source code in version control was a novel idea. (Mercurial and git were 10+ years away)

Many things haven't changed though. It was important to be current on technology then as it is now.

It continues to be important to be able to communicate well. Software is a domain of the mind because we take ideas in our head and translate them to bits. You need to be able to have a mental model of the problem space and communicate it clearly with peers and clients.


Source code control was a known thing. I had been using PVCS for 6 years by then - and, when on Unix, RCS.

I got annoyed in the 80s with people who had messed the code base up so we could not compile from a fresh checkout.

Yes, communication between people was and still is the most important thing. Note that this usually needs in-person meetings.


Yeah, I remember switching from CVS to svn in... the late 90s? I am not sure when. I think CVS was already a well-established thing by 1992, with many people using it (and others using proprietary version control systems). But it's true that it wasn't yet as universal as it is now; small shops might or might not use version control.


> and when on Unix RCS

And SCCS was available before that.


I had forgotten about PVCS! Thanks for that memory. :-)


1995. More fun, less pay and a more distinct subculture.

Usenet was 1995's Stack Overflow. You'd get good answers and perhaps a lecture on not using usenet for commercial purposes or to cheat on a CS homework assignment.


I came in around 1998, so 6 years shy of the 30 year mark. I still got to play with a lot of stuff from that era though. Lots of Novell Netware installs. NE2000 NICs with BNC (and 50 Ohm terminators) proliferated. I became familiar with Btrieve. I had a few clients on homegrown DBIII/CA Clipper/Visual FoxPro. The big projects were moving one of them to a nice Borland Delphi 6 app that we were writing. There was VB everywhere. I really hated it but for getting a UI going quickly, it was hard to beat. If you found a control to use in your app, it was most likely something you had to pay for. (A calendar/date-picker for example) I used NNTP/Newsgroups to get answers to my programming questions about Delphi/Object Pascal. The idea of a dynamic web was only slightly on our radar. We tried Python, Perl DBI, and finally landed on PHP. That paid the bills for quite awhile. The internet was so fresh and new. There really was this idea that anything was possible and so much was yet to be discovered. I look at today and think, "are we really done? this is it??"


Where I was there was no "work from home"; you carried a beeper and were expected to fix any issues no matter the time. Overnight there would be a full backup to tape, with the system "off-line" for a few hours. 100,000 records was considered huge (this is in a 5 billion USD company). Some data would be sent using dial-up to various financial institutions.

But, the work was very interesting and much more fun compared to today. There were many more different Operating Systems to work with and almost everything was custom code. Now, seems all we do is deal with purchased ERP systems and lots of bureaucracy when issues occur. 30 years ago, we could make changes directly on production systems in emergency situations, now, never.

(edit fixed spelling)


> you carried a beeper

We had alphanumeric pagers and in the office was a terminal where one could type all sorts of stuff. Instead, the boss always typed "call shop". Nothing else. I was not willing to pay for a cellphone for myself, so if I got the page during the commute, I just continued on to the office. Where I'd be harangued for failing to find a pay phone (what‽). He always got angrier when I'd type in what he would have said if I had called in. He also got furious when I identified how one customer stole over $30k/year in inventory. That was the most dysfunctional job I ever worked at.

This was almost 30 years ago. I had gotten run over by a car and while I was healing, I started dabbling in programming around the office to organize some things that were out of control. Like the filing system was a write-only, read-never system. It would have been cheaper to just shred everything.

And to answer the OP...

Since it was all Windows stuff, I had to go to book stores and look for Wrox (red covers). Back then, they were decent, but that publisher turned into "shovelware" (a dozen photos on the cover and every chapter written by someone else, with hardly any coherence in the book). Apress ended up replacing them. And now stuff changes so fast that books are obsolete before they can get through the publishing process.

"Source control" usually consisted of making zip archives on floppies. If the place had Source Safe, they better have good backups because SS had a problem corrupting files.

At my first "real" job programming, one clown who was responsible for a multi-million dollar product did not like to back up files. Nor use a UPS. So when his hard drive crashed, it turned out that his SourceSafe files were corrupted and we lost about a year's worth of development out of him. That got covered up.


I sometimes got to take the backup tapes to the bank vault. That was fun. SDLC was the dial-up; I had to set that up for an AS/400 for various things, including downloading patches from IBM.

When I had my first service released to live at AOL, in 1997, they gave me a pager and said welcome to operations. I was so pleased with myself. We did have remote work by '97 or '98 though, though mostly just while cranking out code; then you came in to get your next project.

AOL had a nice rhythm: projects were in the three-month range from design to deployment. You saw your work used at scale but had time to do it well.


I wrote my first code in BASIC on an HP3000 in 1975. There were a couple of CRTs in the computing room but they were hotly competed for by Star Trek players so for the most part we worked on teletypes and stored our code on paper tape.

When I started professionally 15 years later I worked at a desk in an office cubicle. I wore a shirt and tie and kept regular "office hours." Our team worked in C++ on Windows as part of a vanguard effort to adopt object-oriented techniques at the bank. The design processes and project management techniques however were still very waterfall. We had lots of meetings with stakeholders and project managers. We made and revised lots of estimates and the PM used lots of complicated charts and timelines to keep track of it all. The Internet and the web were still a couple of years off and all our work was "client/server" which mostly meant thick clients talking to SQL Server, manipulating the data and then writing it back. Feature requests, code and db schemas tended to balloon and estimates were never met unless they were for very small and discrete changes.

I'm still working in the business today, though now I do cloud infrastructure and systems engineering and use linux. Obviously so much has changed that it's difficult to wrap it up narratively but if I had to pick one dramatic thing it would be access to information. It's hard for me to imagine myself back in those pre-web days even though I lived through them. I am fairly certain I could not do my job today without Google search and other online assets, and yet somehow we made do 30 years ago with actual printed manuals that you had to buy. I paid almost $150 for an IBM manual describing the VGA adapter port/IRQ mapping so I could write some mode X effects. :). I still have it somewhere down in the basement. When Microsoft first launched MSDN and made info available on CDs it was a revolution for me professionally. I suspect engineers who have grown up with all the technical info in the world at their fingertips would definitely feel like 1990 was the dark ages.


RIP San Diego Technical Books 8-( It wasn't just S/W engineering that was different, it was the whole world. I graduated uni, and started work in 1983. There was no internet. Although you could write to the NIST and they would send you free boxes full of books on TCP/IP. The best part was, everyone wasn't lost in make believe. I mean there were books and movies, and of course LSD, but all of that is a temporary escape. After 6 or 8 hours, the LSD wears off. Now, people are lost in make believe 24/7. You think buying a house or being treated fairly by an employer is hard now, just wait. You're gonna be so much more f__ked in 20 years, this is gonna look like the great old days. Just keep gushingly throwing yourself at whatever corp exploitation scheme is popular at the moment (twitverse, twerk tik, app store) it's all gonna work out, for ownership... Best of luck with that...


San Diego Technical Books -- one could easily spend an afternoon in there! However, I couldn't afford to shop frequently there.


40 plus years ago my first programming gigs were for financial institutions creating projections for mortgages and savings plans for IRAs on an Apple II. They paid me an eye watering $50/hr which was an incredible rush for a high school student.

After grad school my first job involved building the equivalent of the WOPR (WarGames ref). The AI was written in Lisp, the interface was written in C++ using InterViews/X Windows, all the simulations were written in Fortran, and Ada was used to glue all the pieces together. Except for the simulation code it was all written by a team of 3 of us.

Greenfield projects were truly greenfield. You had to invent almost everything you needed. If you wanted a dictionary data structure, you built it yourself. Over-the-network communication meant having to develop your own TCP/UDP protocols. Imagine doing leetcode problems in C++ without any libraries or templates.
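
For flavor, here is a minimal sketch in C of the kind of dictionary you would hand-roll back then: a fixed-size, chained hash table with no deletion, resizing, or duplicate-key handling. All names here are illustrative, not from any particular project.

  /* A bare-bones string -> string "dictionary": fixed-size hash table
     with chaining, no deletion, no resizing, no duplicate-key handling. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define NBUCKETS 127

  struct entry {
      char *key;
      char *value;
      struct entry *next;
  };

  static struct entry *table[NBUCKETS];

  static unsigned hash(const char *s)
  {
      unsigned h = 0;
      while (*s)
          h = h * 31 + (unsigned char) *s++;
      return h % NBUCKETS;
  }

  static void dict_put(const char *key, const char *value)
  {
      unsigned h = hash(key);
      struct entry *e = malloc(sizeof *e);
      e->key = strdup(key);
      e->value = strdup(value);
      e->next = table[h];          /* push onto the bucket's chain */
      table[h] = e;
  }

  static const char *dict_get(const char *key)
  {
      struct entry *e;
      for (e = table[hash(key)]; e != NULL; e = e->next)
          if (strcmp(e->key, key) == 0)
              return e->value;
      return NULL;
  }

  int main(void)
  {
      dict_put("lisp", "AI engine");
      dict_put("fortran", "simulations");
      printf("%s\n", dict_get("fortran"));   /* prints "simulations" */
      return 0;
  }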

Memory bugs were the bane of your existence. There was almost no tooling for debugging other than gdb. Testing was very manual and regressions were common. Source control was originally at the file level with SCCS/RCS, and when CVS came out it was the greatest thing since sliced bread.

Death marches were sometimes a thing. I remember one 6 week period of 90 hours a week. While it wasn't healthy, it was easy because we were exploring new frontiers and every day was a dopamine rush. You had to fight with yourself to maintain a decent WLB.

Like now, there were always battles around what new technology would be the winner. For example in networking we had Ethernet, token ring, fddi, appletalk, netware and a few others all vying to become the standard.

Working from home meant dialup into a command line environment. So every developer knew how to use vi or emacs or both.

The biggest difference today is you stand on top of all this technology and processes which have matured over the years. This means a developer today is MUCH more powerful. But if they didn't live through it, much of that mature technology and process is just a given without deep understanding, which most of the time is fine, but once in a while can be a problem.


$50/hr, good heavens. My salary for my first professional job out of university was disappointing even for the time; it wouldn't qualify as minimum wage these days in some places (U.S.). But I worked with great people on interesting problems (airline reservations in IBM/370 mainframe assembly language). And I had flight benefits, which I made good use of.


Salaries were good but nothing out of line with other entry-level white-collar professionals. So that part is much better today.

Everything else I'll say was better 30 years ago. Despite the lower salaries, programming was a much more respected profession. Today programmers are only seen as unthinking cogs in an agile sprint, where PMs run the show.

Quality was much higher, mostly as a byproduct of engineering being driven by engineering, not PMs. You got to own all the technical and most product direction decisions. Only occasionally would someone from sales come in and say Big Customer wants such-and-such a feature, let's do it.

Work-life balance was generally much better since you planned for the long haul, releases maybe yearly instead of permanent sprinting without a long term plan as today with agile.


There was a lot of great documentation on the hardware and software (Example: https://archive.org/details/1990-beats-steve-amiga-rom-kerne..., https://archive.org/details/Atari400800HardwareManualNovembe..., similar for Apple II series, schematics, commented OS listings).

There were fewer outlets for information but they were of high quality. So, the "signal to noise ratio" was significantly better.

As others have mentioned the systems were simpler and therefore more understandable. This meant that people's creativity really came out. Look up copy protection or how people cranked up the speed of the disk system on the C64 or Atari or Apple.

Tools cost money - free compilers or assemblers were few and far between. The syntax and usage were simpler and compile times were quite low.

There was no memory protection, so the application you were developing could easily take down your development system.


You could have a highly prescriptive spec, or almost none. It depended what kind of s/w shop you worked in.

I interviewed candidates for a s/w position who fronted with serious Lisp experience on a live deployment: traffic light control systems. (We wanted C, but it stuck in the mind.)

Compile-Edit cycles could leave you time for lunch.

SCCS was still in use, RCS was just better.

You had to understand byte/short/word/longword behaviours in your compiler. Unsigned was tricky sometimes.

FP error was common. Not all the bugs were ironed out of libraries (NAG aside. They were really reductionist)

Use of global variables was not yet entirely anathema

The CPP could run out of #defines still.

PDP-11s were getting more uncommon but not dead. VAXes were common. Suns were mostly 68000-based.

There was a gulf between IBM and their seven dwarves and everyone else. UNIX was not quite ubiquitous off campus but becoming so.


> You had to understand byte/short/word/longword behaviours in your compiler. Unsigned was tricky sometimes.

For some of us, this is still the case.


RCS was worse than SCCS on every axis except marketing. It was claimed to be faster but was slower. The code quality was abysmal.


Can't deny, but not what we believed at the time.


I was fooled too.


> Compile-Edit cycles could leave you time for lunch.

Database queries at my father's company were started on Friday afternoon and were finished on Tuesday. Those same queries take milliseconds nowadays.


30 years is a bit before my timeline, but the biggest difference I recall from around then is that books were the most important learning resource you had. In some cases they were the only resource you had available. And interestingly, in many cases the books even came with media (often on gasp a floppy disk) that had some code samples, errata, and lessons to follow.

What online resources existed for programming at that time were few and far between, and it was much harder to find help on them. Programming videos were almost non-existent.

Spending time in a bookstore and trying to find a good book to solve your problem was about the best strategy you had in many cases.

Today, technical books still exist in the remaining bookstores of course, but the proportion of genuinely interesting books is much worse. Technical shelves are now filled with garbage mass market books like how to use your iPhone and the like.


Actually BUILDING something that you could point to and say - depending on organization size - I WROTE THAT, and it's being used daily across the company and in some cases globally. I wrote an entire sub-system that, when outsourced to IBM, was billed in the contract at $500,000 a year to maintain. I was STILL at the company and not making even 20 percent of what IBM wanted. When IBM took over I moved on from development to Unix systems and architecture - BEST decision I made. So, compare being a craftsperson then to a factory assembly-line worker today.


30 years ago was right around the time I wrote JPEG viewing software so I could look at JPEGs on my computer. :) I mean, I got the actual JPEG decoding from whatever the standard open source implementation was at the time, but then paired it up with my own code to display to my graphics card, which could handle a rather fancy 32K colors. Then I could view JPEGs downloaded from Usenet.
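
To give a sense of what "my own code to display to my graphics card" involved: the decoder hands you 8-bit R/G/B samples, and a 32K-colour card wants 15-bit pixels, so somewhere you wrote a conversion like the sketch below (illustrative only - the exact bit layout and how you poked the framebuffer depended on the card and the video mode, and the function name is made up).

    /* Roughly the packing arithmetic behind "display on a 32K-colour card":
     * squeeze 8:8:8 RGB down to 5:5:5 by keeping the top 5 bits of each
     * channel. Everything card-specific is left out on purpose. */
    #include <stdio.h>

    static unsigned short pack555(unsigned char r, unsigned char g, unsigned char b)
    {
        return (unsigned short)(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
    }

    int main(void)
    {
        /* pure red, mid grey, white */
        printf("%04X %04X %04X\n",
               pack555(255, 0, 0),       /* 7C00 */
               pack555(128, 128, 128),   /* 4210 */
               pack555(255, 255, 255));  /* 7FFF */
        return 0;
    }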

Those were my college years. Prior to that, in the mid-80s I developed in Forth / 6502 assembly on my Commodore 128 at home, and QBASIC (sigh) on IBM PCs at work. After college, we were programming in C++ on Gateway PC clones -- had one at work, and a slightly less powerful one at home. A helpful co-worker introduced me to Perl, which quickly became an essential tool for me. Around this time we finally had version control software, but it was terrible.


I built my first website in 1993, about a year before Netscape Navigator was released. There were no books, there was no javascript, no css. There was no PHP. It was just me, View Source and a netcom html hosting page.

When I went professional in 1998, the .com boom was underway. It was a wild time full of challenging work and exponential rewards. The hours were long, the pay was crap, and the value of programming wasn't fully acknowledged.

Compare that to today. I have banker hours. The pay is great. My job appreciates me. However, my coworkers are all 20 years younger than I am and make similar or more pay with half my experience. But experience beyond 5 years in this industry is treated as irrelevant; only the mention of huge companies' names on my resume is worth anything beyond that.


> in terms of processes, design principles, work-life balance, compensation.

Architecture and design is a little bit more defined now, and there's a long history of what works and what doesn't, if you have enough experience to look for it. The actual SDLC and workflows are a joke today, everyone just pays them lip service and most don't even understand why they do them at all. We struggle more today with local dev environments because they've become over complicated. Adjusted for inflation we are making $50K-$100K less today than we did in 1998. We still have devs who don't want to know about how their app runs in production, leading to the same bugs and inability to troubleshoot. Apps are getting larger and larger with more teams that don't understand how it all works.

There's a lot more software out there today, and a lot more information (much of it not good quality) so there is more that can be done easily, but it needs to be managed properly and people struggle with that. Security is, amazingly, probably as bad as it used to be, just the attacks have changed.

There doesn't seem to be real training for the neophytes and they're getting worse and worse at understanding the basics. You can be called a software developer today by only knowing JavaScript, which would have been crazy back then. But now that I say that, perhaps it's comparable to PHP back then.


About 30 years ago was my first for-pay gig. Still a student, I worked for 3 weeks at a small software company (a 100% subsidiary of Siemens, though, and using their offices). I was placed together with a recent hire in an office larger than my living room today. We each had a glass terminal connected to the group's i486 (not PC compatible, I believe) workstation with a whopping 32MiB RAM running SCO Unix (it crashed about once a day but was said to be much more stable than Siemens' own Unix ;-}).

I was to port some C code implementing TIFF into their graphics library. This was before the WWW took off (or was known to me). They had Internet access (with a host naming scheme incorporating building and floor, so byzantine that we found it easier to remember the dotted-quad IP addresses), but I don't recall for what or to what extent it was used. So a good part of the job was reading printed documentation.

vi was used in that group as text editor and I was expected to do the same. I hadn't seen it before and when given the two page cheat-sheet, I thought it was worse than WordStar in (already long obsolete) CP/M. Learned the basics quick enough though and even grew to appreciate it when using a terminal connected via 9600baud RS232 ...

We enjoyed flexible working hours, with a core time from 11am to 2pm where people were expected to be in the office (in order to ease scheduling of meetings, of which I recall none; we met at lunch in the cafeteria though). I had to leave before 8pm though, as then the porter would lock up the building.


One difference I recall is you had to read manuals first. Now, you just code, and when you get stuck: google or stack overflow. But with no good search engines, you couldn't search books like that, so your chance of finding an answer by searching was poor. You wanted to have read the book first, so it was in your head.

Also: way fewer libraries. You might write ALL of an application. You might call the OS to just to read/write files. Today is much more gluing together libraries, which is nowhere near as much fun.


> in terms of processes, design principles, work-life balance

Nonexistent, nonexistent, nonexistent.

Hope this helps :P

To detail: thirty years ago it was more or less clear that waterfall was a dead end, but it was not yet clear what to do instead. UML later kind of redeemed waterfall, but it didn't exist yet.

As for work-life balance, time crunches at some software companies became legendary when shipping time meant shipping time, so people slept in the office and did crazy stuff like that. There were still one-man projects at this time, though that era was coming to an end.


30 years ago the company I worked for had just taken delivery of their first Unix machine (a Sequent Symmetry). It was about the size of a large refrigerator and sat in our machine room blinking at me. My boss, standing in the doorway with a cigarette gripped in his teeth, said "learn how to use this."

There was no internet and we had no books but somehow I figured out the 'man' and 'apropos' commands. From there I read all the section 1 man pages, experimented with the things I found and basically figured out how unix works. Within a couple of months I had a suite of shell scripts doing most of the regular maintenance work for me.

A few months later on a Friday afternoon one of our legacy systems, the one for processing cheques, died (literal smoke coming out of the back of it). My colleague and I had been learning C in our spare time so we volunteered to rewrite it over the weekend. We had no sleep that weekend, but we delivered a working replacement (using ncurses) by Monday lunchtime.

It was a simpler time and a more charming time. The internet has been a game changer, both good and bad. It's easier to learn new things now but there are a lot more new things to learn.

Work-life balance is a hard one for me to answer because my situation has changed. Back then I enjoyed being at work more than I enjoyed being at home so I worked super long hours, I even slept in the office fairly regularly (mostly because of the pub across the street). So there was no work-life balance, but I liked it that way.


So much fun. What people may not remember was that there was a tremendous amount of competition back then. You had Borland and Lotus and many others, and the future wasn't set at all.

There were crazy entrants like Omnis 3/Quartz which had you select statements instead of typing them! So you would pick out "if/then" and then fill out the expression.

Anything you did provided incredible value, so that people were really happy. I was getting into VB right about that time (roughly) and you could build apps so quickly -- and not to replace a solution, but to create the first solution.

And to reiterate what someone else said, I had my own large office in a skyscraper with an incredible view and was treated (not just paid) like a top professional.


The big thing for me was address space size and computer sophistication.

DOS was still popular because it was so much cheaper than a workstation. Single-tasking means exiting your editor to do code check-ins and having to reload your context a lot. Networking was definitely an after-thought that many didn't have access to.

The bigger issue was the device you were programming. Small memory. Pointer sizes that might not be what you expect. Integers that overflow if you fail to think about their sizes. Pointers that cannot reliably be compared without normalizing them first. No threads. An operating system that does nothing other than provide access to files.
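
To make the "pointers cannot reliably be compared" point concrete: in real-mode DOS a far pointer was a segment:offset pair, the linear address was segment*16 + offset, and many different pairs named the same byte. The sketch below shows just the normalization arithmetic in plain C (the struct and function names are made up; real code used compiler-specific far pointer types and macros along the lines of FP_SEG/FP_OFF from <dos.h>).

    /* Why two DOS far pointers to the same byte could compare unequal, and
     * what "normalizing" did about it: fold whole 16-byte paragraphs of the
     * offset into the segment so every address has one canonical form. */
    #include <stdio.h>

    struct farptr { unsigned seg, off; };        /* segment:offset, 16 bits each */

    static unsigned long linear(struct farptr p)
    {
        return (unsigned long)p.seg * 16UL + p.off;
    }

    static struct farptr normalize(struct farptr p)
    {
        struct farptr n;
        n.seg = p.seg + (p.off >> 4);            /* move paragraphs into the segment */
        n.off = p.off & 0x000F;                  /* keep only the low 4 offset bits */
        return n;
    }

    int main(void)
    {
        struct farptr a = { 0x1234, 0x0010 };    /* two names for the same byte */
        struct farptr b = { 0x1235, 0x0000 };

        printf("same linear address: %lu == %lu\n", linear(a), linear(b));

        a = normalize(a);
        b = normalize(b);
        printf("after normalizing: %04X:%04X vs %04X:%04X\n", a.seg, a.off, b.seg, b.off);
        return 0;
    }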

I've used C++ for my entire career. CFront 1.2 was a very different language than modern C++. Sometimes people wonder why I use stdio.h instead of iostreams: it's because stdio.h is basically unchanged since cfront and that code still works, while iostreams have changed significantly multiple times, requiring re-writes.

One thing I miss is some of the color schemes in the Turbo Pascal/C IDEs. Yellow on Magenta and Yellow on Cyan were fantastic. I'm not sure why those color schemes don't work in GUIs -- I suspect it's because the geometry of fonts is so much different than text-mode graphics. Text mode had super thicc and clear fonts made out of so very few pixels.


Knowledge was everything, I bought tons of books and had fun digging into the most obscure and arcane language features. Just mastering things was very rewarding in itself - gave me a feeling of accomplishment and brought the respect of my peers.

Software architecture was a thing - you were given the full responsibility for your component. Something to take seriously. And something you could take pride in, when your process was the first one to run 24/7 without crashing or memory leaks...

Work-life balance was worse for me because I spent lots of "free" time learning stuff to apply at work. Now I'm trying to do things I enjoy. Coding isn't among those things any more - Scrum and its ilk took all the fun out of that.


Started off writing a typesetting system in BCPL on a PDP-11 with our own custom hardware for display and our own protocol for sending information around, with no internet to look things up and a fine of donuts for the team if you blew an overlay with too much code in it, which caused hours of downtime. Then Sun workstations came out and we ported everything to C; productivity increased and WYSIWYG became possible.

No concept of unit testing, integration testing or CI - customer support gave things a quick look over and the program got sent out so we always scheduled a 2 week Bug Blitz to deal with all the issues that the customers found.

Small company, salary was good, regular hours and a challenging environment with all the rapid tech changes


25-ish years ago is when I started doing OS development work. Mostly device drivers and OS kernel work as research staff at a university.

My environment was almost identical to what I have now: a *nix box with a big monitor (then: a DEC Alpha 600 running OSF/1 with a CRT, using ctwm; now: an AMD Threadripper running FreeBSD and an LCD with lxde). All my dev work then and now was done in emacs, driven by Makefiles and built/executed/debugged from a terminal window and managed by a revision control system (SCCS/CVS then, git now). Honestly, most of my dot files haven't changed that much in 30 years.

My compensation is far better now, mostly because I'm senior and work for a FAANG rather than a University.


Finished college in 1999, so not quite 30 years.

It was a wild year or two. If you could spell C++ or Java, you were employable at salaries well above average. Peak dot-com - the web was going to solve everything and make us all rich at the same time.

Then March 2000 hit and it all fell apart. And quickly.

Anyways, salaries were high. Not current SV/Seattle high, but high enough. IIRC, $50-$60k was a common range for new graduates. Equity was there, but again, not to the same levels as today's unicorns or FAANGs (AOL and a few others being the exceptions).

Work-life balance was similar. Lots of start-ups with none at all, but lots of beer and ping-pong. Mature companies (IBM and government contractors) were a bit better. Microsoft somewhere in the middle.

Waterfall was very much a thing at the Lockheeds of the world. Smaller companies were less rigid, but "agile" wasn't yet an industry buzzword (Agile Manifesto was 2001).

IDEs and tooling were nowhere near as efficient and helpful. Lots more plain old editing in vi or emacs or whatever. Compiling code was a lot more manual - makefiles and such at the command prompt. Version control was 100% manual via CVS or similar.

Better? If you got into AOL early, yeah, because you were rich. For the rest of us, it wasn't better or worse, but it was good then and it's good now.


I made the switch from IBM/370 assembly to Java around this time. It was a good six months or so, with Beer Cart Friday, weekly free massages, and massive plasma TVs in the conference rooms (at iXL in Denver). It was a rough few years after that.


Well, there was no WWW (there was the Internet, which meant email - though think of accessing your email from a green and black terminal window - and BBSes, but everything was incredibly incredibly slow), there were no mobile devices (well, there was the apple newton, but practically nobody had one; and there were cell phones - but they were huge devices without a screen like your current phone), pagers were the main way to communicate when away from a landline, etc.

Basically, desktops were where it was at. If you were a programmer, you were almost certainly creating software for desktop computers or mainframes. But, 30 years ago was even before Windows 95. It would have been around the era of Windows 3.1.


WWW existed but only a few dozen websites. I remember it seeming to explode around winter 94.


30 years ago was late 1992.


Yes it was. And the WWW existed with a few dozen websites, as I said.


Compared to how it is now, it's difficult to describe just how small it was. Unless you were at a big tech company or a university, it was treated as a type of office job or trade.

I worked on AS/400 accounting software. (I think my boss stole a copy from somewhere else that ran on System/36.) We supplied this to a few dozen mid-sized, blue-collar companies in town. (One I remember sold gas station supplies.)

Programming would be done on site. Customers would want a new report, or a new field added to their system, and I would come on site and code it for them. I guess in a way it was very agile. But I had no team, just me. And once in a while my boss would stop in to see how it was going, and help when I got stuck.


Well, Suns and SGIs. 3D graphics with OpenGL. Supercomputers yielding to mini-supers for a brief time before HP PA-RISC came and killed them all. Superscalar machines and programming - DAXPYs and vector masks. Lots of multiprocessor development. Still a lot of disk striping, as both memory and disk space weren't “unlimited”. RS/6000s pushing CAD/CAM onto workstations. Being able to do fluid dynamics calculations led to soap-bar cars.

Lots of good Unix work came from that time which fell into place when Linux appeared. Things ported from there to Linux pretty painlessly which greatly reduced the cost of hardware. And that World Wide Web thingy that started to appear was pretty neat even at 19200.


24 years ago (1998) we were paying $1000 per MB to access a computer network at 2400 baud for EDI (electronic data interchange) through Sterling Commerce, for swapping files with vendors. This was pricing and tech from 1990. The modem cost ~$800 as it was a different format somehow. I thought this was all insane, having a personal 128K ISDN line at home, and Netscape was a 16MB download. That would have been a $16,000 download, the price of a new car!


What's crazy is that the EDI market is still cornered, with VANs charging absurd rates. It still shocks me every time I get involved in a project and the EDI portion is like $100,000 a year in VAN charges and they don't blink an eye.


The main difference was it wasn't web development, so the whole process was much more contained. There was usually a smaller footprint of "just your app", so things like configuration or debugging were much easier. The tools weren't as good, and machines were much slower, so that made things more painful. Debugging involved knowing assembler more often. Planning was more waterfall-based, but really, for a developer, I'm not sure that mattered all that much. Teams tended to be much more regional, with much less remote work or global teams. As I recall, work-life balance was better, and compensation was better back then.


Let's see (wow, is it 30 years already?)

- I worked on C code, it was nicely logically divided into libraries and folders and you could build one folder at a time to save time.

- I was still young and not exposed to processes but there were signs (paper signs) in the corridors about RAD (Rapid application development) and QA was a separate department, only my manager talked to them.

- Compensation was rather good and a very few years afterwards it became even better

- WLB was non existent, but again I was young and didn't care

Things were simpler: I knew the code down to which bits the CPU flipped, debuggers used primitive DOS GUIs, and source control was something we were still considering starting to use.


> it was nicely logically divided into libraries

Sounds nice, but don't you remember DLL hell?


Windows-only phenomenon.


WLB means?


In my opinion, WLB = work life balance


What they said, work life balance.


I assume Work-Life-Balance


My experience starting fresh out of college in early '94 was pretty similar to what others here are saying. Parts of it were great - work life balance was definitely better. Systems were simpler and easier to understand. Parts of it were terrible - no static analysis or memory analysis tools, no unit or integration tests. (Well, we had integration tests that were humans running the final product to see if it all integrated together well enough!) Businesses were still very old-school. There were dress codes (though thankfully I didn't have to wear a tie).

What's interesting is that there were still fads of the week. The main product I worked on (a cash register system) had originally been generated by a code generation tool that someone had sold the company as a way to reduce their programming costs. Its main event loop was something like 50 screens long (though in their defense, screens were only about 800x600 pixels then).

We had internet, but it was different than today. Instead of HackerNews, we had Usenet News. It was a pretty good system for what it was. I had a specialized reader just for Usenet. There was IRC and AOL for chat. There was no music or video, though. Images just barely worked.

Because I worked for a large company, we had source control, but it was something written in-house and was pretty bad. (But better than nothing.) We wrote in Visual C++, which was not visual at all in the way that Visual Basic was.

In terms of simpler systems, one thing I just remembered was that because I worked for a large corporation, we had a hardware analyzer that we could use to see the exact instructions the CPU was running in order to debug really hard problems. I only used it once or twice, but it worked really well. (I mean the UI was terrible, but you could get some great info out of it.) I think someone said the device cost like $50,000.


There was no formal process. No dailies, no backlog. Deliverables were a date circled on a calendar. It was still all bullshit and lies.

No emails, no smartphones. Support people had pagers. But you weren't expected to fix world-serving infrastructure remotely. If it failed during the weekend, you'd most likely fix it on Monday. Unless something bad* happened.

There was no Internet, unless you were in academia. Knowledge came from books, magazines and from what you remembered if you didn't have these with you. Having a good network (the human kind) was essential. Although some hermit types made it very far alone... some descended into madness.

There was no Linux. A lot of software was expensive. Most software you could pirate if you knew the right people or had an account on the right BBS. "Shareware" was where the value was. You knew nobody would come after you if you never posted a paper check to the guy who made it to get a license.

Most viruses were not subtle. They would piss you off and trash your stuff. You had to reformat. You learned quick to have copies.

On the "sneakernet", floppies were king. Backpacks full of plastic. Systematic copy sessions between friends. Swapping sequenced media had a rhythm of it's own. A good excuse for drinking.

Hardware was expensive and evolved fast. That new 486 was _impressive_. People gathered around it to see it compile and do "3D" TWICE as fast as the fastest 386 before it. Moore's law in full effect. If you had the latest shit, you could boast about it with _numbers_. Not just some new color.

I knew people who printed pieces of code to study on the bus. Printers were way more important than they are now. They still haven't died and that's too bad. I hate printers.

*I was the guy with the pager that had to leave the bar at 2AM to feed printers. Fuck printers.


I could write a big, long, treatise, but I think others have done better than I.

The main thing was, for me, that very, very few of us actually had any CS/SWE training. Almost everyone I worked with, had schooling/training in other disciplines, and migrated over to SE (I was an Electronic Technician, then an EE, and then an SWE).

It was pretty cool. We didn't have the extreme conformity that I see so much, these days. Lunchtime conversations were always interesting. We also approached problem-solving and architecture from various angles.


A rite of passage that the current generation of Python programmers will not experience, thanks to its insanely better error handling, is the missing semicolon problem. Being a software developer 30 years ago included losing 4 hours of your life debugging a program, only to find it was because you missed a semicolon somewhere.
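
A toy reconstruction of that rite of passage, for the curious (illustrative only - the exact error text varied a lot by compiler, and the names here are made up). The snippet compiles fine as written; delete the marked semicolon and a compiler of the era would typically report a cryptic error at the next declaration, often in a completely different file if the struct lived in a header.

    #include <stdio.h>

    struct point {
        int x;
        int y;
    };              /* <-- this semicolon: remove it and the complaint is
                       reported at the 'int main' line below, not here */

    int main(void)
    {
        struct point p = { 3, 4 };
        printf("%d %d\n", p.x, p.y);
        return 0;
    }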


There was still not much cargo cult on how to develop software.

We knew what we knew, by reading programming magazines, books, or joining some local club.

The luckier ones could eventually connect to some BBS, assuming they could also cover the costs of the long distance calls most of the time.

You really had to account for how many bytes and cycles the program was taking if you were doing any performance-relevant application.

However, if that wasn't relevant, it was still possible to deliver applications in higher-level languages.


It was a lot easier; practically, extremely simple.

1. Processes: in smaller companies there was very little paperwork/red tape. You got the requirements, do the design, have a review, start coding, test, deploy. In most cases I've seen, there were just 2 environments, development and production.

2. Design was very, very simple: not many choices and coding was straightforward, no OOP, a Hello World program had a single line of code. Almost no libraries, frameworks, dependencies. (maybe an oversimplification, but you get the point)

3. Work life balance. We had Duke Nukem 3D parties in the office in the evening; those were the only cases where people did not leave at 5PM. There was no rush, overtime, or emergencies - except for my team that was doing support 24x7, but even that was still fine.

4. Compensation. It really depends on the country, but at that time I had the best pay of my life as a ratio of my salary to the country average. It only declined over time, even if in USD it is a bigger number today. Taxes also rose a lot.

5. Productivity and performance of the code are a lot better, but the life of the developers is a lot harder; the area is just too complex to be really good at over time, the number of changes for the sake of change is enormous, and the fragmentation of languages, libraries and frameworks is insane. There is no one good way to do things right; there are 1,000,000 ways to do it and nobody can compare them all.

6. Not asked, but ...: people were a lot more competent on average. At least what I see in the market today is developers by the kilogram, with very good developers lost in a sea of sub-mediocrity. Also, the pace of change is so fast that most people never get to become experts in something before it changes. It is like working while running.

I am not a real developer for over a decade as I do architecture and management, but I am the most technical in my area of over 1000 IT people; even if I don't write code full time, I am very close still.


1992... Let's see. I had a fun job as a software developer at a university. I was a single person team (for the most part), writing C (for the most part), PC, Mac, embedded platforms. Designed my own electronics for various lab projects.

Did some work on various larger systems, IBM (VM/CMS), DEC (Unix).

Shelves with reference books. Lots of low level code (graphics directly on EGA/VGA etc.). Late nights on BITNET or the Internet. USENET/newsgroups.

Remote access with slow modems.

Borland/Microsoft tooling + tooling from embedded vendors (assemblers etc.).

Compensation: low. Work-life balance: None (I didn't have a life back then ;) ). Design principles: Some OO ideas, break things down to reasonable pieces, nothing too structured. Processes: None.

Are things better? I had more fun back then, that's for sure. For some definition of fun anyways ;). At work anyways.


That was when I put aside Fortran and Pascal, and switched to Lisp, C and Perl. With some Visual Prolog also. That was the last time when I still did some UI programs. The editor was still emacs, nothing much changed.

Processes were easier, but I tried to stop people from editing as root on live machines, to have a test setup, to start using CVS (a big step forward from RCS), to start using i18n (multiple language support), and to start engaging in online communities and conferences. There was not much open source beyond the few known BBS boards and source archives, GNU, freeware and shareware.

Design was waterfall, WLB was proper compared to now, compensation was worse, but I worked mostly as an engineer and did SW programming only as part of the job.

Better? Better tools for sure. Tools and compilers are now free, and you have a huge free support system now.


* The Web didn't really exist so no phoning a million stackoverflow friends for help and code snippets. So...

* Reference manuals shipped with compiler and good reference books were gold

* You could learn the entire API/SDK/Framework and keep it in your head

* Blocks of assembly were a legit way to improve your code's performance

* OOP was hitting mainstream along with all the new ideas and approaches (and pain) it entailed

* Turbo Pascal rocked for DOS development

* Source control was iffy/non-existent

* I loved it


Not sure about 30 years ago, but 24ish years ago, I wrote software and did systems admin stuff with a thick Walnut Creek book of Linux HOWTOs resting on one leg and a thick book about Perl on the other.

Pay was similar to today if you normalize as a function of rent and gas.

I don't recall seeing a test in the wild until I picked up Ruby on Rails several years later, but there was a lot of manual effort going into QA.

I remember there were a lot more prima donna developers out there who tightly controlled swaths of code, systems, etc. and who companies were afraid to fire.

From my perspective, the software development pipeline is much improved thanks to tests and devops and the move to cloud infrastructure has added a ton of stability that was largely lacking back then.


I just realized I can answer this. That raises some unexpected feelings.

I was just a kid 30 years ago learning to program, by curiosity mostly, on an Amiga 500 using AmigaBASIC and some assembly.

It was neat. The manuals were helpful and nearly complete. You just can't get that on modern computers. The Intel manual is a monstrosity.

Sure, if you made a mistake in your program you generally crashed the entire computer. But that didn't really matter much back then on those machines. You just turned it off and back on and tried again.

It will always feel novel to me though because I had plenty of time to grow up in a world without having a computer before they came into my life. When they did it felt like I was a part of a secret world.

Less so these days of course.


You had to read the documentation thoroughly and hack around to master a language/product/library. You couldn't get the gist from a YouTube tutorial and then search StackOverflow or Usenet for your specific problem. We barely had O'Reilly books. If you were doing C on PC/Mac (after installing from 10 floppies) and you messed up a pointer, your machine froze.


> If you were doing C on PC/Mac (after installing from 10 floppies) and you messed up a pointer, your machine froze.

I've always been curious about this. I figured that if you were manually touching memory in systems languages there was a chance you could crash your system.

Why doesn't this happen now, what changed?

And could you potentially fry even your hardware in the old days?


Memory protection is what changed. There are different ways to set it up, and others here will be much more knowledgeable, but the OS works in conjunction with CPU features to generate an error if a program attempts to access memory not allocated to it.

Before (I think) the 386, microprocessors didn't support proper memory protection, so any OS that attempted to run multiple programs was at risk that one program would crash the system. You might see the 'bomb' dialog on the Mac or it might just freeze. The blue screen, I think, started with Windows 95. Until (I think) Windows NT and Mac OS X, the OS didn't implement memory protection. The original Mac had cooperative multitasking: each program was supposed to run in a loop that yielded to the OS so it could e.g. move the mouse pointer and process clicks. Of course at that time DOS didn't have any multitasking at all, although there was the infamous "TSR" (terminate and stay resident) feature, which was used by e.g. DESQview to allow you to switch programs via a hotkey.
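
As a small, hedged demo of what that OS/CPU cooperation looks like from a program's point of view today (assumes a POSIX system; the handler name is mine, and details such as which signal fires for a given bad access can vary by platform and compiler):

    /* Writing through a bad pointer no longer scribbles over the OS or
     * another program: the MMU refuses the access and the kernel delivers
     * SIGSEGV, which we catch here just to print a message before exiting. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        (void)sig;
        static const char msg[] = "caught SIGSEGV: the MMU refused the access\n";
        write(2, msg, sizeof msg - 1);   /* write() is async-signal-safe */
        _exit(1);
    }

    int main(void)
    {
        signal(SIGSEGV, on_segv);

        /* Low memory is unmapped on typical modern systems, so this store
         * traps instead of silently corrupting whatever lived there, which
         * is exactly what it would have done on DOS or the classic Mac OS. */
        volatile int *p = (volatile int *)8;
        *p = 42;

        puts("never reached");
        return 0;
    }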

Typically software can't fry hardware; even the famous 'halt and catch fire' instruction didn't literally cause a fire: https://en.wikipedia.org/wiki/Halt_and_Catch_Fire_(computing...


Google up the Joel Spolsky 12 questions about software development from about a quarter century ago and the answer to most of the questions was mostly no.

No version control meant merges were ... problematic at best, so you developed software without merges, branches, or pull requests, and surprisingly you can run a pre-2K economy pretty well off software written that way ... People tended to own a file at any given time, and that meant we organized files along project tasks. We didn't have "no version control", we just didn't use version control tools because none had been invented - well, maybe RCS or CVS, but nobody used those outside academia. We could still use backups and restores if we needed an old version. Also, all filesystems had something.c, something.c.December.12, something.c.old, something.c.old.old.old, and something.c.bobs.copy. Often the source code would entirely fit on one floppy disk, so you'd just have a disk with a name and version written on it, and that was your "version control" if you had to review version 1.3 from the past. Also network directories with archives named projectname-1995.03.12.tar.gz

One step builds were theoretically possible, "make install" was old stuff 30 years ago, but in practice, unit testing and distribution and approval processes made it simpler compared to now. Not every platform language and technology used makefiles, of course.

Everyone had quiet working conditions 30 years ago compared to "open office" so productivity was higher. Things are better now with full time remote.

Some of the old questions are kind of invalid, instead of the best IDE/compilers money can buy, all that stuff is generally free now.

Another eliminated question: As an industry "hallway usability testing" has been eliminated from the profession. There are too many middlemen "project manager" types who get a salary to keep programmers away from users. When someone's salary depends on preventing enduser input, you're not getting enduser input as a programmer. Maximizing enduser happiness is no longer the most profitable way to sell software, so goals are a lot different now.


> Some of the old questions are kind of invalid, instead of the best IDE/compilers money can buy, all that stuff is generally free now.

I wouldn't say this question is completely invalid now, it's just evolved from IDEs/compilers to SaaS software. You can use open source equivalents and/or host your own, but using clouds (AWS/GCP/Azure) and well-supported, full-featured SaaS products like slack, sentry, ngrok, readme, postman, intercom, mixpanel, notion, asana, netlify, figma, zapier, etc. can save your team hundreds of hours of frustration...


emacs and gcc are "free" free.

I use and like asana, but I assure you it is not free; my company pays a lot for that. I think we get much more value than it costs, but that doesn't mean the cost is zero - I think it's nearly $200/yr/seat, maybe more. Those products have a drug-pusher business model where the first hit is free and then the invoices start rolling in.


I remember UNIX workstations with giant 19" displays, roughly 2 feet deep (60cm), and so heavy you almost needed 2 people to move them. Desks were always 30" deep to accommodate them.

Later, NEC MultiSync monitors. And Sony Trinitron vertically-flat displays.

RSI became a thing, along with ergonomic Kinesis keyboards. And upscale companies got Herman Miller chairs and padded cubicle walls.


speaking as a webdev:

> in terms of processes,

process is a human problem not a tech problem. Process is equally good to crap depending on who you work with

> design principles,

a lot of the questions around design principles have been offloaded to the framework you choose.

> work-life balance

again, human problem. There are crappy bosses and companies and good ones. Hasn't changed.

> compensation.

pretty sure the average dev salary is even stupider now than it was then. Like new grads getting $100k+ at some companies is ... just WTF as far as i'm concerned.

> Are things better now than they were back then?

it was pretty awesome. You could actually be the one person who managed everything on the server.

everything was radically simpler. We used CGI to solve simple problems instead of massive frameworks. JavaScript in the browser was a collection of targeted functions to solve your specific interactive needs instead of a 2nd massive framework.

Even ~17 years ago when Rails was released (and then its clones), it was still just 1 framework that you could wrap your head around that did _all_ the things from the DB to the HTML to the JS.

If I had to summarize I'd say, we've forgotten how to do simple things with simple tools. We've decided that how much RAM you use doesn't matter, and we very rarely care about CPU. We just add more servers.

DevOps is now its own thing because... there's just sooo much more complexity around hosting that it requires enough knowledge to have become its own specialization. That being said, I think we've just started assuming that it needs to be that way - that we _need_ to have all these complicated cloud things to serve sites that rarely need it.

Also email's gone to crap. You used to be able to send email from any server. Now you have to use email sending services or you're very likely to have your email eaten by spam filters.

The human social bits haven't changed.


One thing I can tell you is that in my first job in 1977, right out of university, I had an office with a door that shut and quiet space to work, which I needed because I was working in assembly language and, as there were fewer machines than developers, I had to debug code on paper. I do think this strengthened my debugging skills, in part because machine time was rare, so I had to plan ahead for which interventions would get the most return on my time.

I had an office at my next job, too. Needless to say that's all gone now, except ironically the work-from-home has once again given me a mostly quiet workspace.


My first job (hardware engineer) after college in 2007 gave me my own office with a window! I got really lucky, and not being willing to take a job where I work in an open office full time is one of the things that led me to insisting on (mostly) remote work since that job.


You had to know things and keep them in your head, and have a pile of text books otherwise! Plus no normal app had access to gigabytes of storage, and indeed that was a vast amount for many years more[1]. Skills learnt then in terms of memory and CPU efficiency are still valuable now, though your typical smartphone is more powerful than the supercomputer replacements I used to look after for an oil company...

[1] https://www.amazon.co.uk/Managing-Gigabytes-Compressing-Inde... (1999)


My mother used to be a developer; she worked with .NET mostly, for a company called ITC Infotech about 20ish years ago. Anyway, I found an old paystub of hers for 20K from many years ago while helping her sort out some financial stuff. Which was interesting, because that was what I got as an intern with zero work-ex about 18 years later. (Obviously 20K was worth a lot more then than it is now.) Her work was definitely frustrating, as she jumped jobs to a couple of companies, Satyam and then later Dell, before deciding to do an MBA sometime in the 2010s.

EDIT: The values are in INR.


20 years ago was 2002. Even back in 1995, as an intern working as a computer operator, I was making the equivalent - $10/hour.

If she was working in .Net, it had to be after 2001. Your standard enterprise dev in any major city in the US was making on average $60k-$80K.


This might be in India, so converted from INR


We are in India.


Many of the core challenges that are fundamentally about human factors and collaboration haven't changed, such as the conflict between business requirements, engineering approaches, and the desire for no-code solutions.

Ex.: People were writing about agile, no-code, and the challenges of reducing cost and complexity before we had the terminology for it and before a whole industry of consultants existed to explain it.

Application Development Without Programmers https://a.co/d/2kKeOTx


In October 1992 I was mostly programming in Fortran on a DEC VAX, writing scientific simulation and design software for electronic and optical components. About this time we bought some large CRT monitors, partly so that we could remotely read system documentation off CDs rather than making a 1/4 mile round trip to look something up in the paper manuals in the IT office. We also spent > £20k on a PC with 20M of memory for doing stand-alone modelling (not allowed to be connected to anything else).


In 1992 VAX/VMS from Digital Equipment Corporation was beginning to feel the hurt from various Unix systems that were becoming popular. In 1993, I worked at a US Army contract that still had MS-DOS and we had to do some creative accounting things to get them some systems that could run Windows 3.11.

There were still a bunch of proprietary systems like Apollo that were used in a lot of engineering places. Unix came in two flavors, System V and Berkeley (BSD). Linux hadn't happened yet, but open source had made its presence known by all the utilities that ran under Unix with familiar names prefixed by a g-, like gawk, gcc, gtar, etc. Before the GNU project, each vendor had their own C compiler that came with their version of Unix. The GNU utilities slowly made those obsolete as gcc became the standard C compiler.


I had to wear a shirt and tie to work. An analyst programmer would turn the requirements into a written spec, I would write the code and someone else would write the tests. We had no internet, so if I couldn't work it out on my own or get it from one of the books we had on the shelf then it wasn't happening. No email, no internet, so no working from home. I would leave at 5, get home, take off my tie and stop worrying. The pay was terrible. I loved it.


I personally enjoyed coding very much 30 years ago - first professional gig doing C (moving on from Basic / Turbo Pascal). The good: fun projects to work on, and no cargo-cult programming BS. Lots and lots to learn; programming books were your life. The not so good: no Stack Overflow for answers to problems others had already uncovered and resolved. Reinventing the wheel, making the same mistakes others had already learned from...


Then: adventurous, off-beat, exciting, fun.

Now: mainstream, boring, pretentious, infantile.


I don't miss code merging via zip files - but I do sort of miss not having to depend on a search engine to code. Coding magazines and books were the Stack Overflow of today. It also meant you had to read source code to learn stuff... I still do so.


The code was COBOL. You programmed on a green-screen terminal, or a gold-screen one if you got a newer model.

You earned your pay at night, when a program blew up. The mainframe would provide a core dump, which was printed out on hundreds of pages of 'green bar' paper by the operator. You would go to the source library for an equally gigantic (like 900 pages) of program listing to find the source of the problem. This was accomplished with some hex reading and an IBM card that explained computer instructions and arguments.

'Networking' was always SNA and always secure. TCP/IP was still a few years away.

C was available on the mainframe a little later. Linux on the mainframe likewise happened soon after, too.

You mastered your application domain by dragging paper listings home to read frequently.

Hot-shot programmers might write 'online' programs (CICS), Assembly-language modules (for speed), or maybe do 'systems programming' (mainframe system admin).

It all seems pretty ok now. Probably it wouldn't be fun to go back to it, though.


It was more of an engineering position compared to a manufacturing position today. The work was pleasant and the corporate environment was not toxic like it is currently. Before scrum and rigid agile methodologies destroyed work life, we had few meetings and were left to perform our tasks without constant badgering. Basically we were more respected back then.


I started working around 29 years ago, pay was not great as it was a very small company and my first job out of University.

I was writing assembly code for a fire alarm interface system. Code was assembled using a DOS based system, flashed onto an EPROM for testing on the boards. Debugging consisted of tracing signals through the system, or if lucky, the boards had a spare serial port we could use to print messages.


For me it was mostly the same, but I do prefer simple minimalist systems for my own work.

I started in 1995 building an in-house Windows app for support staff at Iomega. I was a support person, and not a professional programmer, though I had been writing code for 10 years. The project was part of my launch into professional development.

It was a simple program not unlike the type I create today. It did one thing and it did it well. Support staff used it to log the root cause of incoming phone calls. It was used by about 200 employees and then we used the data to try to solve problems that were our top call generators.

Build systems for some languages are much more complex now and the Internet was just getting revved up back then. The best systems to work on seem to be the small simple ones, for me.

Edit: Learning from books instead of the Internet was a major difference. I had some wonderful coding books. A giant book store opened in the mall where I worked (just prior to Iomega) selling discount overstock books. I acquired several dozen computer books and I still have many of them.


I was seeing the question and started wondering how that time might have felt for those people… only to notice a few seconds later that I already did software development 30 years ago. Oops.


I guess my first "writing software" job was in 1995, so that's nearly 30 years ago.

Although that was a job while I was still a university student. (Creating the very first web pages for the university). So I was at the very beginning of my career at that time... and to this day my career has mostly been working in academic and non-profit settings, so not typical to compare, so my memory looking back may be colored by that.

But I'd say it was... "smaller". By and large salaries were smaller, not the crazy salaries we hear about on HN (which I don't really receive, to be honest). Work-life balance was a lot more likely to be there by default, things were just... smaller.

There were fewer jobs, fewer programmers, a higher percentage of them in academic settings. Other programmers were mostly geeks, in it for a fascination with programming, not dreams of being a billionaire. (Also mostly all white men, which I don't think is great).

Even 30 years ago there was (the beginnings of) an internet for learning how to do things from your peers -- largely via mailing lists or usenet groups though. There was a large sense of camaraderie available from such text-based online discussions; you'd see the same people and get to know them and exchange ideas with them.

And sometimes exchange code. I think in some ways 30 years ago may have been close to the height of open source culture. (Raymond wrote the Cathedral and the Bazaar in 1999). As a thing aware of itself, and a culture based around taking pride in collaborating and sharing non-commercially, rather than figuring out how to get rich off open source or "open source". Open source culture was essentially anti-commercial mutual aid.

Also, you still often learned from books. There was always A book about any given topic that had gotten community acclaim as being great, and you'd get THAT book (say the Perl Camel book), and read it cover to cover, and feel like you were an expert in the thing. Which meant other people in the community had probably read the same book(s) and had the same basic knowledge base in common. There weren't as many choices.

I would say things were just... slower, smaller, chiller, more cooperative.

But is this my unique experience or colored by nostalgia?


I started exactly 30 years ago, in 1992. Honestly, I remember it being pretty much the same as it is now - management gave you vague descriptions of what they wanted, demanded precise "estimates" (but they didn't want estimation, they wanted unbreakable promises), changed their minds about everything but still insisted that you were constantly behind schedule. People were constantly quitting in frustration and you were constantly inheriting their work. You were expected to work unpaid overtime constantly to meet your "estimates". If your estimate was longer than they wanted it to be, they changed it and told you to meet it anyway. They were always complaining about how hard it was to find and retain people.

We did it on 386 or 486 machines in DOS with no internet, though. So that was different.


When I started at Apple in 1995, I was assigned to work on QuickDraw GX. You could do incremental compiles, but everyone knew the proof was in the clean compile.

Since a clean compile of the framework took like 8 hours, we would often kick off a clean build from our Macs before heading out in the evening.


You would spend a lot of time and effort doing very simple things by today's standards. There was also a lot less information, especially if you didn't have an Internet connection.

But it was also fun and worth doing.

Personally I wasn't working yet, 30 years ago, so that was just my own side-projects when I had a chance.


Around 30 years ago, I was working for Apple and writing software for internal consumption. I was coding C/C++ in MPW/MacApp and even Lisp using Macintosh Common Lisp. I was also writing some stuff in HyperCard, if I remember correctly. The documentation I had access to was awesome, and I could even talk to the developers who wrote those tools if I needed to (especially the MCL team).

While I did have a lot of information at hand, it was really very little compared to what is available to a coder today when it came to algorithms. But that freed me to experiment. I could step back and consider a problem that was (to me) unique, formulate a possible solution, and test. It was fantastic! I felt like I was always just around the corner from inventing a New Way Of Doing Things, something that would be simply amazing.

Today, you just search StackOverflow, implement something, then move on. Maybe it is more productive, but it is certainly a hell of a lot less fun.

Work life balance? I coded all the time, and I wanted to. I look back on those times and shake my head; I would never put those hours in like that, now that I'm in my 50s.

Compensation was comparable, considering the economy, I think. I was well paid, and I'm well paid now.

Processes? HAHAHA. Build it, test it, ship it. But then again, my team and target audience were small and I could literally walk over to another building to find out what I needed. I think today's processes are much, much better for overall code quality.


That's why your location was so important back then. Less so now, but it still makes a difference.


No. I've been a programmer since 1975, and unfortunately, none of the above have progressed very much. All the really important stuff was done by Knuth, K&R and Xerox PARC around then. Processes are still religious rather than efficient. And so forth. OTOH, I still enjoy the code...


No Google. Comp Sci bookstores only in major cities with universities. 4GLs were state of the art (I worked on DataFlex) and OOP was new and strange to the broader market. 90-hour work weeks and constant crunch time. Pay divided by hours barely better than McDonalds


To get a sense of the industry in 1992, browse through BYTE Magazine from that era. There are quality scans available to read online via Archive.org — I find this index to be useful but I’m not sure how up to date it is:

https://anarchivism.org/w/Byte_(Magazine)

Many of these tomes are huge (300 megabyte) PDF files.

I’ve been re-reading these via another retro computing site, lower resolution scans, but a complete set and optimal for downloading and offline reading.

https://vintageapple.org/byte


Dollar budget: at least $500 a year of O'Reilly books. Time budget: a couple of nights a month at the bookstore looking through O'Reilly-style books. Experiencing phantom BlackBerry buzz. You could understand the full stack (and often had to), but you also experienced it in reduced, elephant-eating chunks. The last time I was managing people it was tricky: how do I fill all the gaps in my people's knowledge? I went from serial terminals to thicknet X Windows, to hubs, to switches. I went from writing your own OS and writing to floppies in your own format to server farms. For someone today, you can't possibly just know all those pieces that I slowly got introduced to.


OK, so let’s temper all of this nostalgia with the lack of search engines and online API documentation — you had to go out and buy expensive physical copies of documentation and look things up in dead tree form.

And floppy disks? Sucked. Sucked big time. Get too near a magnet and your floppy is toast. Get a box of new blank HD floppies and one or two in the box might be bad. Slow as fuck, too. Everybody should thank Steve Jobs every day for killing floppies.

Total compensation is better now, as long as you get RSUs. But you used to be able to rent a solo apartment near work without roommates, and good luck to junior engineers trying to do that now.


A good book on this topic is Coders at Work. It is just a bunch of interviews with 15 OGs including Donald Knuth and Douglas Crockford.


I think that Fabien Sanglard's Game Engine Black Books about the creation of Wolfenstein 3D and Doom are good examples of software development in general and game development in particular at that time. He also has a book about CP-System.

https://fabiensanglard.net/gebbwolf3d.pdf

https://fabiensanglard.net/gebbdoom.pdf

https://fabiensanglard.net/cpsb/sbook.pdf


I have held every job in IT except (officially) security engineer. Software developer is by far the worst one. Grinding story after story and it never ends; there's no way to make an impression outside the dev circle because you're siloed behind PMs, and you're so busy grinding stories that you can't explore the business or collaborate with other departments. It is hell, and most devs don't even know it because it is all they know.

I have been a dev in places where they didn't know what a story or Jira was...heaven. That having been my experience, I made the mistake of taking a standard corp dev job...fml. Career death.

The pay is good but my soul is rotting.


Oh, there was a lot of LAN gameplay among engineers on my team at Apple. After 5:30 or so Carmageddon or Marathon would fire up.


Editors were crufty and basic. Build tools were low-functioning and extremely obscure. Graphics were limited and handling color was hard. Fonts were a dark art, as was sound. Even basic libraries had lots of gotchas, and the common interface library at the time was Motif, which was full of bugs and leaks. Management such as issue tracking was often absent--not even much trickle, let alone a waterfall. And pretty much all interviews were leetcode-based. Maybe my experience was odd, but it seems strange that so many have fond memories of such primitive circumstances.


The operating system, all the drivers, editor and my program had to fit into 65,536 bytes of memory. I learned C++ by buying a heavy (~5kg) box of Turbo C++, which contained diskettes and books, for $145.


I was in high school and Turbo Pascal was my favorite language.

If you encountered a bug, and it wasn't in the manual, well, then you were stuck until you thought of a workaround.

We had an IPX Novell network at school. We developed a chat application called "The Mad Chatter". And it was so much fun being in a class where a teacher trusted us to work on our pet project.

Ridiculously goofy fun.

I got a demo of it running a few months back with DOSBox. https://www.youtube.com/watch?v=fxlie0f7pkE


Thirty years ago, I was thirty years closer to being a junior developer. That changes a lot of things. It makes it hard for me to judge.

Back then, I got to work on one thing, because one thing at a time was all that they expected of me. Now I get pulled in many directions at once, and it's really hard to focus on one thing. But that may not be so much because the processes changed, but because my role did.

Thirty years ago, the problems and projects were simpler, but the tools were worse. It kind of evened out.

I think the processes have gotten more complicated because the projects are more complicated. For the same complexity of project, I think the processes have often gotten simpler, because the stakes are smaller. You're building a web page front end for your ecommerce site? Your process is a lot lighter than when people were building NORAD air-defense systems, even if the complexity of both is about the same.

I still have the same limits I did. 100,000 lines of code is about all I can comfortably deal with; after that I start getting lost.

As I have gotten older (and I worked with older co-workers), the level of interpersonal maturity has gone up. That's been a good thing; both I and some others used to be jerks, and now we're not (or at least not as much). Again, though, that's just tracking with my age and seniority, not the industry as a whole.


I kind of blanked that part out. Yes, it's true. Not so much in the 80s, but by the 90s it was quite common and expected that programmers were all self-important assholes competing for the title of alpha-asshole. Glad that's finally gone.


In 1996 I was the junior developer in charge of source control. The developers would all email their source code to me and I would work through it and check their changes in to CVS.


I got a business license in 1984. Microsoft abused independent contractor arrangements but not everybody did.

I worked primarily on an operating system which supported key-value stores as native file objects (DEC VMS/RMS).

I worked on business apps in VAX BASIC. Actually, I spent a fair amount of time abusing compiler pragmas so that I could e.g. do memory management in VAX BASIC. One of my long-term clients was the guy who invented the report wizard. Literally. The prototypical cockroach company: at one point he travelled around the country in his van helping people install his software so that they'd try it out. There wasn't much in the way of advertising for software; recommendations were largely word of mouth.

I helped write SCADA systems in VAX Pascal. I wrote a DECNet worm; no it didn't escape. I'm probably (at least one of) the reason(s) that DEC implemented high water marking in RMS.

I did LIMS in HyperCard. Very primitive by today's standards. Things like ELISA notebooks. Makes me wonder why with the CI/CD fetish these days there is no GUI for building web GUIs: why are people writing code the old way? (I have more than opinions about this, and I know of one high quality GUI for writing web GUIs.)

There wasn't "open source" as we know it. As a contractor I developed a doctrine of "tools of the trade" because there aren't many good ways to write e.g. string compare.

My first ever CGI was written in DCL, the VAX VMS command line language.

About a third of my work was firm bid. Contracts were generally reasonable, clear, and written to keep the parties out of court.


I remember loading Linux from 200 little floppies and was absolutely thrilled to see it boot up.


Especially since Linux at that time would let you do things that would cost many thousands of dollars to do with other operating systems. Samba on Linux as a fileserver for Windows clients was probably the main gateway for millions of future Linux sysadmins.


Before Google, and before there was actually content on the Internet, it was the MSDN documentation CDs and about 20-30 very large books.

People scoff these days about Stack Overflow, but we were more reliant on examples in print that sometimes wouldn't work. Stack Overflow is just a resource only slightly less trustworthy than a first-edition manual. On the flip side, most coding was quite simple, but for internet work you needed to know Perl or C++ as well.


Started working in tech in 1996. All desktops, no laptops, no mobile phones. Worked your time, maybe more if needed, then you left the office and didn't take work home.

References were books. "UNIX In A Nutshell" and "PERL In A Nutshell" were pretty much on everyone's desks, in addition to the K&R C book and various C++ books.

Systems were simpler in that you didn't have multiple frameworks to deal with.

My teams built supporting tools for a system on OpenStep, so we coded some in Objective-C and PERL to support file system operations. Then our lead found Python and migrated our PERL stack to Python, which was great, but it was Python 0.5 so there was no published documentation at that time. I wish I had kept a copy of my emails and responses with Guido van Rossum because he was the only support available!

Very fond memories ...


Can't say for 30 years ago but 22 years ago it sucked.

- There were no containers/Docker yet, so everything had to be installed on the server every time, manually or by a hand-hacked script, and it was not always easy to repeat on the next server. This produced mysterious bugs. Glad we have Docker today.

- Source control systems of the era sucked compared to Git. Back in the day it was CVS and VSS, then everyone got hooked on SVN about 20 years ago, then there was a duopoly of Mercurial and Git, and finally Git won about 10 years ago. Life got a lot better once we got Git.

- There was no virtualisation yet. VMware had just appeared around that time as a very slow software emulator, barely usable. So you needed several physical computers to test things on, which necessitated an office for at least management and testers; others could work remotely.

- The Internet sucked; it was good only for sending mail and commits, and for some task-tracking systems. Voice communication relied on phone calls before Skype and video communication was impossible, so physical meetings were necessary for remote and partially remote teams.

- Worst of all, there were no package managers yet, so building something from source required an awful lot of work. This is why Perl was so popular in the day even though it was so terribly cryptic - it had a package manager and a big library of packages, CPAN. It was almost as good as npm today and used the same way; now this is expected from any language or platform, but back in the day it was unique.

(And yes, at least Linux was already a thing. I can only imagine how bad it was for those who worked just 5 years before, when they had to rely on commercial server OSes.)


Played around with the very first IBM PC, running CP/M. I value these experiences - computing was much more direct than it is today.

I probably would not enter the field today.


A consulting firm called PeopleSoft(? I think that was their name, someone will correct me if not) did a fantastic study of developer productivity in the late 80's or early 90's. They found that developer productivity was most strongly correlated not with education, years on the job, title, salary, or any of the obvious stuff but with:

1) Square feet of desk space

2) Ability to divert incoming phone calls to voice mail

Square feet of desk space might sound absurd and pompous, but it was a real advantage. Large monitors and powerful IDE's and code search tools were still decades in the future. As a programmer you used a printer and you kept stacks and stacks of printed code on your desk. The more square feet of desk you had, the more efficiently you could lay out and sort through the printed code you needed to look at (this is why Elon told Twitter engineers to print their code - he's a boomer programmer who grew up in this era when that really was how you read and reviewed code).

The value of diverting phone calls to voicemail is more obvious today, but that study was, I think, the first to point out that programming is a flow-state activity where it typically took about 10 minutes to get your mind loaded with all the info and context required for peak programming productivity. They observed that one two-minute phone call was enough to wipe all that context out of your mental stack and drop you from peak productivity back to minimal productivity. If you had one small interruption every ten minutes, you never hit your peak flow state as a developer. (With the massive improvement in IDEs etc., I'm curious whether that preload timescale remains consistent or has dropped, but I'm not aware of anyone doing more recent work on that question.)


> this is why Elon told Twitter engineers to print their code - he's a boomer programmer who grew up in this era when that really was how you read and reviewed code

Just to clear up a misconception: Elon Musk isn't and never was a programmer. He never went to school for it, never had a job in it, and his Wikipedia page mentions nothing about him even having the ability to code. He at one point needed some of his devs' help running a Python program. His reasons for having them print out the code at Twitter are almost certainly not due to "that's how it used to be".

TO BE CLEAR: This comment isn't a dig at Elon. I personally dislike him, but this fact about him doesn't make him a worse person. Just clearing up an obvious misconception that @yodon has about him.


Musk has made reference to programming C++ in the past.


One big thing I remember is how small the community of tech nerds was back then. I used to get excited when I'd meet some rando SWE or other tech nerd. "OMG! You work on VAX/VMS?! Me too! Let's talk about it for hours!" Now, it seems like everyone works in tech these days; in the Valley it's hard to meet someone that doesn't work in tech.


Working as a test automation engineer in '91/92 (although as an intern), I wrote a program that controlled a test rig for seat belts.

The machine was basically a home computer running a BASIC REPL, beefed up with a handful of input/output ports to get data from sensors and operate pneumatic actuators on the rig.

No graphics beyond what you could do with ASCII on a monochrome monitor.

In the end it came to ~2800 lines of code including comments.

Design principles: a) make it work and b) make the code as readable as possible.

Work-life balance: it was a normal engineering job, so basically a 9-to-5 thing. Although I did put in a Saturday appearance at the end to get everything done (including manuals) before my stint ended.

All-in-all it was as much fun as one could have in test automation at the time; writing the program from scratch and adapting the rig where necessary.


If you wanted to use a new piece of software or library, you had to buy it at a store or order it by mail. So you ended up writing a lot of stuff yourself, like your own collections.

No internet or even cell phones, and I didn't have a computer at home - if something broke you had to wait until the next day when people were back in the office.


One main word: Slower.

These days, if you strike a problem, you can get feedback from thousands of guys on the Web who have been there before you. Back then, you didn't have those guys and you had to gradually work out your own solution to the problem. Sometimes you didn't. And had to abort that project.


A note about the world before Git:

Before coming into contact with CVS or Git, the method I used when I was unsure a change would be productive was to simply copy the block, comment out the copy, then make modifications, along with notes about what was different and when.

There were lots of lines like

  ; moved xxx  92-02-01 MAW
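
In a C codebase, the same habit might look something like the sketch below (the routine, the numbers, and the dated note are all made up for illustration): the old block stays in the file, commented out and dated, so you can eyeball the diff or revert by hand.

    /* Compute an order total -- revision history kept inline, pre-CVS style.
       Illustrative sketch only; not code from the original comment. */
    double order_total(const double *price, const double *discount, int n)
    {
        int i;
        double total = 0.0;
    /*  old version, kept for reference -- replaced 92-02-01 MAW
        for (i = 0; i < n; i++)
            total += price[i];
    */
        /* new: subtract the per-line discount  92-02-01 MAW */
        for (i = 0; i < n; i++)
            total += price[i] - discount[i];
        return total;
    }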


Sorta related: I remember the first day of CS101 class circa 1998... the professor starts off with a question: "why are you here? why are you all here?"

After going around the room and replying "no" to a half dozen or so different individuals with increasingly philosophical responses, he broke it down for us: "you're all here because 60k! You're going to graduate with a computer science degree and make 60k a year!"

To even imagine that was supposed to be an inspiring number back then is pretty laughable. I was already making as much doing chintzy web dev stuff.

It was a different time for sure. There were no (or very few) laptops, you took notes on paper, you went to the computer labs to do work (with old sparcstations and such) or you remoted in to your shell and used emacs and gdb. Pretty simple times.


I'm too young to have formally "worked" 30 years ago. I did create a lot of code for fun, and some of that saw practical use for a business within the family.

Therefore, I can't say much about the work part. It may be a result of having grown up, but the most significant change I feel is perspective. Back then there was no tomorrow. No worries (or hopes, or plans) that anything created would need to be maintained, or obsoleted/replaced by anything in future. Everything was right here, right now. Today, anything comes with an expected life-cycle, at least implicitly. Constant worries that the next minor OS, browser, language, ... update is going to break something. Back then, if it ran on MS-DOS 3, it would run on 6. And most likely in a command window of Windows, and OS/2, too.


30 years ago was probably the only time in my life I wasn't really programming. But 40 years ago, programming (for me at least) consisted of going to my local library, bringing home a computer magazine, then manually typing in the lines of code to run BASIC programs. Good times!


From what I remember, there were way more cool people in tech and way less management overhead and process.


Perhaps I was a bit non-standard, but I worked at NASA and programmed networked Unix workstations in C. You had to worry a lot about memory management. The only sources of information were other people's code, man pages, and trips to the nerd bookstore.

Work wise it is still the same deal.


Speaking only for my own experience: everything was written in C, and took 7x longer to build.

UI toolkits cost money.

So did compilers.


The nostalgia effect will make it sound like there were better times in 1992.

However, the issues then are still the issues of today:

* Software maintenance is not exciting but is critical.

* Designing software & systems still requires critical thinking and writing skills.

* Communication is a skill most people still need to learn and practice.

* Yes, there are better technologies and artificial intelligence will save the day, but people are at the heart of all of the needs software is trying to satisfy, so understanding human behavior is very important.


Started interning in Feb '95 during my final semester in college. Windows 3.11/Windows for Workgroups on the PC, MSVC 1.5 (C++) and Sun/Solaris on the servers. Novell NetWare was big around that time. TCP/IP was still not ubiquitous; we had to use third-party libs for the network stack code. X11 cross-platform windowing and Telnet to servers and FTP for file exchange were common. My first job involved writing image processing code in C/C++ and mucking around with TIFF files/headers. The Internet and Netscape Navigator and the early '90s web were cool.


I was a pretty lightweight developer, coding mostly in PowerBuilder and Visual Basic. What I really enjoyed was getting to know the language/platform deeply and being able to deliver full stack solutions quickly. Back then the client would hook directly to the DB, so you created the UI using pre-built components, added some logic to the UI events and then ran the SQL to update the DB. It felt good to create something useful in a day, and since the apps were usually for internal company use, you would work directly with the people who needed them.


Exactly 30 years ago I started my career programming in FoxPro ! https://en.wikipedia.org/wiki/FoxPro


Problem solving is problem solving, so I'm not sure there's much of a substantive difference.

There's this: back then I made good hay from my excellent memory for APIs and the minutiae of system behavior. That's a completely unimportant skill now. Now you can look any of that up in a few keystrokes (at worst -- probably your text editor just pops up the relevant info as you type).

But my main skill is as useful now as it was then: I find solutions to problems. And I don't fall in love with a solution before I fully understand the problem (which is really never).


Tooling wise, a Mac IIcx, a 'full page' portrait display, Think Pascal and a complete set (vol. I to III at the start) of 'Inside Macintosh' books were programming nirvana.


Bad enough to make you want to quit.

People nowadays complain about algos and red-black trees, but honestly, that was the easy bit. There wasn't much open source, so you had to build pretty much everything from scratch. The Internet was young and empty, so big fat books were how you learned. No condensed versions or easy troubleshooting. C and C++ were as dominant as Python is today, but nowhere near as fun. (https://xkcd.com/353/)

In short, the deal was to be a cog. If you were good as a small cog, you'd move up to be a bigger cog. Then you could manage a few cogs doing a tiny bit of a huge machine. The scrappiness of just throwing things together and getting something meaningful quick simply wasn't there.

I left, spent 15 years of my career doing decidedly different things, and when I came back I was overjoyed with how little code you actually needed now to get stuff done.


> I left, spent 15 years of my career doing decidedly different things, and when I came back I was overjoyed with how little code you actually needed now to get stuff done.

Must be a function of whatever you did when you came back.

The amount of code you need to implement the stuff I work on has probably increased (partly because user expectations have expanded).


I was inexact with my wording. I don't think you need less code overall, just that so much of it has been written by others. For sure a lot of the frame of reference depends on what I was doing then and now. Both times involved data and ML, though different domains. *Shrug*, I like it better now.


I did my first paid software development pretty much exactly 30 years ago; specifically, I was hired to port a word processor from the Amiga to the Commodore 64. So my experience is mostly related to the world of 8-bit home computers, already a dying world by then, and I wouldn't be able to tell what working in an office was like, as I was still in school at the time and it was a side project.

The source code is here, by the way: https://github.com/mvindahl/interword-c64

Still, a few general observations about that particular corner and that particular time of software development:

- There were multiple successful 8-bit platforms, all of which were very different from each other. Different makes of CPU, different custom chips, different memory layout. You could be an expert in one and an absolute novice in others.

- The platforms were more constrained, by magnitudes. A very limited color palette, far fewer pixels, far less RAM, and far slower CPUs. For a semi-large project, it could even become a challenge to keep the source code in memory and still have room for the compiled machine code.

- On the upside, the platforms were also far more stable and predictable. A Commodore 64 that rolled out from the factory in 1982 would behave identically to one built five years later. Every C64 (at least on the same continent) would run code in exactly the same way.

One thing that followed from the scarcity and from the stability is that there was an incentive to really get close to the metal, program in assembly language, and get to know the quirks and tricks of the hardware. Fine-tuning a tight loop of assembly code was a pleasure, and one could not simply fall back on Moore's law.

It was a simpler world in the sense that you didn't have to check your code on a number of machines or your UI on a number of window sizes. If it worked on your machine, it could be assumed to work everywhere else.

Another thing that I remember is that there was more friction to obtaining information. The internet wasn't a thing yet, but there were text files flowing around, copied from floppy to floppy, and you could order physical books from the library. But a lot of learning was just opening up other people's code in a machine code monitor and trying to understand it.

Some of these things started to change with the Amiga platform, and once PCs took over it was another world, with a plethora of sound cards and graphics cards and different CPU speeds that people had to deal with.


I wasn't a software developer at that time, but the improvements in scalar CPU performance were insane. Clock speeds frequently doubled (e.g. 33 MHz to 66 MHz).


If you made business apps, then in some ways it was quite similar to today. You built a backend server sitting on top of a database, and a client app with a UI that made calls to the backend. You would have backend and frontend developers, just like today. The difference of course was that you built it on a totally different tech stack. If (like me) you worked with the Microsoft tech stack, then you built everything as a native Windows app.


How about '77? That's when I started. I still have my original K&R (ratty though it is). Worked up from RPGII to COBOL and eventually C. Learned things by reading code as there was precious little in book stores and libraries. When I discovered Unix I was hooked. Eventually wrote and maintained a 4*1/2 gen language comprising 1 million lines of K&R C. That was intriguing. AUD$6m on ASX in '86. I had to write what is now called a garbage collector. I got sick of debugging and wrote a test system in '87. It was insane, but understandable. Once stood on a stage at UnixWorld and said: "When demonstrating systems, never say anything more predictive than: 'Watch this'" And hit return. The code core dumped. It actually crashed, but everybody in the theatre thought it was a stunt and laughed uproariously. I smiled and re-typed the command. This time it worked. And the demo went on. Studied the Dragon Book and others and dug deep (way too deep) into the electrics of IP. And I do mean electrics - CRO and all. Figured how fast electricity travels in a circuit to time latches to build my own boards. The thing was... I felt obsessed. I had to learn. And there were no books. Reading code was like Neo: "I know kung fu!" You had to read code. And assembly. After getting the TCP/IP Illustrated books I dug deep into protocols. Contributed to RFC 2812. It was fun!!! I had found my muse. Spending hours studying hex dumps of network interactions to find timing issues over copper. Writing manuals using the AN macros (Hint: nroff -man). Writing a sysadmin manual in the 80's. Contributed to a game named XBattle. And then... I learned C++ and Perl and Ruby and... and... And then... I got old. I found that "rush, rush, hurry up and wait" started. I contributed to the Agile movement. I worked at a company just 5 years ago that "embraced" Agile. And they got everything completely and utterly wrong. And pushed, and pushed until one day my Scrum Master clicked her fingers at me and whistled and waved me over to a standup. I told her I wasn't a dog. She said "What are you going to do? You're 60! Are you going to quit?" I did. It used to be fun. It used to be enriching. It was wonderful to be able to figure out how to transfer data over UUCP and help build the 'Net. To contribute to the bazaar. To contribute. To increase understanding. To increase the wealth of knowledge for future programmers. And now... Well I'm past it now. I'm 66. I still configure our home network. I still write code. Home automation is a joy. Garden automation is a joy. Corporate IT? Not for me.


Wut the heck happened to my formatting? :-(


hn needs two line-breaks for new line


I was still in HS, but I did write a bunch of programs in Calculator BASIC on my TI-85 and eventually sold a chemistry program for $1 to a classmate.

The big thing is, the internet was new, nerdy, most people didn't even have it, and I ended up carrying around the big thick manual my calculator came with because there was nothing else. Google didn't exist. And in fact I'm not sure what if any search engines there were at the time.


I developed software in the 80's, but it was similar to how it is today. Not much has changed. Books/co-worker talks/seminars became Stack Overflow. But the programming languages of those days look much the same as the ones we have now.

Developing agile as is done today is better though.


For me it was about 25 years ago; the main thing I remember was having a "The <programming language> Bible" on your desk. Several inches thick, and really, before there was anything like Stack Overflow there was this and maybe the occasional rudimentary blog you found to assist.

It felt like true discovery back then.

Oh and before PHP came about it was mostly CGI-Bin <vom>


Human factors are pretty much the same; read The Mythical Man-Month by Brooks for an eye-opener.

You’d often get, “can you start on Monday?” during interviews, 100x better.

Today we have better tools, folks are more serious about reducing bugs, and projects are better about avoiding well-known hurdles; all good.

Compensation is theoretically better but due to rises in housing, education, medical costs I’d say it’s a bust.


There's legendary stories of the very few software developers out there, having huge "conferences" (parties) across Las Vegas and almost every Ski Resort on the planet. To be in demand meant that if you were an Ad Exec or a Software Engineer in the 80s and early 90s, you were set.


I'm very curious to hear about what the interview process was like and how that has changed over time


Started as a C programmer on DOS servers (believe it or not). The network was a token ring. Used Brief as an editor, and we had some key-value database that was always getting corrupted (can't even remember the name).

I do remember liking Brief a lot though

Didn't exactly position me well for the future, haha


I was a small kid back then, but playing around with QBasic on our Windows 3.11 machine was great fun! Also used it to create music: https://m.youtube.com/watch?v=5pwxjJAMjo8


One person was expected to know everything and code anything. There wasn't much specialization.


No leetcode. Your actual skills mattered a lot more than toy interviews with unrealistic edge cases.


- Desktop software was sold in boxes at stores.

- No SaaS.

- No “information super highway”.

- Bigger focus on D2C marketing due to information asymmetry


I was a young developer in a major US city, but on the other side of the country from Silicon Valley.

The best part was a feeling of hope. The industry was full of "evil" companies like Microsoft and IBM but there was a pervading sense that everything was headed in a fun and open direction which would surely lead to a better tomorrow. Nobody loved Microsoft per se but they were absolutely more open and fun than IBM, and Linux and the interwebs were on the horizon and surely we'd have a decentralized digital utopia in another decade or two.

You felt like you were on the cutting edge of that, even if you weren't working on something particularly cool. Kind of like Neo's boring day job vs. his night life.

    processes
Source control and automated tests were rare/nonexistent at many companies.

They were surely standard at larger software companies but not in smaller shops.

    design principles
It was an inversion of today, in a way.

In the 1990s developers had a more solid grasp of computer science-y fundamentals like algorithms and data structures. But we were far less likely to know anything about best-practice design patterns, architecture, and that sort of thing.

People (including me) complain about how modern software engineering is less programming and more a matter of choosing a framework and fitting existing bits together like lego bricks without really knowing how anything works.

What gets talked about less is how frameworks like Rails, React, etc. are generally (generally!) built around tried and true paradigms like MVC, and how this is something that was much more likely to be lacking in software development projects 30 years ago when everybody just sort of "rolled their own" version of what architecture ought to look like.

You had genuinely smart people trying to do things like separate app logic from presentation logic, with varying degrees of success, but even when it was a success it was like... some random nerd's idea of what an architecture should look like (possibly loosely based on some architecture book), and there was a big learning curve between projects.

    work-life balance
Working from home wasn't a thing, which was both good and bad.

A lot of late nights in the office. This was perhaps more true for folks like me doing early web stuff. The technology was changing so quickly under our feet, and was so inconsistent between browsers and browser versions, everything was a mess.

It was probably a little more sane for the guys working with Visual Basic, Delphi, FoxPro, whatever.

And when I say "guys", I really mean guys. It was incredibly rare to see women developing software. It was cranky old greybeards and pimply-faced college geeks who drank a lot of Mountain Dew. Just way more nerdy in general and not necessarily the good kind of nerdy.


Slow network (1 Mbps), slow computers (16 MHz?), 1 MB RAM, spinning disks, tiny screens. The only thing you might recognize from a developer's desk back then was the keyboard. Even the mouse was mechanical, had a cord, and wasn't USB.


Funny you should ask, this just showed up in my youtube feed yesterday:

"What was Coding like 40 years ago?":

https://www.youtube.com/watch?v=7r83N3c2kPw


For the first few months of my first job in 1996 people were using dialup internet.... and I spent a lot / most of my time reading documentation from a giant set of manuals that came with the devkit we were using.


I guess you had to learn things since there was no Google and Stack Overflow.


Our small department of ~20 people had a few bookshelves with stuff like Clipper, Turbo Pascal, FoxPro, Novell etc. They were used very frequently.


It was a special club that you had to earn your way into.


We were 30 years younger so all was fun. The day I got a fax machine in my apartment and work followed me home it all changed.


Jira didn’t exist, so it was much better.


emm386 Error #06

And a subscription to the Microsoft knowledge base which came on CD (in addition to the books others have talked about)

And I vaguely remember that the debugger inserted itself between DOS and Windows, which meant that it could crash Windows if something went wrong.

Fun, but slower than today.


I started 38 years ago. If you can remember 38 years ago, you weren't there ;-)

No internet. Same problems.


Microsoft is about to take over the world. C++ is then what Rust is now.


It was definitely more primitive, but that didn't bother us. Here are some of the things I remember.

When I started, I was coding on a 24x80 CRT connected to a serial-port switch so I could connect to several different systems (e.g. PDP/VAX). A bit later I got a 50x80 terminal with two serial ports and that was considered luxurious. I also worked on Macs and PCs, which were a bit better but less than you might think.

Related to that, lacking anything resembling a modern code-navigating autocompleting tooltip-providing IDE, people actually wrote and updated and read technical documents. Reviewing them was also a Real Thing that people took seriously. Also, books. Another commenter has mentioned Barnes and Noble, but if they even existed then I wasn't aware of them as a resource for technical stuff. The one I remember was Quantum Books in Cambridge MA (later Waltham). It was common to have a bookshelf full of manuals for various UNIX flavors, or X (the window system), or Inside Mac, depending on what you were doing. If you didn't have the book you needed, maybe a coworker did so you'd walk around and ask/look.

There weren't a bazillion build systems and package managers and frameworks. You had tarballs (or the cpio equivalent) often used with for-real physical tape, and good old "make". Automake and friends might have existed, but not near me. It might sound limiting, but it was actually liberating too. Just unpack, maybe tweak some config files, and go. Then see how it failed, fix that, and go around again until you got something worth working with.

A lot of things that seem "obvious" right now were still being invented by people much smarter than those who now look at you funny for suggesting other ways might even exist. I actually worked with folks who developed now-familiar cache coherency algorithms and locking mechanisms (now embedded in ISAs) and thread models and so on. Not much later it was distributed-system basics like leadership, consensus, and consistent hashing. These kinds of things either didn't exist at all or weren't well known/developed. And it was fun being part of that process.

It was possible to know all parts of a system. At Encore I was able to describe everything that happened from hitting a key to a remote file being written - keyboard drivers and interrupt handlers, schedulers and virtual memory, sockets/streams and TCP/IP, NFS, filesystems and block-device drivers. Because I'd worked on all of them and all of them were simpler. I was far from the only one; those were not far from "table stakes" for an OS person. Later in my career it was practically impossible for one person to understand all of one part such as networking or storage. That makes me kind of sad.

Work/life balance was, perhaps surprisingly, not all that different. Crunch time was still a thing. I worked plenty of all-nighters, and even worked nights for extended periods to have access to scarce new hardware. I had more intense and less intense jobs over the ensuing years, ending on one that was at the high end of the scale, but those were all choices rather than changes in the surrounding environment.

Compensation was good, but nowhere near modern FAANG/unicorn levels. Maybe founders and near-founders haven't seen it change as much, but for the rest of us the difference between a top-paying and a median senior/principal/staff kind of position has become much larger. 90% of developers used to be in the same broad economic class, even if some were at the low end and some were at the high end. I'm not sure that's really true any more.

That's all I can think of for now. Maybe I'll edit or reply to myself if I think of anything else later.


You had to wear a tie but you could smoke at your desk.


There was a lot less googling for sure.


Too young to know but my guess: People smoked inside and made funny jokes. And there was way less garbage code.


I got my first job out of college 29 years ago. For the first 10 years or so, the process was I guess you could say agile-like in that we certainly didn't do waterfall, we just talked about stuff, built stuff and banged it straight into production. Testing was not an official thing at all, somebody had a quick look-see if they felt like it.

My first 10 years on the job was Turbo Pascal and Delphi for various shops. Working in that old DOS-based Turbo IDE felt like magic, I remember plumping for a shocking blue, yellow and pink colour scheme - I miss Pascal. The move to Delphi was a huge change, OOP, native strings over 255 long and some truly unbelievable drag-and-drop GUI building functionality.

We had no source control until we started using Delphi, I think it was Subversion but there might have been something before that. Prior to SVN it was a case of baggsying report.pas for the day.

Thinking back, and maybe I've forgotten, but I don't think we shipped anything that was particularly worse than stuff I see getting shipped today. Yeah, stuff went wrong, but it still does. Without reviews, Git, CI, etc we still shipped something that largely worked and kept customers happy.

Code quality was bad. No standards were followed across the team, so files all had different styles. It wasn't uncommon to see procedures that were 100s, maybe 1000s, of lines long. Turbo Pascal's integrated debugging was a life-saver.

Unit testing was not a thing.

I think we wrote far more stuff ourselves, whereas today there's a lot more libraries and systems to use instead of building.

Obviously there was no Stack Overflow, I signed up to that when it first came online, it has been a game-changer. I read a lot more programming books back then, you had to. I think there was a lot more try-it-and-see work going on, I used to write many small apps just to work out how some thing needed to work before touching the main codebase, that's something I still do a lot today, I'm not sure the new-bloods do?

Work-life balance was absolutely fine, there was no pressure to work extra hours but I don't find that there has ever been. I've always prioritised family-time over work, I put in full effort for my contracted hours, the second they are up, I am gone.

I certainly enjoyed programming a lot more back then, it felt closer to the metal, it felt like a voyage of discovery, I couldn't just Google to find out how to pack a file down to 50k whilst keeping read-time quick, I mostly had to work it out myself, read a book or ask colleagues. You had to work harder to learn stuff, and I don't know, it felt like that hard-won knowledge stayed with me more than something I googled last week.

Modern languages have abstracted away a lot of the complexity, and that is of course a good thing, but I kind of miss the pain!


In some ways better, in some ways worse. Most corporate environments not specifically geared to creating software (so, outside of places like Microsoft and Borland) were probably a lot more waterfall-y than today, and prone to doing things like using "lines of code written" as a KPI (even though experts knew even then that this was a bad idea). The tooling was a lot more primitive unless you were fortunate enough to work at a powerful, dedicated workstation like a Lisp Machine or a Rational R1000. For PCs and the like, IDEs existed but didn't have autocomplete or refactoring tools. Matter of fact they were kind of necessary unless you enjoyed the cycle of go into editor, change code, quit to DOS, run code, crash, quit to DOS (or reboot), fire up debugger, find problem, go into editor... Things like Turbo Pascal shortened that completely.

Thankfully, the much smaller size and scope of a typical software project partially offset the lack of sophisticated tooling.

In 1992, everybody knew that object-oriented programming, with languages like C++, was The Next Big Thing, but that future was not here yet and most everybody grovelled around in C or Pascal. This kind of thinking led to object-oriented bags on the side of even Fortran and COBOL.

Oddly enough, no-code tools did exist, but they were called CASE tools and were just as bogus then as today. GUI builders like Visual Basic hit the market.

On the upside, if you had a knack for computers it was easy to start a career in them. Programming was seen as a great career path for smart people who were "on the spectrum" (Asperger's syndrome was just barely entering public awareness). It was a lot more technical then than now so you really had to know how things worked at a low level. These days the "people persons" have really taken over and you really need to be one in order to thrive in the field.

Plus it was just a lot more fun then. People thought computers becoming mass-market devices would change the world for the better. Ads were for television and newspapers, not manipulative "tech" companies, and most programmers in the industry -- yes, even Microsoft, who weren't the evil empire yet in 1992, that would've been IBM -- really wanted to produce something useful, not just something that drives clicks and engagement. People also wanted to just mess around a whole lot more. Sometimes you'd hit the jackpot with your idea, but the important bit was getting it out there, not necessarily making millions with it. A college student named Linus Torvalds started writing an operating system just to see if he could. (Back then, operating systems were A Big Deal and could only be written by Real Programmers with decades of experience. Word was that Real Programmers working on Real Operating Systems would put the Linux source code up on a projector after hours, crack open a few beers, and have a laugh.)

It was a lot more "wild west" back then, but easier to find a niche, especially for individual programmers. Sometime in the late 90s this consultant douchebag for Microsoft decreed "beware a guy in a room" and so the focus from a management standpoint became "how to build the best software development hivemind" and we've been dealing with the effects of that since. That and the phenomenon of people who want to sit at the cool kids' table, but can't, so they take over the nerds' table, because better to rule in hell than serve in heaven.


I don't go back quite that far professionally -- I guess my first official software job was in 1996. My perspective was a little different because I was in biotech at the time, not in an actual software company. But we had in-house dedicated software people, and we worked with a lot of contractors, so I guess I'm going to pile on here.

1. Processes: mostly, winging it. But since I worked in a regulated industry, there was a lot of very specific stuff the Feds expected of us; and almost no internal documentation to get someone up to speed on it. Downside was you often did stuff wrong once before some Senior (which in those days meant more than a couple years out of college) made you do it again. But this was part of learning, and I'm sure they learned it that way too.

2. Design Principles: um, mostly winging it, but I'm sure it was less freewheeling in parts of the company like manufacturing. For those of us working mostly with data, we were all very interested in outcomes, and some of us cared a lot about the users, but a lot of the software was solving problems that had never been solved with software before, and we made a lot of it up as we went along. Towards the end of my first job I went to a big HCI conference and thought, hmm, there is a lot of jargon and long-winded academic presentation going on -- but is anyone outside the bubble listening? (I guess OS designers were listening, and I thank them for their sacrifice.)

3. Work-life balance: we worked hard, and yes we played hard (not just the young) -- but it was pretty much in line with everyone else in the SF area working in high-energy industries at the time. There was no expectation of better or worse working conditions if you were doing software versus anything else you might do. You had an office if your peers had offices. Then later, with the startup craze, all-nighters and lofts and so on came into vogue, but that was self-inflicted (and at that age, I really enjoyed it).

4. Compensation: we got paid as much as the other professionals, and not more, but usually also not less for our relative place in the org charts. It was nothing like the situation now, where there are maybe a dozen professions that get way higher pay than their equals in other professions; nor was there such a big gap between the US and other rich countries. But then, SF was a cheap place to live back then. Much changed and it changed fast. (For one actual data point: when I finally made it to "official computer guy" was when I started making more than my local bartender, who until then had made the same as me but only worked three nights a week.)

And as a general riff:

Almost nobody was in it for the money, which is not to say nobody was trying to make a lot of money. Rather, everybody I encountered who was doing software was more or less destined, by their talents and affinities, to be doing software at that moment in history. The people who were primarily about the money got out of coding as fast as they could, and did management or started companies, and that was for the best.

Everything was magical and new magic was being invented/discovered every day! I myself got to go from working on character terminals to Macs to Irix and back to Macs again; to discover Perl and write ETL programs as what was then a "junior" -- but in today's world would probably be a "senior" because I had two years' experience. I would read man pages like people read (used to read) encyclopedias, which is kinda what they are. I had a housemate who would get high and write what we would now call the "AI" of a famous video game. I wrote my first CGI programs on a server at best.com. Every step that now seems mundane was, at the time, amazing for the person taking it for the first time -- writing a crappy GUI! client-server! relational databases! friggin' Hypercard! A friend was given a do-nothing job and became an expert at Myst before he quit, because... magic!

The path from writing code to business outcomes was a lot shorter than it is now, and I speculate that this put a limit on the gatekeeping attempts by folks with whatever "elite" degrees or certificates were in vogue.

And there were a lot more people (relatively speaking) who came from different backgrounds and, like me, had found themselves in a time and place where "playing with computers" suddenly was part of the job, and a springboard from which to grow into what we eventually started calling, correctly IMO, Software Engineering.

But back then, in biotech and in the bright shiny new Internet world I joined around 1998, boy oh boy was it not Engineering. It was inventing! It was hacking! And it was a lot of fun.


30 years ago in 1992 I was working mostly with C for Acorn RISC OS machines, assembler for ARM processors (RISC OS and embedded) and various other assemblers for Z80, 64180, 8052 for embedded work (no C). For a general purpose glue language I used BBC Basic which came with RISC OS which was a fine language. You could also embed ARM assembler in it to make it run faster. I wrote 100k lines or more of assembler in those years! ARM assembler in particular is so nice - I miss it sometimes.

My main development machine was a RISC OS machine with a 20 MB hard disk (which crashed the day after I got it - glad I made backups on floppy disks!). Back then I would make printouts of programs to refer to, on tractor-fed dot matrix paper. The idea of printing a program seems very old fashioned now!

My editor of choice was called !Zap which was like someone ported the essence of emacs to RISC OS. It wasn't until 1998 that I used actual emacs and felt immediately at home.

I had lots of reference books. The giant programmers reference manual for RISC OS, and dozens of CPU manuals (if you asked the chip manufacturers nicely they would send you a datasheet for free, which was usually more like a book). I had a few books on C but I didn't consult those much as C is easy to keep entirely in your head.

As for process, and design principles - they could be summed up as "Get the job done!". I had very little oversight on my work and as long as it worked it was up to me how to make it work.

Compensation was excellent, programmers were in demand. So much so that I set up my own business.

Computers have got so much faster since then, but it doesn't affect the job of programming much - it is still the meat component that is the limiting factor.

The internet (which I didn't get access to until 1993) has completely changed programming though. So easy to look something up, whether that is the arguments to memcpy or an algorithm for searching strings. It wasn't until the internet that the open source movement really took off - distribution was a real problem before then. I used to download open source/freeware/shareware from BBS or get it from disks on magazine covers.

Having access to high quality open source libraries has made programming much better. No longer do I have to write a bug ridden version of binary searching / quicksort / red-black trees - I can just use one from a library or bundled with my language of choice.
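
To make that concrete (a trivial C sketch of the point, not anything from the original comment): even plain C has shipped a generic sort and a binary search in its standard library for decades, so there is rarely a reason to hand-roll either.

    #include <stdio.h>
    #include <stdlib.h>

    /* comparison callback shared by qsort() and bsearch() */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int v[] = { 42, 7, 19, 3, 88, 19 };
        size_t n = sizeof v / sizeof v[0];
        int key = 19;

        qsort(v, n, sizeof v[0], cmp_int);                    /* library sort   */
        int *hit = bsearch(&key, v, n, sizeof v[0], cmp_int); /* library search */

        printf("%d %s\n", key, hit ? "found" : "not found");
        return 0;
    }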

Not having to write everything in C / Assembler is a bonus too! I didn't meet Perl until 1998 and then it was transformative. I soon moved onto python which I still use a lot as my general purpose glue language. I'm a big Go fan, it seems like C without the hard parts, with concurrency and a massive standard library.

Are things better now? Mostly I think, though there is often far too much ceremony to get stuff done (here's looking at you Javascript ecosystem!). Programmers in 2022 have to build less of the foundations themselves which makes life easier, but it takes away some of the enjoyment too.


A lot less complicated!


Better than now.


MSDN subscription CD binders... everywhere


It was different. The amount of resources was basically zero. We had books. Like real thick books with lengthy explanations of each command. Sometimes it was the complete opposite and there was a command without any explanation at all. Good luck with that, without decent debuggers or clear error messages.

From the age of 14 I was in a computer club. Basically a bunch of nerds aged 14 to 60 working with computers 24/7. Some ran Unix-like systems, most had a Windows system or Red Hat. You had to take your desktop computer and (huge) monitor with you to every meeting. It was a whole operation just to get there.

Some guys worked at Philips (now ASML) and had very early versions of coax NICs. We built our own LAN networks and shared files over the network. We played with network settings, which was basically what we did all day. Lots of terminals and DOS screens. Later Netscape was the browser of choice. When the Windows era really took off, some of us started to hack Windows 95 machines. Often, with the use of telnet and a bunch of commands, you were able to find a lot of information about a computer. Sometimes it was just as easy as sending a picture with a hidden backdoor made with tools from the Cult of the Dead Cow. We did it for fun. There was no scamming involved.

We used ICQ and IRC to communicate online. Later there was MSN Messenger which was great to communicate with classmates. There was no cable internet. You had to pay per minute for internet over the telephone line.

Free software was available on Warez sites or FTP sites. Also newsgroups, but those were mostly for reading news. I had FTP servers running everywhere. People connected to them from all over the world. Some left a thank-you note after downloading music or the newest Winamp version. There was no Napster.

We went to LAN parties. Basically a university opening its auditorium for 150 nerds with huge desktop machines drinking Jolt cola all night. Some companies sponsored the events and tried to recruit Java developers. There were games to hack a computer. A big part was always the opening, where the (hack) groups presented themselves with 3D movies. The quality was insane for the time.

Also during Windows 95, lots of software ran in a DOS prompt. BASIC and Pascal were languages often used for this. The applications were not structured in a great way. You had to open each file and analyze it. I can't remember developers using many comments in the code. Files were fairly easy to read and understand. There weren't that many references to other files.

If you wanted to ship a DOS prompt application, you had to write everything yourself, even the mouse cursor moving over the screen. There were no packages or other predefined code. There was no autocomplete, no code checks, no Stack Overflow, and no decent debugger. There was even a time without line numbers.
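
To give a flavor of what "write everything yourself" meant, here is a rough sketch of polling the mouse from a 16-bit DOS C program. It assumes a Borland-style compiler (dos.h, conio.h) and the standard INT 33h mouse driver services; it is a period illustration rather than something that will build on a modern compiler, and actually drawing and moving an on-screen cursor was then entirely your problem.

    #include <dos.h>    /* union REGS, int86()  (Borland/Turbo C style) */
    #include <conio.h>  /* kbhit(), getch()                             */
    #include <stdio.h>

    int main(void)
    {
        union REGS r;

        r.x.ax = 0x0000;           /* INT 33h fn 0: reset and detect mouse driver */
        int86(0x33, &r, &r);
        if (r.x.ax == 0) {
            printf("No mouse driver installed.\n");
            return 1;
        }

        while (!kbhit()) {         /* poll until a key is pressed */
            r.x.ax = 0x0003;       /* INT 33h fn 3: get position and button state */
            int86(0x33, &r, &r);
            /* CX = x, DX = y (virtual screen coordinates), BX = button bits.
               Drawing a cursor at that position in text mode was up to you. */
            printf("\rx=%4d y=%4d buttons=%d  ", r.x.cx, r.x.dx, r.x.bx);
        }
        getch();
        return 0;
    }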


In 1992, I was at the tail end of my career of writing System/370 assembly (2 years later I would move on to "cooler" technologies). I worked in an oddball shop that had its own flavor of VM/370: we had our own file transfer protocol, our own terminal support, our own usage accounting, various scheduler modifications, and tons of other changes sprinkled throughout the hypervisor and guest OS code base. I worked on the hypervisor part (known as CP).

It was the most fun I ever had on a job, despite working on an "uncool" (non-Unix) system, largely because of the really smart people there, and the opportunities to do fun stuff (e.g., writing Rexx plugins to allow direct control of devices, including handling device interrupts, for use in tooling). Also, being young and less experienced -- so everything seemed new -- helped.

Processes: Until 1990 or so, we had a dedicated person who served both as "the source control system" and release manager. Once a week, we submitted changes to this person, she merged them with other people's changes, and put the merged files onto the official source disk. She also built the weekly releases (which were installed on our hardware every Saturday night). I am not sure what happened after 1990... I think we rotated that job between each member of my team.

I also believe, maybe incorrectly, that stuff was far better documented back then. We had a giant rack of IBM manuals that pretty much had most of what you needed. Some of the more experienced workers served as human Stackoverflows. We also had access to some bulletin-board like system that served as a discussion group for VM/370 systems programmers, although I only used that once or twice.

Design principles: I don't really remember much about that, but for big changes we did a lot of writing (by 1992 we might have been starting an intranet-type thing, but before that we just distributed these proposals as hard copy). I remember we had tons of memos in our memo system, with keywords cross-referenced in an index. I used to peruse them for fun to see how something ended up the way it was.

In general, we documented a lot: we wrote memos for change proposals, for what we actually ended up doing, for the root cause of system crashes, for tools, etc. We would often update those memos (memos had versions, and all versions were kept). I guess our memo system was sort of like a company intranet, but it somehow seemed less crufty and out-of-date; then again, maybe it only seemed that way because there was no search to turn up deprecated documents.

Work-life balance: Not great, I guess, but that could partly be on me. I loved my job, so I worked too much, and I think it did have some long-term negative consequences. There were deadlines, and system crashes that needed to be figured out. There were also periods with lots of weekend work: we had a hypervisor, so we could test hypervisor changes in a VM whenever we liked, but for big changes we needed to test on real hardware, and we could only do that when the production systems were shut down late on Saturday night/Sunday morning.

Compensation: I really have no memory of what I was making in 1992. If I had to guess, I would say around 45K, which is about 81K in today's numbers. So it was a little on the low side, I guess, for a programmer with 7 years experience. But I didn't know any better, so I was happy (I had no idea what anyone else was making, and I could afford to live on my own and have the occasional electronic toy).


Things were different, but I don't think it's universally better or worse.

In 1992:

- Every programmer had at least seen, and possibly even coded in, _every existing language_. I had college classes that touched on or taught COBOL, FORTRAN, Pascal, BASIC, Ada, C, C++, and even a couple different assembly languages. The current proliferation of languages can be a bit overwhelming when you're striving for the "right tool for the job."

- Using C++, especially cross-platform, was an act of frustration due to lagging standards adoption. Like others have said--you only learned from books, and stuff that the books said you could use, like templates and exception handling, just didn't work on a ton of different compilers. gcc/g++ was still rough--using the compiler provided by the OS was still the norm.

- UNIX boxes like Sun or SGI workstations were _crazy_ expensive, but if you knew the right people, you could buy used stuff for _only_ the price of a Honda.

- There were 20+ different UN*X variants already, all with only a modicum of compatibility. 16-bit SCO, 32-bit SunOS, and 64-bit DEC/Alpha architectures made porting...a challenge.

- 1996 was the first time I saw a Linux box in the wild (at a UNIXWORLD expo, I believe? The star of that show was Plan 9.)

- Agile/XP/scrum was years away from common adoption. Code reviews were nonexistent. Pair programming nonexistent. Continuous Integration nonexistent. Automated unit, system, and integration tests were still a decade away: QA was a team that _manually_ tested builds using a test plan.

- Becoming productive within a code base took time due to a lack of standards, which made them somewhat impenetrable (sometimes by design, for job security), and system setup took days (no Docker!). Some people used `make`, but there were a ton of other home-grown build tools as well.

- Source code control wasn't widely adopted. Some companies didn't see the need. This was pre-git and pre-subversion--some people used CVS/RCS. To get a tree, you had to rsync from another dev's tree or from a master shared filesystem.

- Having someone else look at your code was unusual, compared to typical pull-request-style workflows of today. No bikeshedding issues back then, but also, no real opportunity for mentorship or style improvements.

- There wasn't really an idea of "shared open source libraries" yet. CPAN (for perl) was one of the first, and started in 1993.

- You had to actually _buy_ tooling, like a compiler (SUNWspro, Visual Studio, or the X-Motif builder toolchain). It made coding at home on a budget a lot more limiting.

- Work/life balance back then, just like today, varied widely in Silicon Valley. You had to pick a company that met your needs.


Not exactly 30 years ago, but I started my career in the early-to-mid 2000s at an IT services firm. We were fixing bugs and building small features for the telecom network equipment of a major (at the time) multinational.

Work was mostly well distributed and planned well in advance, and overtime only happened around field testing, release dates, acceptance tests, etc., none of this on-demand agile BS that we have these days. Every major/large feature was broken into requirements (or use cases in some projects), and these were written by a very senior person (or a group of them) and reviewed for every spelling mistake (I kid you not) before being handed over to the dev team. We used to have workshops where people from different modules (the old name for the modern microservices) would sit around a physical table (not a Slack/Teams meeting) and pore through the printed document or through it on their laptops, and one person would literally take notes on what was discussed, what the open issues for the next meeting were, and so on. Only when every requirement was completely addressed would it get the green signal to move to dev.

Test and dev were different teams, and in the companies where I worked the V model was popular: a test team would write test cases and the dev team would write code against the same requirements. Testing was a vertical in itself; devs usually handled unit/module testing, while system integration testing, field testing, and customer acceptance testing were done by dedicated teams. The goal was to capture 90% of defects in module testing (MT), some 7-8% in SIT, and only 1-2% in the field, with theoretically nothing post-release. We (devs) were given targets for how many bugs could be expected from each of these phases, as a measure of our coding quality. Code reviews had a reviewer, a moderator, an approver, and so on, making them a very important event (not the offline BS that happens today). A post-release bug would be a nasty escalation for both the dev and test teams.

Did I also mention that the MNC had a tech support team with good high-level knowledge of most systems? They worked in shifts and, unless there was a bug that required a code change, could handle and resolve most customer escalations. Bugs requiring code changes would be sent to the dev team only after a formal handshake between dev and support. Those bugs would then be treated the same way as a feature and went into maintenance packages released every once in a while (the same dev/testing cycle as features).

In some projects there were separate teams, one for fixing bugs in previous releases and one for building new features for the upcoming release, and they used to be rotated after each release!

I always thought that moving to agile/scrum would make life easy and fast. While it shortened release cycles, software quality has taken a huge hit: most code these days is copy/pasted, reviews are mostly lip service, and the end result is that most engineers are forced to do pager duty and are called round the clock to fix the mess they made. Interview processes are mostly focused on irrelevant DS/algo questions and abstract design problems, with little to no emphasis on a candidate's experience. I had one interviewer tell me that they really don't care about a candidate's experience, only their performance in the interview (yeah, no sh1t Sherlock, that explains why the company needs 24x7 on-call dev support!).

Call me old-fashioned, but I do miss the old way of building boxed software (plan/analyze/design/code/test/ship and maintain). Work was relatively more predictable, and the office felt like a place where people actually collaborated and worked together to build something they could be proud of!


Thirty-plus years ago was a time of big changes. There were a bunch of companies competing for the "supermicro" and workstation business, both in hardware and in software. If you wrote that kind of software, you might have done it in C. If you were on the business side, you were working with mainframes, "Basic 4"-type business computers, or maybe CP/M or MP/M machines. IBM PCs were mostly still running DOS, and programming on those was often in dBase, FoxPro, Turbo C, BASIC, etc.

In 1988 I was a 25-year-old working for a 10-ish-person, KP-funded start-up that wrote a mechanical CAD package that ran on Microsoft Windows 3.0. The premise was that PCs would take over, that the mini and micro segments would disappear, and that VARs would no longer be necessary to sell hardware/software and train people to use apps.

The application was written in C (not C++) for Windows. It took a significant fraction of an hour to compile (see the XKCD comic on sword fighting during compiles). Some of the demos we'd do would be on COMPAQ luggable machines. We'd find bugs in the Windows API. We wrote our own object-oriented DB that lived in memory and on disk. The "Algorithms" book was 18 years away. Most of the team had been through 6.001 (THE 6.001) and had that as a basis. We had to solve pretty much everything ourselves -- no real libraries to drop in. Our initial network had a single 68000-based Sun machine with SCSI hard drives (10MB, then 100MB, as I recall) running NFS, with PC-NFS on all of the PCs, connected via coax Ethernet. We used CVS as our source control. We later got a SPARCstation to do ports to Unix, and it was very much a thing to port separately to Sun, Intergraph, and SGI workstations since the OSs were different enough.

The first version took about 2 years (hazy...).

And after you'd written the product on Windows, to get it to RUN well we would write programs to do runtime analysis of typical app usage (watching code swap in and out of memory) and then build custom linker scripts that packed the code in a way that minimized paging, since PCs didn't have much memory in those days. I found out a couple of years later that this is how MSFT did it for their own applications; they didn't tell us, we had to figure it out ourselves. Developers were Developers. Testers were Testers. Testing was done primarily by running through scenarios and scripts. We were date-driven, the dates primarily driven by industry events, our VC funding, and the business plan.
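
A minimal sketch of the general idea (my reconstruction, not the actual tooling; the routine names are made up): instrument each routine to count how often it runs during a typical session, then emit a hottest-first list that a custom linker ordering can use to pack frequently used code onto as few pages as possible.

    /* Sketch of profile-guided code packing: count calls per routine,
       then print a hottest-first list to feed the linker's ordering. */
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_FUNCS 256

    struct hit { const char *name; unsigned long count; };
    static struct hit hits[MAX_FUNCS];
    static int nhits;

    /* Called at the top of every instrumented routine. */
    static void tally(const char *name)
    {
        int i;
        for (i = 0; i < nhits; i++)
            if (hits[i].name == name) { hits[i].count++; return; }
        if (nhits < MAX_FUNCS) {
            hits[nhits].name = name;
            hits[nhits].count = 1;
            nhits++;
        }
    }

    /* Sort descending by call count. */
    static int by_count(const void *a, const void *b)
    {
        const struct hit *x = a, *y = b;
        if (x->count == y->count) return 0;
        return x->count < y->count ? 1 : -1;
    }

    /* Hypothetical instrumented routines standing in for real app code. */
    static void redraw(void) { tally("redraw"); /* ... */ }
    static void zoom(void)   { tally("zoom");   /* ... */ }

    int main(void)
    {
        int i;
        for (i = 0; i < 1000; i++) redraw();   /* pretend "typical usage" */
        zoom();

        /* Hottest routines first; a list like this would drive the
           linker's segment/function ordering so hot code shares pages. */
        qsort(hits, nhits, sizeof hits[0], by_count);
        for (i = 0; i < nhits; i++)
            printf("%-10s %lu\n", hits[i].name, hits[i].count);
        return 0;
    }

How the counts were gathered and fed into the real linker scripts is my guess at the mechanics; the point is just that code layout was decided from measured usage rather than whatever order the compiler happened to emit.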

As we got ready for releases, I recall sleeping under my desk and getting woken up when bugs were found related to "my area." That company was pretty much everyone's life -- we mostly worked, ate, exercised, and hung out together, and we were always thinking and talking about "the product." There was this thing called COMDEX that would take over Las Vegas each November as the SECOND biggest show for that town. The first was still the Rodeo :-). If you were in PC hardware or software, you HAD to be there. Since some of the team members were core members of the MIT blackjack team, when we went to COMDEX there was some crossing of the streams.

Design principles? Talk it over with the team. Try some things. I can't recall compensation levels at all.

That company got purchased by a larger, traditional mainframe/mini CAD/CAM vendor, about the time that I was recruited to the PNW.

Things better, or worse than today? That REALLY depended on your situation. As a single young person, it was great experience working at that start-up. It was a springboard to working at a mid-size software company that became a really large software company.

Today, it CAN be more of a meritocracy, since there are ways to signal competence and enthusiasm by working on open source projects, and communicating with other developers. It's easier to network now. It's HARDER from the perspective of there are larger numbers of developers in nearly any area now than ever, and geography just isn't as important. But I also perceive that most people are less willing to make trade-offs like spending extra time today finishing something while it's still top of mind, vs. "knocking off" and doing it tomorrow. That could just be my perception, however.

I still like working hard.



