Old-school computing: when your lab PC is ancient (2021) (nature.com)
79 points by goldenskye on Dec 28, 2022 | 72 comments


I've literally been that engineer, manning that computer, in that room at White Sands, for that instrument on SDO. I was the telemetry/power systems engineer for one of their launches to do underflight calibration on the EVE instrument. It was an absolute nightmare of a job.

The PCs were Dolch lunchbox style single board computers. I had to scrounge the one together for the mission I worked in 2013 by assembling it from a couple of others that were broken. Then I got the joy of installing DOS, to run good old TDP502.exe to handle the telemetry stream coming in. Every fifteen seconds I had to hit "Print Screen" to make the tractor feed printer attached to the machine do a print out of my screen. Kind of insane given that the ground station was making a Chapter 10 format recording of the same data, but NASA allowed no deviation from the way they had always done things.

Worst job I ever had.


My main workstation is a ThinkPad manufactured in 2009 (retrofitted with even older model's keyboard), and I have a stack of backup units, and shoebox of parts.

(It's now supplemented with a beefy GPU self-hosted server, and I can also use cloud servers.)

One thing you can do to keep old production computer hardware going is to stockpile your backup/parts units and individual parts now. Many times I've noticed how some old hardware I used to see a lot of on eBay has disappeared, or the little remaining is in much worse condition, or only available as a single example priced like a museum piece that sits on eBay for years. Some small parts seem to be available despite what I'd guess is low market demand, but entropy gets a lot of old gear (increasingly stuck in collections, discarded, worn out, etc.).


I'm slowly making my 500 MHz, 4 GB SGI Fuel my main workstation. I added a SATA card to replace the spinning rust, and a gigabit Ethernet card. I'm swapping out the fans and the PSU to silence it, and I'm throwing my Ryzen workstation into my closet.

Why use a 20-year-old 500 MHz dinosaur? Time. This machine can do everything I need (terminal, Emacs, SSH, old modeling software that has a better GUI than any modern counterpart, etc.) and can't do anything I shouldn't do (browsing the internet).


I'd be very interested if you chronicle this somewhere. A part of me has long wanted to do the same with an O2 workstation, but in more recent years it became much less tenable due to the modern Web (half the web is already broken in minor modern browsers like Firefox), modern languages (Rust? Or even Python, now that half of its ecosystem depends on Rust by way of cryptography), and modern tools (I'd be fired for disuse of Slack, which isn't open-protocol, so it can't be used with a portable client).


When I'm done I might put an entry on the SGI forums. I'll post a link on HN.

Just use the O2 as a very powerful "dumb" terminal that keeps you from wasting your time. It's a great little machine.


Clearly it doesn't keep you off hacker news. Haha.

But I would be very interested in pics, screenshots, and a rundown of your workflow if you have the time.


:D I was on my phone.

Seriously though, I never tried HN. Should work.


> and cant do anything I should not do (browsing the internet)

I’m increasingly shifting towards that side of the fence, looking at the web browser as the gateway to a toxic waste dump.

Luckily, I’m not doing frontend stuff, so I can survive (and blossom even) without a running browser.

The computer “feels” empty and quiet that way, like it used to feel back in the day when the modem was off.


Neat. One thing I've wondered about SGI workstations especially: have you noticed any eBay deals on SGI add-ons that were once prohibitively expensive, but now very affordable because they're obsolete to the kind of people who used them?

(I occasionally window-shop vintage Sun gear like this, but SGI seemed to have more exotic unobtanium artifacts.)


I noticed that someone with a bunch of O2 parts seems to have finally given up and started lowering their prices. I've bought a new CD drive cover (they break notoriously easily) and a long-coveted 300 MHz CPU upgrade. I haven't gotten around to installing them yet, though. I haven't used the machine in about 12-15 years.

I probably should be keeping an eye out for maxing out my Octane and Fuel as well now before it is really too late.


You know, there's an aftermarket upgrade to 700 MHz (and 4 MB cache?) for lower-specced O2s that doesn't use expensive SGI parts. Kinda hacky, but it seems to work.

If you google it you'll see what it is. I don't have an O2, so I don't remember the details.

EDIT:

http://www.sgidepot.co.uk/o2cpumod.html


Depends why you want the machine. I doubt you want to re-live 90s TV editing, so all that stuff is out.

Full machines with graphics are expensive. Luckily I got my Fuel for $200 back in '09. I scored a V12 cheap from a broken machine at the time.

Parts are generally cheap. At least for the Fuel, the community now knows which Dell (or other) server parts are equivalent to the SGI-branded ones (SATA or gigabit cards), and a lot of stuff is useless (why care about a SCSI card?). RAM is cheap if you see this as largely a vanity thing.

Finally, if you're good at debugging X (I'm not), an Origin 300 is a great machine. I bought one, but I use the Fuel as the X client (works great). If I understand the problem, around 2007-2008 something changed in X11 (Xorg?) that broke 3D graphics compatibility with the SGIs.


As far as I remember, SGI's 3D stuff was based on PEX, and Xorg decided to drop that extension around that time.


That's awesome! Thanks.


Are you still using your SGI box with IRIX, or did you move to something more modern like NetBSD?


IRIX 6.5.30. I have a bunch of software that runs on it that I prefer to modern equivalents.


Very cool. Care to share more details on setup? What’s the power consumption like?


What is the current state of community support for SGI stuff and availability of hardware? I stopped following around the time of the demise of Nekochan.


I envy you.


Are you actually being serious or snarky here?


A priest once advised me to just take people at their word; the cognitive load of second-guessing them is not worth it.


Is that, by chance, a T500 with the T60 keyboard swap? I ran with one of those for some time, before the lack of graphics horsepower (I think it had a Radeon 3650?) ultimately did me in.

If you're looking for another cool upgrade, I also did a 1920x1200 screen upgrade on mine. I wasn't able to find a Lenovo-native part, but I did find a pin-compatible screen out of a Dell mobile workstation that fit almost perfectly after I sawed off some tabs with a Dremel.


Yep, T500 with the highest (stock) resolution display, with original T60 keyboards (not the Lenovo T500 keyboards that flex too much).

(While deciding what to move to after T60, I tried a T400, T400S, T420, T520, T420S, T500, and some others, until I decided I wanted Coreboot, and liked the T500 display aspect ratio.)

I'm good for high-res T500 displays, but if/when I upgrade to a slightly newer ThinkPad platform, I'm targeting 1920x1080, and I'll probably have to screen-swap those, because that resolution is a stock option, but a rare one.


A hard part of preparing for hardware failures is knowing what things to get. I have spare power supplies and hard drives, but wouldn't know what else.


Yeah, this is one of the ways that entire spare units come in. Something on an old workstation breaks, and you always have a spare of every part, tested together. (I actually just move the SSD/HDD to the spare, and then worry about repairing the dead machine.)

Other than that, I can only guess. A PSU and an HDD are a good idea. If you have an old industrial device that really needs floppies, buy them now, and also look into the retrofit options. For laptops, right now I've guessed at stockpiling keyboards, fans (not the whole heatsink-assembly FRU, just the fan part), backlight tubes, display inverter boards, AC adapters, the laptop-side AC adapter connector, HDD caddies and doors (often left out of replacements), and upgrades for RAM and for faster/cooler CPUs.


I liked Windows 3.1 so much that I used it as my main OS until 2003. The combination of Windows 3.1 and MS Office 4.0 was great. I don't think there's been a significant improvement in Word or Excel since then.

If pressed, I'd say that the greatest changes for general office use in the last twenty years (for me) involve multiple monitor support and touchscreens.


And no modern OS does those two things consistently or correctly.


I don't know about correctly, but I do find that there is some mildly buggy behaviour with multiple monitors on Windows 10. I haven't had great trouble with my touchscreen either, but you're right that there are occasional hiccups.

That being said, I'm much happier with those features than without them!


Would NT 3.51 be a sweet spot for you then? Unless you play DOS games, it should run everything Windows 3.1 will run but without crashing.


I didn't experience a lot of crashes (though I do remember the General Protection Fault screen).

Today, I'm happily using Windows 10 with Open Shell at work and Debian/LXDE at home.

I don't have the option of doing anything too out of the ordinary at work, because they've adopted MS Teams, rely on our calendars for scheduling, etc. I also make extensive use of a touchscreen and pen combination for annotating PDFs and I doubt (but you never know!) that any of the old OSes could handle that well.


Word 97 seems to be the peak of office development.


A public science and art academy in my country recorded a lot of interviews on DAT digital tapes, which are now in the process of being digitised. The cassette reader, however, does not report error metadata via any reasonable means, so it was modified a long time ago to report errors to an external device.

Wires are connected directly to the PCB of the cassette player so that, with the help of the external device, metadata about read errors can be stored. But the external device (DATerr/DATerrMON/DATerrLOG) is ancient. It's not mentioned in Google's search results; luckily there was a copy of the printed manual somewhere.

It talks via RS-232 to a regular office computer from the nineties running Windows 95, to an application running in DOS mode.

It's time to modernize the setup, as the Win95 PC's disk has already failed and had to be replaced. Luckily the program works perfectly under DOSBox, but the environment has to be tweaked a bit (NumLock must be on, or the program hangs, etc. :))

Edit: I said Windows 93 instead of (probably) 95.
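(For what it's worth, DOSBox can pass a real host serial port straight through to the DOS program, which may let the RS-232 link to the error-logging box survive the move off real hardware. A hypothetical dosbox.conf fragment; the host device name is an assumption about the machine:)

```ini
# Hypothetical dosbox.conf fragment: expose the host RS-232 port
# to the DOS application as COM1 (host port name is an assumption).
[serial]
serial1=directserial realport:ttyS0
```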


>on DAT digital tapes, which are now in the process if being digitised.

DAT tapes are, as you mention, already storing digital data to begin with. Rather than "digitised", the data should be copied to modern storage as-is.


My guess (I've never worked with DAT) is that they have them in audio format, not DDS, which is digital, but...

Judging by some cursory searching[0], the only way to transfer the audio is to hook the drive up through S/PDIF and play the whole tape. It looks like the ability to treat the audio as data (similar to the way we can treat audio CDs as just a data disc filled with PCM) is rare and not quite official.[1]

[0] https://web.archive.org/web/20071007220855/http://homepage.n...

[1] https://web.archive.org/web/20071102094856/http://web.ncf.ca...


The drives with DDS support might be rare, but they're the only correct way to extract audio for preservation.

Open Source software exists[0] for the purpose.

0. https://github.com/andrew-taylor/read_dat


The other way around - a DDS drive with DAT audio support.

Anyway, you now need to acquire such a drive, get to @jesprenj's institution, tell them they're doing it wrong, throw out the current setup, replace the drive and the 'recording' station...

With 30-year-old tech that has been obsolete for 20 years, it's all... questionable.


I was told they are capturing digital data. But I have no idea how, I was just trying to make sense of the error stream coming from the hacked drive.

Next time I'll check if they use SPDIF or what. But the drive is a consumer-grade Sony. Shame I didn't take note of the model.


When I was there, I saw that they played the tape at regular speed, not slower or faster. The drive's time counter was steadily increasing.


Reading the quoted sentence really is weird (:


Are you sure you mean windows 93?

Not, say, 95?


Lol, perhaps I got it confused with the website https://en.wikipedia.org/wiki/Windows_93

I have a disk dump on my server. This is worth checking.



The things I've done to keep my lab's instrument PCs working...

ISA with DMA is probably the most frustrating to deal with because it's the most common. There's a host of ancient spectrophotometers and whatnot, all needing HP's custom IEEE-488 GPIB connection. Next most frustrating is otherwise great, expensive equipment with a computer running Windows XP and no upgrade path. Or the worst: an $800,000 direct electron detector, purchased new from the manufacturer LAST YEAR, running Windows Server 2012. This thing is 96 megapixels at 1500 fps; it NEEDS to be on the network.

Things weren't as bad before they went digital. For instance, I was able to digitize a chart recorder for a liquid chromatography system from the early 80s with just a Raspberry Pi and an Arduino. But a spectrophotometer from the early 90s? All non-compliant GPIB with no documentation. No realistic hope there.

With the amount of government money being forked over essentially to pay for or work around obsolescence, there should be a federal investigation.


You can buy IEEE-488-to-USB adapters. The device appears as a serial device in your /dev, like a TTY.

I was very glad to discover that the Raspberry Pi Pico has genuinely good GPIO, capable of interfacing with basically anything. But in your case, the standard is so old that any computer can deal with it, using a software-defined bus over a common serial line.

VMs can map a serial device into a COM port, so I believe it's transparent to Windows.
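As a sketch of that serial-line approach: Prologix-style USB-GPIB adapters enumerate as a plain serial port and are driven with text commands (`++addr` to select the bus address, then the SCPI query). The helper below is illustrative only; the device path and GPIB address in the commented usage are assumptions, not values from the thread:

```python
def frame_gpib_query(addr: int, scpi: str) -> bytes:
    """Frame a SCPI query for a Prologix-style USB-GPIB adapter:
    first select the instrument's bus address, then send the query."""
    return f"++addr {addr}\n".encode() + scpi.encode() + b"\n"

# Hypothetical usage over the serial device the adapter exposes:
# import serial  # pyserial
# with serial.Serial("/dev/ttyUSB0", timeout=2) as port:
#     port.write(frame_gpib_query(22, "*IDN?"))
#     print(port.readline().decode(errors="replace").strip())
print(frame_gpib_query(22, "*IDN?"))
```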


I think OP's point is that it's not GPIB, it's "GPIB", a.k.a. something that kind of looks like GPIB if you squint, but really is some custom derivative that is only supported by one obscure piece of DOS-only software that doesn't work with more than 8 MB of RAM.

Those USB-GPIB tools work reasonably well for getting plots, but if you have an instrument that does weird things you're generally SOL.


Unfortunately, one instrument computer is running DOS, and I've never found a way to get USB working. I do have one of the adapters, and it works seamlessly for newer implementations; if it can work under DOS, it's beyond my skill level. By the way, this instrument is still world-class in terms of accuracy and precision. Replacing it would cost six figures, not to mention having to deal with modern software, which is so much worse.

I did think I could potentially use the USB-to-GPIB adapter and interact directly over serial. There's one paper from the early 90s that describes the existence of a programming manual for the instrument and gives a couple of BASIC examples for a certain type of analysis. But the actual programming manual itself seems lost to time. So for now, we're just limping along from vintage PC to vintage PC. We finally got one that uses ATX power supplies, so it should be easier to maintain.


A few of my chromatographs are over 30 years old and were made before there was even an HPIB adapter available.

Just doing the same old thing decade after decade.

However, by the time I got to this employer, the instruments had long been equipped with the HPIB interface option, but there was only one PC with an ISA HPIB card, on one instrument. The others had HPIB networking a few instruments together amongst themselves, interfaced to the PC through a modern HPIB-to-USB adapter.

One mass spectrometer just plain needs Windows 2000 for its software. Nothing newer will do. Lots of others are on XP.

The IT operators had begun moving office PCs to virtual machines, so virtual desktops were what they provided for two newly purchased lab instruments.

It was a shitshow, and eventually one of the robotic samplers destroyed itself - not just virtually. It was never going to work again in reality.

This was not acceptable. Labs have had computers a lot longer than they've had PCs, and PCs a lot longer than offices have, and offices have had PCs a lot longer than they have had networks. And IT already had its hands more than full with office machines and the internet alone, where they were gaining expertise for our particular type of business - which originally "networked" offices around the world quite well using Teletype. It was plain to see that the network experts were going to need a few years of chemistry lessons before they would have anything to offer the labs.

One of the most valuable assets is pioneering experience: running computerized chemistry successfully for years before the mainstream arrival of desktop networks. How else would you know what computers are capable of, except from the times when they were nothing but helpful?

So I kicked IT out and got 100x the reliability. Didn't need their network printer anyway, and especially not the internet or remote access; I focused instead on making vintage instruments have more productive uptime than warranted purchases under vendor support.

Now it's all on my isolated Windows 11 lab network, where I have integrated the latest vendor software packages to meet our particular client requirements like IT never would be able to. The older instruments each have as new a PC as I could get to run the proper vintage software for that particular model; this wasn't easy, but it just works now.

Turns out the 30-year-old industrial hardware is way more reliable than the best IT can do with what they have to work with.

But we knew that.


Most scanning electron microscopes and other large instruments in my PhD lab were controlled by computers running Windows 95 or sometimes Windows 3.11, as they were purchased together with the systems and often had custom-made ISA cards that wouldn't work in modern hardware (at least not easily).


Old machines and OSs are also common in medicine. At the hospital/clinic where I worked, we had a total of around 300,000 devices on our network, and at one point five or so years ago, we counted over 5% of them running out-of-support OSs, often on hardware that was a couple of decades old. We had to maintain multiple WEP wireless networks in order to connect many of them, because they did not support WPA in any form, let alone the WPA-AES we specified as the minimum standard. The oldest OS was a pre-1.0 version of Linux that ran a 25+ year old fluoroscope (a kind of X-ray device) that worked fine in its clinical setting and would cost millions of dollars to replace - a cost the clinical department had no wish or intent to incur. The largest group by count, though, were medical devices that embedded long-dead Windows OSs.

The security implications of this mess were enormous. It's impossible to track the vulnerabilities on this old stuff, and challenging even to determine across the enterprise what is exposed to what.


I helped with a PC rollout to a hospital, replacing orange-on-black thin clients. Thin clients (or modern equivalent) are definitely the best way to run these large campus systems.

I sometimes wonder why there isn't a market for an enterprise-specific/secured/stable web browser for just such applications, which isn't subjected to the churn of consumer chrome and firefox.

Hospital apps could safely target and rely on dated support + feature agreements and stuff.


Thin clients are fine for the general purpose computing devices, at least where they work. But in a medical and research setting, you have multitudes of devices that have an embedded OS and their own user interface - everything from infusion pumps to giant MR or CT scanners (which may be half a dozen computers networked together, with multiple NICs on the intranet, and multiple dedicated UI devices). There is no replacing these, you just have to figure out how to make them work nicely together. Beyond that you have multiple real PCs running fat control software for machines (mass specs, blood analyzers, flow cytometers, ...) that uses the network to run the device, and cannot be replaced with a thin client. And finally, of course, you will have a few fat client-server apps that don't run in a browser, and connect directly to services from the PC.


I know that the process outstations in a very large oil and gas pipeline use a bunch of absolutely pristine MicroVAX 3100s in each building to drive the SCADA systems, which themselves are made up of VMEbus or CAMAC crates with fairly simple 1980s electronics in them.

I feel confident that I could keep that old stuff running well into the future, long after a more modern system was no longer repairable. The main fault with the capture cards seems to be that the input multiplexers fail, and while they can still be repaired by the manufacturer the two types of chip they use are something I keep in stock by the hundreds for building audio circuits.


1. Purchase only hardware for which the software interface is well documented; universities and other public institutions should drive the bus, not corporations, when it comes to interoperability.

2. Stop relying upon externally developed software to do your dirty work; eventually, some error will cost you your livelihood or reputation.


In an ideal world, yes. But you're often lumbered with software that is already decades old, and "in house" usually means developed by a self-taught postgrad (or undergrad if you're really unlucky). And let's not get into the fact that you have to use whatever the instrument manufacturer provides.


Easier said than done when there's only one company that makes what you need. The government needs to break up Thermo.


A.K.A. "I've never bought specialized test/measurement equipment."

The problem is that if you require well-documented, available software interfaces, your available options do not exist.


Bad capacitors -- particularly electrolytic ones -- are the biggest enemy of electronics. The early 2000s was the worst period for that. There's even a site dedicated to fixing them at badcaps.net. Other components tend to fail far less often.


Between the caps and the early RoHS solder formulations, the 2000s were kind of a low point for electronics.


I recently learned that this was a common cause of hardware failure on the iMac G5 (2004), but I'm surprised to learn it was part of a general trend.


There used to be a website dealing with just this issue on old T-series Toshiba Satellites. Actually, I found it: https://www.vobarian.com/toshibaProblem.html

I've seen a lot of discussion around bad capacitors on forums dealing with old stereo equipment too. So it really does seem to have been a widespread issue.


Relevant link: https://en.wikipedia.org/wiki/Capacitor_plague

Clickable link from your post: http://badcaps.net


I’ve worked on two different SSL boards (G series and K series) that had the 1980s computer that controlled automating the console. It took up a six-foot-tall rack. And of course, you had to have a dedicated room for it so you didn’t pick up the noise in the studio. In the 2010s, these computers were still worth around $15k.


I remember scraping up a bunch of PC parts in 2010 to fully load a 900 MHz Athlon machine, because it still had an ISA port and I could throw an extra GPIB card in there to interface with an even older HP spectrum analyzer. It ran whatever Ubuntu was out at the time decently enough.


> For Brembs, older PCs offer another crucial feature that was lost when Microsoft replaced its text-based operating system, MS-DOS, with Windows. MS-DOS “handles data as they come in with no buffering delays”, says Brembs, who exploits this feature for his fruit-fly flight simulator. “In Windows, so many things are constantly happening in the background,” Brembs says. You might want to take measurements at intervals of precisely 50 milliseconds, but the operating system might be able to manage only an average of 50 ms, with intervals ranging from 20 to 80 ms, depending on what else it has to do.

This is pretty interesting. I wonder if there are special ways to get around this, or if you'd need something like an FPGA.
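The jitter the article describes is easy to observe from user space; here's a rough sketch (mine, not from the article) that measures how far a sleep-based 50 ms sampling loop drifts from its target on a general-purpose OS:

```python
import time

def measure_jitter(interval_s: float = 0.05, samples: int = 20) -> list[float]:
    """Return each tick's deviation (in seconds) from the requested
    interval. On a multitasking OS the deviations vary with load,
    which is exactly the problem described in the article."""
    deviations = []
    last = time.perf_counter()
    for _ in range(samples):
        time.sleep(interval_s)          # ask for one 50 ms interval
        now = time.perf_counter()
        deviations.append((now - last) - interval_s)
        last = now
    return deviations

devs = measure_jitter()
print(f"worst tick overshot the 50 ms target by {max(devs) * 1000:.2f} ms")
```

On an idle desktop the overshoot is typically well under a millisecond, but it balloons under load, and there's no upper bound without real-time scheduling, which is why a microcontroller or FPGA is the usual answer.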


Got a chuckle out of that section - DOS as an RTOS. But yeah, it makes sense: about the only thing on the metal besides your application would be the timer interrupt, and didn't they use a DMA channel for dynamic RAM refresh? You wouldn't have a preemptive scheduler slicing your time with 50-odd instances of svchost.exe, and there'd be no virtual-memory page swapping going on. With just your application code running, even the caches should stabilize.


This is what an RTOS solves in embedded systems.

If you want to know exactly what clock cycle/time interval something is going to happen on, the easiest thing to do is just use a microcontroller.


Reminds me of some surveying data in the UK that was maintained on a BBC Micro and a laser disc system.

Yep, found it - the BBC Domesday Project.


These problems always occur at the junction between technologies with different lifecycles.


Sounds like a lot of these problems could be solved by running the old software in a VM.


I work in the lab, and have relatives and friends who have labs chock full of equipment -- often chained to old computers.

Often, the old software interacts with the old hardware via bespoke physical interfaces that aren't supported by the VM in a way that actually works. And it's the hardware that's expensive to replace. Sometimes it's multiple interfaces, e.g., video is a separate interface from the internal controls of a microscope. If the hardware is old enough, the interface is an ISA or PCI card with custom drivers. My spouse has had more than one occasion when her lab was able to keep an instrument running due to the kindness of a service rep having some discontinued parts on hand.

"Pure" software that could easily run on a VM is more likely to be supplanted by new software that's rarely as expensive as hardware.

I have personally revived some old hardware by finding an interface spec and writing my own support code in Python. That might sound like a big effort, but often, a particular use of a hardware device involves a tiny subset of the device's full feature set.
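As an illustration of that "tiny subset" idea: often the only feature you need is "take one reading", which reduces to writing one command and parsing one reply. Everything below - the `MEAS?` command, the `VAL` reply framing, and the port name - is invented for the sketch, not a real instrument protocol:

```python
def parse_reading(reply: bytes) -> float:
    r"""Parse a hypothetical instrument reply like b'VAL +1.2345E-02\r\n'
    into a float, rejecting anything with an unexpected tag."""
    tag, value = reply.strip().split()
    if tag != b"VAL":
        raise ValueError(f"unexpected reply: {reply!r}")
    return float(value.decode())

# Hypothetical usage over a serial link (pyserial):
# import serial
# with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
#     port.write(b"MEAS?\r\n")           # request one measurement
#     print(parse_reading(port.readline()))
print(parse_reading(b"VAL +1.2345E-02\r\n"))
```

The point is that a working revival often needs only a parser like this plus a handful of command strings, not a full reimplementation of the vendor's driver stack.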


Very few of the problems described in the story are solved with a virtual machine to run older software on newer computers. Lots of specific add-in cards, precise timing requirements for sampling, and high-altitude, high sunlight, high vibration conditions where an LCD screen does the job.


Stares in (Virtual Machine)



