This is a relevant article to me since I also make 20yo (or older) computers run legacy stuff ... but not for fun.
Factories and labs frequently have machines or instruments which are controlled by a computer. They are run off control cards which are inserted into ISA or PCI slots in the computer and are commanded by the legacy software through old, proprietary drivers. Examples are cards from National Instruments or Galil. Such equipment can cost tens of thousands of dollars (or more) when new. Also, decades-old software written by long-gone engineers at the factory still runs on the equipment, and nobody understands how the stuff works nowadays. Therefore, there is plenty of incentive to keep the old systems running and running and running.
Unfortunately, old computers sometimes break. That's where I come in -- I do a side consulting business with a colleague where we refurbish the old computers -- replace parts as necessary, install old versions of the O/S, replace mechanical hard drives with SSDs, and do whatever else is needed to keep the computers running for the next few decades.
I know we're not alone out there since we're aware of other small businesses which provide a similar service. It's an important thing since -- as many point out here -- modern software companies don't make backwards compatibility a priority, but factories and labs have equipment which need to run for decades, so the computers controlling them also need to run for decades.
>Also, decades-old software written by long-gone engineers at the factory still runs on the equipment, and nobody understands how the stuff works nowadays. Therefore, there is plenty of incentive to keep the old systems running and running and running.
Vernor Vinge figured this out 25 years ago. A Deepness in the Sky depicts a human society thousands of years in the future, in which pretty much all software has already been written; it's just a matter of finding it. So programmer-archaeologists search archives and run code on emulators in emulators in emulators as far back as needed. <https://garethrees.org/2013/06/12/archaeology/>
(This week I migrated a VM to its third hypervisor. It has been a VM for 15 years, and began as a physical machine more than two decades ago.)
> ... but factories and labs have equipment which need to run for decades, so the computers controlling them also need to run for decades.
For PCs it's obviously not a problem (there are just so many parts out there for just about anything), but what about these proprietary, expensive ISA (or PCI) cards you mention? Are replacements available for decades too? Or are they easy to find second-hand at reasonable prices?
You take them to someone who can do component level repairs.
And the hardware side is about more than just sourcing the parts; the nuances can make certain parts expensive and rare.
You have to know all kinds of things like IRQs, DOS commands, memory layout, jumper settings -- just an insane amount of stuff compared to modern plug-and-play equipment.
That insane amount of stuff was just par for the course back in the day. I still have my old spreadsheets and technician's toolkit of 5.25" floppies with all the old-school utilities (I'd forgotten about it until I came across my old suite of floppies while moving a few years back).
I'm curious if you do this alone or with another full time job. I've been searching for a gig company that allows one to turn old PC knowledge into a side gig.
It's a side gig for me. The biggest hurdle is finding a bunch of companies fielding the old equipment and then becoming known to them. In that sense it's like any other consulting gig -- it's mostly about who you know, not what you know.
Will it help that this increasingly affects less specialist stuff (household appliances, cars...) that the general public use? It makes it more of a known issue.
With more things being network connected, how will we deal with the problem of keeping things not just working but secure? Is it generally practical to air gap equipment in factories and labs?
Older equipment is often air gapped just because it really had no reason to be networked, and even if you DO want it networked, it's relatively easy to air gap.
It's the new stuff that is all "cloud based" that will have real problems in 10 years.
Yes, this will be a major problem for factories. Any "cloud based" software is guaranteed to become obsolete and incompatible well within the lifetime of the equipment it controls. If you're dealing with real manufacturing or lab equipment you should avoid "cloud based" stuff like the plague.
Most of the equipment I deal with is not on the net. It doesn't need to be since it's just controlling some machine which is busy making something real. If you need to move any data on or off the computer you use USB sticks (or sometimes CDs or floppies).
Some newer test stands are interfaced to an internal LAN so they can provide process control data to a database. In that case one either needs to provide some sort of firewall between the equipment's computer and the database, or just bite the bullet and redesign the whole thing for compatibility with, say, Win 10 (at major expense in time and money).
I wish I could convince people to target their automation stuff to Linux so the backwards compatibility problem would be easier, but corporations still balk at that.
I've been thinking about the idea of easily replaceable, minimal-dependency crypto modules along with an interface that shims them into existing OS implementations. In the meantime, my MITM proxy that I force everything on my network through will take care of HTTPS and upgrading to TLS 1.2/1.3.
I saw that; the work done to get WolfSSL working is truly exemplary of how to keep things going.
Do post a link to that SHA256 patch. I suppose it is possible to compile and generate an unsigned 3rd party cryptographic services DLL (CAPICOM) if someone truly wants really old software to work everywhere.
TLS needs to be re-vitalized in general for SMTP and other such protocols as well.
At some point, the efficiency of new hardware completely outweighs the benefits of keeping old hardware running. Obviously there's a valid need to preserve hardware for historical purposes, but for any other reason I think it's a false economy.
The CPU on a Raspberry Pi 4 is nearly three times faster than a 3.2GHz Pentium 4. A Raspberry Pi draws no more than 10W (excluding USB devices), but the Pentium 4 has a TDP of 100W. We could draw a parallel with incandescent vs LED lightbulbs, but I think that would be too kind to the old computer - an old motherboard and a discrete GPU will add substantially to the idle power consumption.
I think that new software bloat far outweighs the benefits brought in by new hardware as well.
You would also need to ask how many people here actually use a Raspberry Pi as their daily driver before claiming the power benefits they should supposedly be realizing today.
I think that viewing this as a pure power consumption exercise is very reductive at best. In that spirit, you should also be going after the people running the H100s today and asking why they aren't running their model training on an RPi 4 yet.
> I think that new software bloat far outweighs the benefits brought in by new hardware as well.
While software bloat is undeniably a problem, in my experience it's not that clear-cut.
E.g., a messenger app built on Electron is pure bloat. At least from a technical perspective -- maybe Electron brings in business value that outweighs the technical downsides.
But an IDE from around 2005 is nothing more than a dumb notepad in comparison to a modern JetBrains IDE. Even though I often find their IDEs extremely frustrating in terms of consumed computational resources, I'd rather upgrade my hardware than go back to something less "smart". After all, hardware is just a means to an end of solving whatever task at hand.
An IDE from even 2001 isn't as dumb as you would think at all. In fact, I was one of those early adopters of Websphere Application Studio (Eclipse) and I thought it was bloated. 20 years later, it's the same argument, same stack (Java).
Cut to 20 years from now: you may remember JetBrains as the best thing you ever worked with, and some other JavaScript IDE in 2043, which requires a bare minimum of 128GB of RAM to run, will be the means to that end for those developers.
> But an IDE from around 2005 is nothing more than a dumb notepad in comparison to a modern JetBrains IDE.
Which IDE is this?
I've used IDEs from the mid 90s (Watcom C/C++, Visual Studio 5.5, Delphi, C++ Builder) and from 2005, and they did a lot more than just let you edit code and run make.
In 2005, all major IDEs had autocomplete, for example. They also had jump to definition, jump to help (remember help pages? .chm files?), syntax highlighting, source-level debugging, inline with the source code ...
I mean, what exactly do you think they are missing that makes them a rudimentary text editing tool?
That's a good question. Admittedly, my original comment is informed by the general impressions IDEs gave me back then and now, so I'll try to be more specific this time. Of course, I haven't used every existing IDE from mid 90s, so some of them might actually have had these features. If you're aware of any, I'm curious to hear about them.
# Syntax highlight
Something seemingly as basic as syntax highlighting can be implemented based on syntactic or semantic analysis of the code.
There's an even more primitive way of implementing it, where an editor just highlights the language's keywords and not much else. If memory serves, back at the time most editors and IDEs did the latter -- highlight the keywords and be done with it.
Nowadays, semantic highlighting is usually the default. Each identifier gets colored according to the symbol it resolves to. A constant name is rendered as a constant throughout the file, not just in its declaration. The same applies to parameter, property, class, and function names, and so on.
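To make the contrast concrete, here's a rough sketch (Python, a made-up example, not taken from any actual editor) of what keyword-only highlighting boils down to -- a single pass over the text; anything smarter, like coloring a parameter differently from a constant, needs a real symbol table:

    import keyword
    import re

    # Keyword-only "highlighting" is essentially one regex pass over the text.
    KEYWORD_RE = re.compile(r"\b(" + "|".join(keyword.kwlist) + r")\b")

    def highlight_keywords(line: str) -> str:
        # Wrap Python keywords in ANSI bold; every other identifier is left alone,
        # which is why this approach can't tell a constant from a parameter.
        return KEYWORD_RE.sub("\033[1m\\1\033[0m", line)

    print(highlight_keywords("def area(radius): return PI * radius ** 2"))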
# Semantic "grep"
As in, find all usages of, for example, a function across the project:
- If the function is passed as an argument somewhere (to be invoked as a callback later) this is going to be included in the search results.
- If another class has a function with the same name, this isn't going to end up in the search results.
- If the function is overloaded, it's possible to search for usages of a specific overload or all of them.
- And so on.
I don't remember this being a thing in old IDEs at all, or if it was, it didn't work for anything but the most trivial code. Although software like GNU Global did aim to implement this functionality, my memory is that it was very limited in practice.
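A toy example (hypothetical names, just to illustrate the point) of why a purely textual grep falls short of "find usages":

    class FileWriter:
        def close(self):
            # The symbol we actually want to find the usages of.
            print("flushing and closing the file")

    class NetworkSocket:
        def close(self):
            # Same name, unrelated symbol: a textual search can't tell them apart.
            print("shutting down the socket")

    def run_later(callback):
        callback()

    w = FileWriter()
    run_later(w.close)        # usage passed as a callback: no "close(" to grep for
    NetworkSocket().close()   # grepping for "close" also matches this, a false positive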
# Semantic code completion
While IDEs from mid 90s did have code completion, a modern IDE improves on that too. E.g., if you invoke auto-completion while writing a bit of code that passes an argument to a function, a modern IDE is able to form a list of suggestions containing only symbols that are currently in scope and match the function's parameter type.
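As a rough sketch (again with hypothetical names), this is the kind of filtering I mean: when completing the argument of set_timeout below, a type-aware engine can limit suggestions to in-scope values matching the parameter type, i.e. timeout_default rather than retries or label.

    def set_timeout(seconds: float) -> None:
        print(f"timeout set to {seconds}s")

    retries: int = 3               # in scope, but the wrong type
    timeout_default: float = 2.5   # in scope and matches the parameter type
    label: str = "upload"          # in scope, but the wrong type

    # Completing the argument here, a type-aware IDE would rank timeout_default first.
    set_timeout(timeout_default)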
# Refactoring
I think this goes without explanation, would you agree? Modern IDEs offer more refactorings, these refactorings are way more complex, and they work much more reliably.
# Dependencies source code browsing/decompilation
Editing to add in this one. How could I forget? Basically, if I want to go to the definition of a symbol that comes from a library, the IDE will either download the source code if available or decompile the binary and take me to the definition. To me personally, this is huge.
I agree broadly that the modern QoL improvements in IDEs are better, but not better enough to make those old methods even close to useless!
Syntax highlighting - sure, it's nice to have it based on the semantic meaning of code, but keyword syntax highlighting works fine 90+% of the time when you are reading code.
Semantic search - once again, now it's better than simple grep, but ctags provides 90% of that functionality too (I still use ctags daily in Vim).
Semantic code completion - nice, but code completion in VS circa 2005 (and other IDEs, like Eclipse) was partially semantic as well. It did not autocomplete a `#define` constant when you typed the '.' after an instance variable; the completion was still limited to only those fields that were relevant to that instance variable. Once again I feel that the partially semantic autocompletion did maybe 90% of what JetBrains does today.
As far as decompilation/third-party library support goes, in 2005 the languages did not have package management, so not automatically downloading the 3rd party library is not a fault of the IDE - where would it download from?
In those cases, like VS, where you were using the Windows SDK, VS itself knew where to get those libraries from (because they were stored locally), and I have many memories of the VS debugger in 2005 stepping into the Microsoft SDK libraries.
To me, treating 2005 IDEs as the equivalent of a dumb text editor, when they had 90% of what you have now, is an inaccurate comparison.
(Of course, all of the above is just my opinion, so feel free to disregard it with prejudice :-))
I'd respectfully disagree on VS 6. It was OK for its time, but hardly a piece of art, in my experience.
Please excuse me copying the relevant portion from my other comment.
VS 6's support of C++ back in 2005 wasn't that great, at least the way I remember it now.
Code navigation was very primitive, and you were lucky if it didn't consider the code too complex to offer any navigation around it at all.
Its built-in debugger often wouldn't let you inspect a string's content because it was just another pointer from the debugger's perspective.
And there was a bug where the editor would slow down so much it would be literally unusable -- e.g., it'd take a couple of seconds to react to a keystroke. The reason was that it kept a file with the workspace's (solution, in today's terms) code metadata, and that file grew too big over time. So you had to remember to delete it regularly.
But VS 6 had a great plugin -- Visual Assist, from Whole Tomato, if memory serves -- that made things so much better in terms of code navigation/refactoring/etc.
Compared to modern IDEs it won't do very well, but do you remember better alternatives back then, at least if you wanted a "friendly" UI instead of a command-line one? Would you choose something different if you went back to 2005? How about 1998?
Oh, back at the time it was a good IDE. There was also C++ Builder, but it had its own quirks, of course. Picking one of them was, I guess, a matter of personal preference, the project's requirements, etc.
Anyway, I sure see how my comment may sound that way, but I didn't really mean to contrast JB vs others. Comparing a modern IDE to its old version in the context of software bloat is the whole point I'm trying to make.
I think bloat is not the only reason for increased hw requirements. Modern software is often way more capable than its old versions and adding capabilities seems like a good use of added hw power.
I see how my comment may sound that way, but I didn't really mean to contrast JB vs others.
A better idea would've been comparing a modern version of VS with VS 6, for example.
Anyway, my point is bloat is not the only reason for increased hw requirements. Modern software is often way more capable than the old and adding capabilities seems like a good use of added hw power.
This is a bit similar to the concept of accidental vs inherent complexity in sw engineering. There's accidental bloat and inherent "bloat", so to speak:)
My impression is people don't usually acknowledge the existence of inherent "bloat" in discussions like this one.
Can't comment on Java support in Eclipse back then.
VS 6's support of C++ back in 2005 wasn't that great, at least the way I remember it now.
Code navigation was very primitive, and you were lucky if it didn't consider the code too complex to offer any navigation around it at all.
Its built-in debugger often wouldn't let you inspect a string's content because it was just another pointer from the debugger's perspective.
But VS 6 had a great plugin -- Visual Assist, from Whole Tomato, if memory serves -- that made things so much better in terms of code navigation/refactoring/etc.
Writing C++ in Eclipse CDT or KDevelop was simply a pain.
Actually, a plain editor with syntax highlight was probably a better option because it was not even pretending to have "smart" features that would inevitably break on each non-trivial piece of code -- better no promises than false promises.
Delphi, and C++ Builder for that matter, were great as a package: fast GUI builder, editor, debugger, etc. That said, I don't remember them offering much in terms of working with code: finding an identifier's usages, refactoring, etc.
As to praising JB, it wasn't my intent to single them out; surely there are other great IDEs. Their products are just something I'm personally familiar with. If something better happens to come my way, I'll have no problem switching.
There were 2 releases between VS6 and VS2005, so slightly slower than the current cadence of a new release every couple of years but not by much. I think VS6 remained popular for a long time because it was the last release before they switched to the heavier .NET-based IDE.
> I think VS6 remained popular for a long time because it was the last release before they switched to the heavier .NET-based IDE.
In my opinion, the much more important reason was that Visual Studio 6 was the last version of Visual Studio capable of running under Windows 9x (every version of Visual Studio released after it required some Windows-NT-based version of Windows; Visual Studio .NET 2002 was the last version to run under Windows NT 4.0, and Visual Studio .NET 2003 required at least Windows 2000; keep in mind that Windows XP only came out at the end of October 2001).
> there is a lot of old software that is still updated and not bloated.
The only things that really need more resources are video capabilities and web technologies.
For example, there are window managers that were considered bloat 20-25 years ago and that are now considered lightweight. IceWM takes something like 16MB on a 64-bit distro nowadays. It would have been considered bloat back in the day, but it is very far from what you get with a default KDE Plasma or GNOME desktop running idle without apps. There are still lightweight CLI or GUI music players, image viewers, word processors, spreadsheet tools, file managers, etc. As long as you don't start a web browser and you are using a lightweight DE, you can run many modern Linux distros with only 256MB of RAM.
> The CPU on a Raspberry Pi 4 is nearly three times faster than a 3.2GHz Pentium 4.
It sure doesn't feel like it. Nobody ever questioned whether a Pentium 4 would make an OK desktop; of course it did. The Raspberry Pi 4... it feels like it should be completely fine for email, Internet browsing, word processing and pretty much any office work, but it's just too slow. People are even maxing out a Raspberry Pi 3 (or 4) running Home Assistant... yet an old DOS machine can run an entire factory production line (or multiple).
The problem is that we don't optimize the software (most desktop apps), or we don't understand the processing power required to complete a task (e.g. Home Assistant).
I have successfully run DNSMasq, WebFS, qBittorrent, VSFTPD and Syncthing at the same time on an OrangePi Zero with 4 cores and 512MB of RAM. That thing is barely bigger than a postage stamp.
I'll switch to a much more powerful device, because I need more hardware acceleration for video encoding and such, but other than that, the same Pi Zero can still handle the tasks needed.
Back in 1999, two roommates and I ran ipchains, Samba and an FTP server on an old PC with a 120MHz Pentium and 32MB of RAM. The only reason it didn't run more services was: what else would we even have run?
Looks like we did similar things in similar time frames. I was doing live broadcasting from a simple webcam over Apache, for example.
I think you can still run modern versions of this software on that hardware, albeit with somewhat lower performance due to all the added features. The kernel would require a bit more resources too, possibly.
The biggest problem on the OrangePi Zero was running SSH at full speed, due to the on-the-fly encryption/decryption required. DNSMasq and VSFTPD were essentially invisible, with qBittorrent requiring 85MB of RAM to handle all the state data.
While the applications are tied down with cgroups to prevent them from pushing each other into the jaws of the OOM killer, none of the applications has died because of insufficient memory, which is telling, in a good way.
It shows that my point still stands. Basic functionality in software is very cheap, but when you add cryptography or a couple of compute-intensive features, old hardware breaks down instantly. It's of course possible to optimize these up to a point, but dedicated cryptography accelerators and other instruction-set extensions are really helpful.
>If you run a P4 with modern distributions of linux, you're likely in for a bad time.
What universe are you operating in? I can run multiple domains' worth of services just fine. Yeah, compiling at -j4 at the same time can be a PITA. Running the desktop over X with compression over SSH works just fine as well.
The key to outpacing a Pentium with a Pi 4 is entirely dependent on structuring your workloads correctly. Given a heavily serialized load, a Pentium will smoke a Pi 4 before it even gets its britches on. Throw in enough parallelizable workload, however, and the Pi 4 will pull ahead promptly.
Isn't most of this because people are running their OS from an SD card, with a heavyish desktop environment? Are people using LibreOffice Writer or AbiWord for word processing? LibreOffice Calc or Gnumeric for a spreadsheet? Thunderbird or Claws Mail for email? What about the DE they are using?
Have you tried running Home Assistant on an old Pentium 4?
For many people in the world, possibly the majority, whatever old hardware they happen to have, that is what they have, and there is no replacing it.
There is also something to be said for treating any functioning hardware as worth preserving, just because it already exists, just like a human's life is worth preserving, just because they're already alive.
Also, for environmental reasons, because the environmental impact of manufacturing an entire new computer from scratch is much higher than the power saved by running a more efficient one.
Finally, of course, there is simply personal preference, whether due to unique features (as mentioned in a sibling comment), the lack of features you do not want, or sentimental value.
That's not how the real world works. A lot of manufacturing equipment and test stands were developed using LabView and used NI cards to control equipment and take data. The people who developed all that stuff are long gone and the new people neither understand the old stuff nor do they have time to spend many months spelunking into the guts of the old software to maintain it. Their managers are also very reluctant to change anything since it means very expensive downtime for an entire assembly line. Downtime can cost tens or hundreds of thousands of dollars per month. Everybody wants to keep the old stuff running as long as possible -- generally several decades.
I have no experience in the banking industry but I do hear the same considerations I see in manufacturing exist in banking software. Any change is fraught with major risk (millions of dollars in loss) so change is avoided as much as possible. I see lots of people on HN talk about how change and churn is avoided in banking software ... the same applies to manufacturing.
The Raspberry Pi comment is laughable. I'm not talking about toys for hobbyists. I'm talking about computers running software controlling cards costing tens of thousands of dollars, running physical equipment of equal or greater expense. Nobody will hire a hobby hacker to light a few LEDs with a Pi when doing factory automation.
Also, it's laughable to talk about power consumption when the PC is controlling an industrial system that's using many kW or even MW of power. At that level, 10W vs 100W is noise.
>At some point, the efficiency of new hardware completely outweighs the benefits of keeping old hardware running.
Agreed. That's why the best solution is to run old software on new hardware.
OTOH you can get a complete computer with a Pentium 4 or Core 2 Duo / Athlon 64, complete with keyboard, mouse and sometimes even a screen, for free or for pennies, because people want to offload that big tower and 19" screen they don't use anymore, while an 80€ Raspberry Pi 4 doesn't get you anywhere.
Discounting storage[1], you'd have to buy a keyboard, mouse, screen and USB power supply, and possibly a case, to have a comparable Raspberry Pi 4[2]. I see the Pi 400 being sold at 130€ with a mouse these days.
[1] because you would also want to upgrade said vintage computer with an SSD anyway
[2] and in the second-hand market Raspberry Pi 4s aren't going much cheaper; you just usually get an additional case and/or SD card for the price of a single new board.
The problem is that, while the RasPi may be 3x as fast, it doesn't necessarily have the software ecosystem you want.
If all you have is a closed-source Win32 program that barely runs on XP, at best, you can emulate the P4, thus likely eating up much if not all the RasPi's performance increase, and diving deep into the realms of undocumented behaviour and incompatibilities.
I'm surprised we don't see more moonshot efforts into improving decompilation/reverse-engineering toolkits. There's a huge and obvious need for "I have this software, the developer's dead/bankrupt/disinterested, can you get me something close enough to human-readable C that I could compile it against winelib and get it running on a more modern machine?"
Yet at some point we tend to throw efficiency at the wall for raw compute.
It was exceedingly uncommon for PCs in the Pentium 4 era to ship with anything higher than a 200W PSU. Yet today it is not uncommon to hear of 850W PSUs. Of course such PSUs are undoubtedly more efficient at doing the conversion. What I mean is that the argument of (power) efficiency doesn't hold when we actually end up using significantly more power anyway.
Also, to play devil's advocate a bit: that's a 20-year-old PC. How many times would you have upgraded in 20 years, and what is the carbon cost of all of those upgrades? Some PC manufacturing is pretty bad for the environment. I'd guess once every 3-5 years?
According to "8 Billion Trees":
> the Carbon Footprint of a Computer? A desktop computer emits 778 kgs of CO2e annually. Of this, 85% results from emissions during manufacture and shipping, and 15% results from electricity consumption when in use.
So you have to really ask yourself: when is that upfront manufacturing cost paid back by the more efficient energy use (and energy, unlike many parts of PC manufacturing, can be sourced sustainably)?
20 years? Sure, but it's still very questionable at 10 years.
If only we didn't force people to upgrade their computers by making more and more features non-optional and slowing down the computing experience for the sake of making features easier for developers to produce.
---
EDIT: I can't stop thinking about this; I think I've been nerd-sniped:
I can't find good data on this, though; it seems like a new laptop costs the planet 422.5kg of CO2e on average.
The actual running cost of the Pentium 4 itself (mobo, RAM, etc.) is somewhere around 40W for this generation (not including peripherals, discrete GPU or screen) -- it does not have SpeedStep, so it can't power down. A high-end dGPU from the era draws about the same as the entire PC, so let's double that 40W. We can ignore peripherals and screens, as those could be the same for both new and old PCs.
80W used roughly for 8 hours per day? That comes to about 234 kWh a year.
In the US, energy generation emits about 0.3712kg of CO2e per kWh, so something like 87kg of CO2e for running this PC for a year.
Meaning even if a new PC halved the energy use (40W instead of 80W), it would take close to a decade to offset the carbon cost of the new PC. And that's not even counting that energy production can be carbon neutral.
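For anyone who wants to redo the arithmetic with their own numbers, here it is as a tiny script (the 422.5kg embodied figure and the 0.3712kg/kWh grid intensity are the figures quoted above, not independently verified):

    EMBODIED_NEW_PC_KG = 422.5   # CO2e to manufacture a new laptop (figure cited above)
    GRID_KG_PER_KWH = 0.3712     # US average emissions per kWh (figure cited above)
    HOURS_PER_DAY = 8

    def annual_co2e_kg(watts: float) -> float:
        kwh_per_year = watts / 1000 * HOURS_PER_DAY * 365
        return kwh_per_year * GRID_KG_PER_KWH

    old_pc = annual_co2e_kg(80)   # P4-era box plus an era-appropriate dGPU
    new_pc = annual_co2e_kg(40)   # assume a new machine halves the draw
    payback_years = EMBODIED_NEW_PC_KG / (old_pc - new_pc)

    print(f"old PC: {old_pc:.0f} kg CO2e/year, payback: {payback_years:.1f} years")
    # -> old PC: 87 kg CO2e/year, payback: 9.7 years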
That said, currently: The most efficient x86 computer will consume 10W to 25W at idle.
> It was exceedingly uncommon for PCs in the Pentium 4 era to ship with anything higher than a 200W PSU. Yet today it is not uncommon to hear of 850W PSUs
Exactly that. I just had to upgrade my 850W PSU to a 1000W one because I'm upgrading my GPU to an RTX 3090.
This is why, if you want the absolutely most power-efficient NAS/home server and you don't need a lot of CPU/RAM, people use P4-era hardware. The lowest wattage I saw mentioned in a German user group about this was around 6W for an idling system.
I myself am currently running a P4-era 1U server (a Fujitsu Primergy RX100 S7) with a 4-core Xeon (2.4GHz or so), 4GB of RAM, Google's Edge TPU accelerator (dual Edge TPU on PCIe) and two 1TB spinning disks in a mirror, as a CCTV server (the TPU does object recognition). This box runs at 20%~90% CPU utilisation all the time and draws 35W on average at the wall. It can peak at 75W or go down to 25W (it never really idles). This server (excluding the TPU) cost me under $100. And if I want, I can replace the disks with bigger ones or add 2 more external drives, as there are 2 unused SATA ports. Also, this server comes with a remote management card, so I can log in anytime using a separate network port and power it down, set temperature alerts, see power usage or access the console/BIOS.
Good luck finding a similar option with similar features, low power consumption and price with modern hardware.
Yes, I'm a fan of retro hw. I have a Commodore 64, a 386SX, a Win98 PC and a WinXP PC. Some have only sentimental value, but not all old hw is impractical to use seriously today.
Your 1U server seems to be Ivy Bridge-based, which is about a decade or so newer than the Pentium 4 and much, much more energy efficient - and there's a lot of cheap second hand hardware out there from that era as well.
I had to check, because I wasn't aware there is such a time gap between the two. I seem to remember having P4s in many an office desktop while E3 xeons were present in new servers we installed.
My Xeon CPU is an Intel Xeon E3-1220L v2 with a 17W TDP, wow! It was released in 2012, while the last P4s were sold new in 2008 (the first came out in 2000). So yes, you're right, there is a decade between these two. In my mind (incorrectly) they were in the same "era", as many of my small-business clients still had lots of PCs with P4s at the time of the Ivy Bridge Xeons.
> This is why, if you want the absolutely most power-efficient NAS/home server and you don't need a lot of CPU/RAM, people use P4-era hardware
Calling bullshit on this one; my P4 NAS chewed through power. I upgraded to one of the low-power modern Celeron boards, which drew around a quarter of the power.
My P4 had a big heatsink and fan; my quad-core Celeron is passively cooled with a comparatively tiny heatsink.
> It was exceedingly uncommon for PCs in the Pentium 4 era to ship with anything higher than a 200W PSU. Yet today it is not uncommon to hear of 850W PSUs.
Sure, but today we're pushing absolutely crazy levels of performance. At the efficiency end of the spectrum, we have something like the Intel N100, which delivers something like 14x the performance of a Pentium 4 on a TDP of 6 watts. An i9-13900F might have twice the peak TDP of a Pentium 4, but it has 150x more throughput.
Big PSUs are mostly driven by GPU power consumption, but it's easy to forget just how insanely performant modern GPUs are. An RTX 4090 has a TGP of 450W, which is admittedly very high, but it delivers 73 TFLOPS at single precision without sparsity. That's the same level of performance as the fastest supercomputer in the world circa 2004. It's kind of goofy that we'll often use all that performance for playing video games, but we can also do important work with that power.
>A desktop computer emits 778 kgs of CO2e annually. Of this, 85% results from emissions during manufacture and shipping
I don't really buy this figure. Using standard methodologies, Dell cite a figure of 454kgCO2e for the full tower version of the Optiplex 7090, with 46.7% of those emissions created during use. For the micro form factor version, they state 267kgCO2e, with 30.8% of emissions created during use.
There's absolutely a valid case for extending beyond the usual 3-4 year lifecycle, there's absolutely a valid case for buying a refurbished computer if you don't need high performance, but at some point an old computer is just e-waste and the best fate for it is responsible recycling.
>A desktop computer emits 778 kgs of CO2e annually.
It's patently false; a desktop computer emits barely any emissions at all; maybe a tiny amount of off gassing as it heats up.
The power to run the device may emit CO2 during generation, but that's a separate question and isn't from the device itself.
The gist is still correct, reduce and reuse are very important overall.
What I haven't really worked out (and the math would be involved and difficult) is whether, all things considered, running old enterprise hardware is better than buying new power-efficient hardware for the same use case. A quick cost-benefit analysis says buying new is 'better' because the energy use of the old pays for it, but that doesn't correctly account for all the costs of manufacture, and if you're in a climate where you mainly heat the house, you get "free heat" from running the older equipment.
Disclaimer: I don't mean to derail the discussion and I don't intend this tangent to go far in this thread:
I really wish more systems- and particularly tech-minded folks would think like you more often, especially when it comes to manufacturing/logistics as well as end-of-life costs.
In nearly all cases in my life I've found no real benefit to buying new over buying used; maybe it takes a little more research and patience to find the right deal, but the effort is always worth it. I find the discussions around electric cars and massive ML models especially problematic because of the complete lack of consideration for these factors on display.
> It was exceedingly uncommon for PCs in the Pentium 4 era to ship with anything higher than a 200W PSU
I'd say that 250-350 watt PSUs were very common in white-box/home-built PCs in the P4/Athlon XP era.
400W and above, yes, that was much less common. I can recall at that point in time, Enermax made their name selling some of the largest PSUs on the mainstream market, and that line tended to be like 450-600W units.
I do sort of agree, it's almost completely about GPUs these days. The spinning rust is now low-power flash. A top Ryzen 9 or Core i9 might top out at 150-250W, but that's not that much higher than Prescott on a bad day, and plenty of people are picking models in the <100 watt power classes. However, the idea of a 400W-by-itself GPU was completely off the field in the early 2000s.
> I'd say that 250-350 watt PSUs were very common in white-box/home-built PCs in the P4/Athlon XP era.
...and the vast majority of those PSUs were cheap off-brand units actually rated in unrealistic "Chinese watts", with 150-200 being all they could honestly deliver for anything short of brief spikes.
> The most efficient x86 computer will consume 10W to 25W at idle
Totally untrue. I've got a box sitting in my closet right now that has a full system power of only ~4.5W measured by a kill-a-watt.
It's based around a Supermicro X11SBA-LN4F, a small SATA SSD, and an 80 PLUS Gold power supply.
My personal laptop based around an AMD 3200U sitting idle right now is pulling ~6W at the wall with the 14" screen on. According to HWMonitor the CPU package is using 0.65-0.9W.
Both of these are x86 computers and they're well under even your 10W idle estimate.
The cost of a Pentium III and the electricity to run it might be much, much less than the cost of the labor to move one application over to a Raspberry Pi.
Then the cost of debugging the application on a Pi 4 might be more than the value of the company trying to run it.
You were sold a silly idea: that the reason software isn't portable has to do with work and resources. That's simply not true in a vast majority of instances. Companies intentionally lock software to certain platforms for sales reasons much, much more often than for technical reasons.
OTOH, Windows isn't portable, so if you want to run old Windows software, you don't move it - you emulate it.
Most of the desktops I saw people using in 2004 had like 256-512MB of RAM, total. They've got 512MB of just graphics memory. And then on top of that it's a GPU from 2010, so this isn't "an early 2000s machine" to assess what performance would have been like. Try running with an on-board SiS graphics adapter and 256MB of RAM, and I imagine their results would have been a bit different.
And then they've got a 480GB SSD? That's an absolute monster of a drive for 2004; practically nothing would have come close to the latency and throughput available on that drive. Even a 32GB SSD would have been nearly unobtainable in 2004.
I always see people remark "wow look how snappy old computers were" when they're essentially built like $10,000 machines if you were to actually have those specs at that time.
> I always see people remark "wow look how snappy old computers were" when they're essentially built like $10,000 machines if you were to actually have those specs at that time.
But the question is: if you were to spend $10,000 or more today, would you experience the same snappiness?
I would argue that it's impossible to replicate the low-latency experience of "retro" systems today with the overhead of modern software; no matter how much you are willing to spend.
Without a doubt in my mind. For fun I do use retro computers to mess around, often on their original spinning-rust drives while they still live, so I can compare side by side.
I've been messing around with some PIII laptops with integrated Intel IGPs, running Windows 2000, 256MB of RAM, and old IDE drives. Loads of applications take 10+ seconds to launch.
Applying effects on many photo editors is crazy slow. Editing photos taken on my camera today is an exercise in patience. It's even slow just panning around.
It can't even playback most of the videos I'll normally watch, even if you do load something like VLC. Not that it really matters, because it can't even draw a 1920x1080 image.
Doing an IMAP sync with even the crappy crypto it can do takes like a minute. You can see it drawing the graphics in the emails line by line. It takes a moment to switch emails. Replying to an email takes a few seconds for the new email window to appear, you can see it drawing the UI while it loads.
Don't get me wrong, sure maybe in notepad.exe there's a few extra nanoseconds between keystrokes. But my machine today (way less than $10k) doesn't really have any lag for the software I run for text anyways.
Next time, complain that your DOS 6.0 can't run Notepad :-) Maybe you paid a fortune for that system back in the 80s.
If I were on DOS, I'd be happy running Borland C++ with 1-2MB of RAM. Even though, ahem, 640K ought to be enough for everybody. The less the bloat, the better.
Editing, categorizing, archiving, searching, and viewing my photos and videos isn't bloat though, it's one of the major reasons why I have a computer. There's far more to computers than just entering text on a local filesystem and compiling small applications.
The overall memory usage is still much lower than 512MB when running the older Office XP suite - and when using Remote Desktop. In contrast, I tried running Opera 36 and my system started swapping already. That has nothing to do with hardware.
While there is a point about I/O being expensive, and I did mention that they get removed over time when chips get faster, it still points to the fact that software that is not bloated runs better. Period.
Still you're comparing snappiness of software designed for hardware seven years older than your processor (during some of the most rapid gains in performance YoY) and 13 years older than your GPU, with IO 1,000x faster than what it was targeted for. Your CPU is over 10x faster than the hardware Office XP was targeted for. The graphics card is probably at least 1,000x faster than what Office XP was intended for.
Most people think of Office 2007 as a massively bloated and sluggish piece of software. And the experience of it running on the hardware they were running in 2007, it probably was. Is it still slow and sluggish on a modern Intel/AMD CPU with an NVMe SSD?
Go run Office XP on a PII 400MHz with integrated graphics. Tell me how responsive and fast the experience is. It's a bloated mess compared to Office 95, so much extra fluff you don't need.
And I mean, why are you even wasting all this effort on a GUI with true color? So much bloat. So much waste. Imagine how much more efficient it could be if it were just drawing 80 columns.
By your logic, a PII 400 MHz came in 1997 and you are running software that was released 4 years after that processor did. So instead of figuring out how to run something that works very well, you're just ... complaining.
I agree with the fact that I could be doing this exercise in DOS with 80 columns on whatever and see what is the best environment I can live with. I actually do have a productive DOS environment as well.
And if I were doing that exercise, I'd be trying to see if I could add more stuff to it (PCPaint, DOSAMP, etc.) to find out what works and what doesn't, and document it for the benefit of other people.
What is stopping you from figuring out what works for your PII 400 MHz?
I'm just pointing out when you're crushing something's design specs by 10-1,000x it's no surprise it's snappy. Do the same with modern stuff and it'll still be pretty snappy.
It's not like Office XP was some last optimized version and it's all bloat after that. Office XP could also be considered a massively bloated office suite itself.
The point is that old software and hardware are everyday usable. There is modern-day software that does not suffer from the same performance issues as some of the other, bloated, ones. Albeit very little exists.
One can always complain about things that don't work. Instead, look at everything that works well today -- it is a very good way to bring usability back into perspective. The intent is to take computers way beyond their intended eras and keep things running with them. The effort recognizes and acknowledges the mistakes made so that there is a course correction going forward. HTH.
> There is modern-day software that does not suffer from the same performance issues as some of the other, bloated, ones. Albeit very little exists.
This is what I'm saying though. You're suggesting modern software is all a bloated mess, unlike all this well-polished and optimized software they used to make. But you're only suggesting the old stuff doesn't suffer performance problems because you're exceeding the recommended specs by 10-1,000x. Do the same with modern software, and suddenly it's not so bloated now, is it?
Back in the day when Office XP released, it was the bloated modern software, unlike that well-polished and optimized software they used to make. You're just drawing arbitrary lines across computing hardware and software to say "this is bloated, this isn't". The requirements of Office 2000 were a 75MHz CPU, 16MB of RAM, and 189MB of disk space. Office XP's minimum requirements were a 133MHz CPU, 32-128MB of RAM, and 210MB of disk space. An additional instance of an Office app also went from needing only 4MB of RAM to needing 8MB. Nearly 2x the resources, so incredibly bloated!
I'm happy you're finding good use for the old hardware. It's good it's serving your needs. I too tend to keep old computing hardware around until I no longer have any use for it, and even then I tend to sell it or give it to people who will actually use it instead of throwing it away. I've got some software I want to share with my kids, and it generally only runs well on Windows 95-era machines, so I actually am actively looking for some old hardware to run these educational games. So yeah, I'll be running an old Pentium III machine for them and it'll absolutely suit their needs. Sometimes a 10GHz, 2TB-RAM machine just isn't necessary for the task at hand, I agree.
I'm just arguing that the commentary about what is "bloated" and what isn't ignores a lot of other factors of what "bloated" really means. If you run something on hardware way under its designed specs, you're going to have a bad time. If you run it on hardware that's 1,000x the design specs, it's no surprise it runs fast. It doesn't mean the newer stuff is "bloated" while the old stuff wasn't; it was just designed with a different target in mind. And maybe for your needs, all you need is exactly what Office XP has. If all you're doing is putting letters on paper, though, then Office XP is massively bloated compared to just using a typewriter. For me, most documents I work on are worked on with cloud collaboration tools, so something like Office XP doesn't have anywhere near the features I need from an office suite. Is it still bloat then?
I'm saying that software is getting bloated over time, and it needn't -- if the idea is to keep software engineers employed, we could spend a few years doing a tick (adding features) and a few more years doing a tock (making things lean, efficient and sustainable).
This is a very different idea; it's just like you said: 'finding good use with the old hardware'. It is a mindset, something that tries to keep systems alive while ensuring sustainability. Fixing broken hardware or software -- this is what the post is about.
As engineers, we should realize that there's tons of work lying around, and that sustainability is both a process and a goal. This is about recognizing that and putting effort toward it.
> I'm saying that software is getting bloated over time, and it needn't
But the "bloat" isn't just wasted space and resources; its often genuinely useful features. Your example with Office XP is a great example. We can look at just Word and compare it to even Word 2007. Its missing a lot of genuinely useful features. Its math equation builder is absolutely rudimentary and pretty bad compared to what is available in 2007. It doesn't show a live word count in the status bar. It doesn't offer the quick styles functionality. It lacks good bibliography and citation tools. Its document comparison engine is very rudimentary. It lacks a full screen reading mode. Its spell checker is extremely rudimentary. Are these additions just bloat?
A contextual spell checker is going to use more CPU and RAM than just splitting on whitespace and seeing whether each word matches a database or hits a set of basic morphology rules, isn't it? Handling a more complicated set of changes to do comparisons on is probably going to lead to a more complicated internal representation of the document, consuming more resources, isn't it? Having the processing and layouts for the reading mode means the application is larger and consumes more RAM, doesn't it? So it's increasing the requirements by a good bit, but you're getting a lot of features alongside it.
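For concreteness, the "whitespace and word list" baseline described above is roughly this (tiny made-up lexicon, purely illustrative); everything a contextual checker adds has to sit on top of it:

    # Stand-in lexicon; a real checker loads a big word list plus morphology rules.
    WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "their", "there"}

    def basic_spell_check(text: str):
        for token in text.lower().split():
            word = token.strip(".,;:!?\"'")
            if word and word not in WORDS:
                yield word

    print(list(basic_spell_check("Teh quick brown fox")))   # -> ['teh']
    # "their fox jumps over there" passes this check even when the words are misused;
    # catching that needs context, which is where the extra CPU and RAM go.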
Ah, but Office 2007, that's a bloated mess.
But once again, if your standard is just putting black letters on a white page, why are you even bothering drawing colors to the screen? Aren't colors just bloat? Why have a screen at all? Just bloat.
So yeah, does Word 2021 require a lot more resources to run than Office XP? Sure. It also does a whole hell of a lot more. Far better pen input with smart shapes, equations, and more. Far better dictation support. Even more enhanced contextual spelling/grammar tools, along with a smart editor feature to give better critique. Better annotation and commenting tools. Better change tracking and comparison tools. Better collaboration features and live cloud syncing. A new search tool to search settings, resources, help, and more. An accessibility checker tool that helps find potential improvements to your documents, like adding alt text and other things.
It's not like Office 2021 is just the same thing as Office XP but now it needs at least a gigahertz and a few gigs of RAM. If that were the case, I'd agree it's just bloat. But it's got thousands and thousands of improvements and new features, all of which demand a little bit more and a little bit more in resources. It's unreasonable to expect them to shoehorn all the features of Office 2021 into requirements similar to Office XP's. And even then, picking Office XP is entirely arbitrary; why not hold them to Office 95 as the standard of resources? If it's using more than 8MB of RAM and 55MB of storage space, clearly it's just bloat.
Why should a videogame need any more resources than the original DOOM? Clearly everything added on is just bloat.
You completely miss the point about sustainability and choose to argue instead -- go ahead.
If I were running an older computer, I would figure out how to be productive with Office 95.
Likely prediction: You will never build that Pentium III machine from old or refurbished parts. Nor will you be the type that will actually enjoy playing DOOM or anything with that machine. You'll instead keep complaining.
Sustainability, Preservation - It's a mindset. If you don't get it, you don't do it. :-)
Quite a rude assumption there, seeing as how I've already mentioned I have the machines and they're currently running, and I've already talked about how I do retro computing as a hobby. They're just not in the state for little kids beating on them daily. You know, from a whole preservation mindset.
I've said nothing against sustainability, only pushed back against your claims that modern software is just bloat compared to what was there in the past. Often a 1/2HP motor just isn't enough oomph for what you're wanting to achieve. There's nothing innately wrong with that, and it's not just bloat. Sometimes the things you want to move are just heavy and the physics of the challenge requires more power.
"Most websites do not work with TLS v1.2 anymore."
This has not been true for me. In fact, using a TLS proxy I have tried to force TLSv1.3 for all sites and a large number, possibly even the majority, still use TLSv1.2 and will not accept TLSv1.3.
If using old software/computers, one approach for dealing with changes in TLS is to use a present-day TLS forward proxy, bound to a localhost address, or running on a relatively recent computer on the LAN if necessary. This way, one does not need to do any encryption using the old software/computer. The proxy converts HTTP to HTTPS.
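As a minimal sketch of that idea (Python standard library only; UPSTREAM_HOST and the port are placeholders, and it handles GET only -- a real setup would use a proper forward proxy or a TLS wrapper such as stunnel): the old machine speaks plain HTTP to localhost, and this process re-issues each request over modern TLS.

    import http.client
    import ssl
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    UPSTREAM_HOST = "example.com"   # placeholder: the real HTTPS site to reach

    class UpgradingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Use the platform's current TLS defaults (TLS 1.2/1.3, modern ciphers),
            # so the old client never has to speak TLS at all.
            ctx = ssl.create_default_context()
            conn = http.client.HTTPSConnection(UPSTREAM_HOST, context=ctx)
            conn.request("GET", self.path, headers={"Host": UPSTREAM_HOST})
            resp = conn.getresponse()
            body = resp.read()
            self.send_response(resp.status)
            for name, value in resp.getheaders():
                if name.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(name, value)
            self.end_headers()
            self.wfile.write(body)
            conn.close()

    if __name__ == "__main__":
        # Bind to localhost only, as suggested above, so the plaintext leg never
        # leaves the machine (bind a LAN address if the proxy runs on another box).
        ThreadingHTTPServer(("127.0.0.1", 8080), UpgradingProxy).serve_forever()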
If people had integrity, DRM/activation would eventually expire at the publisher's EOL and default to an unlocked state for data/media/format migration. There is nothing worse than getting legacy IT issues elevated to a forensic recovery operation.
I think many users agree MS has been on a downward slope since XP and 7. =)
> I think many users agree MS has been on a downward slope since XP and 7. =)
I think it is worth discussing the counter-argument.
My team supported Windows from XP to Windows 10. I continued to be staggered by its backwards compatibility. I remember having an ERP client that was written in VB6, and nobody thought it would run on 7, but it did. Everyone assumed it was obvious it wouldn't run on 10... but it did... with less RAM usage, and it was a bit snappier. Same with the ancient copies of WordPerfect and Photoshop (old enough that they came in a box).
I had a mixed fleet of machines from very old to very new, and Windows 10 ran on all of them. I bet it would run on this guy's P4 just fine. That is extraordinary, isn't it? MS have a current commercial OS that will run on 30 year old hardware!
Before we criticise MS too much, let us compare that to other commercial providers... like Apple, for instance. How much of your 1990s software runs even on the last-gen Intel Macs? The answer is... none, isn't it?
Nope, MS has the worst competitor imaginable... themselves. =)
People are going to compare the new with the old... every release.
Modern MS is getting well-deserved criticism, because we all know they are better than a GUI concept that Steve Jobs, back at 1990s Apple, was actually recorded ridiculing in an interview.
Win10's SSD support was good, and in my opinion it's the last version of Windows.
Win11 has done too many silly things to trust in production environments. =)
Yeah, a lot of users installed the early Win11 fine, then lost legacy desktop registry-flag support, and got the missing-TPM overlay screen spam this year.
If you game, then YMMV on the current MS OS. Now I have to figure out how to get Steam's Cuphead to run under Linux. =)
I haven't had a Windows machine for gaming in 10 years. For the first seven years of that, I didn't play many games. However, CS:GO, Factorio, Civilization and Stardew Valley, among others, all had native Linux support, and I wasn't bored.
Then the Steam Deck and Proton came. I ticked a box in Steam settings to not restrict "Steam Play" to verified games, installed a Windows game and started it. It just worked.
When Cyberpunk 2077 came out, Proton was updated in time and I launched Cyberpunk 2077, a demanding modern game, in my Linux Steam Client as if it was nothing special.
It's three years later and I'm still thrilled. You basically don't have to care anymore. I still take a look at ProtonDB before buying a game, but I can't remember the last time something wasn't working (excluding anti-cheat stuff for multiplayer, but even that is being worked on).
Thanks, I agree Steam has seen the light, and I hope they keep focused on improving fun stuff.
Also, classic FreeCiv on the web looks like fun too; I definitely have to find a weekend for a round. The anti-cheat stuff is unfortunately a necessity for multiplayer games, as even Pokemon GO PvP cheats lamed up the game. lol
Oh, I didn't mean to criticize anti-cheat for existing and not (yet) working properly in Proton.
It's software that tries to validate the environment the game is running in and is suddenly confronted with "wait, that's not a Windows system at all". And as you already argued, just ditching anti-cheat is not an option.
So they either ignore Linux/Proton altogether or have to start actually implementing anti-cheat for a Linux/Proton environment. Even if they go with the second option (as e.g. EAC is, iirc), it simply takes time to implement.
> So they either ignore Linux/Proton altogether or have to start actually implementing anti-cheat for a Linux/Proton environment.
Most of the popular anti cheats do work on Proton. EAC, Punkbuster, Battleye, and VAC all work fine, which I think covers most games with anti cheat. Unfortunately some of those require opt-in by the developer to work on Proton, and therefore some developers have neglected to opt in.
A handful of other games use their own anti cheat that does not work, and unfortunately those games have the resources to do that because they are very popular (CoD, Valorant).
Valve took a look at how hard Steam Machines bombed, and tried again with Proton. They saw some companies' anti-cheat rootkits weren't working, and didn't let that stop them from making all the other games work.
Valve saw the light over a decade ago. Hell, I'd argue they always saw Linux as second-class, but worth not completely ignoring or actively breaking.
You get a weird situation where very old software runs fine on the latest OS - and of course very new software works fine, but somewhere in the middle you get things that were removed (often software works fine, but the installer won't run because it's WIN16).
> MS have a current commercial OS that will run on 30 year old hardware!
i586 support was dropped by major Linux distros some time ago. I somewhat doubt that W10 will run on anything pre-Pentium, and W11 won't even install on my ~5-year-old Dell laptop.
Not even that; the CPU they use is from 2004. Socket 478 wasn't introduced until 2001. And they're using a SATA SSD!
Then again ... this is 20 years old. In 2004 20 years ago was 1984, and computers from 1984 were definitely seen as "old computers" in 2004.
On the other hand, a lot more changed from 1984-2004 than from 2004-2023. Windows XP is conceptually the same as Windows 10, macOS, or GNOME today, even though details between implementations differ, and a Pentium 4 is essentially the same as a modern CPU, except slower.
A C64, DOS, or old Unix machine was conceptually quite different to Windows XP, and the capabilities of these machines were radically different from a P4.
"Age" and "old" is not really about years, but more about concepts.
This reminds me, around the same time I got a Sun SPARCstation from the early 90s for free, and I was doing the guy a favour by taking it off his hands. With inflation and EU import tax the price of this machine must have been €15-€20k, easily. And I got it for free with keyboard and (huge!) monitor, and everything. I don't think anyone is giving away 10 year old high-end workstations today; they're perfectly functional (my i5 midrange laptop is 6 years old).
> Then again ... this is 20 years old. In 2004 20 years ago was 1984, and computers from 1984 were definitely seen as "old computers" in 2004.
> On the other hand, a lot more changed from 1984-2004 than from 2004-2023. Windows XP is conceptually the same as Windows 10, macOS, or GNOME today, even though details between implementations differ, and a Pentium 4 is essentially the same as a modern CPU, except slower.
This is also my takeaway here. I'm pretty sure that someone who first used a computer with a typical 2023 setup, Win10 and all, would pretty easily figure out a typical 2004 PC with Windows XP, Internet Explorer or early Firefox, etc. But going back even ten years from 2004 is a bigger jump: you'd be on Windows 3.1, which looks quite different and requires familiarity with different workflows. 1984 is radically different, as the typical machines were all CLI/TUI and required machine-specific knowledge to work on; you couldn't just figure it out by looking at it.
There's a vast gap between what we can run and what we're told we should run.
Why not run on old hardware, with software that works for you? Because of electricity? As others have pointed out, the environmental cost of manufacture is greater than the difference in performance per energy unit for even systems that are ten years old, and in some cases, older. We're literally sending hardware to landfills because companies like Microsoft want to make more money (which, incidentally, is why I think Microsoft is the cause of the Dark Ages of Computing and why Bill Gates will eventually be remembered as someone who caused more waste and held back progress more than anyone in this time period).
I run servers, and even though many of the servers I run are using modern hardware (ARM, Apple ARM, AMD Ryzen), I still run a fleet of AMD AM1 Athlon hardware. They serve DNS, web, firewall, SFTP, NAT, email and so on in all sorts of environments. Why? Because to NAT and firewall a gigabit of traffic you don't need more than a quad core, 2 GHz CPU, and because using all four cores at 100% still takes less than 20 watts for the whole system.
I'm even building one into a 1U case right now to go to colo because the colo power budget is 100 watts or less, and I also have four 10 TB drives and a RAID controller to add. But even these 2014 systems, unlike Intel CPUs of the time, can take 32 gigs of ECC memory, so they're still very usable.
So much software has artificial barriers. You need AVX. You need AVX2. You need SSE4.2. You need FMA3. But why? Do you REALLY need them, or are you fine running certain software a little slower? After all, you're not going to use your 2014 AMD Athlon to transcode to h.265 often.
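If you're curious what your own machine actually advertises, here is a minimal, Linux-only Python sketch; it just parses /proc/cpuinfo, so the path and the flag names are Linux-specific assumptions:

    # Minimal Linux-only sketch: check which commonly-required ISA extensions
    # the current CPU actually advertises via /proc/cpuinfo.
    WANTED = {"sse4_2", "avx", "avx2", "fma"}  # "fma" is how cpuinfo reports FMA3

    def cpu_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for ext in sorted(WANTED):
        print(f"{ext:8s}", "yes" if ext in flags else "no")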
It makes me sad that so much hardware goes to landfill because of completely ridiculous reasons. Add FUD that people share about how just writing zeros over a drive is somehow not good enough, and you have the problem of people literally destroying hardware rather than recycling. It's not a good way to run a planet.
> The most important thing I learned was that old computers are still fast — and quite usable as a daily driver. User-perceived latency has not improved with much faster hardware (throughput). We need to start paying attention here when writing software.
It was a fun video experiment. I wanted to actually see if a low-power, semi-modern PCIe card (2.0 x16) with HDMI would actually work on a Socket 478 motherboard with PCIe 1.0 x16. It works rather well and can run OpenCL 1.2 applications.
Apparently CUDA v6.5 can run on Windows XP 32-bit as well. I intend to try this on a GeForce 730 I have saved from before (PCIe 3.0 x16).
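If anyone wants to sanity-check what the driver actually exposes, a quick device query does it. This is only a sketch, assuming pyopencl is installed (and that a build usable on such an old stack exists, which is itself an assumption):

    # Sketch: list the OpenCL platforms/devices the installed driver reports,
    # e.g. to confirm an "OpenCL 1.2" claim for a given card. Assumes pyopencl.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print(platform.name, "-", platform.version)
        for device in platform.get_devices():
            print("   ", device.name, "-", device.version)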
Incidentally, the first graphics card I had was a 32 MB RIVA TNT2. I don't have that working card anymore. :-)
The key takeaway for me there is the simple, replaceable software and data portability. This is something I am working on at the moment. I built a new Windows PC recently and am slowly moving to portable file formats and tools. Nothing goes near it until it is portable. This may end up as a Linux desktop one day, but I have 100 GB of stuff to sift through.
The easiest thing to do with a really old computer is get it an Ethernet card of some form (they almost always had them and they're not terribly hard to actually build) or a serial port, and connect it to a modern Linux computer as a dumb terminal.
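On the Linux side the standard trick is to run a getty on the serial port (with systemd, enabling something like serial-getty@ttyS0.service). Purely as an illustration, here is a rough Python sketch of the same idea; the device name and baud rate are assumptions for your setup:

    # Rough sketch: turn a serial line into a login console on the Linux side,
    # so an old machine wired to the port can act as a dumb terminal.
    # Device name and baud rate are assumptions -- adjust for your hardware.
    import os, pty, select, termios, tty

    SERIAL_DEV = "/dev/ttyUSB0"
    BAUD = termios.B9600

    ser = os.open(SERIAL_DEV, os.O_RDWR | os.O_NOCTTY)
    attrs = termios.tcgetattr(ser)
    attrs[4] = attrs[5] = BAUD          # ispeed / ospeed
    termios.tcsetattr(ser, termios.TCSANOW, attrs)
    tty.setraw(ser)                     # raw 8N1 mode, no local echo

    pid, master = pty.fork()
    if pid == 0:                        # child: a real login prompt (needs root)
        os.execvp("login", ["login"])

    try:
        while True:                     # shovel bytes both ways
            readable, _, _ = select.select([ser, master], [], [])
            for fd in readable:
                data = os.read(fd, 1024)
                if not data:
                    raise SystemExit
                os.write(master if fd == ser else ser, data)
    except OSError:
        pass                            # shell exited or line dropped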
Many people prefer those old computers and their (adjusted for inflation) $200 keyboards.
Other devices had absolutely horrible keyboards and should best be forgotten. PCjr.
The last time I used a Windows XP PC, I found it near impossible to browse the internet.
Most websites require a modern browser.
Modern browsers don't work with XP.
> With so much effort, technology and money being invested on the frontend side, why do we still have so many websites that fail to work?
Because it’s not worth it. I don’t know many people excited about debugging websites on museum hardware to please the one potential user, who could use a more modern device when it fails anyway. As a hobby, why not, but it makes no sense in a professional context.
Your CPU instructions follow an architecture written almost 40 years ago and JavaScript is itself 27 years old. They are actually museum worthy.
The point of having any standard is to exactly prevent people from debugging things every few days.
Despite this, many modern websites hardly maintain any sort of compatibility even when using a polyfill library. So these professionals are either not doing a very good job, or this argument is a non-argument. :-)
The instruction set of the CPU I am using to reply to this comment is from 2018. It has been a long and constant evolution from the CPU architectures of the 70s to what we use today. JavaScript is old and has kept improving too. It does a lot more, and much faster, than the first version of JavaScript, which was developed in about a week.
Standards are very useful, but we can't keep using old things forever. We stopped using HTTP 0.9 or TLS 1.0 for example.
Spending time supporting unused, obsolete hardware may be fun, but good luck getting funding for that activity.
1. What computer are you using? You do know that a lot of software developers do not take active advantage of newer instructions, right? Mostly because developers aren't that aware of floating-point rounding modes, the availability of co-processors, or even vector units. If it's a computer that uses an instruction set from 2018, well, kudos - but that is not a large set of people out there.
2. I don't think it's hard to add HTTP 0.9 support at all; maybe I will add it to https://github.com/guilt/phttpd (a rough sketch of how small the protocol is follows this list). I will not bemoan TLS or SSL dying at all. As protocols used for system updates, they've caused more problems than they have solved. The protocols will break once again when agencies end up factoring large numbers in the near future. There is a need for privacy, but there is no need to break system updates in the name of increasing security - it does not work that way.
3. Keep breaking things enough and you'll find all your customers moving away from you. Good luck explaining to them why your software breaks day after day. You might be able to fool them for some time.
4. I do not need funding to fix obsolete things; a lot of people are already doing it - some are getting paid for it too. :-)
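Referenced in point 2 above: a toy sketch of how small HTTP/0.9 actually is. This is not phttpd's code; the port and response body are made up for illustration.

    # Toy HTTP/0.9 exchange: the request is just "GET /path" + CRLF with no
    # version and no headers, and the response is the raw body with no status
    # line or headers; closing the connection marks the end of the response.
    import socket

    def serve_once(host="127.0.0.1", port=8090):
        with socket.create_server((host, port)) as srv:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode("latin-1").split()
                path = request[1] if len(request) > 1 else "/"
                conn.sendall(f"<html>You asked for {path}</html>\r\n".encode("latin-1"))

    if __name__ == "__main__":
        serve_once()

A complete client is then just something like: printf 'GET /\r\n' | nc 127.0.0.1 8090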
It’s a very common MacBook M1. I agree that most developers use an interpreter or a compiler with the default settings. At least on the Mac, everything seems to target a very recent instruction set. But it’s quite common to see hot paths optimised using not-so-old instructions on amd64 too.
I also agree that you should keep most of your users happy. How many users are using such old hardware and software? At work, we disabled old TLS versions and many cipher suites for security reasons because we knew no one would care. We kept TLSv1.2 enabled because we still had users using software incompatible with TLSv1.3 unfortunately.
ARM is at least 30 years old, if not more :) My first computer (in school days) was a 6502 based BBC Micro - so, maybe half an ARM (and a leg) computer.
I am well aware of the TLS/SSL issues, I was personally responsible for large scale patching of these ciphers at a large retailer for a very large fleet of machines on their datacenters. DM me if you want to know the details.
Yet today I am the one slamming this practice for web servers serving system and firmware updates, and I write about how Debian solves this problem well (in another thread on this HN page). It is okay to upgrade people to forward-looking infrastructure, and the cost of not keeping things sustainable is steep.
The correct way to get people to TLS v1.3 is to provide some means by which people can get the updates needed for both the algorithms and the new certificates. And the deal is that TLS v1.3 may not be enough for a post-quantum world. This may be the right time to undo our mistakes of the past.
> The most important thing I learned was that old computers are still fast — and quite usable as a daily driver.
I have to point out that your build isn't entirely an old computer. Having an SSD makes a major difference. If you were to use an IDE HDD or even an old SATA drive, you'd probably think differently.
It should, but it's early enough you might have to custom-compile to fully utilize it.
Also 32bit (or 32bit with weird memory features like EM64T) may be significantly faster because of the smaller memory access sizes. Or weird hybrids with 32bit addressing and 64bit extensions.
I don’t understand how checksums help for HTTP sites. If an attacker controls file delivery such that checksums are desirable, they might very well also control those checksums and their display.
At that point one might get those checksums out of band, on a modern device via HTTPS. But if a second device is needed, it could also, at the cost of some more effort, be used to just download those assets directly and then transfer them locally.
HTTPS does not solve the problem if the server operator adds malicious software; it only prevents spies from doing so.
Cryptographic hashes can help if the user who downloads the files knows the hash ahead of time somehow, although there can be a difficulty in figuring them out.
Getting the checksums out of band like you suggest is a possibility, and might be suitable if the file cannot easily be transferred directly. (It could also perhaps be communicated by other people in writing, by voice over the telephone, etc., if they (and the communication channel in use) can be trusted.)
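For what it's worth, once the expected hash has made it over some trusted channel, the check itself is tiny. A minimal sketch, where the file name and expected digest are placeholders:

    # Minimal sketch: verify a downloaded file against a SHA-256 value that was
    # obtained out of band. File name and expected hash here are placeholders.
    import hashlib

    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    actual = sha256_of("download.iso")
    print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")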
How are certificates distributed to computers before the TLS session begins? How does a user trust an untrusted certificate and its chain, say from Let's Encrypt?
You only distribute public keys to the end user - and private-key-signed hashes that can be verified with those public keys.
The user is given an option to trust a key for a new repo when it's not in their list of trusted keys. The installation already comes with a list of trusted keys when it begins. If you trust it, then you run it - just like you should be trusting the root certificates in your system to really benefit from TLS, even. That includes government-, telco- and company-issued certificates. :-)
> How are certificates distributed to computers before the TLS session begins? How does a user trust an untrusted certificate and it's chain, say from Let's Encrypt?
A handful of root certificates are baked into every OS and carefully guarded at every level to make sure they’re never given to someone who doesn’t have possession of a domain. Are you proposing to do this for GPG checksums? What would that look like?
It's the same logical process. A list of public keys that are trusted for installations is already kept in the OS. When updating, the first packages downloaded are public-key updates, signed with the old key.
The user is also alerted whether this key update happened because of a possible private-key compromise, a cryptographic strength/algorithm update, a key rotation, etc.
Once the user verifies the information, does their research, and approves, the new keys are used to verify signed packages from then on. And so forth.
The deal is that you do not rely on TLS to fetch this package update information at all. You merely download the signed hashes required to verify package authenticity and proceed.
Even before the OS is installed, the vendor tells you that so-and-so is their public key; you download the ISO and verify the signed hashes with that public key. Once you trust and install from this ISO, the update process can take care of the rest.
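A rough sketch of the flow being described, using Ed25519 via the third-party cryptography package. The function names, file layout, and choice of algorithm are illustrative assumptions, not how any particular package manager actually does it:

    # Rough sketch of the scheme described above: an already-trusted public key
    # verifies signed metadata, and a rotation message (the new key signed by
    # the old one) extends trust without relying on TLS at download time.
    # Key handling, file layout and the use of Ed25519 are assumptions.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify(pubkey_bytes, signature, message):
        try:
            Ed25519PublicKey.from_public_bytes(pubkey_bytes).verify(signature, message)
            return True
        except InvalidSignature:
            return False

    def accept_new_key(old_pubkey, new_pubkey, rotation_sig):
        # a new key is only trusted if the old, already-trusted key signed it
        return verify(old_pubkey, rotation_sig, new_pubkey)

    def package_ok(trusted_pubkey, package_path, digest_hex, digest_sig):
        # digest_sig covers the digest published in the (unauthenticated) repo metadata
        if not verify(trusted_pubkey, digest_sig, digest_hex.encode()):
            return False
        with open(package_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == digest_hex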
You can do that research with other systems. Not the one where you are stuck because of outdated protocols, certificates etc.
Besides, if your system doesn't support the most secure TLS version or have the most up-to-date certs, what is the point of browsing something over TLS? It should be plaintext-compatible.
Yes, of course, why not run a system that literally draws hundreds of watts for a fraction of the speed of any reasonably modern system? My tiny desktop has been too fast, too silent, and uses too little power under full load. Screw that! It barely heats up the room, unlike that old P4 that I used to have 20 years ago.
In fairness to the GP, the kind of CRT that would support a usable desktop session is light years away from the type of CRTs that cool-retro-term simulates.
Though if I was going to do something like this then personally I’d rather have a second screen as a CRT and have that resolution dropped right down, rather than “putting some crt in your crt”
I'm surprised they went with Windows rather than with one of the ultra-lightweight GNU/Linux distros intended for use on antique computers. Puppy Linux, or something similar.
Windows XP is presumably full of security vulnerabilities.
Win XP is considerably smaller and more responsive than the lightest of lightest-weight modern distros. XP was released over 2 decades ago now, and Linux has bloated up vastly in that time.
About the only Linux that can even compare is a hand-rolled config of Alpine, and even so, while Alpine on a well-specced 15YO 64-bit machine can compare with XP64, XP remains lighter and more responsive. On a 32-bit machine, it's game over: XP32 is dramatically lighter and quicker.
This is true, but incomplete, and more saliently, misses the point.
[1] Yes, all the original 3 BSDs support x86-32. But they're not the only ones: quite a few Linux distros still do, including Debian, Alpine, Void, and a number of other lesser-known ones. It's not entirely a legacy platform just yet.
[2] The point, though, is not whether they support 32-bit or not. The point is, do they run acceptably well on ~15+ year old hardware?
This is not about bitness. Linux supports 32-bit, XP supports 64-bit. That is tangential to the discussion. 64-bit x86 systems have been around for 2 decades now.
The real point is that up until about 20y ago, OSes were considerably smaller and quicker than now. Since then all the mainstream ones have got much bigger and much slower with little to show for it.
This is not immediately apparent because hardware has got faster. Not all hardware: CPUs haven't improved much. Raw single-core CPU performance scaling fell off a cliff in about 2007 and it will never recover. That everyone ignores this is one of the big lies of the industry.
But RAM has got much bigger and faster, and disk drives even more so thanks to SSDs and new interfaces for them.
SATA is about as old as x86-64. Assuming a fresh, clean OS -- any OS -- put an SSD into a Core 2 Duo and watch boot times fall through the floor. It's amazing.
But install a Linux from the 2020s on that machine and watch it become a slug.
Try Windows 7, generally agreed as a good version, on a Core 2 Duo and watch it struggle. Then nuke it and put XP64 on it, and watch the same hardware become fast.
I didn't believe it when people told me. So I tried it myself. It's astounding.
Testing took a few days and it remained fine in that time.
It was of course behind a firewall the whole time.
Also, the first thing I did was install an alternative browser. So long as you don't let any MS client app talk to the Internet, it's not _that_ bad.
I started with an old version of SeaMonkey, installed from USB, and then bootstrapped to newer browsers from there.
There are current 3rd party antivirus and firewall apps, and even with antivirus, XP is _startlingly_ fast compared to even Win7, and 7 is faster than 8 which is faster than 10 which is faster than 11.
I can't imagine connecting a Windows XP machine to the internet and/or running any network-connected software such as a web browser or email client.
It would be interesting to see how long it takes for a brand new (but "fully patched") Windows XP machine to be taken over by malware after connecting to the internet.
Bonus if it's behind a stateful firewall that blocks incoming traffic by default.
I tried Linuxes on old PCs in the late oughts. Yet they were usable only for a narrow sliver of technical users. Puppy also ran as root and had a super janky collection of inconsistent apps.
Perhaps Xubuntu [0] or Lubuntu [1] would be better go-to lightweight distros.
These days with the web having taken over much of what people use desktop computers for, I think Xubuntu + Firefox should go a long way even for a non-technical user. Admittedly I have little experience helping non-technical people use desktop Linux.
It's more that streaming services, even Netflix, stutter. The hardware doesn't have decoders built-in, or they only have Windows drivers, or the resolution is too much to handle. So the owners go back to EOL Windows versions or throw it in the trash and buy new, disposably priced machines.
Because designing a new motherboard is an expensive and painful process, so it's usually reserved for new processors.
The SBC market is full of things that are small and new.
But old processors are often run in old equipment, which has lots of space. And if you're running a new-old board, you often want bulky connectors anyway.
I did not realize it was so simple and easy to download high-quality video files and watch them full screen with great performance and hardware acceleration (and no ads!). This is awesome!
the small "conventional" formats are surprisingly watchable. It depends a lot what it is. I had a 3 hour lecture without slides that didn't lose anything relevant at 75 mb.
It was not 100% essential to the archive, if it was 1 gb it wouldn't have made the cut.
Oh well, he does it for fun. I do it daily, running Win2003 and the last PaleMoon browser that works on WinXP. Why? Because I really like XP/2003. Sure, I could run Firefox 52 ESR, but it looks like junk and it's slow. PaleMoon on a warm cache starts right away (around 200 ms, probably).

Luckily, most of the websites I care about still run fine, but dark clouds are on the horizon. I can't believe that rendering "mainstream" web pages eats more RAM than playing the first Stalker game!! What the hell happened? Everything gets bloated, UIs are terrible, and yet people keep using it, just mumbling to themselves..

For shitty webpages I have a VM running Linux with, of course, the latest crappy Firefox. But hey, the newest Firefox starts to lag terribly when rendered over an X server. Thank you, Mozilla, for putting even more bloat into Firefox..

Okay, I vented a bit :) Time to play some TRS2004 instead...
Joking or not, that ‘stuck in time’ story is so terrifying! I have my worries about bloatware as well, but I solve that with Linux or FreeBSD, a minimalistic DE, and other modern ways. Attempts to freeze time around some mediocre moment (be it some Windows version of the past or the good-ol’-days) scare me so very much! My personal hell would be living that very life you praised.
I have yet to find my new great OS. I'm 100% sure it won't be Windows.
I have an eye on some minimalistic Linux distros. But who knows, maybe Haiku will grow enough before I'm forced to make the switch.
As for staying in the past, I'm not that afraid about the SW part. I have all my toys mirrored. What worries me is the HW part. Those scrapyards where I get pretty decent HW will eventually run dry. And we know what new HW is like...
Have a look at SerenityOS. It's early days, but I think it's a truly awesome project and I'm expecting good things in future years. It's a completely new OS. The founder just hired two developers full-time, which is promising.
I really hope that one becomes daily-drivable. The pairing of the consistent UI and the BSD-style holistic Unix is something I'd relish.
The biggest hindrance will be, as ever, drivers. They are making consistent progress with USB and PCIe from what I can tell in the Discord. If they targeted a known platform like the RPi 4 and got all the drivers for it right, it could be big.
The article does not mention memory, and to me, that is the largest constraint on old systems when choosing an OS.
For a minimal OS on an old system, you can try NetBSD or OpenBSD i386. Modern browsers will be a no-go, but you will find NetBSD and OpenBSD very usable.
One note for OpenBSD: if you have less than, say, ~2 GB of memory, you may need to disable the kernel relink. It will work, but it will hammer the system. As for NetBSD, I have it on a system with 512 MB and it works great.
Both are fully able to handle the 2038 issue on their i386 (32-bit) builds, and have been for a while. That is something you should be aware of for long-term use.
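For reference, the 2038 boundary is just signed 32-bit time_t arithmetic:

    # A signed 32-bit time_t overflows 2**31 - 1 seconds after the Unix epoch.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00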
Yeah, I am aware of that problem. Windows seems to store time differently than UNIX, so it should be all right here. But I have dozens of old 32-bit Linux servers here and there...

As for RAM, well, I have 16 GB and it's pretty much the upper limit Win2003 can handle. Ideal seems to be 8 GB because you will have more Pools and PTEs. For today's web, the limiting factor is the per-process 2 GB limit (or 3 GB on Linux). That is barely enough to browse using a new Firefox. Older browsers like PaleMoon are much more memory-efficient; when it uses 300 MB of RAM, that's already quite a lot for it.
Yeah, I know Plan9. Similar problem: low HW coverage. Additionally, it doesn't give me anything beyond what I already have, and I get driver problems. Not worth the switch IMO.
I can stay on Win2003 and use VMs for newer stuff. A big plus of staying is that I have all the apps/tools mirrored and I know this system pretty well.
Well, I have them, but they aren't really Win2003, they're XP SP1, and they seem to be incomplete. If you have a better source, I am all ears :D
Anyway, those sources are handy for checking some internals of Win2003. I have a few binaries hex-edited so they are tweaked to my needs: Desktops 2.0, an IPMON.DLL helper to accept 127.0.0.1 as a DNS server from netsh, and a few more, probably.
I guess there are places on the internet where people have managed to build some versions of Windows from source.
Since this is illegal, I can't say a lot more about it.
There's always the ReactOS project; I wonder how good things would be if we just slipstreamed their DLLs into Windows. That would be a good way to do this.
I care about stability. It's a must, and my Win2003 is absolutely reliable. On my old desktop I had random BSODs due to a driver issue: some audio driver was not happy about PAE and 8 GB of RAM. After a long fight I isolated the faulty .sys driver and decided to use the HW maker's drivers instead of the ones from my mobo vendor. Problem vanished.
Well, there is not much to do, really. I'm happy with it. It's stable, fast, nice-looking. If I had the source code and could compile parts of the kernel, I would maybe try some more hacks. One thing in the CM annoys me: there is no knob to limit how much RAM can be used as filesystem cache. Once all RAM is eaten by the cache, the system can be a bit erratic under load. Which should NOT really happen, because Win2003 implements an LRU cache, so it should discard the least-used pages on the fly and make them available to either apps or the cache. But this doesn't happen often, really, so it's not a big issue :)
In true video game fashion: There are rumours that there are non-Microsoft people out there compiling Windows kernels. If you choose to investigate, the reward would be a FS-Cache RAM Knob.
Yeah, I track ReactOS. Unfortunately, it's not stable enough for my needs. I would really like to see it in better shape.. I'm too old/tired to try to contribute to it, sadly :(
Writing an OS today is a nightmare because of drivers. HW vendors are shady, HW is very complicated and, what's worse, it changes often. You need an army of skilled devs to handle that.