When I started university in 2000, I had a quad-boot system: Win98, Win2000, BeOS 5, and Slackware Linux (using the BeOS bootloader as my primary because it had the prettiest colors). I mostly used Slackware and Win98 (for games), but BeOS was really neat. It had support for the old Brooktree video capture cards, could capture video without dropping frames like VirtualDub often did, and it even had support for disabling a CPU on multi-CPU systems (I only saw videos of this; never ran BeOS on an SMP system).
I wish we had more options today. On modern x86 hardware, you pretty much just have Windows, Linux, and maybe FreeBSD/OpenBSD if you replace your Wi-Fi card with an older one (or macOS if you're feeling Hackintoshy... or just buy Apple hardware). I guess three is kinda the limit you're going to hit when it comes to broad support.
I think BeOS was the only OS that allowed smooth playback of videos while you worked at the same time, something Windows was capable of 5 years later and Linux 10 years later :D
What technically enabled this on such limited hardware? Was it the lack of security/containerization/sandboxing that made OS calls much faster and context switches cheaper?
Other people mentioned the real preemptive scheduling and the generally better worst-case latency, but another factor was the clean design. The other operating systems tended to be okay in the absence of I/O contention, but once you hit the capacity of your hard drive you would find out that e.g. clicking a menu in your X11 app did a bunch of small file I/O in the background, which would normally be cached but had been pushed out, etc. A common mitigation in that era was having separate drives for the operating system, home directory, and data, so you could at least avoid contention for the few hundred IOPS a drive could sustain.
Yes. This always amazed me with BeOS. It would play 6 movies simultaneously, making my PC very slow but still responsive. As if the framerate just went down.
Bear in mind that resolutions back then were much lower than now, and not all computers had 24 bit color frame buffers. Video cards ran one monitor for the most part, with no others attached.
Be had well written multi threading and preemptive multitasking implemented on a clean slate - no compatibility hacks required. That meant it worked well and was quick/responsive. There were still limits, and the OS didn't have many security protections that would get written in today.
Some people were, but it wasn't too common. Workstations had far higher resolutions long before this, but home PCs running non-3D-accelerated hardware were still mostly 1024x768-ish.
The BeBox itself was vastly different hardware than a standard PC as well, so it could break a lot of rules as far as smooth concurrency and multitasking... kinda like the Amiga did.
Yup, had a 22” Mitsubishi monitor that could do that resolution in ~2002. Everyone would pick on me about the text being so small, but I’d let them sit at my desk squinting and I’d stand ten feet back and read the screen with ease as they struggled. The monitor was a beast though, around 70lbs if memory serves.
That was more the exception than the rule. Besides, 1080p has about 58% more pixels per frame than 1280x1024, and likely at a higher frame rate. Big difference in hardware load.
I think it was their thread/process scheduler. It had a band of priorities that got hard real-time scheduling, while lower-priority stuff got more "traditional" scheduling. (Alas, I don't know too much about thread/process scheduling, so the details elude me.) That way the playback threads (and other UI threads, such as the window system's) got the timeslices they needed.
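For flavor, here's roughly what that looked like from the programmer's side: a minimal sketch using the Be kernel kit's spawn_thread(), written from memory of the Be Book, so treat the exact priority constant for the real-time band as approximate.

    /* Minimal sketch of a real-time playback thread under the Be API.
     * From memory of the Be Book; exact constants may be off. */
    #include <OS.h>

    static int32 playback_loop(void *data)
    {
        /* decode and push audio/video buffers here */
        return 0;
    }

    int main(void)
    {
        status_t exit_value;

        /* Threads at or above B_REAL_TIME_DISPLAY_PRIORITY were scheduled
         * as real-time: they ran until they blocked, bypassing the
         * round-robin logic used for ordinary priorities. */
        thread_id tid = spawn_thread(playback_loop, "playback",
                                     B_REAL_TIME_DISPLAY_PRIORITY, NULL);
        if (tid >= B_OK) {
            resume_thread(tid);             /* threads start suspended */
            wait_for_thread(tid, &exit_value);
        }
        return 0;
    }

The point of the split being that media threads in the real-time band couldn't be starved by a pile of busy normal-priority threads.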
Isn't giving near-real-time scheduling priority to audio/video how Windows handles things these days? I think I read that somewhere last week in a discussion of Linux kernel scheduler behaviour.
Amiga did this in 1985. It's just that for compatibility reasons Apple couldn't do this. Even funnier: the fastest hardware to run old MacOS (68k version) on: an Amiga computer.
Ah yeah, I still have my PowerComputing PowerTower Pro! At the time it was a current model, and its 512MB of RAM was insane; my friends & classmates were jealous! hahah :)
Check out this video[0], basically an Amiga with an accelerator card potentially makes for the fastest environment to run 68k-based Mac OS (System 7) ...
Well, it's more akin to something like Wine where it's not exactly a virtual machine, since the processor instructions are the same. Tho that's about the extent of my understanding... haha
I sometimes used my Atari ST with an emulator called Aladin.
"Cracked" to work without Mac ROMs. But wasn't really useful to me because of lack of applications (at the time).
IIRC there were solutions like this for the Amiga too.
That depended _very_ heavily on your graphics card at the time. In 2001, I could get X to crash on my work computer if I shook my mouse too fast. At home on my Matrox card, yes, it was rock stable.
High-definition playback is still not as smooth as it could be in browsers on Linux (or, if your CPU is fast enough for smooth playback, it will drain your battery more quickly), because most browsers only have experimental support for video acceleration.
Pretty much any CPU released in the past decade should be capable of decoding 1080P video as well as a GPU (though yes, will use slightly more power). The only exceptions I can think of are early generation Atom processors, which were terribly slow.
> Pretty much any CPU released in the past decade should be capable of decoding 1080P video as well as a GPU (though yes, will use slightly more power).
The point is that modern GPUs have hardware decoding for common codecs, and will use far less power than CPU decoding. But the major browsers on Linux (Firefox and Chrome) disable hardware decoding on Linux, because $PROBLEMS.
So, you end up with battery draining CPU-based 1080p decoding. And even more battery draining or choppy 4k decoding.
Linux could do that only if your system was lightly loaded. Once you started to have I/O contention, none of the available kernel schedulers could reliably avoid stuttering.
I had this experience too: my video card was so shitty that I wasn't able to watch 700MB DivX videos in Windows, so I had to boot into Linux and use mplayer.
This would be challenging with modern codecs that use delta frames. The only way I can see it working is precomputing all frames from the preceding keyframe. Doable, but a decent amount of effort for a fairly obscure feature.
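For the curious, the standard trick today is to seek back to the keyframe and decode forward, discarding frames until you reach the one you want. A rough sketch of that using the current FFmpeg libraries (my choice of tool for illustration, nothing BeOS-specific; error handling and timebase subtleties mostly elided):

    /* Decode the frame at target_ts by seeking back to the nearest
     * preceding keyframe and decoding forward, discarding frames.
     * Illustrative only: assumes target_ts is in the stream's timebase. */
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    static int decode_at(AVFormatContext *fmt, AVCodecContext *dec,
                         int stream_idx, int64_t target_ts, AVFrame *out)
    {
        AVPacket *pkt = av_packet_alloc();

        /* AVSEEK_FLAG_BACKWARD lands on the keyframe at or before target_ts. */
        av_seek_frame(fmt, stream_idx, target_ts, AVSEEK_FLAG_BACKWARD);
        avcodec_flush_buffers(dec);

        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == stream_idx) {
                avcodec_send_packet(dec, pkt);
                while (avcodec_receive_frame(dec, out) == 0) {
                    if (out->pts >= target_ts) {   /* reached the wanted frame */
                        av_packet_free(&pkt);
                        return 0;
                    }
                    /* otherwise: a delta frame on the way there; discard */
                }
            }
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
        return -1;
    }

Playing backwards means calling this once per step, so with long GOPs you can end up re-decoding hundreds of frames just to move back one, which is exactly the effort/obscurity trade-off mentioned above.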
I never saw BeOS do that with video, but I heard it do it with MP3 files. SoundPlay was a kind of crazy bananas MP3 player -- it could basically act as a mixer, letting you not only "queue up" multiple files but play each of them simultaneously at different volume levels and even different speeds. I've still never seen anything like it outside of DJ software.
> you pretty much just have Windows, Linux and maybe FreeBSD/OpenBSD [...]
That sounds just as good? Compared to quad-booting Win98/Win2000/BeOS5/Slackware, today you could quad-boot Win10/FreeBSD/OpenBSD/Ubuntu. Actually, depending on what you count as different systems and what exact hardware you have, you could have 2 laptops sitting on your desk: a Pinebook running your choice of NetBSD, OpenBSD, FreeBSD, or some Linux (https://forum.pine64.org/forumdisplay.php?fid=107), and an x86 laptop multibooting Windows 10, Android, Ubuntu GNU/Linux, Alpine Busybox/Linux, FreeBSD, OpenBSD, NetBSD, and Redox (https://www.redox-os.org/screens/). That's 2 processor families in 2 machines, running what I would count as 4 and 8 operating systems respectively.
There also used to be other CPU architectures--though even at the time, enough people complained about "Wintel" that maybe it was obvious that the alternatives weren't ever going to catch on.
People complained about "Wintel" because the 32-bit x86 chips were so fast and cheap they destroyed the market for RISC designs and killed existing RISC workstation and server architectures, like SPARC and HPPA and MIPS.
By the time the Pentium came around, the future looked like a completely monotonous stretch of Windows NT on x86 for ever and ever, amen. No serious hardware competition, other than Intel being smart enough to not kill AMD outright for fear of antitrust litigation, and no software competition on the desktop, with OSS OSes being barely usable then (due to an infinite backlog of shitty hardware like Winmodems and consumer-grade printers) and Apple in a permanent funk.
We were perpetually a bit afraid that Microsoft/Intel would pull something like Palladium/Trustworthy Computing [1] and lock down PC hardware but good, finally killing the Rebel Alliance of Linux/BSD, but somehow the hammer never quite fell. It did in the cell phone world, though, albeit in an inconsistent fashion.
To Microsoft's credit, the early Windows NT versions were multiplatform. I remember that my Windows NT 4.0 install CD had x86, Alpha, PowerPC, and MIPS support.
The other thing people forget, which is still a bit incomprehensible to me, is that multiple Unix vendors were saying they would migrate to Windows NT on IA-64.
...well, we all know what happened - but I've often thought that Microsoft hastened their demise.
Somewhere in there, of course, was the whole SGI move away from IRIX (SGI's Unix variant) to Windows NT (IIRC, this was the Visual Workstation line rather than the Octane, which stayed on IRIX), and there was some upset over it in the SGI community. Maybe that was part of the "last gasp"? I'm sure some here have better info about those times; I merely watched from the sidelines, because I certainly didn't have any access to SGI hardware, nor any means to purchase some myself - waaaaay out of my price range then and now.
Of course - had SGI not gone belly up, I'm not sure we'd have NVidia today...? So maybe there's a silver lining there at least?
They couldn't afford to compete with Intel on processors... they just didn't have the volumes, and every generation kept getting more expensive to develop. For Intel it was getting relatively cheaper, thanks to economies of scale, since their unit volumes were exploding throughout the '90s. Also, Intel's dominance in manufacturing process kept leapfrogging the RISC vendors' progress on the CPU architecture front.
It actually worked pretty nicely - if anything, better back in those days when software expected to run on different Unixes, before the Linux monoculture of today.
> We were perpetually a bit afraid that Microsoft/Intel would pull something like Palladium/Trustworthy Computing [1] and lock down PC hardware but good, finally killing the Rebel Alliance of Linux/BSD, but somehow the hammer never quite fell. It did in the cell phone world, though, albeit in an inconsistent fashion.
I agree that phones are more locked down than desktops/laptops nowadays, but it's worth pointing out that neither Microsoft nor Intel are really winners in this area. Both are still doing fairly well in the desktop/laptop space in terms of market share, though.
I honestly think it was less any type of Wintel conspiracy and more that platforms have network effects. Between Palladium not working out and Microsoft actually making Windows NT for some RISC ISAs, there wasn't actually an Intel/Microsoft conspiracy to dominate the industry together. They each wanted to separately dominate their part of the industry, and both largely succeeded, but MS would have been just as happy selling Windows NT for SPARC/Alpha/PowerPC workstations, and Intel would have been just as happy to have Macs or BeBoxes using their chips.
> I honestly think it was less any type of Wintel conspiracy and more that platforms have network effects.
True. I've always regarded "Wintel" as more descriptive than accusatory. It's just a handy shorthand to refer to one specific monoculture.
> Between Palladium not working out and Microsoft actually making Windows NT for some RISC ISAs, there wasn't actually an Intel/Microsoft conspiracy to dominate the industry together.
Right. They both happened to rise and converge, and it's humanity's need to see patterns which turns that into a conspiracy to take over the world. They both owe IBM a huge debt, and IBM did what it did with no intention of being knocked down by the companies it did business with.
> OS X was around in the days of XP and Linux was perfectly usable on the desktop.
> A few years earlier things were a little more bleak.
I admit I was unclear on the time I was talking about, and probably inadvertently mangled a few things.
As for Linux in the XP era, I was using it, yes, but I wouldn't recommend it to others back then because it still had pretty hard sticking points with regards to what hardware it could use. As I said, Winmodems (cheap sound cards with a phone jack instead of a speaker/microphone jack, which shove all of the modem functionality onto the CPU) were one issue, and then there was WiFi on laptops, and NTFS support wasn't there yet, either. I remember USB and the move away from dial-up as being big helps in hardware compatibility.
Yeah, WiFi on Linux sucked in those days. For me that was the biggest pain point about desktop Linux. In fact, I seem to recall having fewer issues with WiFi on FreeBSD than I did on Linux -- that's pure anecdata, of course. I remember the first time I managed to get this one laptop's WiFi working without an external dongle and to do that I had to run Windows drivers on Linux via some wrapper-tool (not WINE). To this day I have no idea how that ever worked.
> I remember the first time I managed to get this one laptop's WiFi working without an external dongle and to do that I had to run Windows drivers on Linux via some wrapper-tool (not WINE). To this day I have no idea how that ever worked.
ndiswrapper. It's almost a shibboleth among people who were using Linux on laptops Way Back When.
> NDISwrapper is a free software driver wrapper that enables the use of Windows XP network device drivers (for devices such as PCI cards, USB modems, and routers) on Linux operating systems. NDISwrapper works by implementing the Windows kernel and NDIS APIs and dynamically linking Windows network drivers to this implementation. As a result, it only works on systems based on the instruction set architectures supported by Windows, namely IA-32 and x86-64.
[snip]
> When a Linux application calls a device which is registered on Linux as an NDISwrapper device, the NDISwrapper determines which Windows driver is targeted. It then converts the Linux query into Windows parlance, it calls the Windows driver, waits for the result and translates it into Linux parlance then sends the result back to the Linux application. It's possible from a Linux driver (NDISwrapper is a Linux driver) to call a Windows driver because they both execute in the same address space (the same as the Linux kernel). If the Windows driver is composed of layered drivers (for example one for Ethernet above one for USB) it's the upper layer driver which is called, and this upper layer will create new calls (IRP in Windows parlance) by calling the "mini ntoskrnl". So the "mini ntoskrnl" must know there are other drivers, it must have registered them in its internal database a priori by reading the Windows ".inf" files.
It's kind of amazing it worked as well as it did. It wasn't exactly fun setting it up, but I never had any actual problems with it as I recall.
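To make the shim idea concrete, here's a toy sketch - emphatically not ndiswrapper's actual code, and the win_miniport names are made up - of a Linux network driver whose transmit hook calls straight into a Windows-style send handler loaded into the same address space:

    /* Toy illustration of the ndiswrapper idea (hypothetical names,
     * not the real implementation): a Linux net driver that forwards
     * transmits to a Windows-style miniport entry point. */
    #include <linux/module.h>
    #include <linux/netdevice.h>
    #include <linux/etherdevice.h>

    /* Entry point table we'd have filled in after loading the .sys file
     * and reading its .inf, per the quoted description above. */
    struct win_miniport {
        int (*send)(void *adapter_ctx, void *buf, unsigned int len);
        void *adapter_ctx;
    };
    static struct win_miniport miniport;

    static netdev_tx_t shim_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        /* Translate the Linux skb into the flat buffer the Windows
         * driver expects, then call directly into its send handler --
         * possible because both run in the same kernel address space. */
        miniport.send(miniport.adapter_ctx, skb->data, skb->len);
        dev_kfree_skb(skb);
        return NETDEV_TX_OK;
    }

    static const struct net_device_ops shim_ops = {
        .ndo_start_xmit = shim_xmit,
    };

    static struct net_device *shim_dev;

    static int __init shim_init(void)
    {
        shim_dev = alloc_etherdev(0);
        if (!shim_dev)
            return -ENOMEM;
        shim_dev->netdev_ops = &shim_ops;
        return register_netdev(shim_dev);
    }

    static void __exit shim_exit(void)
    {
        unregister_netdev(shim_dev);
        free_netdev(shim_dev);
    }

    module_init(shim_init);
    module_exit(shim_exit);
    MODULE_LICENSE("GPL");

The hard part, as the quote says, was faking enough of ntoskrnl (IRPs, spinlocks, timers, work items) underneath the Windows driver to keep it happy.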
Yeah, I know what ndiswrapper is (though admittedly I had forgotten its name). I should have been clearer: I meant I was constantly amazed that such a tool existed in the first place, and doubly amazed that it was reliable enough for day-to-day use.