What a fun time that was in the aughts for distros. Hardware was changing rapidly and made a lot of new stuff possible. I wonder if there are still emerging hardware platforms to inspire kids today. I had the most fun when I could combine software with hardware: trying to fit a distro on a floppy, using business-card CDs to boot a distro into RAM, creating thin terminal clients, routers, and firewalls. Feels more real when there's something physical involved.
Also worth noting that the concept Robert discusses of avoiding "system rot" is an early example of immutable infrastructure, a game-changing design used to automate fixing state and entropy failures in modern systems. Had no idea that's where it would lead.
I've always seen "immutable infrastructure" as another riff on the general theme of Postel's quip, LTI systems, functional programming, "just restart it" approaches all the way down... The general admission that systems with memory rapidly become much harder to reason about over time than systems without memory.
Memory is of course also the only way to do anything useful when one interacts with the real world; it's very hard to eke out a decent living being an amnesiac genius at anything. I also think well-designed memory systems become harder to reason about much more slowly than poorly designed ones -- take, for example, that NetBSD box one guy spun up as a simple file server that was still working, untouched and forgotten, about 10 years later. There's bound to be some weird stuff happening on that system, but even so you could probably remote in and mostly find a system you could navigate and administer.
But when you don't have the luxury of time and experience to design such a graceful system (and none of us with day jobs do), you can reap a lot of those benefits just by trying to stay as immutable as possible, right up to the point where it starts causing major performance issues.
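To make the "as immutable as possible" idea concrete, here's a tiny Python sketch of my own (the ServerSpec type and its fields are invented for illustration): the point is just that a "change" builds a new value instead of mutating a live one.

    from dataclasses import dataclass, replace

    # Immutable "infrastructure" description: you never edit a live
    # instance, you derive a new one and redeploy from scratch.
    @dataclass(frozen=True)
    class ServerSpec:
        image: str
        packages: tuple  # tuple, not list, so the spec itself stays immutable

    base = ServerSpec(image="debian-12", packages=("nginx",))

    # "Changing" the server means deriving a new spec from the old one...
    patched = replace(base, packages=base.packages + ("openssl",))

    # ...and the old spec is untouched, so there's no hidden drift to reason about.
    assert base.packages == ("nginx",)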
> I wonder if there are still emerging hardware platforms to inspire kids today.
I should think so. There are all sorts of interesting SoCs with widening capabilities but still always with vastly different tradeoffs; and where once you could apply only very limited programmed control to all sorts of objects, today you can run them using a full-fledged computer. There are also more "smart" devices which smart people may want to reverse-engineer and make generic and customizable - from TVs to vacuum cleaners.
>I should think so. There are all sorts of interesting SoC
Why would kids want to play with such things?
I got into computers because that's where "the internet" was, where the pirated video games, mp3s, and movies were, so learning some technical stuff came with the territory.
But today kids have all that one touch away. They don't have to leave that comfort zone. Back then there were no comfort zones. If you wanted (free) entertainment on the PC you had to put in the work.
I am old, so this comment is pretty hilarious. You got into computers “because that’s where the Internet was”. I got into computers two decades before that and had to create stuff myself because I could not afford everything. I had to save up for Z80 books and wait for a family road-trip to buy them. When “the Internet” came along, suddenly everything was everywhere and free. Why would anybody need to learn anymore? It was all just “one touch” away.
Except you did learn. And the “kids today” still are. Except they are building a Proxmox homelab running Linux containers and Tailscale networking so they can host a Minecraft server. And they are using CAD to design their own custom-fit (to their finger length) macropad and programming the microcontroller themselves. They are running Jellyfin to stream the content their parents don’t subscribe to. They are doing video editing and 3D rendering that would have been science fiction in the 90’s, and learning about audio and video codecs to do it. They may be running Steam on Linux. They are also jailbreaking their phones and overclocking their hardware. Hardware much more complex than what we grew up with.
Some of them are even learning a bit of Python or Typescript and creating virtual AI assistants from free APIs or Open Source models.
Trust me. Kids are still putting in the work. They are just putting it into different things.
>they are using CAD to design their own custom-fit (to their finger length) macropad
Who is this "they" doing all of this? I never heard of kids doing this. I'm sure a couple around the world are doing it but definitely not the majority.
Meanwhile, when I was growing up, almost everyone knew how to tinker with Windows internals to get pirated games working. Almost everyone knew what an IP address was. How many today know this?
You're generalizing a generation from a small niche of tech kids.
It was always a small niche of tech kids. Today, 10 years ago, 20 years ago, 40.
I went to a good school in one of the tech capitals of the US and I was the only person I knew who ran desktop Linux in high school.
Tech will always be the domain of smart, ambitious nerds, almost by definition - that's who you have to be to have a shot at pushing the envelope. If you want to expand the number of kids doing it, you'll have more success expanding the number that fit that preceding definition.
But "back then", thinking about the late 90ies and early 00ths here, far less people had to use a computer at all.
So in absolute numbers there are most probably still more kids with "general tech know-how" today.
Prior to my knowledge of Firefox profiles and Multi-Account Containers, I used Tiny Core to isolate my banking, shopping, general browsing, etc. from both each other and my base OS, in VirtualBox. I preferred it to other small distros like SliTaz and Puppy because it was really bare-bones and easy to add to.
Now you'd likely just use Docker or, like me, profiles and containers for compartmentalisation; aka super paranoia.
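If anyone wants to script that compartmentalisation, here's a rough Python sketch of my own; the profile names are made up, and -P / --no-remote are standard Firefox flags, though exact behaviour can vary by version.

    import subprocess

    # One profile per activity; each gets its own cookies, cache, and
    # extensions, so banking never shares state with shopping.
    # (Profiles can be created beforehand, e.g. with firefox -CreateProfile.)
    PROFILES = ["banking", "shopping", "general"]  # hypothetical names

    def open_in(profile: str, url: str) -> None:
        # --no-remote keeps this instance from joining an already-running
        # Firefox, so the profiles stay genuinely separate processes.
        subprocess.Popen(["firefox", "--no-remote", "-P", profile, url])

    open_in("banking", "https://example.com")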
> I used tiny core to isolate my banking (...) in virtualbox.
A few years ago, I tried something similar—not out of fear that malware might steal money from my bank account, but because my bank required me to install some kind of security software to access their internet banking on a PC. It turned out they don't like customers running their software in VMs. My account was completely blocked, including my debit card and ATM access. I had to visit a physical branch to resolve the issue. So I gave up on using internet banking on a PC and switched to their mobile app instead.
I would have taken the opportunity to "educate" them that your use case is a legitimate one, and why they should teach their systems not to block users like that.
I think one Qubes feature that could be profitably extracted is disposable VMs, particularly for web browsing but also as a sort of default jail for potentially dodgy software. Like a right-click "run this in a disposable VM" option in mainstream OSes. And run the built-in web browser like that by default.
Apple could probably come up with some spiffy branding. “Run this on an Island™” or something like that. Call the feature “Archipelago.”
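For what it's worth, Qubes already exposes roughly this from the shell. A hedged Python sketch of what such a right-click handler might call; qvm-run's --dispvm flag is real Qubes 4.x tooling, but the wrapper and example URL are mine:

    import subprocess

    def run_disposably(*command: str) -> None:
        # Spawn a brand-new disposable qube, run the command inside it,
        # and let Qubes destroy the whole VM when the command exits,
        # so nothing persists between runs.
        subprocess.run(["qvm-run", "--dispvm", *command], check=True)

    # e.g. what a "run this in a disposable VM" right-click might invoke:
    run_disposably("firefox", "https://sketchy.example")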
> one Qubes feature that could be profitably extracted is disposable VMs, particularly for web browsing but also as a sort of default jail for potentially dodgy software
A similar feature ships on HP business PCs as HP Sure Click, based on uXen, derived from Bromium micro-VMs, derived from Xen, which is used by Qubes. It was also cloned by Microsoft in Windows as Windows Defender Application Guard. Bromium isolated each tab of a web browser in a separate Windows VM with copy-on-write memory, and even individual network connections could be isolated in a micro-VM.
A similar architecture has evolved on mobile phones as the Android Virtualization Framework (AVF) with pKVM nested virt. Apple has taken baby steps in this direction: hardware nested virt on M2+ silicon, macOS support on M3+, and on the M4+ iPad Pro a "Secure eXclave" micro-VM for the camera LED indicator.
While the underlying infrastructure is slowly being built to enable disposable VMs and other security improvements based on micro-VMs, one challenge is high-performance graphics composition of the VM's display output with the main desktop. Google has blazed an OSS trail with virtio and crosvm on ChromeOS, which will hopefully be ported to Android with external display output. Since Apple controls both hardware and OS, they could add GPU hardware support for virtualization, removing any perceptible UX slowdown from micro-VMs. Intel has already added SR-IOV virtualization to Xe iGPUs.
Thanks, this is a great overview of the space; I had no idea about all this. Maybe I should play with other OSes beyond Mac, Qubes, and Linux, but configuring Qubes tends to take up all my extra OS time :-)
It's a bit of a mindset change, but yeah, you can start to think of operating systems not as big frameworks to run lots of applications, but as thin wrappers around single applications in an ecosystem like qemu+virsh. Clone a fresh template, install something in it, talk to it with sockets/TCP, whatever.
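A rough Python sketch of that loop, in case it helps; virt-clone and virsh are the real CLI tools, but the template name, guest IP, and port are placeholders I've invented:

    import socket
    import subprocess

    # 1. Clone a fresh VM from a pristine template.
    subprocess.run(
        ["virt-clone", "--original", "app-template",
         "--name", "app-instance", "--auto-clone"],
        check=True,
    )

    # 2. Boot it; the guest is the "thin wrapper" around one application.
    subprocess.run(["virsh", "start", "app-instance"], check=True)

    # 3. Talk to the single application it hosts over plain TCP.
    #    (Assumes the guest got 192.168.122.10 and the app listens on 8080.)
    with socket.create_connection(("192.168.122.10", 8080), timeout=30) as s:
        s.sendall(b"ping\n")
        print(s.recv(1024))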
That takes me back; I remember getting CDs of DSL (Damn Small Linux) with our console switches. What a great time to be building out our massive server rooms! Lots of fun back then!