
Especially in the Linux world, where not asking too much of your own PC is seen as a virtue: people run tiling WMs from the nineties, spend their time in Emacs, and say everything is fine.

Meanwhile I want my Linux system to run VR, multiple 4K displays, very demanding games, and Bluetooth headphones. And Linux is the worst at exactly that: everything feels laggy and half-polished there, while on the proprietary OS the same setup feels MUCH faster.

Sure, it's not Linux's fault, but let's stop saying everything is fine and dandy because Emacs is still running.



No one is saying everything is fine. It's not.

What we're arguing is why everything is not fine.

SGIs ran VR, multiple displays, and very demanding apps in the nineties too, along with all sorts of wonky, complex 3D input devices. So did DEC Alphas. This is the stuff Unix was built for.

My claim is that nineties Linux was much closer to having the right architecture for this sort of stuff than 2020 Linux is. The reinventions and onion layers didn't help; they hurt.

I think the only piece nineties Linux didn't anticipate was the level of hot-swapping hardware (USB, Bluetooth, displays, etc.), and the level of power management. Modern Linux never got that architected or integrated quite right, because it was built with hack upon hack upon kludge. It's split up in bizarre ways between kernel and user space, which would be really tough to clean up now.


> I think the only piece nineties Linux didn't anticipate was the level of hot-swapping hardware

They did; modular kernels date back to at least 2.2, and USB was born in that era.


If the architecture had anticipated this:

* My USB webcams wouldn't show up in a different order each time I reboot. This works fine under Windows and Mac.

* My monitor configuration wouldn't be hardcoded in my xorg config file, or swapped around manually with xrandr. I'd have a way to code up config options for whatever is plugged in, and if something unanticipated happens, it'd do something reasonable until I coded that config in too.

* I wouldn't need to reconfigure my drawing tablet to connect to the right monitor each time I plug it in.

* The system wouldn't get into an unrecoverable, unstable state with e.g. an unreliable USB cable.

... and so on. It's designed for a fixed set of hardware, with layers on top of that to support hot-swapping. I don't have "USB 4k Logitech Webcam" at the native level. I have /dev/video3. I then have layers to map names back (a sketch of that mapping layer is below).
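
For concreteness, that mapping layer is typically a udev rule: match on USB attributes, hand out a stable symlink. A minimal sketch, assuming a hypothetical Logitech camera; the vendor/product IDs and the symlink name are made up, real ones come from udevadm info:

    # /etc/udev/rules.d/99-webcam.rules -- hypothetical IDs; look yours up with
    #   udevadm info -a /dev/video3 | grep -E 'idVendor|idProduct'
    SUBSYSTEM=="video4linux", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="085e", SYMLINK+="webcam-logitech-4k"

    # reload and retrigger so /dev/webcam-logitech-4k appears without a reboot
    $ sudo udevadm control --reload-rules && sudo udevadm trigger

Which is exactly the point: the kernel-native name is still videoN, and the stable name is a bolted-on symlink.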

Same thing with HDDs, actually. I refer to them as /dev/sdc4 rather than by a GUID or a name or similar. Onion layers again.


You've described the need for UUIDs but then discard them for disks. Why?

    $ ls /dev/disk/by-uuid/
    266c945c-1c6d-40e7-b770-73864a5541fa

    $ cat /etc/fstab 
    UUID=266c945c-1c6d-40e7-b770-73864a5541fa       /


Mostly, because as of 2020, most things don't use UUIDs. See e.g.

1) https://www.raspberrypi.org/documentation/installation/insta...

2) man fdisk

3) man mkfs

And so on. The /dev/sd_ naming is primary, with UUIDs as kind of an afterthought.

It ought to be the other way around, with UUIDs as the primary, proper, canonical name and interface, and a legacy backwards-compatibility layer for /dev/sd_ devices. It's even reflected in the directory structure: yes, I CAN list disks "by-uuid", by label, id, partuuid, or path, but those are special cases, with sd_ as canonical.
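
To make the "sd_ as canonical" point concrete: the by-uuid entries are just symlinks pointing back at the kernel's sd_ nodes, not the other way around. Illustrative listing (fields trimmed; the sda1 target is an assumption):

    $ ls -l /dev/disk/by-uuid/
    266c945c-1c6d-40e7-b770-73864a5541fa -> ../../sda1

udev generates those symlinks after the fact, which is the retrofit being described.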

It's kinda retrokludged in there. I never said USB/etc. didn't work. Just that it wasn't architected for it.



