> Linux (where do I start)

Not true; there are plenty of ways in Linux to completely recover from a bad update: manual backups (even plain tar works), filesystem snapshots with BTRFS or ZFS, or something fancier like NixOS (which I use).
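For example, with ZFS the whole safety net before an update can be two commands (the pool and dataset names below are made up; adjust to your layout):

    # take an instant, read-only snapshot of the root dataset before updating
    zfs snapshot rpool/ROOT/ubuntu@pre-update

    # update went bad? roll the dataset back to that point
    zfs rollback rpool/ROOT/ubuntu@pre-update

Since snapshots are copy-on-write, taking one is effectively free.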




This still doesn't mean there's absolutely no downtime, especially in an enterprise setting, where it's usually easier to perform generic rollbacks on Windows than on Linux.


I hadn't used Windows for 10-15 years. Coming back, I was amazed at how frequent updates were and how long they took (and how aggressive Windows got about them). Every few weeks I'd have to reboot for an update, and it took a while. On macOS it's more like 2-3 times a year, and only the major annual update takes a significant amount of time.

I've had more bad updates with Windows while dabbling than with Linux/macOS used full-time, but I'm willing to chalk that up to personal experience. What was annoying is that they removed the boot-time shortcut keys for "safe mode" and "recovery mode"; you now have to let Windows detect that it had a failed boot (surprise, in my case it didn't).

Linux gives you way more options for updating. Unless you're updating the kernel you don't have to reboot (and I believe there are ways around even that, e.g. live patching or kexec). Updates install while you're logged in, so the reboot afterwards is a normal one with no extra waiting. For large installs we booted straight off the network (PXE), which means that unless you hit an undiscovered hardware issue or a weird hardware configuration, you can test offline and roll back hundreds of machines with a single reboot.
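On the kernel point: kexec doesn't avoid the reboot entirely, but it skips the firmware and bootloader, so the "reboot" takes seconds; a truly rebootless kernel update needs live patching (kpatch, Canonical Livepatch). A rough sketch, with illustrative kernel paths:

    # stage the new kernel/initramfs, reusing the running kernel's cmdline
    kexec -l /boot/vmlinuz-6.8.0-41-generic \
          --initrd=/boot/initrd.img-6.8.0-41-generic --reuse-cmdline

    # shut down services cleanly and jump straight into the staged kernel
    systemctl kexec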


Not sure about that last point.

At my uni, the computers booted over PXE and were often re-imaged, with user files mounted over the network. I imagine it's a rather common setup?
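(For the curious: the serving side of this is surprisingly small. dnsmasq alone can handle the proxy-DHCP and TFTP parts; the addresses and paths below are hypothetical.)

    # answer PXE clients alongside the existing DHCP server, serve pxelinux over TFTP
    dnsmasq --port=0 \
            --dhcp-range=192.168.1.0,proxy \
            --enable-tftp --tftp-root=/srv/tftp \
            --pxe-service=x86PC,"Network boot",pxelinux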

Anyway, since they had groups of similar hardware, software was thoroughly tested before any update was deployed to a given group. This was the case for Linux, and probably for Windows as well.

This was a smallish university with an IT staff of 3-4, mind you, so it's probably not too complex to deploy. There are probably turnkey solutions out there, too (Red Hat likely has something for this; Satellite comes to mind).

Linux makes sense in a company setting IMO, as companies can generally pick the machines their employees will use, so they can vet compatibility and software updates in advance. Of course, more moving parts (different HW/SW combinations) means more difficult testing.


What do you mean? With NixOS you can basically reboot into the last working generation, so the downtime is likely minutes at worst. Snapshots in Btrfs/ZFS are really fast to recover from too, since a snapshot is basically a pointer (again, at worst you're looking at minutes of downtime).
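Concretely, that NixOS rollback is a one-liner, and because every generation also gets its own boot menu entry, even an update that breaks booting can be backed out from GRUB:

    # switch the running system back to the previous generation
    sudo nixos-rebuild switch --rollback

    # or inspect what generations exist first
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system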

I've had Windows updates take quite a long time to complete, even on SSDs, and on Windows you also can't update the system while it's running (the work happens during the reboot). Since you can't use the machine during that time, it counts as downtime too.


I will add that Ubuntu currently does this by default when installed on ZFS. It snapshots each user's home directory every hour, so even accidentally deleting something doesn't have to be catastrophic.
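The nice part is that recovering a single file doesn't even need a rollback, since ZFS exposes every snapshot as a hidden read-only directory. (The user and snapshot names below are made up; the ones Ubuntu's zsys generates look like autozsys_<id>.)

    # browse the hourly snapshots of a home dataset
    ls /home/alice/.zfs/snapshot/

    # copy the accidentally deleted file back out
    cp /home/alice/.zfs/snapshot/autozsys_k4p1x2/thesis.tex ~/thesis.tex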


You're right, but that wasn't my point. I didn't say you can't recover easily on Linux; I said it's just as likely as any other OS to break your shit through an update.


It isn't, though. Even when it did break, say 15 years ago, I could always repair it once it dropped me into a terminal, or with a rescue CD. The same can't be said of Windows, and that's largely because GNU/Linux has always had sane filesystems, which Windows never did.
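That rescue-CD repair still works the same way today: boot live media, mount the installed system, chroot in, and fix whatever broke (device names below are hypothetical):

    # from the live environment: mount the installed root plus virtual filesystems
    mount /dev/sda2 /mnt
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys

    # work inside the broken install as if it had booted
    chroot /mnt /bin/bash

    # e.g. reinstall the bootloader
    grub-install /dev/sda && update-grub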



