FWIW, the few non-techie people in my life that I care about enough to administer their notebooks and provide support for all happily run KDE on Debian.

While I had some reservations about acceptance when I made the switch from Windows 7, it turned out to be one of the better choices of my life, and it resulted in much less work for me than Windows caused previously. GNOME just did not work out well for most of these people and the workflows they are used to.


I like pass and use it a lot, especially as it provides a good and safe backup in case my Vaultwarden instance goes up in smoke.

There is also a drop-in replacement that has some extra features and a bit better UX in some parts. Personally, I only really use it for the better support for handling multiple GPG keys, as I have some physical backup keys; it can also be nice for teams with a shared vault.

https://www.gopass.pw/

https://github.com/gopasspw/gopass
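A quick sketch of how the multi-key handling looks in practice; the key IDs below are placeholders, not real keys:

    # Initialize the store with your main key, then add further recipients;
    # gopass re-encrypts the store so each listed key can decrypt it.
    gopass init 0xDEADBEEFDEADBEEF
    gopass recipients add 0x0123456789ABCDEF   # e.g. a physical backup key or teammate
    gopass recipients                           # list everyone who can decrypt the store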


There was a time when people said that about AMD.

Don't get me wrong, Intel's outlook is IMO currently indeed rather bleak, but I would not completely write it off just yet.


> There was a time when people said that about AMD.

And Apple, to complete the circle.


AMD is equally fucked. Building off of IP-locked architectures is just a graveyard. Even Apple will hit a wall one day.


There are a myriad of companies that have thrived in "IP locked" environments, and a host that have failed too. Equally, there are heaps that have thrived and failed in "IP open" environments.

I think at best you could say it's more challenging or perhaps risky being a bit restricted with IP, but I'd call it miles away from a "graveyard".

You can hardly call Intel/AMD/Qualcomm etc. all struggling due to their architectures being locked down.

Look at PowerPC / the Power ISA. It's (entirely?) open and hasn't really done any better than x86.

Fundamentally you're going to be tied to backwards compatibility to some extent. You're limited to evolution, not revolution. And I don't think x86 has failed to evolve? (e.g., AVX10 is very new)


> Is it like writing frontend code in Rust and compiled to WASM ?

Exactly. It's actually quite lightweight and stable, plus mostly finished, so don't let the slower upstream releases discourage you from trying it more extensively.
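To give an idea of what that looks like, here is a minimal counter component in stock upstream Yew (not our widget toolkit; names are just an example):

    // Needs the `yew` crate with the "csr" feature; typically built to
    // WASM with a bundler like trunk.
    use yew::prelude::*;

    #[function_component(App)]
    fn app() -> Html {
        let counter = use_state(|| 0);
        let onclick = {
            let counter = counter.clone();
            Callback::from(move |_| counter.set(*counter + 1))
        };
        html! {
            <button {onclick}>{ format!("Clicked {} times", *counter) }</button>
        }
    }

    fn main() {
        yew::Renderer::<App>::new().render();
    }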

We built a widget library around Yew and native web technologies, with our products as the main target; you can check out:

https://github.com/proxmox/proxmox-yew-widget-toolkit

And the example repo:

https://github.com/proxmox/proxmox-yew-widget-toolkit-exampl...

For code and a little more info. We definitely need to clean up a few documentation and resource things, but we tried to make it reusable by others without tying them to our API types or the like.

FWIW, the in-development Proxmox Datacenter Manager also uses our Rust/Yew based UI; it's basically our first 100% Rust project (well, minus the Linux/Debian foundation naturally, but it's getting there ;-)


The linked forum post has an FAQ entry; this was a carefully weighed decision with many factors playing a role, including having more staff available to manage any potential release fall-out on our side. And we're in general pretty much self-sufficient for any need that should arise, always have been, and we provide enterprise support offerings that back our official support guarantees if your org has need for that.

Finally, we provide bug and security updates for the previous stable release for over a year, so no user is in any rush to upgrade now; they can safely choose any time between now and August 2026.


We can manage anything, including package builds, ourselves if the need arises; we also monitor Debian and release-critical bugs closely. We see no realistic potential for any Proxmox-relevant package to disappear, at least nothing more likely than that happening after the 9th.

FWIW, we have staff members that are also directly involved with Debian, which makes things a bit easier.


Scam is probably the wrong word, and its choice might be a bit fueled by feelings, but it's really not true that this only depends on the HW.

systemd also changes which naming policies are the default and what it considers as input; it has always done that, but started versioning it with v238 [0]. Due to that, the HW can stay exactly the same but names still change. I see this in VMs that stay exactly the same: no software update, no change in how the QEMU CLI gets generated, really nothing changed from the outside virtual-HW POV, yet the interface name still changes.

The underlying problem was a real one, but the solution seems like a bit of a sunk cost fallacy, and it added more problem dimensions than previously existed.

Besides, even if the HW did change, shouldn't a _predictable_ naming scheme be robust enough to not care about that, as long as the same NIC is still plugged in somewhere?

Disclaimer, as stated elsewhere: I really like systemd, I'm not one to speak out against it lightly, but the IF naming is not something they got right; rather, they made it worse for the default case. Being able to easily pin interface names through .link files is great, but requiring users to do that or have no network after an upgrade, especially for simple one-NIC use cases in a completely controlled environment like a VM, is just bonkers.
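For illustration, such a pin is a small .link file; MAC and name here are placeholders:

    # /etc/systemd/network/10-lan0.link (example values, adjust to your NIC)
    [Match]
    PermanentMACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=lan0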

[0]: https://www.freedesktop.org/software/systemd/man/latest/syst...


Ah, ok, I didn't think of systemd version changes. Thanks.

Regarding your rhetorical question about "the same NIC", I think the problem is determining whether the NIC is the same, and that is not an easy one to solve. I remember that older SUSE Linux versions used to pin the interface name to the NIC's MAC address in a udev rule file that was autogenerated when a NIC with a given MAC first appeared on the system, but they stopped doing that.


Yeah, the permanent MAC address (i.e., the one the card actually reports to the system, not the dynamic one it can use) would be the safest bet, as it's the most stable thing there is. More importantly, it is very relevant for switches and firewalls in enterprise settings, so if it changes, network access is often likely broken anyway; one can basically only win by using the MAC as the main identifier IMO, at least compared to the current status quo.
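You can see the difference on any box; "eno1" below is just an example interface name:

    ip link show eno1    # shows the currently active (possibly overridden) MAC
    ethtool -P eno1      # shows the permanent MAC burned into the NIC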


Sadly a NIC's permanent MAC is known to not always be unique: https://www.howtogeek.com/228286/how-is-the-uniqueness-of-ma...


As long as you only have NICs with distinct permanent MAC addresses installed, that does not matter for getting actually long-term stable names.

And for the other case you can still fall back to the other policies; it would still be much more stable by default.

Please note that I'm not saying MAC is perfect, but using something actually tied to the NIC itself would fare much better by default than the NIC's position, which is determined by a bunch of volatile information and normally does not matter to me at all; e.g., I will always use a 100G NIC for the Ceph private network and the 25G ones for the public one, no matter where they are plugged in. That someone configures something by location is the exception, not the norm.


I don't know if this is still the case, but the last time I went without net.ifnames=0, adding a GPU would cause all the network interfaces to get new names. Junk.


Note that Proxmox did not put out any news, the Perl Foundation did, and that is based in the USA, so I'm not sure it really shows what you're trying to suggest.


If you want to package something from upstream git then you might want to check out https://optimizedbyotto.com/post/debian-packaging-from-git/ which is relatively new and uses modern-ish tooling.

The policy manual serves as both the ruleset and an explanation of lots of things w.r.t. packaging, as that's part of the ruleset: https://www.debian.org/doc/debian-policy/index.html

For actually uploading new packages to the archive you need to be a "DD" (Debian Developer), which is a somewhat involved process to go through. Becoming a "DM" (Debian Maintainer) is easier and already allows doing lots of things. It's also possible to start out by finding an existing DD to sponsor your upload, i.e. they check your final packaging work and, if it looks alright, upload it in your name to the repositories.

You might also check out the Debian wiki; it's sometimes a bit dated but has lots of info packed in, and can still be valuable if you're willing to work through the outdated bits. E.g.: https://wiki.debian.org/DebianMentorsFaq
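For a taste of the git-based workflow, one common option is git-buildpackage (gbp); the repo URL below is purely illustrative:

    gbp clone https://salsa.debian.org/exampleteam/examplepkg.git
    cd examplepkg
    gbp buildpackage -us -uc    # unsigned local test build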


14 different schemes, multiplied by some acting slightly differently in every version. Sure, you can pin it, but that fixes only their internal back and forth, is only possible via the kernel cmdline, and there is no guarantee for how long the old versions will stay available; as they deprecated much more invasive things in the past (e.g., cgroupv1), I'd expect them to also drop older versions here, breaking one's naming again.
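For context, that pin looks like this on the kernel command line (the scheme version is just an example):

    # e.g. in /etc/default/grub, followed by update-grub and a reboot
    GRUB_CMDLINE_LINUX="net.naming-scheme=v252"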

And sure, one can pin interfaces to custom names, but why should anybody have to bother with such things?!

I like systemd a lot, but this is one of the things they fumbled big time and seemingly still aren't done with.

Pinning interfaces by their MAC to a short and usable name would, e.g., have been much more stable than doing it by PCI slot, which firmware updates, new hardware, a newer kernel exposing new features, ... change rather often. This works well for all but virtual functions, but those are sub-devices of their parent interface anyway and can just be named with a suffix added to the parent name.


I imagine they went against MAC addresses because they are not immutable; some folks rotate MAC addresses for privacy/security reasons.


The original one is still there. systemd even knows about that; it differentiates between MAC and PermanentMAC.


There are, unfortunately, some older devices (like some Sun systems) which use the same MAC address for every network interface on the device.


I thought about that, but couldn't you access the hardcoded address to identify the card?

But you also want to be able to change a card in a server without the device name changing. At least that used to be an issue in the past.


> as they deprecated much more invasive things in the past (e.g., cgroupv1) I'd expect them to also drop older versions here, breaking ones naming again

Note that the naming scheme is under the control of systemd, not the kernel, even if it is passed on the kernel command line.


Yeah, I know; I spent more than a week looking for options to reduce the impact for all of our users.

And note that cgroupv1 also still works just fine in the kernel; only the part that systemd controlled was removed from systemd. You can still boot with cgroupv1 support on, e.g., Alpine Linux with OpenRC as init 1. So I'm not sure that lessens my concerns about there being no guarantees for older naming-scheme versions; maintaining a dozen-plus of them sure has its cost too.

And don't get me wrong, sunsetting cgroupv1 was reasonable, and while it caused a lot of churn, it at least was a one-time thing. The network interface naming situation is periodic churn, guaranteed to bite you every now and then just by using the defaults.


Can you tell me why NamePolicy=keep doesn't do the trick?

Looking myself for options to keep a Debian bare-metal server I admin from going deaf and mute the next time I upgrade it... It still uses an /etc/network/interfaces file that configures a bridge for VMs to use, and the bridge_ports parameter requires an interface name, which changed when I upgraded to Bookworm.

At this rate maybe I'll write a script that runs on boot and fixes up that file with whatever interface it finds, then restarts the network.
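Something like this rough sketch, perhaps (assuming ifupdown and that the first non-virtual interface is the uplink; entirely untested):

    #!/bin/sh
    # Pick the first interface that is not loopback or an obvious virtual one.
    IFACE=$(ls /sys/class/net | grep -vE '^(lo|vmbr|tap|veth)' | head -n1)
    # Substitute it into the bridge_ports line, then reload networking.
    sed -i "s/^\( *bridge_ports\).*/\1 $IFACE/" /etc/network/interfaces
    systemctl restart networking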


This worked brilliantly in Debian for more than a decade, had almost zero downsides, and just did what was asked. I went through 3+ dist-upgrades, for the first time in my life, without a NIC name change.

It was deprecated for this nonsense in systemd.

Yes, there were edge cases in the Debian scheme. Yet it did work with VMs (as most VMs kept the same MAC in their config files), and it was easy to maintain if you wanted a fresh start: just rm the pin file in the udev dir. Done.

Again it worked wonderful on every VM, every bare metal system I worked with.

One of the biggest problems with systemd is that it seems to be developed by people who have no real-world, industrial-scale admin experience. It's almost like a bunch of devs got together, couldn't understand why things were "so confusing", and just figured "Oh, it must be a mistake".

Nope.

It's called covering edge cases and ensuring things are stable for decades, because Linux and the init system are the bottom of the stack. The top of the stack changes like the wind in spring, but the bottom of the stack must be immensely stable, consensus-driven; I repeat, stable.

Systemd just doesn't "get" that.


systemd's design choices here were influenced by a lot of bugs Red Hat received where failed hardware was swapped out and interface names changed as a result. Real-world enterprise users wanted this; it wasn't an arbitrary design choice.


That's quite the jump.

Some real-world users asked for a fix. That does not mean they asked for this specific fix.

There were other ways to handle this.

With Debian's system, you could wipe the state files and, for example, eth0 etc. would be reassigned in initialization order. Worked fine.

Even if you didn't like that, pre-systemd udev allowed assigning by a variety of properties, including bus identifiers.

It was merely that Red Hat, as usual, was so lacking in sophistication, unlike Debian.


It turns out that people do not love having to log into a machine after a network card swap to get the new card to have the same name. Initialisation order is explicitly not guaranteed by the kernel, so relying on it absolutely does not work every time.




> systemd's design choices here were influenced by a lot of bugs Red Hat received where failed hardware was swapped out and interface names changed as a result.

Under RH-based systems the ifcfg-* files had a HWADDR variable, so if you swapped a card you could take the new MAC address, plug it in there, and keep the same interface name. There were also udev rules to map names to particular hardware, including particular MACs.
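The classic udev variant looked roughly like this (MAC is a placeholder):

    # /etc/udev/rules.d/70-persistent-net.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"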

> Real world enterprise users wanted this, it wasn't an arbitrary design choice.

As a real-world sysadmin, now a few decades into this field (starting with non-EL Red Hat, then BSD, then Solaris, then RHEL, Debian, and now Ubuntu), I have never wanted this.


Great. A tech swaps out a network card; now how do I log in to rewrite the ifcfg file when the interface wasn't brought up with the correct config because it has a different name?


> now how do I log in to rewrite the ifcfg file when the interface wasn't brought up with the correct config because it has a different name?

Unlike most desktops, basically all servers have out-of-band management (e.g. IPMI), and a NIC swap is something that needs a tech physically near the server, so even a simple serial console is easily plugged in. And how would that new NIC work with the rest of the network anyway? Any basic network setup or firewall won't allow traffic from arbitrary MACs, so this normally needs to be coordinated already in an enterprise setting, e.g. through a change-management process.

And why would one optimize the whole network-naming design for the edge case and not for the much more common one, like simple software updates?

And the design cannot even guarantee stability for that edge case. Plug the NIC into a different PCI slot, or let the firmware have a blip and report it differently–all things that happened!–and you still get no network with the new naming scheme. Worse, you reboot after a systemd update, and you can have no network either. Or the kernel learns that your NIC supports virtual functions; guess what, no network, because the (seemingly just-in-time) predictable naming scheme now sees information that changes its previous prediction.

I will never be able to understand how one can argue for breaking the common use case. Nobody argues that there isn't a real problem, or that there is One True Way™ to solve it (at least I don't intend to), but arguing for a certainly-not-ideal default that optimizes for an edge case feels a bit like a sunk cost fallacy to me.

Sorry for the wall of text; I would really like to care less, but at $work I am exposed to this mess directly, not only for our infra but for all users of our projects. It can all be done and managed, sure, but the churn and hours I have to put in thanks to this feel unnecessary and could be spent on much more useful things.


> A tech swaps out a network card, now how do I log in to rewrite the ifcfg file when the interface wasn't brought up with the correct config because it has a different name?

IPMI/iDRAC/iLO/XCC/etc.

