Linux 6.1 Officially Promoted to Being an LTS Kernel (phoronix.com)
167 points by 0xDEF on Feb 8, 2023 | 55 comments



Because everyone loooves Rust, you'll be glad to know that this is the first Linux release with initial Rust support, so Rust is officially in the kernel now and will run on a bunch of computers shortly, whether you want it to or not (granted you run a Linux kernel, of course).

> Among the key highlights for Linux 6.1 are the initial Rust infrastructure has been merged

https://www.phoronix.com/review/linux-61-features


> will run on bunch of computers shortly, no matter if you want it to or not (granted you run a Linux kernel of course).

Probably not. The Rust support merged in 6.1 was only the bare minimum of infrastructure needed to write kernel modules in Rust; there are no modules in the kernel that actually use Rust yet. You need to explicitly enable the Rust infrastructure with a config option during compilation, and e.g. Debian doesn't do that yet (why would they?).
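A sketch of what that opt-in looks like as of 6.1, going by the kernel's own Rust quick-start documentation (exact targets and option names may differ between kernel versions):

```shell
# Check whether the build environment has a suitable Rust toolchain
# (rustc, bindgen, the Rust standard library sources):
make LLVM=1 rustavailable

# If that reports success, Rust support can be switched on in .config:
#   CONFIG_RUST=y
# and the kernel built with the LLVM toolchain:
make LLVM=1 -j"$(nproc)"
```

Since distributions ship with CONFIG_RUST unset, a stock distro kernel contains no Rust code at all.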


> Rust is officially in the kernel now and will run on bunch of computers shortly, no matter if you want it to or not

To me as a user, what difference does it make if the binary that I run was compiled from Rust rather than from C? It's machine code either way.


As a user of the kernel, none, right now.

Long term there should be some improvement in reliability and security, if things work out as planned.


I also think it will bring faster development and easier testing.


And a new generation of developers who have found C too intimidating to write code in.


I don't think that is the problem. Rather, it turns out everybody has been naive in thinking you can actually successfully manage memory, or other shared resources, in C without footgunning yourself.


Those are definitely related but distinct. The fact is that many younger developers will simply (for this and other reasons) refuse to contribute to a C (or C++) project, but happily do so to a Rust project.


I trust Linus to keep the shitty contributions out (and lambast contributors if necessary).


No worries, that's only natural, especially for Linus; after all, he's the one who said that if keeping C serves the sole purpose of not letting C++ "devs" contribute, so be it, lol (and thank him for that).


Also does this lead to an increase in dependencies for building the kernel? How about building it for older hardware?


Other than the headline of "Rust kernel module mainlined" I don't know what this entails. Do you need the entire rustc/cargo toolchain to compile this portion of the kernel? Or have they implemented a Rust front end in GCC that it uses?


Rust support was merged into GCC in December: https://www.phoronix.com/news/GCC-13-Rust-Merged


That doesn't sound right; are you saying that the kernel depends on an unreleased version of GCC (13.1)?


Nope – the kernel currently requires LLVM to compile any Rust code

GCC 13.1 should bring the first LLVM-free Rust kernel builds


No, it doesn't. It uses the regular LLVM compiler.


> Rust is officially in the kernel now and will run on bunch of computers shortly, no matter if you want it to or not

I believe Rust is only used for a few kernel modules at the moment.


Silly question: if Linux kernel interfaces e.g. syscalls are stable, why is there a need to create an LTS release? How can non-LTS updates affect differently than updates within the LTS branch?


They won't add features but will fix bugs in it. Like distributions with long term support.

You may ask why this matters, but I just installed 6.1 in Debian (testing) a couple of days ago, and it has a fatal bug in a graphics driver I think (AMD iGPU Ryzen 5) - as soon as I start a browser in GNOME, the signal to the monitor is lost and I have to reset. I installed 5.18 in the console, and it works fine.

I haven't figured out how to report the bug, but I also have a hunch that it's probably already fixed. And that's why you make these long-term support releases. You stop refactoring/adding features that destabilize the code, and just fix bugs, and end up with a well-tested fossil. In theory you'd have automated tests of everything, and then you wouldn't need these fossils. But in practice you don't.


LTS kinda guarantees that the internal APIs will not change, allowing for upgrades without reboot using something like Ksplice.[1] (aka "hot patching")

[1] https://en.wikipedia.org/wiki/Ksplice


It also means that there's a fixed target that can get an extended amount of testing in the field, and when fixes for that version come out you know you're getting issue X fixed but no other change in behavior. Less chance of incidental breakage.


Um, no, I have not seen anything suggesting that is the case for upstream Linux. Sure, downstream distributors like Red Hat might be able to give such guarantees, but they do not generally use upstream LTS releases directly.


Linux (POSIX systems in general, really) userland syscalls are very stable even in non-LTS releases, so as you point out, that is not the issue. However, that is only the interface to userland - the "kernelland" driver/module interface is not guaranteed stable.

That, and LTS releases usually receive important security backports reasonably quickly (compared to non-LTS versions), unless you somehow have certain expensive "Enterprise Linux" licenses and can coerce the licensor to do the backporting of the security fix for you.


Greg Kroah-Hartman explains the differences: http://www.kroah.com/log/blog/2018/08/24/what-stable-kernel-...


It's super important for hardware that is not fully upstreamed or other projects that carry a large patch burden. The solution is of course to upstream, but being able to pick a stable kernel and run with it for years without rebasing everything just to get a few security updates is a big win.


LTS is a promise that the devs will break fewer things.


> if Linux kernel interfaces e.g. syscalls are stable

Syscalls are not the whole Linux kernel interface. Syscalls are the core mechanism for accessing protected/secure resources: read, open, mmap, etc. (memcpy and fopen, for the record, are C library functions, not syscalls, though fopen wraps open underneath). But the kernel has much more to its ABI than just syscalls, and those other interfaces are not guaranteed to be stable.


There's no such need for regular users. I upgrade to linus/master branch regularly since ~4.4, and there's no API breakage, aside from regressions (expected for -rc* kernels, I think) or very obscure stuff like media pipelines changing shape and whatnot.


Are you kidding? If you're on slightly unusual or esoteric hardware, the chance that something about your system breaks after a kernel upgrade is not insignificant. The chance is much lower when upgrading from one LTS point release to another.


Why would I be kidding? The userspace ABI doesn't break much at all. An unbootable system is also quite rare. Obscure drivers may break, but I just fix them, or wait a bit if I don't have time; that's why I run rc kernels, after all. But that's not an ABI break.


I wasn't disputing your userspace ABI thing. But you said "There's no ... need [to run an LTS kernel] for regular users". Regular users like their hardware not to break, which means LTS kernels are useful.

If everything is working for you, and you don't need any features or enhancements from a newer kernel, it's totally reasonable for a normal user to not want to risk frequent kernel upgrades.


LTS != stable. On average there's an LTS upgrade every 7 days. Mainline's release cadence is 3 months.

There are a lot of changes going into long-term supported branches: tens of thousands of patches that were developed and tested against a codebase years newer are backported onto a codebase up to 6 years older, on top of other selectively picked patches, with the hope/assumption that everything will work out the same way it does on mainline. It often does, but uh...

Mainline gets months of testing for each final release, stable releases get about a week of testing at most.

I definitely put my trust into mainline as a regular user. I also like the support for my HW to keep improving as a regular user, which mostly happens with mainline, if I have recent HW.

I view long-term supported releases as specialty releases useful for some very specific situations, like for device manufacturers who dislike the instability of internal kernel APIs and can validate each such release against an extensive testsuite before pushing it to users. I would not run one as a normal user. (Maybe with an offset of several releases, to have more testing of each stable release by others before I move to it.)


Well, given how frequently kernel upgrades break stuff, if what you're saying about LTS reliability is true, I suppose the only reasonable solution for regular users is to never update their kernel. Or use a distro which holds kernel releases back for months to do their own validation, like Ubuntu and Debian.


I have different experience. I'm on the same Arch install (migrated across 3 different computers) for 15 years, going through almost every minor version kernel update during that time, and I never saw breakage of my system caused by the kernel update. Maybe I'm just lucky picking HW. :)


Why don't Linux devs give different labels to the LTS releases? Somehow a kernel version that gets supported for 7 years is the same kind of LTS as one that gets supported for 3.


They do, they label it LTS :)

It seems they don't know beforehand which releases will be labeled LTS though, it happens after the release itself.

> Greg KH was planning on Linux 6.1 being LTS given its December debut. But he was waiting on feedback from kernel stakeholders over their test results with Linux 6.1 and plans around using Linux 6.1 for the long-term. He's finally collected enough positive responses -- along with co-maintainer Sasha Levin -- that there is confidence in maintaining Linux 6.1 as an LTS series.


> It seems they don't know beforehand which releases will be labeled LTS though, it happens after the release itself.

This I do not understand. I think for commercial reasons it would be good to know:

* The next LTS

* Its actual real "Life Time"

well before its release. I think this will help commercial entities and some distros in their planning (maybe even the distro I prefer). Right now it seems rather random.

But with that said, years ago, I heard some kernel developers would like to stop having LTS kernels. I wonder if these somewhat unplanned LTS announcements are one of the reasons for that thought.

To me, if they cannot have a formal planning process, maybe better not to have any LTS kernels.


LTS has never meant "life time support", it means "long term support". "long term" means more than usual. LTS versions stay supported for some years when other versions are only supported until the next version comes out.


That's not what I'm saying. There is a huge difference between a release that gets updates for basically a decade and one that gets updates for less than an Android phone's cycle. Why can't they call them Really Long Term Support or something like that?


Because it sounds like the designation is negotiated with many parties simultaneously and there is a lot of uncertainty in the process. It seems that once a critical mass of supporters pledge to maintain the branch for a while it becomes designated LTS. You can't force anyone to maintain it for a really long time so they won't commit to a timeline. I am speculating this is why.


The Linux kernel actually does have an SLTS designation for super-long-term support releases. https://wiki.linuxfoundation.org/civilinfrastructureplatform...


When it comes down to it, it is the distros (including Android) that decide which kernel versions they are going to use and for how long. Upstream doesn't always know this in advance, and only have limited influence in nudging distros to use certain kernels, since the big distros do their own patching and backporting anyway. There is no point for upstream to continue to support a kernel for 10 years if no one is actually using it 3 years later. So instead they give an initial support window, and extend it if the kernel becomes popular and no serious maintenance issues are discovered.


Anyone know if there are implications for Asahi Linux, especially on m2 macbook air?

Judging from https://github.com/AsahiLinux/docs/wiki/Feature-Support it looks like a lot of m1 stuff needs 6.2, and most of m2 is "edge" (which I suppose means might be mainline 6.3 at the earliest..)?


I don't think Asahi Linux will ever limit themselves to an LTS kernel. The whole point of it is specialized drivers that take time to get into a mainline release, nevermind the ongoing improvements and march of new hardware. I could see them eventually settling on stable kernels though.


I'm running an up to date Manjaro system, and although I have kernel 6.1.9-1 available as an option, it's not installed and it's not listed as LTS nor is it "recommended".

Does this mean that'll change soon with an update, and I should start running this kernel?


You may want to move away from Manjaro in general[0], but I would advise against doing anything that the OS doesn't expect you to do - so I'd wait for an update.

[0] https://github.com/arindas/manjarno . EndeavourOS seems like a pretty good "Arch but easy" distro these days, and maybe Manjaro's improved since the last time I ran it (a few years ago now)


Fwiw, I'm well aware of those issues cited, and still choose to use Manjaro as a daily driver.


Completely valid.

I used Manjaro for a while and had recommended it to a semi-technical friend and it blew up in my face, so I tend to mention the issues when I see people using it.

Nothing wrong with choosing it of course as long as one is aware of the issues.


Thanks for the info, and the link. I will read it over. I'm thinking I'll probably move back to PopOS when they release their DE Cosmic. I used to run Pop and loved it, but really wanted to try an Arch distro. EndeavourOS looks great, but I'm not interested in anything "terminal centric", which is the first sentence on their site. But thanks for the suggestion!


Manjaro w/ KDE was my daily driver OS for several years before I switched to EndeavourOS w/ KDE a few months ago. The only major difference between the two that makes EndeavourOS more "terminal centric" is that you have to use the CLIs pacman and yay for installing packages, as Pamac the GUI package manager is something Manjaro provides. That said, I highly recommend making the switch and think with some small adjustments, you'll feel right at home.


CLI-centric does not sound great for an HTPC, though. I think Manjaro is not a bad choice for hassle-free TV computers with modern hardware. It has much better hardware support than Ubuntu out of the box. Updates rarely break anything other than Gnome extensions (I find anything else incredibly hard to use on a TV). It does not get in the way and it rarely needs my attention.

Yeah, it's easy to use. That's a major plus in my book. I really don't want to babysit my appliances.


Why run "an Arch distro" - why not just run Arch?


PopOS is one of my favorite distros; I can't wait for Cosmic to come out.

Even if I don't go back to Pop on all my boxes, I may standardize on Cosmic as my DE.


Any recent lists/posts on "top changes" between previous lts and 6.1?


Is the PREEMPT_RT patch merged into the kernel mainline as of this version?



