Linux 6.2: The first mainstream Linux kernel for Apple M1 chips arrives (zdnet.com)
554 points by CrankyBear on Feb 20, 2023 | 314 comments



Work from the Asahi Linux team has been upstreamed since 5.13. This article makes it sound like 6.2 is good to go when that's really not the case.

There is a list of upstreamed and missing functionality on the Asahi Linux wiki here: https://github.com/AsahiLinux/docs/wiki/Feature-Support

Linux 6.2 adds (checks notes): cpufreq for M1s, device trees for newer hardware, HDMI out on the Mac Studio (2022), and Bluetooth support.

That doesn't sound as interesting as this article suggests when stuff like USB, the touchpad, keyboard, speakers, 3.5mm audio, suspend/sleep are all still WIP downstream.

This isn't a dig at the Asahi Linux developers. They're making solid progress. This is just a bad article.


> are all still WIP downstream

For full clarity, having observed Marcan's socials for a while: a big reason upstreaming into the kernel is slow is that the Linux kernel suffers heavily from having BDFL-style maintainers.

Basically, a specific maintainer can make upstreaming patches to their part of the kernel a process few people want to go through, because of how much leeway they have in approving or rejecting patches. Stuff like yelling at merges that also happen to fix bugs in the code they touch because "bugfixes should be upstreamed separately" (even when splitting out the bugfix makes zero structural sense), or getting angry at contributors over lines they didn't change but that git diff happened to include around their change as context to keep the diff readable.
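For anyone unfamiliar with the context-lines complaint, here's a minimal sketch using plain `diff -u` on two throwaway files (not an actual kernel patch): a unified diff includes a few unchanged lines around each change purely for readability, so a patch displays code the contributor never touched.

```shell
# Create two files that differ by a single line.
dir=$(mktemp -d)
cd "$dir"
printf 'line one\nline two\nline three\nline four\nline five\n' > old.txt
sed 's/line three/line three (changed)/' old.txt > new.txt

# Default unified diff: up to 3 unchanged context lines around the change,
# even though only "line three" was actually modified.
# (|| true because diff exits non-zero when files differ.)
diff -u old.txt new.txt || true

# -U0 drops the context and shows only the modified line.
diff -U0 old.txt new.txt || true
```

The context lines (prefixed with a space in the output) exist only to make the hunk readable; they aren't part of the change itself, which is the commenter's point.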

Having watched that for a few weeks really gives you an understanding of why so few Linux modifications for obscure devices get their patches upstreamed. (Switchroot's main project, for example, which is the Linux kernel modified to run on the Switch, doesn't bother upstreaming anything as far as I can tell.)


These days it’s hard to ignore the fact that Linux enjoys its continued success in spite of these sorts of crusted-in grumps, not because of them.


Would you give the project to other contributors, working for Meta or Google for example?


Absolutely not, but it's also not a binary choice between "power-mad maintainer" and "corporate management".

There are plenty of maintainers who aren't that sort of stick-in-the-mud (even on the Linux kernel); the problem is that the kernel has a few too many entrenched ones who are basically using their position to bully interested contributors. They're generally aware enough to know that the people they pick fights with aren't likely to use any of the official processes and are more willing to just put up with it.

It's more a case of "this maintainer should be replaced" than "the entire management is bad". To my understanding, the actually important trees (the server-related components, since Linux is primarily used as a server kernel) don't have these maintainers; they're mostly on the stuff that matters less, like, well... sleep/wake, speakers, and batteries, all of which mostly matter for desktops and laptops.


What does "give the project" mean?


Giving the ultimate control for what gets accepted and rejected


I'm not that knowledgeable in this department, but: if the BDFL is really hurting the usability of their library / service, can't it just be forked by more benevolent actors?


> BDFL

(Benevolent Dictator For Life)


> Linux 6.2 adds (checks notes): cpufreq to M1s, devicetree for newer hardware, HDMI out to Mac Studio (2022) and bluetooth support.

It's worth noting that these features have been available to Asahi users since July 2022 (https://asahilinux.org/2022/07/july-2022-release/). Like other Asahi users, I've been using them since then on my Mac Studio running Asahi (with HDMI output, 10G Ethernet, and Bluetooth too): https://triosdevelopers.com/jason.eckert/stuff/AsahiSwayM1Ul...


> stuff like USB, the touchpad, keyboard, speakers, 3.5mm audio, suspend/sleep are all still WIP

That makes it sound as if they're unusable, which isn't true either.


Are speakers enabled in Asahi stable already? If they're still behind a build flag (which is for a good reason, you can blow them up if you're not careful), I'd say those are indeed unusable.


> That doesn't sound as interesting as this article suggests when stuff like USB, the touchpad, keyboard, speakers, 3.5mm audio, suspend/sleep are all still WIP downstream.

I wonder what kind of realistic ETA there is (if any) for these kinds of features.


A lot of them sound pretty close. Sleep, for example, is something they know how to get working; all that's left is for someone to come up with a plan to integrate it into the Linux kernel nicely, in a way that isn't copy-pasting the existing system and changing the values that differ on the Mac.

Speakers also do work but they are disabled by default so a bug in the current implementation doesn’t blow up the hardware.

And a whole set of major features will be supported as soon as the thunderbolt driver is done which I’ve heard is in progress.

The Touch Bar, fingerprint reader, and camera are things I wouldn't hold my breath for.


> come up with a plan to integrate it nicely in to the Linux kernel in a way that isn’t copy pasting the existing system and changing the values that are different

Laughs in NetBSD


I used to be pretty pessimistic about the team. While it had some solid heads in it, I thought things would take much longer than they have. Given their current pace, I wouldn't be surprised if most of these are functional enough for upstream by the end of the year, with iterative progress toward M2.


Speaker support is being actively worked on [1]

[1]: https://phpc.social/@marcan@treehouse.systems/10981800943090...


I don't understand why USB support should even be an issue. Is the controller integrated into the SoC or something? Or is it a chipset thing?

Feels like Apple have made this a lot harder than it needed to be. Of course they're masters at designing great hardware with ridiculous flaws. Like somehow being fragile despite the metal construction. Missing connectors, weird keyboard design choices. Soldered-on components. I bought a Macbook Pro once almost a decade ago. Never again. Never before or since have I felt less like I actually owned the hardware I'd bought.


I often read this kind of comment about Apple, but it never aligns with my personal experience with Apple, and it never mentions actual hardware or a brand whose quality is so much better than Apple's.

You make it sound like anything but Apple is good, but there's a lot of awful hardware out there, that's for sure. How do you avoid it? What's your foolproof buying method? I might sound snarky, but I'm also genuinely interested!


Before I buy a laptop, I pretend I accidentally put my fist through the screen, and shadowbox the process of replacing the panel.

For example, with a Thinkpad that meant finding a compatible panel based on the specs they publish for all their models, then finding a teardown-and-rebuild video for the model on the Lenovo website.

I was really interested in an M1 laptop, but I tried my process with it and all research pointed to "send it in to AppleCare", which I don't want to do because I know how to use a screwdriver and order parts; that should be enough.


>Self Service Repair is intended for individuals with the knowledge and experience to repair electronic devices. If you are experienced with the complexities of repairing electronic devices, Self Service Repair provides you with access to genuine Apple parts, tools, and repair manuals to perform your own out-of-warranty repair.

https://support.apple.com/self-service-repair


Is that the thing where they mail you a pelican box of parts?


Yeah, display, battery, and storage replacement being doable is absolutely key to me. But even more importantly, the display needs to be well shielded, especially if it's an LCD panel, because with those, one crack is enough to make the entire display unusable. The MacBook I had was very vulnerable if it fell and landed on its side. It's because Apple prioritises a thin bezel over durability, I think.


There's plenty of terrible hardware out there, indeed. Personally I prefer Thinkpads. I've owned a lot of shitty laptops over the years, though the MacBook Pro (2013/2014 Retina) is the only one that's managed to become permanently dead so far; the others I've been able to repair to some extent.

I've had my Thinkpad for six years now and it's still going strong. It's not even a powerful model; I just run pretty low-overhead stuff on it and test the stuff I code remotely on my home workstation through a VPN. It's sturdy as hell, though: 6 years of not-so-gentle use and no accidental damage. The MacBook display was damaged and useless after 3 months, because it was dropped, once. Of course it was deemed user error, because it was. But man, I expect a bit more sturdiness from a $1400 metal-bodied laptop from a company that is supposed to deliver "superior quality" at a hefty premium.

I expect to keep my Thinkpad for another 5 years at least, but I might switch to a Framework at some point if they can match the Thinkpad build quality. For laptops I value sturdiness and repairability over all other considerations, because I'm clumsy as hell. Apple products are far too fragile for me and far less sturdy than they look.


I really like my Thinkpad, and basically all of my family has second-hand Thinkpads, but let's not pretend that they are flawless: the last few generations of Intel models have terrible throttling issues.


Sure, that could be. I don't have any issues, though my Thinkpad also just has an i5 in it.


> Apple products are far too fragile for me and far less sturdy than they look

Amen.


"What's your foolproof buyer method ?"

Reading trustworthy review sites before buying anything expensive. That has gotten a bit harder due to LOTS of paid content, but it still works if you know how to spot the signs of a bad site.


Apple is kind of in their own price-point which is (potentially) justified by their quality. The main alternative is "buy something cheaper".


Is there any laptop that is even in the same ballpark on a performance/battery-life plot?


Apple used their own physical-layer transceiver for the USB3/DP/Thunderbolt port. They probably could have found an existing one that worked and supported Linux, but it doesn't seem unreasonable to develop their own. To support the astounding bandwidth of USB3 and above, these devices are a marvel of engineering: for every different cable, the hardware needs to re-tune its timing and analog circuitry to match.


Sure, they can develop their own. But nothing is stopping them from also providing drivers/patches to the Linux kernel. Many companies that develop hardware do.


Apple is not a charity. Why would they do that? It has to be economically justified.


They’d sell a lot more to Linux-only users… but that’s obviously not their goal, and Apple seems to be unusually good at being ruthless in their prioritisation.


You just need to take care of your MacBook. It's a luxury item, after all. I made the same mistake of treating my first MacBook as an ordinary laptop. Of course it broke very quickly: bad USB port, GPU issues, keyboard issues, broken SSD, broken audio port, and so on.

I bought my second MacBook and I'm mostly using wireless accessories to prevent port damage. I don't use its built-in keyboard and touchpad, to prevent them from breaking. I don't disconnect it from power, to prevent battery damage. I bought a huge SSD (2TB) and don't fill it with data, to prolong its life.

And you know what? It works wonders. This MacBook has worked without hiccups for almost a year! Sure, I'll need to replace my Apple Keyboard soon, but that's just $250, not $5200.


Oh no, be careful. You should keep it in a dry clean room with external cooling, only ever run a stripped down minimalist, headless linux distro to avoid taxing the hardware too much. Only ever use it through SSH.


I hope this is sarcasm.


There's an M1 Mac Mini. What's this about a touchpad?


M1 Mac mini supports the shiv trackpad of course. Comes in clutch when you have limited desk space (and to me the superior input device).


No idea what a shiv trackpad is. I did find this while googling for it though: https://www.etsy.com/listing/783328345/


A shiv is an improvised, easily concealed, close quarters DIY weapon, constructed of pretty much anything that can be formed into a sharp point or cutting edge. They are popular in prisons due to supply side constraints on other armament options. Think "toothbrush sharpened into stabby thing" or "piece of bed frame filed down into cutty thing." Just about the furthest thing from Apple hardware I can think of.


Weird autocorrect. It should be Magic Trackpad.


It seems like it would be difficult to use long-term without RSI issues. What are your thoughts?


I totally had this problem with a magic touchpad.

Great device. Loved it - for about a month. Couldn’t touch it without pain after that.


No mention is made of Hector Martin, who leads the project, in the whole article.

Alyssa has been instrumental on the GPU front, but I'd think marcan deserves at least a passing mention.


I’ll also note I’ve been following him, and it’s sad to see how many pointless drive-by objections he seems to get from non-maintainers when trying to upstream code.


Tbh, after I maintained a medium-sized open-source project for a while, I became a real misanthrope and gave it up. I decided my life is worth too much to me to waste it on random ingrates who sent me actual death threats for not doing what they wanted. Why would I put up with that?

Though tbf, the number of wonderful people was much greater; I just couldn't take the negativity I was getting from a not-insignificant minority of manchildren who have probably never heard the word "no" from mommy and daddy while growing up.


100%. I worked on a very popular open-source project, and for years those "manchildren" were making my blood boil; it borderline made me depressed, and it was clearly turning me into a bitter and angry person. At one point I decided to just get off all social media, start ignoring bug reports, and isolate my work to less public-facing parts of the codebase. The day I got off Bugzilla and moved to a standalone project was a massive relief.


There's a very vocal group of such people, sadly. Thank you for saying no to them as long as you did. Makes it a little less likely the rest of us will encounter them in our open source projects.


I suppose this is one reason why many people choose to contribute to private companies instead: better compensation, more respect. The only thing that's better in public projects is the knowledge that you might be doing something good for everyone.


Those 2 things aren't mutually exclusive.


At least he does seem to be getting paid via patreon subscribers.


Where was it hosted? And was there not a "block" button?


On the other hand, I also had to stop following him, because he's in the middle of SOME social media drama pretty much constantly, and not all of it is externally inflicted; not all of his rants/conflicts are the fault of kernel maintainers or other people.

It seriously reminds me of the hackerspace/open-source drama I was part of in my college years, and by now I've grown up enough to find it tiring and to remember that it takes two parties to cause it.


Reading his Mastodon feed, Marcan may not be so different from the maintainers he claims are the issue.

Imagine being the maintainer of a specific section of the code, only to receive an email from a contributor stating that you must move from the mailing list to GitHub, and if you disagree then "tough luck" and it's your fault.


I also found that line really strange. Not to take away from Alyssa's brilliant work, but the sentence made it sound like she's carrying the company/project on her back here.


Probably the more unfortunate part is that the GPU stuff is what everyone is most excited about. Hector's work is instrumental, and he is instrumental to all of it coming together, but his work just doesn't get seen as being as flashy. To be clear, yes, Alyssa's work is brilliant too. They are all brilliant and deserve praise, but GPU work is going to be more exciting for the lay person.


But even that's a bit weird in the context of the kernel 6.2 release, because Alyssa's work generally hasn't been in the kernel; she's been working on the userspace driver.

It's Asahi Lina who really did the kernel driver (building on Alyssa's reverse engineering work).


> It's Asahi Lina who really did the kernel driver (building on Alyssa's reverse engineering work).

Mostly based on her own reverse engineering work, actually (which was all livestreamed). Alyssa's work was on the shader ISA and what goes into the various command queues to get the GPU to do useful things; Lina's work was on the lower-level aspects of how the queues even work, interactions with the firmware, power management, etc.


Isn't Asahi Lina Marcan's alter ego? Apologies if I've got the wrong end of the stick there.


Maybe, maybe not. Since it's not explicitly public, it's nicer not to assume or ask. Privacy is a big thing in the vtuber community.


Probably. Maybe.

They use all the same tools, etc. But given that Asahi Lina has chosen not to reveal their identity, it's better to just assume it's a different person.


We all know why that is.

In the $current_year things are so hilariously sexist that I pass my work off as my wife's so I can make more money pretending to be her.

Makes me wonder how many great men of the past were actually their wives alter egos.


> Makes me wonder how many great men of the past were actually their wives alter egos.

Interesting thought. I know JK Rowling is "JK" because her publisher wanted to obscure her gender. This sort of thing is probably common.


What's even more common is all the supposed genocides in the colonies that left no one alive. Just like today Elizabeth Warren pretended to be the extremely blond Indian so did Sitting Cloud turn into the extremely tanned Sicilian: Marco Totally Italiano.


I hate to speculate, but I wouldn't be surprised if Hector Martin explicitly instructed them to put the spotlight on somebody else. It's certainly a nice gesture if so.


I'm pretty sure he'd mention Asahi Lina as well then?


This is a good point.


Exactly! Seems weird to miss out the name of the project lead.


Moral: When you have a large project don't mention anybody in specific. You will only alienate those you didn't mention.


Well, it's fine to mention the names of key players. But if you choose to do that, it's weird to not mention the name of the person who started the project and has been driving it.


Each choice we make in this world alienates some. We don’t need to avoid it. The upside of recognizing people on the team probably far outweighs the downside of missing someone. One should probably know who made significant contributions on one’s project. If films are able to have credits for everyone who worked on the project, then so can we.


People these days need to grow thicker skins.


this really doesn’t seem like a “it was better in the past” type situation


Yeah, and beyond that: it reads less like an oversight than a slight.


Seems like innocent mistake is most likely.


I used Linux as my primary OS for almost 20 years, but I switched to macOS recently with my M1. While I wouldn't want to use Linux on it now, and I have to admit I now prefer macOS a LOT, I am very happy projects like these exist, because by the time Apple stops supporting my M1 Macs, Linux will probably be very stable on them and will save them from the recycling plant.

(I also resurrected a MacBook Pro 2011 and got it running about 5 years after Apple stopped supporting it as well.)


I've run both back and forth and simultaneously for many years (since probably 1997/1998).

My ultimate conclusion is that when I run Linux on my desktop, I focus too much on the OS and too little on what I'm trying to accomplish.

Oh cool there's a .001 version bump on the graphics drivers, let me spend all day recompiling that and tweaking my awesomerc.

With macOS you just can't do that, simply by virtue of it being less customizable.

Am I as efficient as I was when my computer would reboot in under 30 seconds to a desktop with all of my browsers open exactly as I wanted them (Slack here, different monitoring/graphs tiled exactly so on another monitor), and I could switch between task-focused desktops with awesome (programming work here, ops here, email/biz work here, etc.)? No, I am not as productive. But how much time did I spend fine-tuning that setup? It felt like A LOT, and I know (for a variety of reasons, this being only one of them) that I'm generally more productive since I retired my last Linux desktop in 2018.

That said, please don't take any of this as diminishing the accomplishment of the Asahi team. The fact that they did what they did, and did it well enough to get mainlined, is an absurd feat. The fact that it was significantly (primarily?) done by an anime girl live-streaming on Twitch is absolutely hilarious.


I hope this doesn't come across as too negative, but that sounds more like a problem you personally have than something inherent to the customization Linux offers. I run Arch with i3, vim, and tmux, and quite frankly I don't think I spend more than a couple of hours per month maintaining my configuration; I don't see where one would spend so much time. If you struggle with that, instead of changing your config, write the changes you would make down in a notepad, and at the end of the week review them. Chances are the majority of the stuff is unimportant.


I am going to have to disagree with that... like the other poster said, a couple of hours is a lot.

I also tried hard to avoid maintenance with XFCE, but part of the problem is that I could never get it to look exactly like I wanted. There were always some minor GUI bugs that irked me to no end. Add to that, there were some weird inconsistencies, because for a long time there were both GTK2 and GTK3 programs running at the same time, and they didn't look quite right.

Then there was the problem of those new-style GTK interfaces with just the hamburger menu, and not all programs were like that. I tried to fix that too by moving to other window managers, but there was always some weird problem... arrgg, I LOVE Linux, but (I hate to say it) macOS just looks nicer with less fiddling.


I appreciate this conversation a lot, but I’m more on the side of the GP than yours, I think:

> I also tried to avoid maintenance a lot with XFCE, but part of the problem is that I could never get it to look exactly like I wanted. There were always some minor GUI bugs that irked me to no end. Add to that, there were some weird inconsistencies because for a long time there were both GTK2 and GTK3 programs running at the same time and they didn't look quite normal

I take the point, but part of my maturation with "full-time" Linux systems (personal desktop, laptop, and work machine (a Linux VM), to different degrees) was:

(1) tinkering less,

(2) getting orders of magnitude more efficient when I did want to edit my OS or core programs, and

(3) buying an M1: while I was amazed by many aspects of the M1, and admit I never got proficient with macOS, I went back to Linux and i3 because I had just as many frustrations with macOS as with any graphical bugs I get in Linux, and

(4) the comfort of familiarity with my Linux configuration, whether better or worse.

Now I’m comfortable with whatever configuration time I spend these days. It’s not much, and it’s easily justifiable as some combination of work, hobby, and learning. I no longer think of it as serious work, which has been very good for me: no more wasting hours on stuff I didn’t really need to care about.


I certainly see your point. There are still things about Linux that are so much better than on macOS, for example: window shading, being easier to start programming on, a built-in package manager, not creating ._ files on external drives, not having the terminal restricted to certain directories, and not having the touchpad delay click-and-drag.

In fact, I'd probably still be on Linux if it weren't for the fact that I shifted from programming to content creation and certain apps like Davinci Resolve and some photography apps work better on MacOS, and it's impossible to get M1 performance (especially in a laptop) with Linux at the price of an M1 laptop.


A couple hours is a lot per month.


Speaking as someone with a hefty dose of ADHD and anxiety, I can only dream of wasting a mere two hours per month. It blows me away to think of two hours being a major distraction for anyone. Count yourselves lucky!


Some of us have families.


Myself included.


That's a worst case. The vast majority of the time I don't touch anything. I was just trying to point out that once you have a stable config that works out for you, there is no need to constantly change it.


> a couple hours is a lot

That's a worst case. Most of the time I don't touch anything.

> the problem is that I could never get it to look exactly like I wanted. There were always some minor GUI bugs that irked me to no end. Add to that, there were some weird inconsistencies because for a long time there were both GTK2 and GTK3 programs running at the same time and they didn't look quite normal.

I think that is the big issue with environments that allow for endless customization. People look for perfection, and when you have too much free time it's tempting.


> I don't think I spend more than an a couple of hours per month maintaining my configuration

This is... still significant.

Outside of auto-updates sometimes rebooting my laptop overnight, and major version bumps requiring me to hit the "update" button explicitly once a year, I'm unsure what "maintaining my configuration" would entail once things were set up to my liking on macOS.


I'm not sure what the GP is maintaining, but I have used the latest Fedora and Sway (and i3 before that) day in and day out for years; all I do is run dnf update once in a while... it gets completely out of the way.


Yeah, I use an Arch derivative, so it's a rolling distro, and even with the problems that entails, I update packages about once a week and have not had issues since a bug got me 2 years ago.

How often are things changing for these people that configs need to be constantly reviewed?


I guess some people want heavier customisation, whereas macOS users just accept what the OS enforces.

I have been using different Linux distros in the last 15 years (mostly Arch, so rolling), and really I don't customise much (default i3, default vim, etc). Doesn't really require any maintenance at all, I would say.


Spending a few hours a month maintaining GNU/Linux is perhaps the single most effective learning process for computer science I know. The entire stack (except binary blobs) is always available, and something novel occurs every time. I love fixing my Linux; I get to read about so many aspects of stuff I didn't know before.

And with NixOS it's more than cumulative, it's a multiplier.

Gonna get me a second-hand M1 or M2 soon...


I don’t see how this can be true under any definition of computer science. I used Linux in college and I think it was useful, but more as an immersion learning program for the Unix abstraction than anything related to what I was studying (CS).

I didn’t learn much about algorithms, or digital systems, or compilers by setting up Arch Linux. Even in my OS class, it’s not like learning about swap helped me understand context switching or even virtual memory.

I relate to how satisfying it can be tinkering with things but a lot of times it’s just distracting. Like spending a day optimizing productivity tools instead of being productive. Or trying to get your laptop to recognize and change audio output when plugging in headphones instead of studying for a midterm.


Did not Galileo grind his own lenses for his telescopes?

Did not Yogi Berra state that in theory there's no difference between theory and practice, but that in practice there is?

Is an abacus useful in computer science, or is it an impediment to purely conceptual algorithmics?

I use Linux precisely because it enables good computer science, as telescopes enable good astronomy, as particle accelerators enabled the confirmation of the predicted Higgs boson.


Well, you can't deny that if you understand how to compile your kernel, how to make a rootfs, what a primary/secondary bootloader is, etc., then you have actually learned about computers, right?

Try Linux From Scratch, then ask yourself how macOS, Windows, and *BSD solve those problems. Wouldn't you call that computer science?


No, not really. It's closer to learning how to install a new lens in your telescope so you can do more astronomy.

You've learned about telescopes, which is a very useful skill, but it's not the same thing as astronomy.


Ok let's take one step back and look at the parent:

> I don’t see how this can be true under any definition of computer science.

Let me arbitrarily copy one definition of computer science, from Wikipedia:

> Computer science is the study of computation, automation, and information. Computer science spans theoretical disciplines (such as algorithms, theory of computation, information theory, and automation) to practical disciplines (including the design and implementation of hardware and software).

I think that designing an OS counts as computer science, at least under "some definition of computer science". And learning how to maintain an OS is a step towards understanding how it is designed.

Of course, maintaining your OS does not teach you Javascript. But Computer Science is not limited to Javascript. I wouldn't be very happy if you told me that I am not a software engineer because I don't know Javascript, to be honest.


In this analogy, I think you should replace astronomy with optics for it to be equivalent. And in that case, you learned a lot about optics.


I think it's like saying: if you learn how to maintain a submarine, have you become a good swimmer? Or, if you know how to maintain a submarine, do you understand fluid mechanics?


Ok let's take one step back and look at the parent:

> I don’t see how this can be true under any definition of computer science.

Let me arbitrarily copy one definition of computer science, from Wikipedia:

> Computer science is the study of computation, automation, and information. Computer science spans theoretical disciplines (such as algorithms, theory of computation, information theory, and automation) to practical disciplines (including the design and implementation of hardware and software).

I think that designing an OS counts as computer science, at least under "some definition of computer science". And learning how to maintain an OS is a step towards understanding how it is designed.

Of course, maintaining your OS does not teach you Javascript. But Computer Science is not limited to Javascript. And I have seen many developers distribute libraries without having a clue about how package management works, which results in a big mess. And then they complain about the tools ("CMake sucks, it's not my fault"), when actually they just don't have a clue how it works underneath.

I see computer science as the discipline that makes the whole computer work. Because one can't be bothered to understand anything below their favourite framework doesn't mean it doesn't count as "computer science".


I feel this one. Short story: I'm primarily a designer who, 10 years ago, decided out of boredom to use Vim. Next thing I knew I was using fish as my shell and learning a lot more about the command line.

2 years ago I made the switch to Linux, mostly because I was inspired by r/unixporn. I took a long weekend to install a barebones arch setup with i3. Everything needed to be touched, and I realized that although I'd been using computers for nearly 30 years, I really didn't understand how they worked.

Maintaining things like Linux, Vim, or some semi-complicated, interlinked toolchain makes me need to learn things constantly. Running updates becomes a lot more complicated, and every time I do it, I learn something new about how computers work that I didn't before.

Why do I do this? Well, I like learning! I also really enjoy my work and, if fully retired, would still fiddle with computers as a hobby. Nothing against it, but I think a lot of people use their computers simply to get to their end task, and don't really care much about how that end task might sit on top of a bunch of other core systems.

In my heart, every time I fiddle with Linux I'm reminded of the seven-year-old kid who sat at the family kitchen computer trying to learn just what the hell DOS was and why my games didn't work. These are magical machines, and it's fun learning why they work. That's what Linux gives me... something to explore.


> I run Arch with i3, vim, and tmux and quite frankly I don't think I spend more than a couple of hours per month maintaining my configuration

Like others have said, that is a lot.

And I'm saying this as someone who's been running Ubuntu with i3 for the past 10 years or so. I practically don't spend any time on my configuration, unless maybe an hour or two when I upgrade Ubuntu.


I also don't really spend time maintaining my Linux, but I don't think 2h a month is a lot.

I mean, I probably spend more time reading HN comments or restarting Xcode (after cleaning the cache, and the hidden cache, and the DerivedData) when I work on macOS xD


I think it is entirely up to the person. I know people who are insanely productive on Linux; I'm one of them. But I also know people who get so distracted by customization that they never seem to get anything done other than, "check out how I got vim to look today!"

Some people get stuck down the customization/r/unixporn rabbit hole.


A couple hours a month sounds like a large and annoying amount to me.


I am genuinely surprised by how many people find that 2h a month is a lot of time, especially when it's about maintaining their work station.

I mean reading HN comments probably doesn't take them much less, does it?


That is because they think of that as active time. For instance, Windows updates probably waste much more time (and are much more annoying), not to mention all the individual app updates also popping up to take a few minutes at a time. Yet, when you ask people how much time they spend maintaining Windows, they will say zero.

Anyways, my impression on debian stable would also be that I spend about zero hours for 3 years until there is a major upgrade (which is basically starting the upgrade and then drinking coffee until it is done). However, I am probably also wrong about that and actually spend a few seconds here and there on changing a wallpaper, seeing what is new in a new Firefox version etc. Summed up over a month that might actually be many minutes, but it does not feel like it.


This is how I would describe my experience when running a Linux distribution on a corporate laptop that was pre-configured for me to work properly. But it has yet to ever be my experience on a personally-maintained Linux laptop. And anecdotally, my coworkers who are running Linux still seem to be sinking time into stuff like "ugh, sorry, can't get my microphone working today" type stuff.

I love Linux and I'm glad so many people are happy with their setups, but it just hasn't worked flawlessly enough for me over the years.

Note: I'm specifically talking about laptops here. It has worked great for me as a mostly headless workstation and of course as a server.


I think it’s because most people assume that their setup doesn’t need maintenance at all. So any amount of time needing to be invested simply to keep things working is strange - 2 hours seems excessive.


They don't think about the time they spend updating brew, or dealing with pipx, or waiting for corporate McAfee AV to update, or figuring out weird docker/rancher/podman on osx issues, or "ffff, `date` doesn't work the same as anything else..."


Out of that list, updating brew is about the only thing I regularly encounter, and even then I can just let it run while I do something else.


And you think those two hours are not "while I do something else"? I highly doubt you can spend two hours of active time a month on maintenance even if you tried. However, if you count "clicked yes, then it took 30 minutes to download and update" as 30 minutes, then this seems about right for any OS.


I think your phrasing caused issues then.

I am not doing any significant manual configuration, tweaking, or futzing with my system setup beyond maybe half an hour after the initial install. There might be occasional changes I try, but they are (1) rare, (2) usually just toggling a setting, and (3) quick - on the order of seconds to single-digit minutes over the course of a year.

Your phrasing makes it sound like you are spending multiple hours every month fiddling with your system.

I think most people aren’t measuring the time spent doing updates because for most people they are done overnight, and aside from major updates they take just a slightly extended reboot if done during the day. No one is counting the OS updates, Chrome updates, etc. downloading in the background as time they’re spending maintaining their system.

It is reasonable to read someone saying “I spent X amount of time doing Y” as meaning the person is saying that they were personally spending that time doing Y, not “my computer spent X amount of time doing Y while I was using it”


> I think your phrasing caused issues then.

Who is "you"?


Ah, I assumed from your comment you were the original commenter that said "I don't think I spend more than an a couple of hours per month maintaining my configuration", but I see you are not, sorry!


I dunno, if someone is saying “I spent a few hours every month maintaining” and they mean “I do other things while packages download” then they need to phrase it better.


Linux doesn't keep you from having to update packages.

I agree that there is all this software tedium as well, that I'd much prefer to eliminate, but it sadly exists across all platforms.

But that's not what I'm talking about here. I'm talking about time spent getting and keeping hardware working properly, writing stuff into x11 or networking config files, that sort of thing. I have always found myself doing significantly more of that when running Linux (specifically on a laptop) than I have any interest in doing anymore.


I don't understand why you'd need to configure x11 or networking more than once.

My routine maintenance tasks are `fwupdmgr get-updates` and `yay -Syu --devel` followed by `reboot`.

I'm not sure what else people are doing other than tinkering with how things are setup. I spend a lot of time trying out other window managers and compositors or setting up various keybinds or automations I think would be useful, but I don't consider those maintenance tasks.
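A minimal sketch of what such a routine could look like as a script (the `run_if_present` helper is my own addition, not part of any standard tooling; it just skips a step gracefully when the tool isn't installed):

```shell
#!/bin/sh
# Hypothetical wrapper around the routine maintenance commands above.
run_if_present() {
    if command -v "$1" >/dev/null 2>&1; then
        "$@"
    else
        echo "skipping: $1 not installed"
    fi
}

run_if_present fwupdmgr get-updates   # check for firmware updates
run_if_present yay -Syu --devel       # sync packages, incl. AUR -git packages
```

Run it, then reboot; on a machine without `fwupdmgr` or `yay` it just prints what it skipped.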


Oh you definitely don't in theory. But you totally do in practice. There's no good reason, it's just the actual experience many people have.

> I spend a lot of time trying out other window managers and compositors or setting up various keybinds or automations I think would be useful, but I don't consider those maintenance tasks.

This is the kind of (in my opinion) low value tedium I'm talking about.

I'm aware that we're talking past each other in these threads. Some people are thinking of software updates, others of us are thinking of stuff like trying out window managers and messing with keybindings, and these are indeed very different kinds of toil.


>Oh you definitely don't in theory. But you totally do in practice. There's no good reason, it's just the actual experience many people have.

What are you referring to here?

That's like saying it's low value tedium to set folder view in finder to compact. Or trying out Rectangle or one of the auto-tilers, enable night shift.. They're preferences and I don't see how the experience would differ from one OS to another. Maybe you like the way everything works out of the box on OSX. That's cool. I don't. I don't really like how any OS (or wm or compositor) works out of the box.


I both understand the surprise - after all, I used to spend a lot more than two hours a month on this myself! - and consider it a lot of time for what I consider to be a very low value activity.

Reading and commenting on HN is also fairly low-value activity, but I certainly learn useful things more often than I do when futzing around with my OS, and that's just bonus; it's mostly an entertainment activity. Playing with my Linux configuration used to be an entertainment activity for me as well, but once that stopped being the case, it stopped being a good use of my time.


It is a lot. My laptop applies security patches itself at night, and then I wake up and use it. It’s great, I get time back in my life for other more important priorities. I have spent enough time managing Linux environments on desktops, servers, laptops, embedded devices etc. Spending more time on that maintenance, even if it is small 90% of the cases, is a complete waste of my time on this earth.

You can disagree and find this enjoyable, no one is saying you need to see things their way. We all have different hobbies and priorities. It is fine if other people do not want to use Linux as a daily driver.


It might be a personal problem, using MacOS is how I work around it :)

Like I said, I love Linux; I run Linux on literally 1000s of servers worldwide, and the work the Asahi developers are doing is amazing. I'll probably never again run it on my desktop/laptop.


> I don't think I spend more than a couple of hours per month maintaining my configuration

I use Windows and the number of hours per month I spend on maintenance and configuration is close to zero.

Every so often I think about switching back to Linux but these comments remind me why I switched.


> I run Arch with i3, vim, and tmux

Those are a bunch of options for people who like optimising and configuring (nothing wrong with that).

I spend zero time on config in Linux just by accepting distro defaults. Maintenance would be less than on Windows or Mac just due to updates installing a hell of a lot faster.


I use Arch, i3 and vim (not tmux), and I don't customize anything: I just use them as they are, no maintenance :).


I remember from my usage that the Manjaro i3 edition is pretty good - of course there are some downsides to Manjaro itself, but there is little hassle in installing and maintaining it. Keep in mind, though, that it's a rolling-release distro, which makes updating the whole system a bit harder.


Good to know the customisation is optional :)

At one point I aspired to a setup like that, but I thought I was too lazy.


I like your style. I spend something less than two hours per year waiting on my Mac updates and something like 1 hour per year managing my Windows 10 machine. My WSL Ubuntu "instance" takes about 15 minutes per year. So, more than you, but not a terrible management burden on any platform, in my estimation.


Zero?

Windows Update alone is a 15-20 min ordeal every 2 weeks (usually requiring 2-3 restarts to get all updates).


Remember, that’s for a highly customised setup, for a normal preconfigured desktop distro you’ll need a lot less


Sure I understand that. The thing is, I like tinkering and customizing. I'm sure most other people here do too. The fact that my chosen OS is kind of crappy in that regard is an advantage as it stops me from getting distracted and then I can spend that time working on things that matter to me instead. I'm pretty good at being focused, and getting better over time. But nobody has perfect focus, we're all prone to distractions. I want my desktop to be a tool for running software, not a source of yet more distraction. It's the same reason I don't install games on my work computer.

I did use Ubuntu as my main desktop for a while, I guess about a decade ago. I stopped because it took so much effort to get basic stuff working on my system at the time - graphics drivers were especially hard. As others pointed out, I did learn a lot from this experience. But nowadays my life is busy, I run a business and I need to focus my work time and spend it, well, working. Not tinkering with the systems I use to do work.

My free time, I also don't want to spend tinkering with the OS. I'd rather go to the beach.


> My free time, I also don't want to spend tinkering with the OS. I'd rather go to the beach.

Which is why I tend to avoid windows. Windows updates, updates by the individual apps, figuring out how to disable misfeatures etc. ; takes much more time and maintenance than linux.


Just buy a supported system and use the defaults. Has always worked for me since the nineties. On Linux there are a lot more defaults to choose from though, but the basics behind those have been the same for a very long time.

Writing this, I realize that my solution has always been a variant of the Linus Torvalds approach: move the old system to a chroot and reinstall. That has happened once every six years, always because of user error or a new system. On Win/Mac that happens more often (judging from my support load).


Well that's like not eating chocolate, no one forces you to do it.. you just gotta not do it.

I've been on Linux for the past however many years and I don't waste time on my setup at all. I've pretty much had the same setup for the last 4-5 years, and I've spent maybe 20 hours since then tweaking things / switching distros. Never had to recompile my kernel, my drivers or anything like that, since I'm on a T490 ThinkPad and it pretty much just works.


Just my 2 cents: I moved back to macOS once M1 Macs were released and have been using a MacBook Air with M1 as my daily driver ever since.

I wasn't really having any problems per se, window management on macos is a bit lacking and I'm not fond of the forced animations everywhere, but overall I was pretty happy with how everything functions.

A few weeks ago I had free time and an adventurous mood, so I tried out Asahi (it's really painless to do so), and the moment I git clone'd my dotfiles and logged into my old i3 configuration I was astounded by the sheer snappiness of everything: fingers remember all the shortcuts and all of the workspace switching is /instant/. That's something I have tried to replicate on macOS multiple times and always failed; yabai and the like are not even close.

I still reboot into macOS for when I'm mobile, as I don't trust sleep on Asahi (I don't think it even works yet), but when I use it at home I am very impressed by how stable and daily-driveable it is, if the current (admittedly significant) compromises work for you.


This is why I'm tempted to try Asahi. After a decade of using macOS full time the idea of ever using it again makes me want to run to the woods, and Windows is only a little better. The Macs do seem quite powerful, though I expect the AMD laptops coming out this year to rival them.

BTW, many professionals like and use a highly customized setup which only Linux can offer. This is true in other fields too. See, for example, Euroracks.


Are you dual booting your M1?


Yes, that's the default mode of installation for Asahi.


As a personal Fedora Gnome user, I have zero customisation I have to do which leads to absolutely zero monthly maintenance. And my personal projects are much easier to just get on with.

My work Mac requires constant TLC, although that's mostly because they want a container based workflow which is obviously not native to MacOS and there is always something going wrong that requires a couple of hours tweaking to get healthy again each month.

It also doesn't help that my Mac feels more responsive but also feels much slower. Again, it's probably the container-based workflow, or all the crappy AV and management software scanning each other, but compiling code feels like it takes forever compared to my several-year-old Linux laptop.


I also am a Fedora Gnome user for the same reasons, with the same result. It is by far the most solid, JustWorks Linux experience. I also think that modern Gnome is actually a really good DE.


One of the reasons folks moved to Mac was because they could get a good development environment without all the Windows bloatware that makes it really hostile to work with. I didn't find it very fast coming from a Linux background, but it was better than Windows.

Unfortunately we are now at that state with Mac as well. I have a pre-VPN (Netskope?) that starts before logon and connects the laptop directly to the work public network; from there I then have to Cisco VPN to reach the work network.

I have SentinelOne, Netskope, and Qualys all installed and all "realtime scanning". And it's slow. Security doesn't trust the employees and will mandate all this "security" software that could be bypassed if I actually were hostile, but makes everyone else's environment terrible.

I'm sure if Linux became the desktop of developers we will end up with a load of rootkits too.


My advice: flee!

SentinelOne is the most unusable piece of software I have ever used; it only gives you false positives and cannot handle spear phishing from people who actually know what they are doing. It sounds good on paper, and once you deploy it you will have a hard time arguing for making the system "less secure".


Thanks, it's one of those things that I just didn't think about asking during the hiring process. I asked about other bits of mandatory software, and workflows but I didn't ask:

> So how much anti-security software is mandated, and what is the impact on the developer mandated hardware?


I think it’s a matter of expectations. I use stock Ubuntu and GNOME and have almost zero customizations. I haven’t touched it except to upgrade since 2018 and have had zero issues. I get it though. I also use an M1 and love it.


Yeah, using one of the big boring distros really helps to focus on work. I've switched to Fedora for my new box and once the hardware setup was sorted I just stopped messing with it.


I use Slackware or OpenBSD. I completely understand what you said (I used RH-based distros when I was at Some-Large Commercial Air* in Seattle (it was a joy, minus my second set of mgr's)). As much as I dislike using Red Hat*, they are great at what you mention.


I don’t even install all those gnome extensions everyone insists are essential.

The base fedora install + the proprietary codecs repo is literally all you need. Everything else is a waste of time.


The other replies to your comment cover my most immediate thought, so I won’t repeat any of that.

However, in addition, I think there’s probably a significant amount of highly transferable learnings you’ve gone through while spending all that time configuring your machine. Was any of that time wasted? Well, probably, at least by some measure… But it’s also very hard to gauge the value of all the incidental knowledge you gained about what computers are, how they work, etc.

My point is this: don’t over-penalize yourself for wasting time when you’re learning in the process, and try and remember how magical it is to learn things for the first time. And for other readers… It’s good to be pragmatic, but if you’re lucky enough to be in a position to spend a lot of time tinkering, playing, learning, then DO IT. Life is meant to enjoy!


I loved running Linux in college and tried every new (or old) distribution and compiled oh so much stuff and fiddled with every new (or old) thing I could find to fiddle with and thought it was super cool to write config files by hand and be able to read and modify all the source to everything I was running. As you point out, this was an excellent learning experience for me.

I kept doing that for awhile after I graduated, but eventually I realized it wasn't a good learning experience anymore, it was just a tedious waste of my time. I realized I should have been socializing or recreating outdoors or reading or picking up new hobbies if I wasn't working. Since then I've mostly used Macs or Linux machines maintained by the company I work for, and this is definitely a much better use of my time.

There are only so many fundamentals to learn here before it becomes just so much minutia and non-essential complexity.


> I think there’s probably a significant amount of highly transferable learnings you’ve gone through while spending all that time configuring your machine. Was any of that time wasted?

Oh my god no.

I ran gentoo ~amd64 on my desktop for YEARS. If you don't know gentoo you don't know what that means, but, I learned A LOT. I wouldn't trade it for anything, it's just not right for me anymore.


Funny, but this is the same reason I use ChromeOS—I get something that’s simple to use and update, but I can also use the built-in Linux VM for all my Linux needs (and not have to worry about hardware incompatibilities, drivers and whatnot).


This has been one of the barriers to adopting Linux full-time for me too. The depth of the customization is great, but the fact that it's possible means that I'm endlessly twiddling trying to get things "just right", and from there it's a constant battle to keep it that way.

Part of this might have to do with how the sorts of tiling WM setups that Linux aficionados seem to love aren't compatible with me at all, and more traditional minimalist setups (e.g. OpenBox+tint2 or whatever the modern equivalents to those are) left too many unfilled or badly filled holes, which drives me to the bigger DEs, which are opinionated in ways that don't necessarily align with me which drives the twiddling.

The only way I can see this changing is if I somehow become able to pay the bills without working and pour myself into developing my own DE and essentially make the twiddling my job (which I think I would actually love doing, but doesn't seem particularly realistic).


Maybe you’re more OCD than I am. I’m the opposite. I run dnf upgrade once every few days and don’t bother or think about it afterwards. I start my work Linux PC, fire up a terminal, launch vim and Slack, open Jira, and get to work. Everything literally just works. And when I’m done, my gaming PC that also runs Fedora runs Steam and Blizzard games just fine. It’s amazing. Now I spend more of my time trying to give back (in the form of money or by evangelism) to the open source projects and maintainers that make my life so beautifully simple and fun.


>Oh cool there's a .001 version bump on the graphics drivers, let me spend all day recompiling that and tweaking my awesomerc.

That's a you problem, not a Linux problem.

I haven't compiled any drivers for Linux in many, many years, probably decades even.

It helps to avoid Nvidia GPUs however, but even here I haven't seen a big problem in ages (my work computer unfortunately sticks me with an Nvidia, but Debian updates seem to work as normal).


That's true in some cases, but remember hardware support can be a little bit mixed, particularly closer to the bleeding edge. So you could be happy enough most of the time, but if you encounter an issue that is annoying and disruptive but obscure enough that 90% of your day-to-day is OK, then you could definitely find yourself rebuilding and using kernel point-releases


I'm sorry, this is just wrong. I've never heard of anyone absolutely needing to compile anything, especially the kernel, on a mainstream Linux distro (e.g. Ubuntu) in ages. Plus, if your graphics card is Nvidia, you don't get to compile anything: the driver is a binary blob (only the shim is compiled).


I ran gentoo, from stage1 ~amd64. I compiled everything from scratch. I know I did it in hard mode.


Gentoo is NOT a mainstream distro.


I have literally experienced this, I'm not sure how you can dismiss it as "just wrong"


What, exactly, have you experienced?


I remember getting addicted to constant updates some years ago, when I was using Gentoo. Switching (back) to Slackware cured me. I do not know why, since Slackware, too, gets regular updates. I now check for updates once a month or so.


You can get a read-only system with SilverBlue, the immutable variant of Fedora:

https://silverblue.fedoraproject.org/about


I'm now on macOS on an M1 Pro. I miss Linux and will be installing it as soon as it's safe to. I find macOS takes more of my attention from work than Linux did. Pretty much the opposite of you, by the sounds of it.


I'm a long-time Linux user eyeing the M1 Macs somewhat. Why do you prefer macOS so much? My biggest fear is that I'll miss all the weird little things I got used to in my Linux setup.

Such as for example the cpu/ram/net/disk usage thing in the Gnome top bar. I have no idea whether there's something like that for Mac, and even if there was, I wouldn't know how to find it.


As a person who ran Ubuntu for a couple of years and then had to switch to macOS due to a job change, I can give you some feedback.

Most of those things (small utilities/plugins) exist for macOS, with two massive caveats - they are often paid (e.g. only 2 out of ~10 window managers are free; 1 out of ~3 usage things in the top bar is free), and the second one, which bugs me more: they're often very Apple-ecosystem-centric (first or third party doesn't matter). They do things in the Apple way and integrate only with Apple tooling, services, and devices, often in non-obvious ways. E.g. a fun one is that you can extend your macOS screen to an iPad, if you fulfill a bunch of Apple requirements (same AppleID account), but with zero feedback on which one you don't fulfill. If you don't, the button just isn't there, nothing says why, and all online guides boil down to "make sure you've done this". Which is my biggest problem with macOS - it isn't made for technical people, because it is incapable of providing actual usable feedback that can help you debug. Be it magic buttons that only appear if everything is there, or useless error messages ("A USB device is consuming too much power and has been shut down". Which device?), or lacking features like separate scroll direction between mouse and touchpad. Oh, and there's no native package manager for some unknown reason.

It's a decent shiny OS, fine for a graphical designer/video editor. I don't get technical people who swear by it, I'd be tempted to say Win 10 with WSL is better than it for techies.


My suggestion is to look into tools that make macOS nice. There are tool bars for hardware usage, things like Magnet are amazing for window management, brew for package management, and learn to love iTerm2 - it is a big reason why I feel productive and satisfied with Mac as my daily driver.


> Such as for example the cpu/ram/net/disk usage thing in the Gnome top bar.

This is an option for Mac: https://github.com/exelban/stats


I also now generally prefer macOS, but I find it worth the $100/year to license Parallels for M1 Macs. It is set up for trivial installs of Ubuntu and a few other Linux distros that support the ARM-64 architecture.

I had two older Mac laptops that stopped functioning with the latest macOS update for encrypted iCloud files (now just about everything is encrypted in motion and at rest). I put Ubuntu on both: a MacBook and a very old (tiny model) MacBook Air.


Confessions from a former 20-year Linux user, now another happy Apple customer.

It is never too late to try out an Apple Silicon Mac. They just get out of your way, and they work much faster.


Note that Linux 6.2 does not include anything like a full upstreaming of Apple Silicon support. You'll still want to run the downstream Asahi kernel for a while.


You’ll also likely always need a special installer, since Macs cannot boot off a USB drive; you have to create a partition and set up the security settings.

But in the future the project could be reduced to a simple bash script you run, and then it’s a stock distro.


I remember the 'bad old days' in the '90s when all the big companies had their own chips: SGI MIPS, DEC Alpha, HP-UX on PA-RISC, Sun SPARC, etc...

I really hope we don't go back to that. Especially with a company that defaults to 'proprietary' for so many things like Apple.


I could be wrong but I think a lot of the M1 support being added to linux is around what used to be called the "chipset" of the system and not the CPU itself, which is running a typical ARM64 ISA.

Getting the device tree set up, initializing graphics and all the peripherals, and the boot process is where Asahi Linux has done the vast majority of their work.
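As a rough illustration of what "getting the device tree set up" means (this fragment is hypothetical and heavily simplified, not the actual upstream source under arch/arm64/boot/dts/apple/), the devicetree describes the SoC's peripherals so the generic ARM64 kernel knows what hardware exists and where:

```dts
/* Hypothetical, simplified devicetree fragment in the style of the
 * Apple Silicon trees; node names and addresses are illustrative only. */
/ {
	compatible = "apple,t8103", "apple,arm-platform";

	soc {
		serial0: serial@235200000 {
			compatible = "apple,s5l-uart";
			reg = <0x2 0x35200000 0x0 0x4000>;
			status = "okay";
		};
	};
};
```

Each driver then binds to its `compatible` string, which is why so much of the early Asahi work was describing hardware rather than writing CPU code.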


The M1 does have its own co-processors that I do not expect to ever get support, such as AMX.


really? seems pretty well documented here: https://github.com/corsix/amx


The PC platform is also sadly moving in that direction, with "legacy" devices being the biggest hurdle keeping it from closing up entirely (hence the heavily vested interests in attempting to kill them off). The fact that there are decades of binary compatibility and third-party knowledge is a threat to the control-freak models of companies like Apple, who would rather bury the past so they can reinvent it, and charge us more for the same things we once had, in a never-ending cycle of consumerism and misinformation.

Unfortunately, until people value freedom more than pure performance, efficiency, or "security", closed ecosystems may become dominant.


I think buying Apple (or similar big brands) just for the hardware is shortsighted, as you're still voting with your wallet.


The core CPU isn’t really the biggest issue. It’s the drivers for all the extra stuff: networking interfaces, displays, input handling etc…

Also of course, the GPU which already has incredibly high variance across vendors.


We’re not heading back there. Those were all workstation grade architectures, never meant for consumers.

You’re not going to see Samsung, Asus, and Dell releasing their own proprietary ARM derivatives, if that’s what you’re implying. Too many laptops, too much porting and unoptimized software.

Apple’s always done this to maintain a prestigious image for non-technical people.


Is anyone else besides Apple doing this?


Microsoft is doing the same thing with their Surface tablets (https://github.com/linux-surface/linux-surface) and no one bats an eye.


The nearest thing I can think of in the modern world is game consoles and smartphones. Except they are locked down much much more than an M1 Mac.


Those are mostly AMD SoCs with relatively mainstream x86 cores and AMD discrete graphics cards with some changes on the same die.

The most exotic thing about them is usually the unified memory controller.


Non-Apple smartphones mostly run Android, with a GPLv2 Linux kernel. I guess this already is more open than Apple, because OEMs need to open source their downstream kernel.


I don’t think it really makes that big a difference.

Apple already open sources their kernel.

The issue is mostly device drivers, which even on Android tend to be external binary blobs.

It kind of leaves you in the same place in my opinion


I think it does make a difference: there are tons of Android ROMs (or even Linux distros like PostmarketOS) that run on many different Android smartphones. But I don't see the equivalent for iPhones.

Isn't that because iPhones are more closed somehow? Maybe there is another reason, I don't know to be honest.


That’s down to different reasons:

1. You can still reuse the binary blobs that make up the drivers.

2. Android has unlockable bootloaders whereas iPhones do not.

However the binary blobs are not always legally redistributable or even reusable.

They also don’t help if you need to support any kernel version other than the one that shipped, in case there are any ABI differences.


Thanks for the insights!


Amazon Graviton CPUs


Apparently other CPU architectures, including a RISC-V SoC and the Snapdragon 8, also recently gained mainstream Linux compatibility:

https://www.tomshardware.com/news/linux-kernel-adds-risc-v-j...


What's the use case here, buying Apple's M1 hardware and reinstalling it with Linux? Apple's hardware is relatively expensive, and part of that price might come from macOS. I'll just buy a PC instead, better yet, a PC with no OS (or with Linux installed).

I do need to get a Mac Mini to test Flutter apps there. Not happy with its small default RAM (8GB) and pricey upgrades, and of course it's not upgradeable either. Disappointed.


> What's the use case

I think the answer is that it's a uniform piece of desktop hardware with a huge installed base. Apple's market share in the desktop space may be small compared to PC, but the PC ecosystem is heterogeneous, so the economics of supporting a large fraction of PCs is disadvantageous for open source maintenance.

If support for linux and other open source OSes on apple desktop hardware becomes a thing, it may well quickly become the best-supported and best-tested hardware of all, which would be a huge leap forward for open source on the desktop.

Besides: The relationship between Intel and open source is not exactly a love affair either. Maybe Apple will be nicer to open source, or maybe the sheer fact that there is a second hardware player in the open source desktop space will be a shift in competitive dynamics that's advantageous to open source.


Apple's hardware is pretty much the best (and I don't particularly care about the price for something I use as much as my laptop).

At the same time, I've been using Linux for 20 years and I am happy with it, so I don't want to change to macOS. Thus I would be happy to buy Apple hardware and install Linux on it.


That's precisely why I got an M1 MacBook (typed from Asahi Linux). I've also run Windows in a VM quite a bit on it since it's a fantastic Windows laptop, even if it means running it virtualized.


MacBooks, especially the new M1/M2 are pretty much the best built laptops you can buy. I’ve run Linux laptops before (Dell XPS, System76) and always wanted better build quality and battery life.

I am now running Asahi Linux on an M2 Air, because that’s the best combination for a user like me.


M1 and M2 are both chiplet designs with quite a few hardware accelerators built in (e.g. a neural network unit). Are those supported, or are currently only the CPU and (built-in) GPU used?

If all those units are fully supported, e.g. if I can use its NPU to train a machine learning model, I may get an M2 laptop too.


The coprocessors (TouchID, SecureEnclave, NNE) are not yet supported. These will be next in line once the webcam/mic/speakers are fully functional, so probably worth waiting a bit more (few months?) if this is critical.

Most of my usage is browsing, email, and light development, so it works well enough for now. If I need a GPU, I run Jupyter on my home server with a 1050Ti.


The M1 chips are astonishingly quick, and extremely power efficient. You get something that outperforms all but the most powerful desktops for development tasks like "compile the whole of an embedded Linux distro", and will happily do that for 12 hours on the battery.

The only Apple hardware I've owned was a mac mini for the exact same use case as you, but the M1 laptops might change that if I can run Linux. I did not enjoy Mac OS when using it on the Mini!


I’ll take advantage of the appropriate audience here... does this mean the M1/M2 Mac Mini just became viable hardware for a home server?


Use downstream Asahi and it probably already is. Read their todos, but much of it is “desktop” user related.


Can't wait for stable Linux to run on macOS. Wish I could work on a project like Asahi Linux, it looks like a very interesting one.


Uh what? Linux would be the OS, from the kernel on up; it wouldn't run "on macOS"? Are you talking about virtualization?


I assume he got autocorrected from "macs" or some variant thereof.


A mistake on my phrasing. I meant MacBooks, not macOS.


These days, you don't have to admit to mistakes like this: you can just blame them on auto-correct.


The tools we use are not responsible for the things we do with them. We are!


I still own too many Macs that Apple arbitrarily decided I shouldn't be able to use anymore. As much as I'd like a new M mac, I wont buy one until I know I can run problem free linux on it when Apple decides it's time for me to upgrade again.


I'm curious what exactly this is in reference to?

Like I was around for the whole 68k and PPC transition. Obviously the Intel transition too. I think the Intel was the worst, at least for us that doubled down on expensive G5s that barely had any shelf life to them.

Beyond that? Apple is pretty reasonable. Android phones that were released at the same time as the XR (which still runs iOS 16) have been out of any kind of support for almost 2.5 years.

I feel sorry for people that bought Intel Macs in 2019, as those are really the ones going to be left out in the cold.

Outside of that, installing a brand-new Apple OS on a 5-year-old+ computer is a crapshoot from a performance perspective. Maybe it will work, maybe it won't. Maybe it will require too many concessions.

Intel's forever 14nm++++++ honestly sort of stagnated things for a while, but now they've taken off again, and you can often feel the age on those machines in a lot of use cases.


I have a 2006 Mac Pro with a 32-bit EFI. Apple released an, I think, 8800GT GPU in the 2008 model and explicitly prevented it from being used in the 2006 Mac. I don't remember exactly when, but I think by 2010 the 2006 Mac Pro was EOL and "couldn't run" the next major version of OS X. I think they blamed the 32-bit EFI. There was a fairly easy workaround for this, so I was able to keep installing updates for quite some time.

The old MacBooks with the Core Duo CPUs only lasted a few years before the next major OS X release dropped 32-bit CPU support.

I have a 2011 15" MacBook Pro that Apple ended the life of around 2015 or 2016, I believe, by discontinuing the GPU drivers in the next major OS X release.

How many releases do you think before x86 is dropped completely from OSX?


Hurrah for the asahi folk! They’ve been working incredibly hard and dealing with some seemingly crappy maintainer pushback to make this happen.


After moving from an Intel MacBook Pro to an M1, I’ve lost the ability to run Ubuntu in a VM with Vagrant.

If I understand right, this is an important step to getting an Apple Silicon version of Ubuntu, correct?


You can run Ubuntu in a VM today (and have been able to since day 1 of Apple Silicon machines). It’s possible that Vagrant doesn’t support this, and you need different VM software (UTM and Parallels both work).

This is about getting Ubuntu and other Linux distros to run bare metal on Apple Silicon, without virtualisation or macOS being involved.


Thanks. Your answer is mentioning UTM and Parallels, which would mean we're emulating x86 commands, correct?

Like some of the other answers here mention, if there's an arm build for Ubuntu, this would get things closer to running natively on Apple Silicon I think?

I am indeed trying to ensure Vagrant will work, since that's the technology we've typically used on our team, and we have some tooling setup to take advantage of it.

Vagrant is nice because it automatically shares a folder with the regular desktop, and we have all of our setup ready to go. But I imagine I could get Parallels or UTM to do the same thing... although I wonder how many extra CPU cycles and how much battery drain I'll have. But as long as I can get it to work, I'd be generally happy.
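If Vagrant does come together on Apple Silicon, the Vagrantfile side shouldn't need to change much. A minimal sketch, assuming the vagrant-parallels provider plugin and an ARM64 Ubuntu box; the box name here is illustrative, so check what's actually published on Vagrant Cloud:

```ruby
# Hedged sketch of a Vagrantfile for an ARM64 guest on an Apple Silicon host.
# Assumes the vagrant-parallels provider plugin is installed; the box name
# below is illustrative, not a guaranteed published box.
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-22.04-arm64"   # must be an arm64 box, not x86
  config.vm.provider "parallels" do |prl|
    prl.memory = 4096
    prl.cpus = 4
  end
  # The usual shared folder still works, same as on Intel hosts:
  config.vm.synced_folder ".", "/vagrant"
end
```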


> Your answer is mentioning UTM and Parallels, which would mean we're emulating x86 commands, correct? ... if there's an arm build for Ubuntu

There is indeed an ARM build for Ubuntu (and has been for over a decade!), and this is what you would most likely want to run in a VM on an Apple Silicon machine. You can also run the x86 version emulated, but this will of course be much slower. As of the latest macOS releases, it's even possible to use Apple's "rosetta" translation software to run translated x86 binaries within an ARM linux VM, and this is likely the best thing to try first if you are using software that doesn't have ARM versions.


Yeah you need an aarch64 Linux build. Suggest just using UTM if you don't need anything fancy. Pretty sure they have an ARM64 Ubuntu image in their gallery thing.

I'm running Archlinux ARM in a VM as my primary development environment on an M2 Macbook Pro and it works great.


They list an ARM64 Ubuntu image in their gallery - https://mac.getutm.app/gallery/ubuntu-20-04 - but there is no download link.

Any idea how to get that? I would love to give it a shot.


Just install UTM and it will allow you to install from gallery. There is probably a way to download the disk image directly but the app will handle it for you to get started.


Multipass (from Canonical) could be a useful tool for you; it has supported M1 since version 1.8.

https://multipass.run/


You should be able to run Ubuntu in a VM with Vagrant just fine on an M1 too, in macOS, without Asahi.

It just needs to be ARM Ubuntu, not Intel Ubuntu.


I am finding no ARM Ubuntu images on https://app.vagrantup.com/boxes/search that work with VirtualBox unfortunately.


Docker Desktop certainly runs some type of Linux in a VM to support the containers you run in it. So it's certainly possible; you probably just need to do some research.


On the weekend I set up an ARM version of Fedora in a VM (in the free version of VMware) on my M1 Air; it worked quite well.


Linux 6.2 lands support for the Odroid M1 too (no relation, but also ARM):

https://archlinuxarm.org/forum/viewtopic.php?f=67&t=15997&p=...


Could somebody explain in very simple terms how I might get my M1 Mac Pro to dual boot between MacOS & Linux? Is there a guide or HowTo? Or am I a few months early re mainstream Linux rollout.


Run "curl https://alx.sh | sh" , ???, profit.

This will download and execute the Asahi Linux installer. If you want to check the source ahead of time, it's on the Asahi Linux GitHub.

While functionality improves rapidly every day, it is still decently rough around the edges. However, the installer is very streamlined and it's easy to get set up and try it out.


Just pointing out anyone can purchase a domain and redirect it to the official website while also serving a malicious script.


1. Someone asks how to do something

2. Someone tells him

3. Third person chimes in “you know, it’s technically possible that they’re lying!” despite no evidence of that being the case.

Tough crowd…

(Btw, you can easily verify that the alx.sh installation method is recommended by official sources, for example here: https://asahilinux.org/2022/03/asahi-linux-alpha-release/)


In this case however the official site provides the same guidance. Still, don’t pipe scripts to shells. At least read the script first.
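One way to follow that advice is to download to a file, read it, and only then run it. A hedged sketch using a local stand-in script so nothing remote is executed here (for the real thing, substitute the actual curl step shown in the comment):

```shell
# Download-then-inspect instead of `curl ... | sh`.
# On a real run you'd fetch the remote script first:
#   curl -fsSL https://alx.sh -o installer.sh
# This sketch uses a local stand-in so it's self-contained.
cat > installer.sh <<'EOF'
echo "installing..."
EOF

# Read it (or at least grep for anything suspicious) before running:
grep -n "rm -rf" installer.sh || echo "nothing obviously scary"

# Only execute once you're satisfied with what it does:
sh installer.sh
```

This also sidesteps the "server can detect piping and serve different content" problem mentioned elsewhere in the thread, since the file you inspected is exactly the file you run.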


Sure, but I didn’t know that from reading the comment


Right. I don’t trust the comment but I do trust the official site.


>Still, don’t pipe scripts to shells

Why not? Without looking, I assume that the script is downloading a binary and running it. What could it be doing that is more dangerous than that?


It can be doing anything your shell is privileged to do. You’re effectively giving password-less ssh to the Internet.


I think the point that sebzim4500 was making is that the script is downloading an arbitrary binary and running it, and that this isn't any less dangerous than running an arbitrary script, so you're screwed either way.

If someone wanted to do `rm -rf /` on your system, they wouldn't put it in the setup script you're piping to sh: they'd put it into the binary, making your inspection of the setup script effectively useless.


If an installation script is downloading an arbitrary binary then I’m not running that script unless that binary also comes from a trusted source. We have PKI to prove that sites are who they claim to be. I only run binaries from trusted sources.


But then if you trust that source and its binaries, why would you inspect their scripts? What extra protection does that give you? None, imo.


How is that different from downloading a binary from the internet and running it?


Knowledge and trust of the source.


For the record, I think this is a totally reasonable comment, and I don't see why it's being downvoted. Often, the same people who spread FUD about shell script installers will download a binary from GitHub releases, npm, etc. without a second thought.

There are a few things that make shell script installers particularly dangerous, though. I don't think any are meaningful with the way people normally use computers, but they could be meaningful in the future if we improve our collective security posture:

* Shell script installers aren't digitally signed. Most OSes have pretty weak code signature schemes anyway. They're littered with root-of-trust issues that prevent a lot of OSS from being signed to begin with, and in turn, the vast majority of users (especially power users) ignore code signing warnings. But, as these problems are addressed, shell scripts will become weaker and weaker in comparison to binary packages.

* Shell script based sites can fingerprint the client and serve different content to a browser and to the "curl" command, confusing users who attempt to audit what's going to run before it's passed into the shell script command. This is a fine argument and a real problem with the "pipe to sh" approach. However, unless the user is also independently checksumming or disassembling every binary application they download, it's also a bit of a straw man.


> Often, the same people who spread FUD about shell script installers will download a binary from GitHub releases, npm, etc. without a second thought.

Nice straw man. Completely unfalsifiable. I didn’t bother reading the rest.


You can detect whether a script is being piped to shell or just downloaded, and serve different content. At least pipe it to a file, then execute that file.


There aren't many things more dangerous than that.


You’re a bit early, but it’s fairly easy. I just did it on my M2 MacBook Air. It’s basically a bash script you run: answer a few questions, pick a size for the new partition, and it shuts down your computer. You then hold the power button while it starts up, pick the Asahi partition, and follow a few more steps to finish the installation.


How easy/hard is it to set up a 3 part scheme where your personal files live on their own partition that can be accessed from macOS or Linux?


You don't want to do that. macOS support for any modern filesystem (i.e. not crappy exFAT) that is not its blessed proprietary format is likely to be bad or non-existent, at which point you have to install some shaky extension from a third party... The chances for anything to go Very Wrong are high. It's even worse than with Windows.

The best way to share files with a proprietary system is to go through some sort of network, i.e. NAS, smb, etc.


That's just bad advice. You can mount HFS+ from Linux just fine.


It's APFS by default for the last few versions of macOS, though, right?


Yes, but HFS+ is fully supported read/write for a mount.


Correct, it has been APFS since 10.13 (so, 6 macOS versions/years).


This is completely wrong. ZFS works very well via O3X [0]. I ran my home folder off of ZFS on various Mac Pros for ~11 years. ZFS works well in Linux and FreeBSD as well.

----

0: https://openzfsonosx.org/


It should be fairly easy. Just pick a filesystem that both macOS and Linux can read/write to and format a partition accordingly (the Asahi installer won't make this third partition for you, but it should be pretty easy to figure out).

If I was going to go that route, on my MacBook which is 1TB, I would probably do something like 200GB for macOS, 200GB for Linux, and then a shared partition with the rest (~600GB). I'd probably make the shared partition exFAT, or possibly un-journaled HFS+.

I think the easiest way would be to install Asahi first, pick the size you want for it (200GB in my example), then once you have that sorted, figure out how to create another partition that you will format as exFAT/HFS+/whatever you want. I'm sure this is possible/easiest from the macOS side of things, although I have never partitioned a drive in macOS. The Asahi installer does it from the command line in macOS, so I'm sure Disk Utility has the ability to resize and create partitions, too.
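For the macOS command-line side, a hedged sketch of the diskutil commands involved, shown as a dry run (each command is echoed rather than executed, so nothing is modified); disk0s2 and the sizes are examples only, so check `diskutil list` on the actual machine first:

```shell
# Dry run: the leading `echo` prints each command instead of running it.
# On a real macOS system, first verify the APFS container identifier:
echo diskutil list
# Then shrink the container to 200g and create an exFAT partition named
# "Shared" in the freed space (identifier and sizes are examples only):
echo diskutil apfs resizeContainer disk0s2 200g ExFAT Shared 600g
```

Drop the `echo` prefixes on a real machine once you've confirmed the identifier, and back up first; resizing partitions is inherently risky.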


The Asahi Linux installer sets everything up and you end up with dual boot. Check this post: https://asahilinux.org/2022/03/asahi-linux-alpha-release/


My experience with dual boot 25 years ago is that it is so inconvenient to reboot that you literally end up using only one OS, the one you prefer the most.

Just forget about dual booting; nobody really does that.


Not too familiar with this, but does this mean that eventually Asahi Linux will be obsolete? Is Asahi a forked Linux kernel, and will the work eventually be merged right into the mainline Linux kernel?


Yes, check out the second entry in the FAQ

https://asahilinux.org/about/

Their work goes beyond the kernel. For example Mesa (the user space part of the Linux GPU stack), boot tooling and installation tooling.

And of course the incredible amount of reverse engineering.


As always, Linux is "only" the kernel. I don't know the details but I would guess that there are other bits needed, at the very least an installer so you'll still need Asahi Linux as a distro.


That's the end goal.


And the original plan, but recently Hector is getting more and more annoyed with the process of upstreaming the patches. I really hope the situation improves so we don't get another PaX situation :( (different reasons of course, but similar result)


What is the PaX situation? I tried googling but didn't find it


results of my Googling:

PaX seems to refer to the PaX team: https://pax.grsecurity.net/

There seems to have been drama related to some of their patches https://news.ycombinator.com/item?id=14633163.


There's a lot more nuance there, but my summary would be: the PaX team created the grsecurity patches, which are awesome. They come as big bundles rather than separate patches for each part, and are a bit disruptive; you need to know why you want them / whether they're worth it. Upstream doesn't want huge bundles, and PaX doesn't want to invest time in splitting them up and fighting to upstream each one separately. There's been some external effort in the last few years to chip away at the most important ones (KSPP, Kees Cook, Alexander Popov), but it's slow. In the meantime PaX provides grsecurity as consultancy.

Again - I skipped lots of details.


In short, PaX and grsecurity were security patches for the Linux kernel that failed to be upstreamed.


Why would you pay so much for mac OS and end up using free linux?


You would be paying for the hardware, which is pretty good (as long as you can accept the downsides, most notably that it isn't upgradeable or maintainable, and Apple charges an arm and a leg for memory/storage). It'd be like buying a laptop that came with Windows and installing a Linux distro on it. Why not? If you're more comfortable with it than with the base OS, even better that there's choice.


You are paying a pretty decent chunk of this for macOS too, because you can get a similar-spec Windows laptop for half the price.


You can not get a similar Windows laptop for half the price. That is absolutely ridiculous. Show me a single Windows laptop that has a similar screen, keyboard, trackpad, build quality, processor, battery life and silence. I'm waiting.


The new Dell XPS is just one example; it ticks all your boxes. If you think nothing compares to a Mac, you clearly haven't been paying attention to Windows laptops.


Battery life and silence are nowhere even close to what the MacBook offers when performing the same tasks at the same performance.


> Why would you pay so much for mac OS and end up using free linux?

Because Apple's MacBook Pros with M-series Pro and Max chips are arguably the best all-rounder laptops on the market right now.


They also start at 2400€ for 16GB of RAM and 512GB of storage, soldered, and over 3000€ with double RAM and storage. At that price point they better be good.


Yeah... but Apple Silicon requires firmware for a lot of the peripherals. The upstream Linux kernel will now happily be able to load said firmware into the right place... but it doesn't come with it, so don't expect it to be as easy as rebuilding the kernel and dropping into a generic ARM image.


That's why Asahi Linux has a custom installer. Amongst other tasks, it installs a stub macOS partition that contains known supported Apple firmware versions. See "Whats with all the disk space?" under the FAQ at https://asahilinux.org/2022/03/asahi-linux-alpha-release/


Wondering how long it will take for our engineers to start requesting this on their machines...


I love the idea of Linux on a mac, but until the touchpad software is up to the same standard as the apple version I'll make do with osx.

edit: I believe they are looking for donations... I'll find a link when I have time.


Linux has had Magic Trackpad support long predating its support for Apple Silicon. In KDE/GNOME sessions on Wayland, you'll get 1:1 touchpad gestures for desktop management. Asahi should be using Wayland out of the box, so you'll get trackpad gestures along with the regular HID support.

Edit: here's what it looks like in action - https://youtu.be/aBEsxTVRsEo?t=100

It's also worth noting that Firefox ships with pinch-to-zoom and swipe-back gestures by default for Wayland now, too.


It's less about the gestures and more about the general "feel" of the touchpad: latency, accuracy, acceleration, palm detection, etc. "just works" on macOS. I never had even close to the same experience on Windows, let alone on Linux. Maybe there's a mythical "Mac-like" Linux touchpad driver out there that doesn't suck, but if it isn't installed by default on a vanilla Ubuntu setup (or any other mainstream distro) then it might as well not exist.


There are some physical constants that you can't obtain short of measuring the touchpad and response in a metrology lab. Unless Apple releases their sensor thresholds and constants, reverse engineering can only approximate. Even now Flutter on iOS with the Cupertino theme only feels somewhat native.


I don’t think it needs a metrology lab - how about clean room reverse engineering of Apple’s driver(s) instead?


While my experience with Windows mirrors yours, I must say I've generally been quite fond of the feel of touchpads on Linux.

Both my spouse's laptop and mine were dramatically better in Linux compared to the Windowses they came with.


I use the Apple Magic Trackpad on my Linux desktop and it works very well out of the box. Works both over Bluetooth and over USB. The acceleration is different from macOS, but it isn't any worse. I can see that it's been supported for a long time in mainline Linux: https://github.com/torvalds/linux/blob/master/drivers/hid/hi...


Good news! It's in vanilla Ubuntu now.

The main holdup was Wayland - go get your Wintel machine and try booting up a Wayland session in KDE or GNOME. If your trackpad was manufactured with multitouch, chances are it will have gesture support. Any distro shipping GNOME 41-ish or Plasma >5.25 should have this by-default.

As for the feel... I'm gonna be honest, I don't notice any difference from my Mac. If anything, my Magic Trackpad has more gesture options on my KDE machine out-of-the-box. Don't knock it till you've tried it!


Erm well, I'm on Ubuntu 22.04 with Wayland and the touchpad is still crap :/

Worst thing is that I can't enable "tap to click" because the palm detection just doesn't work at all.


I have an external Magic Trackpad 2 running with Wayland/Ubuntu 22.04, and it works great.


Why Wayland? You get libinput on both Wayland and X11 and X11 has gestures since 21.1


There was a Magic Trackpad driver for Windows under Boot Camp; it worked quite well.


All of that is meaningless if the pointer moves and clicks while you're typing.


I've not noticed any issues with palm-rejection when typing on Linux. If your trackpad does have issues, you can disable it while typing (but most Windows Precision trackpads work fine).

I'm mostly using the Magic Trackpad on my desktop though. It's very possible you might find a laptop with bad palm-rejection firmware, but I don't think that's an issue with these. Or Synaptics touchpads, in my experience.


I've had problems with palm rejection in MacOS on the M1 MBA; nowhere near as bad as on Dell machines running various flavors of Linux, but they are there.


Heh, interestingly I also noticed more problems on my 14" M1 MBP than I had on my previous mid-2014 13" MBP (or more specifically: any problems at all, because the touchpad on my previous MBP was pretty much perfect).

I suspect it's simply because of the bigger touchpad. Even if the "error rate" is the same as before, there are just more potential errors because of the bigger touchpad surface.


I just sponsor marcan on GitHub here: https://github.com/sponsors/marcan but there might be better ways. Not sure about touchpad support.



Also don't forget to subscribe to Asahi Lina! https://www.youtube.com/c/AsahiLina


Strange hill to die on when fully hardware-accelerated graphics doesn't even work. The touchpad is the last problem.



> This is still an alpha driver


Yes, but it works. You can use it and it does acceleration on par for lower OpenGL versions.


A touchpad experience that doesn't get in the way is at least as important as good desktop rendering performance though.


Have any of you actually tried Asahi? The touchpad works flawlessly. A Rust-based, open-source GPU driver has been working since December and is swiftly advancing toward a fully stable state.


You should try using MacOS with software-accelerated desktop rendering. Having seen both sides of the fence, having hardware acceleration for your desktop is a king's ransom relative to trackpad gestures.


I'm the opposite. I recently bought a MacBook pro and the latency of my mouse is so nauseating that I still use my old linux laptop for work.


Not sure if you really do mean latency, or the much more subjective-and-common complaint about the acceleration curve.

If you mean the latter, there are ways to customize that to be more like other desktop environments (sorry for not linking anything, currently on mobile and I don’t have time now to vet search results, but there are built in CLI settings which should be pretty easy to find).

Mostly commenting in case the above leads in a helpful direction, but if you actually mean latency of trackpad input apart from the acceleration curve, I’m wildly curious to hear more about what you’re experiencing and expecting otherwise!


Acceleration was my first thought as well. But I turned that off with some command I found online.

I'm not talking about the track pad (although that has quite a lot of latency as well). I'm talking about a mouse.


Is your mouse wireless?

I have no perceptible latency (and I game) on my M1 Pro MacBook with a wired mouse.


Yes it's wireless. The same mouse in windows or Linux is night and day though.


just wondering, does this mean a Rust build pipeline is officially part of the mainline Linux build pipeline? Since (as far as I know) the GPU drivers for the M1 are written in Rust by Asahi Lina?


I believe the GPU driver isn’t included yet, but separately from that, the first Rust For Linux patches have indeed been merged into mainline linux. The support is being merged gradually over time though, so there’s still a lot that hasn’t been upstreamed yet.


The build pipeline was already part of Linux 6.1. I played around with it on Arch Linux, with the goal of compiling an out-of-tree hello-world kernel module in Rust:

* https://blog.rnstlr.ch/building-an-out-of-tree-rust-kernel-m...

* https://blog.rnstlr.ch/building-an-out-of-tree-rust-kernel-m...

I'll probably make a follow up post with Linux 6.2


It appears so. It exists and is in tree but it's probably going to stay "optional features only" for the foreseeable future since rust can't currently target all platforms that the kernel supports.

1. https://docs.kernel.org/rust/index.html

2. https://github.com/torvalds/linux/tree/v6.2/rust


This is only the initial bring-up; the GPU driver is not mainline yet. (And its UAPI is going to change to properly support Vulkan.)


i just built 6.2 for arm64. what timing!

https://github.com/nathants/mighty-snitch/releases


God bless the Asahi devs for liberating my machine. A true thank you from the bottom of my heart.


As long as Hector Martin refuses to grow up and become a kinder person, I am not touching Asahi Linux project with a ten-foot pole.


Mainstream or mainline? I would think a mainstream kernel is Debian's or Ubuntu's, while mainline is kernel.org's :)



