Asahi Linux for M1 Macs: progress report for September 2021 (asahilinux.org)
474 points by fanf2 on Oct 5, 2021 | 186 comments



> The NVMe hardware in the M1 is quite peculiar: it breaks the spec in multiple ways, requiring patches to the core NVMe support in Linux, and it also is exposed as a platform device instead of PCIe. In addition, it is managed by an ASC, the “ANS”, which needs to be brought up before NVMe can work, and that also relies on a companion “SART” driver, which is like a minimal IOMMU.

Stuff like this makes me wonder: why does Apple do this? If I try to give them the benefit of the doubt, I can assume that these changes are done for performance, cost, power-saving, or maybe even security reasons. Otherwise it just seems like Apple does these things in order to make it harder for other OSes to run on their hardware. Which is certainly their prerogative, but it just makes me think less of them.

On the other hand, this is really cool:

> However, Apple is unique in putting emphasis in keeping hardware interfaces compatible across SoC generations – the UART hardware in the M1 dates back to the original iPhone! This means we are in a unique position to be able to try writing drivers that will not only work for the M1, but may work –unchanged– on future chips as well.

... but cynically (or perhaps just realistically), I can easily believe that this isn't done for reasons of openness, but because this makes maintenance of macOS itself easier for Apple.


Everything Apple does they do because it makes sense for them. They are neither trying to help nor hinder third-party OSes. They just don't care.

The NVMe design makes sense in the context of their common ASC architecture and how their SoC works. Various quirks are due to things like supporting their storage encryption. Even for things which we can't quite explain, I have no trouble believing that it made sense for them for whatever reason.


Thanks for the even-handed reply, which I imagine is much easier to have after diving into all this stuff first-hand (I assume you're the same 'marcan' who wrote the progress report).

As much as many of us want to attribute positive or negative reasons or motivations to things companies do -- especially secretive companies like Apple -- it's a nice reminder that most decisions are made without malice, because they make the most sense based on the requirements at hand.

Having control over the entire hardware and software ecosystem means that there's no reason to follow standards to the letter when those standards get in your way and make things harder, more expensive, or even just not possible.

Anyhow, just wanted to finish by saying all of this work is truly impressive. Obviously there's still a lot to be done to support everything to the same level as, say, a Dell XPS machine, but the progress made so far is pretty amazing, and even though I have no plans to buy an M1 Mac any time soon, I'm always excited to read these updates.


>As much as many of us want to attribute positive or negative reasons or motivations to things companies do -- especially secretive companies like Apple -- it's a nice reminder that most decisions are made without malice, because they make the most sense based on the requirements at hand.

Keep in mind too that "requirements at hand" can include a significant degree of path dependency, and that it can be a mistake to read too much reasoning into something. Sometimes there just isn't any master plan: some decision made years earlier has created a dilemma due to the dependencies built on it since, and there just aren't the resources (or ROI, particularly without any spec compatibility to worry about) to deal with it right then. For a hardware company with necessarily very long lead times (they can't exactly start having chips fabbed and electronics made two weeks before launch), Apple runs a pretty hectic schedule, with major new launches every single year. Even a vertically integrated company on their scale is not completely immune to the challenges of tight coupling, nor does it think of everything ahead of time, and there are definitely decisions Apple has made that they regret but can't easily get out from under. For example, just a few weeks ago HN had a sizable thread on "Swift Regrets" [0] by Jordan Rose:

>"I worked on Swift at Apple from pre-release to Swift 5.1. I’m at least partly responsible for many things people like about Swift and many things people hate about Swift. This list is something I started collecting around when I left Apple, and I’m putting them up so other language designers can learn from our mistakes. These are all things that would be hard to change in Swift today, because they’d break tons of people’s code. That’s what happens with real-world languages and libraries: the more users you have, the fewer breaking changes you can make."

For Apple the same thing definitely turns up in hardware once in a while too, though of course they try to be careful and have a lot of institutional knowledge about pitfalls by this point. You can't always look at something they're doing now and assume that if they were greenfielding it they'd do the exact same thing again.

That's part of why they've long been (even before iOS) such hardasses about stuff like third-party use of private APIs under development. They certainly don't have any active incentive to break existing stuff; if everything could magically work forever, that'd be awesome. But they also know they fuck up sometimes and want to be able to make changes. That's hard to balance in software without some kind of pushback against devs putting experimental OS frameworks into production software or spraying files all over the drive during installs. Users will inevitably come to depend on it anyway, and then you're stuck.

For hardware I think it's obscure enough that they're not worried about it; as absolutely awesome as it is, Asahi Linux is never going to pinch them there.

----

0: https://news.ycombinator.com/item?id=28603794


> Keep in mind too that "requirements at hand" can include a significant degree of path dependency, and that it can be a mistake to read too much reasoning into something too.

This is why I'd say there's some value in sticking to spec even if it doesn't 100% meet your needs or do things the way you think they ought to be done. Tacking away is as likely to screw you in the long run as it is to benefit you. The benefits would be much more obvious, of course, in a world that's almost an unimaginable utopia from our perspective: one without intellectual property, in which major advances like Apple's ARM silicon were widely shared and put to the good of humankind in general, rather than held privately for the profit of a single corporation.


There's also the cost: those decisions would have quite an impact on the R&D investment, which is already bonkers as it is.

A lot of people try to attribute some human aspect to a large multinational (they are mostly just sociopaths controlled by shareholders, in a sense), but technology-wise it's often just "it made sense for us at our scale".

Same with the weird USB-C PD controllers that are 'almost normal' but then specialised for Apple; it's not that they want to make it hard to repair, block custom software, or patent it and screw with other companies... it just makes sense to change that part instead of changing another part. At scale, that is a choice you have, but also a choice you must make in such an implementation. This is of course not exclusive to Apple; a lot of really high-quantity mass-produced electronics contain slightly modified versions of existing parts, because that turns out to be cheaper/more reliable/a better fit-to-spec than modifying the rest of the design around an existing part.

This is one of the baffling things about calculators or toy electronics (for light and sound effects) in the same vein: modifying an ancient chip with very crude software, on a single-sided, extra-thin, older-style PCB with no silk screen and only partial solder mask, with a die bonded directly to the PCB under an epoxy blob... all to make the assembly cost 3 cents instead of 4, and to reduce the number of connections so that the 3-cent assembly has fewer possible defects and triggers fewer multi-thousand-cent service processes (swap in a store, call center, website, email, etc.). Every customer who doesn't call about a crappy toy firetruck whose lights stopped flashing is a huge win. This is of course an extreme example; more complicated hardware and software extrapolates quite extremely from there.

Oddly enough, some of those practices are implemented at a much less impactful scale in HP and Dell desktops, where they might have one large non-standard PCB hosting all the normal PC components but also all the DC-DC converters, front I/O, and on-board WiFi. It makes almost no sense at all to do that, but the reduced number of connectors apparently makes the products cheaper to support and longer-lasting, and since people don't upgrade them anyway, they are often just 'used up' and thrown away. That last part is bad of course, but it's something a design-for-manufacture department is unlikely to care about or mitigate with a trade-in or recycling program (which often just ends up meaning: ship the trash to China or Africa and let them deal with it).

The amount of details and their impact at this scale is astounding. Add custom silicon and you're almost in a new dimension.


> Everything Apple does they do because it makes sense for them. They are neither trying to help nor hinder third-party OSes. They just don't care.

That's my read on this as well. They designed it so it would work well with their other hardware/software, and this is what they landed on. I seriously doubt they paid even a second of thought to other OSes running on their hardware. Some people will call that "user-hostile" but it's much closer to apathy IMHO, and as a macOS and Apple hardware user I'm fine with that. It's cool that Linux can/will run on the M1 but I'll never be doing that myself. It reminds me of that scene in Mad Men, "I don't think about you at all" [0], but Apple's feeling isn't even as sinister or hostile as Don's comment.

[0] https://youtu.be/LlOSdRMSG_k?t=40


> They just don't care.

Exactly this. I can easily imagine somewhere in some OS slack channel in Apple:

hardware dev: hey dear @channel, we're breaking the NVMe spec in a couple of places and we're kinda running out of time to fix it. Would you be so kind as to address it in your drivers, if it's not too hard?

os guy: yeah, no worries those seem trivial, we can easily adjust.

hardware guy: cool, thanks a bunch.


> They are neither trying to help nor hinder third-party OSes.

That's naive.

It is absolutely part of their design goals to plan for obsolescence and to prevent third-party OSes from running perfectly on their devices. The move to soldering SSDs and RAM on laptops and desktops is designed to prevent users from extending the life of the device. Mac Minis experience a delay of 2-3 minutes before even starting to boot an alternate OS, to frustrate users. The T2 chips even prevent booting a lot of other OSes. There are umpteen examples like these.

And that's all ok when you consider the business goals of Apple. But please don't pretend that it's all an "accident" or that Apple doesn't care if a user wants to break free from the stranglehold of the Apple ecosystem.


Perhaps that's their stance on competing software, but certainly not competing hardware. I recall an internal memo about intentionally implementing standards in ways poorly compatible with other brands, so that it could be written off as programming error.


> it makes sense for them. They are neither trying to help nor hinder third-party OSes. They just don't care.

That sounds eerily similar to a paperclip maximizer (https://wiki.lesswrong.com/wiki/Paperclip_maximizer). I suppose we all know what Apple is maximizing, but I never drew this parallel before.



> Stuff like this makes me wonder: why does Apple do this? If I try to give them the benefit of the doubt, I can assume that these changes are done for performance, cost, power-saving, or maybe even security reasons. Otherwise it just seems like Apple does these things in order to make it harder for other OSes to run on their hardware. Which is certainly their prerogative, but it just makes me think less of them.

The reason for most of these things is most likely that Apple didn't start from scratch with the M1. M1 Macs in many respects appear to be an exercise in "how can we make our existing iPhone / iPad system architecture into a general-purpose computer", and so they come with a surprising number of legacy idiosyncrasies. For example, Apple started using NVMe-like storage all the way back in the 2015 iPhone 6S, and was therefore 1) very concerned with fitting everything into a small, power-efficient package and 2) not very concerned with standards compliance.

If Apple had started from scratch with the M1, it would most likely have been more standards compliant - out of Apple's self-interest and ease of maintenance, not to help the community. If there's anything Apple has shown - on Macs only - it's that they're not really hostile with respect to standards compliance and third-party OS support. It's just that they don't care _at all_ about supporting that, and prefer supporting their own internal processes every step of the way.

(Not an expert on this topic at all, just echoing my impression of the system architecture choices)


> If there's anything Apple has shown - on Macs only - it's that they're not really hostile with respect to standards compliance and third-party OS support. It's just that they don't care _at all_ about supporting that, and prefer supporting their own internal processes every step of the way.

Apple wrote Boot Camp drivers and the Boot Camp Installer for Windows on Intel Macs. The Boot Camp Assistant itself also paves the way for users to install Windows, for example, by partitioning storage for Windows installation.


That's fair. I suppose Apple does care about third party OS support as long as it sells them a bunch more Macs with comparatively low engineering effort. The ability to run Windows probably helped them sell a significant percentage more Macs, probably especially so when the Mac market was smaller just after they switched to Intel.

The Linux community had to reverse engineer Apple T2 drivers itself though. Apple didn’t restrict anything there, but the community had to figure it out on its own.


I believe this has come up multiple times on HN but the reason Apple breaks NVMe is because NVMe command structure isn't large enough to hold all the crypto state information they need to shove down to the device. The spec would not support what they are doing, so their choice was to either tweak NVMe to suit, or abandon their feature.
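For concreteness, here's a minimal C sketch of the fixed-size command structure in question; the field layout is abbreviated, but the 64-byte limit is the constraint that leaves no room for extra crypto state:

    #include <stdint.h>

    /* Standard NVMe submission queue entry: the spec fixes it at 64
       bytes, so there is nowhere to put additional per-command crypto
       state without breaking the layout. */
    struct nvme_sqe {
        uint8_t  opcode;       /* command opcode */
        uint8_t  flags;        /* fused ops, PRP/SGL selection */
        uint16_t command_id;
        uint32_t nsid;         /* namespace ID */
        uint64_t reserved;
        uint64_t metadata;     /* MPTR */
        uint64_t prp1;         /* data pointers */
        uint64_t prp2;
        uint32_t cdw10_15[6];  /* command-specific dwords */
    };
    _Static_assert(sizeof(struct nvme_sqe) == 64, "SQE must be 64 bytes");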


Why are they even doing crypto at the NVMe level?! Why is it not good enough for them to do it at the filesystem level (like ZFS native encryption) or at least the block device level (GELI/dm-crypt/etc)?


Because that way the OS never needs to see the low-level keys. It allows them to be fed straight from the Secure Enclave, which means an OS compromise can never result in a compromise of storage encryption. Plus they do the crypto in a hardware accelerator, so it's free for the CPU.


What's the threat model here? If you install malware that runs as your user, it can see and edit any of your files regardless of whether or not the OS knows decryption keys for them. If you require a password to cold boot the device, then stealing someone's powered-off laptop doesn't get the data. (That's without any special hardware.)

So the scenarios that remain are: using a fingerprint to unlock the device at boot (you need to do some crypto before the OS, unless you want the fingerprint to just flat-out give up the key to anything sniffing the SPI bus), or somehow resisting data modification without requiring the user to type a password. I feel like BitLocker tries to do the latter, but I don't know what attacks they are trying to protect against. (It's on by default on new laptops, but you can just sniff the key over SPI when the OS is booting, so what security does it actually provide?)


I'd guess the threat model is imaging the encrypted storage, which can be done even if the computer is turned off, in the hopes of acquiring the keys later to decrypt the image. If you install malware and the encryption is not visible to the OS, all you get is the data on the machine at the time of implant, since storage image decryption is coupled to the hardware. I also imagine it is much easier to exfiltrate encryption keys stolen from the OS undetected than to rummage around and exfiltrate the contents of a hard drive.


> What's the threat model here?

I'm guessing they're protecting devices against APTs: state-level actors with lots of funding, competitors intent on discrediting their ecosystem, NSO Group, etc.

As a side benefit for users and Apple it makes the entire chain more difficult to introspect/attack.


> If you install malware that runs as your user, it can see and edit any of your files regardless of whether or not the OS knows decryption keys for them.

The OS now prompts to grant file access permissions to applications.


Also gives them a bigger lock-in opportunity.


… what lock-in opportunity? Who’s stopping anyone from copying files off a Mac?


Their evil plan is to make their platform so good nobody will want to use anything else. Dastardly!


The ability to swap the SSD.


Are you asking why they don't expose the plain-text keys to the kernel software? That's your answer.


> UART hardware in the M1 dates back to the original iPhone!

UART as in a simple serial debug interface?

I thought they were fairly simple - isn't that like celebrating Apple for having used the same transistor design since some date?!


They are simple, and yet Samsung managed to come up with about 4 incompatible variants in their other SoC lines, which Linux has to support explicitly. Apple stuck to the same one they had from the start.


Asahi Linux progress reports rank up there with Dolphin[1] progress reports for being excellent examples of technical writing as well as simply reminding me why I fell in love with computers in the first place. Love reading this stuff even if I may never use it (although I would love to run Linux on M1 hardware someday!).

[1] https://dolphin-emu.org/blog/


Still a bit sad that in both cases it's effectively a righting of wrongs caused by closed-down, undocumented computing platforms.

On the other hand if the respective authors like accepting such challenges and working on these projects, there is nothing wrong with that. :)


Can't write a good detective story without a mystery, after all.


How about running Dolphin in Linux on M1 hardware?


Should work just fine. It already runs on macOS on M1, and on ARM64 on Linux in general. Just needs a GPU driver to perform well :)


I wonder if Alyssa (or other devs that took a deeper look at the DCP) have any idea why the builtin HDMI port of the Mac Mini would not send DDC messages to the connected monitor.

I’ve been struggling with this in Lunar (https://lunar.fyi) for a long time, and while I tried my best by comparing ioreg dumps and looking at the DCP driver in IDA, I couldn’t find any obvious logic that would deliberately block this communication.

I should mention that on M1, the DDC messages are sent to the monitor by calling IOAVServiceWriteI2C on the DCPAVServiceProxy of the monitor as seen here: https://gist.github.com/alin23/b476a02a8cd298436848e28476aed...
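For anyone curious, here's a rough C sketch of what such a write looks like. The packet framing comes from the VESA DDC/CI spec; the IOAVServiceWriteI2C declaration below is an assumption based on how Lunar and similar tools call this private, undocumented API:

    #include <stddef.h>
    #include <stdint.h>

    /* Private API, no public header: this declaration is assumed. */
    typedef struct IOAVService *IOAVServiceRef;
    extern int IOAVServiceWriteI2C(IOAVServiceRef svc, uint32_t chipAddr,
                                   uint32_t dataAddr, void *buf, uint32_t len);

    /* DDC/CI "Set VCP Feature" for luminance (VCP code 0x10). */
    static void set_brightness(IOAVServiceRef svc, uint8_t value)
    {
        uint8_t msg[6] = {
            0x84,        /* payload length 4, high bit set */
            0x03,        /* Set VCP Feature opcode */
            0x10,        /* VCP code: luminance */
            0x00, value, /* 16-bit value, big-endian */
            0x00         /* checksum, filled in below */
        };
        uint8_t chk = 0x6E ^ 0x51; /* destination and source addresses */
        for (size_t i = 0; i < sizeof(msg) - 1; i++)
            chk ^= msg[i];
        msg[sizeof(msg) - 1] = chk;
        IOAVServiceWriteI2C(svc, 0x37, 0x51, msg, sizeof(msg));
    }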

I’m thinking that this logic might either exist in the DCP firmware, which is not accessible from userspace, or it might just be a side effect of some out-of-spec behavior of the HDMI port.


The DCP firmware has multiple endpoints. Currently, we only use and understand the main endpoint, which lacks raw I2C/DDC/EDID interfaces. Presumably this is available on another endpoint, but we haven't looked at this yet. Your gist gives me hope.

The HDMI port on the Mini is funny. The M1 supports exactly one internal display (the panel on a MacBook or iThing) and one external display (over DisplayPort/Thunderbolt). This is why M1 MacBooks can only drive a single external monitor.

For the Mini, the "internal" display is an internal DisplayPort connection, converted to HDMI with a mcdp29xx chip, and stuck on the HDMI port. Expect weirdness.


I thought it must be something related to the DisplayPort transport. I noticed in ioreg that the HDMI AppleCLCD2 had downstream=HDMI upstream=DP as its transport params.

So it’s possible that it isn’t DCP related after all; the mcdp29xx chip might not be implementing the DDC part at all, like most USB-C hubs I have to deal with.

Thanks for the insight!


It's worth noting that the MDCP29xx has its own DCP endpoint and proxy driver, so it likely implements its own DDC channel that way. Obviously, DCP itself needs to get the EDID one way or another. We'll see once we start looking into those add-on endpoints :)


I’ll keep a close eye on your work then!

I don’t have an M1 Mac Mini to do more thorough tests, but one Lunar user reported that the DDC set brightness command worked for a short second on the initial connection of the monitor.

So from that I’m guessing that the MDCP29xx keeps the DDC channel open until it gets the EDID and does whatever handshake is needed, and then... maybe it closes the channel?

Another weirdness anecdote is a user who reported that the DDC commands sent to the Thunderbolt-connected monitor were also sent to the HDMI monitor at the same time, but (as usual) writing directly to the HDMI monitor service did not do anything.

Could you maybe point me to where I should start looking for the MDCP29XX assembly? I previously looked at `/System/Library/Extensions/AppleMobileDispH13G-DCP.kext` but I’m thinking that what you’re talking about might not be a kext.


Thank you for making Lunar. I installed it right away and it's great.


This looks super cool! I just got an M1 MBA, it's fast as lightning. Its nice knowing a project like this is percolating and that in a few years when this daily driver becomes an extra machine there will be fun linux alternatives to try.


If your MacBook is still working by then. It looks like there is a lawsuit against Apple because there are hardware issues with the screens of the M1 Air and Pro. Before that there was the USB-C PD bricking issue, though that may have been fixed by now. Overall, not the best track record for longevity.


Observing the achievements of this group of young talented devs only increases my imposter syndrome.

Awesome job, love the enthusiasm.


Agreed. I have an M1 Mini sitting around doing mostly nothing and I'd love to use it to contribute to the project, but I'm not a developer so I don't know where to start. I am definitely looking forward to the mentioned installer release so I can try it out myself, though. I can believe the hype about how blazing fast it is on the desktop even in software rendering, given how powerful a Linux VM on Parallels can be on it. It rivals my Ryzen 5 3600 machine and even surpasses it in some metrics, and that machine runs Linux bare metal.


Do you know about Marcan's YouTube channel? He livestreams (some of) the hacking, so you can actually see exactly how to start! The streams are super long, but they make for excellent ambient stimulation.

I wish he was doing tutorial-style videos, too. Pleasant voice, well-spoken, and incredibly knowledgeable. I bet he could do videos which don't leave you thinking "Yeah, but why does this work?".


I tried doing one of those! It's not exactly a tutorial, but it's an attempt at walking through everything that went into building the hypervisor that we use for hardware reverse engineering and testing.

https://youtu.be/igYgGH6PnOw


I don't follow your feed too closely, as it's usually not within my time budget, so I missed that.

Thank you very much for doing what you do! You are an inspiration to me and I admire the calm and structured way you work. Hope you keep enjoying what you do!


Reverse engineering and hacking can be a crazy time sink. It’s truly a game for those with lots of free time.


There are many things that are crazy time sinks - either because they are difficult, or because they are emotionally or intellectually engaging. Or because they are mandatory in the context of the person doing them.

And the concept of “free” time is also relative. For example, you could argue that child rearing is a game for those with lots of free time.


I used to have the same sentiment when the same dev hacked the original Wii and PS3.


"On typical SoCs, drivers have intimate knowledge of the underlying hardware, and they hard-code its precise layout: how many registers, how many pins, how things relate to each other, etc......

However, Apple is unique in putting emphasis in keeping hardware interfaces compatible across SoC generations ... the device tree then can be used to represent the dependency relationships between these power domains dynamically. ... This approach is unfamiliar to most upstream subsystem maintainers, but we hope they recognize the benefits over time. Who knows, perhaps this will inspire other manufacturers to do it this way!"

This is really weird commentary to me, as far as I know Device Tree has been the standard for embedded ARM drivers in the Linux kernel for years, and for several years on PowerPC before that. When I worked in embedded linux, often the "bringup" for components in new ASICs was to create the device tree definition. What am I missing here?


This is about compatible properties. On typical DTs and SoCs, you end up with entirely new compatibles for tons of stuff every SoC generation, and they'd never work with the old drivers. What we're doing is adding generic compatibles that old drivers can continue to bind to, and designing the rest of the binding to be generic enough to describe the parameters of the device.

So, a GPIO driver for a random SoC might hardcode that it has 42 pins. Ours uses a property instead. The vast majority of clock and power management drivers for other SoCs hard code the clock or power hierarchy and provide static sets of outputs. We put every single clock control in a separate DT node and describe their relationships. A typical cpufreq driver has intimate knowledge of the clocking controls for the whole SoC, and hardcodes the layout of the clock registers. My prototype for that just has a separate instance for each CPU cluster, and describes the performance states in the DT. And so on and so forth.
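As a toy illustration of the pins-as-a-property idea (the property name here is made up; the real binding may differ), the driver side just reads the parameter with the standard Linux OF helpers instead of hardcoding it:

    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int apple_gpio_probe(struct platform_device *pdev)
    {
        struct device_node *np = pdev->dev.of_node;
        u32 npins;
        int ret;

        /* Pin count comes from the DT instead of being hardcoded,
           so the same driver can bind on future SoCs. */
        ret = of_property_read_u32(np, "apple,npins", &npins);
        if (ret)
            return ret;

        /* ... register a gpiochip with npins lines ... */
        return 0;
    }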

Basically, on a typical SoC, either a hardware block is identical to last generation's, or it's incompatible. Apple's blocks instead follow patterns, so we're building parameterizable bindings that can handle any configuration of those blocks with a single driver.


This is just embarrassing.

> the M1’s CPUs are so powerful that a software-rendered desktop is actually faster on them than on e.g. Rockchip ARM64 machines with hardware acceleration.


It’s so good that I decided to sponsor the project a while back. I will probably never use it, but I really like these guys’ talent and dedication!


This fascinates me: all these negative comments that seem to stem from projections of fear of failure. Yeah, this project has a chance of failing, like everything else in life.

But to me it seems this project will indeed be ready in a few years, and I will certainly be running this on my M1 as soon as it is stable and useful. It will be interesting to see how this turns out; I just wanted to comment on the next-level narcissism going on. Why most of you choose to be pessimistic and make something that isn't your problem your problem is beyond me.

Keep up the great work, Team Asahi!


> all these negative comments that seem to stem from projections of fear of failure.

No. I criticise this project because it adds value to a device that we should all be boycotting.

We absolutely do not want to see the proliferation of custom, locked-down SoCs on the desktop platform, each one incompatible with the others and limiting our freedom to run whatever code we want on it. That is why this project is extremely short-sighted for the future of our computing freedom.

The M1 is a locked black box. It is designed to take away user control of both hardware and software. These are legitimate criticisms that many Apple fans try to deflect. They claim that the M1 isn't a locked-down machine by comparing it with the iOS platform as proof: after all, iPhones and iPads have locked bootloaders that prevent you from even running any other OS, while this is not so with the M1 computers.

That's just plain denial. Just look at what has been happening to the Mac Mini:

1. The first few Intel Mac Minis allowed you some level of customisation of both the hardware (change RAM or HDD / SSD) and software (install other full featured OS).

2. Then came the Mac Minis with soldered RAM and SSD. You could no longer customise the hardware. Software was still customisable and you could still install other OSes. (Recall that Apple even offered free drivers for another OS, i.e. Windows).

3. The current generation of M1 Mini doesn't allow you to customise either the hardware (everything is soldered) or the software. Technically you can install other OSes, but the reality is that currently only crippled versions of Linux and xBSD are available, and practically the only full-featured OS available for it is macOS.

These are clear indicators of how Apple has been working slowly to lock down the Mac platform like their iOS platforms. (Right now, projects like these give Apple and the M1 publicity without harming their end goals, and so they are tolerated. Want to bet that as soon as some alternate fully featured viable OS appears for the M1, the bootloader will be locked, and the next Apple SoC will cripple it again?)

The frog is still slowly boiling - https://en.wikipedia.org/wiki/Boiling_frog - to keep you in denial.

There's another reason why I call this particular project short-sighted. Remember what happened when Apple introduced the Mini with soldered RAM and SSDs? It wasn't popular and didn't sell. Apple was forced to backtrack, and the next Mini didn't have soldered RAM (though the SSD was still soldered). A similar thing could have been possible with the M1 too. Apple has bet their future on the success of their ARM processors. But if people boycotted the Apple Silicon desktop platform for not being as open as AMD / Intel, Apple would have been forced to compromise a bit (at least strategically, for the short term) and release more literature to make the platform seem a little more open. And we might have seen Linux and xBSD being supported on the platform by now.


> No. I criticise this project because it adds value to a device that we should all be boycotting.

That's a pretty loaded, holier-than-thou attitude.

Keep your politics out of my computing and open source, please.


> Keep your politics out of my computing and open source, please.

Open source started as a political movement.

Are you gonna ask me to keep the government out of your Medicare next? :)

(no, that's not an invitation to debate broader politics, it's just the first example that springs to mind)


> Open source started as a political movement.

GNU is not the sole foundation of open source.


Do you really think Berkeley, of all places, working to make BSD available to the world for free, including excising all that proprietary AT&T code, wasn't engaging in a political act?

That the open source and hacking cultures of the 70s, 80s, and 90s weren't founded on an intentional rejection of the increasing balkanization of software and technology after the 1974 determination that software was copyrightable?

I understand that, at an individual level, throwing some code up on GitHub with a permissive license might not seem like a political act. But the history of open source is inseparable from the politics of intellectual property, copyright, and all the things that follow, including issues like the right to repair.


I've been involved in open source for a long time (well before GitHub). My experience is that there are different groups with different motivations wrt open source.

As someone who has been in the "open source for the common good" camp, there seems to be a more extreme "open source as a religion" camp.

You're not off base, and you've given me something to think about.

FWIW Berkeley wasn't immediately what I was thinking about, but I am more on the BSD vs GNU side of things.


> Keep your politics out of my computing and open source, please.

Then ignore me and go do the politics that you want, rather than unnecessarily choosing to target someone you don't agree with or don't want to meaningfully engage with.


Hm.

I've got an M1 Mini lying around (Apple annoyed me enough in the past six months that I've gone away from them entirely, replacing an M1 Mini with an ODroid N2+, among other things). If it's daily-drivable, I suppose I should see about getting Linux installed on it. I've not gotten around to selling it yet...


If Apple has really annoyed you, put that Mini on the used market ASAP where it can potentially displace a new unit sale.

Not that I'm unsupportive of the Asahi project; it's just a fact that you're doing Apple a favor by keeping that M1 Mini around collecting dust, while every passing day it becomes increasingly irrelevant WRT offsetting new unit sales.


> replacing a M1 Mini with an ODroid N2+,

Hey, if you're running Linux, you're using my drivers either way ;-)

Now back to typing away at Panfrost on my M1 Linux machine to debug an issue on the Odroid N2 I have connected over Ethernet to the M1...


I put Manjaro on my 2012 Mac Pro and it was so frustrating to even get the boot loader running that I'll never run Linux on a Mac again. Once it was running, Apple's garbage BIOS continued to present issues too.

If I don't have to work with iOS anymore then I'll never even buy another Mac.


> It’s not perfect, as it can’t support a select few corner case drivers (that do things that are fundamentally impossible to support in this situation), but it works well and will support everything we need to make 4K kernels viable.

I'm curious what these corner cases are; could anyone share?


eGPUs for one, but there are bigger problems to solve there. Other than that, IIRC some v4l stuff did this, and a few others. It's a tiny subset of drivers.


> bigger problems to solve there

It's not the fucking "maximum BAR size is tiny" problem again is it? (hello Rockchip :D)


Nah, but I was informed the other day that GPU drivers apparently like to map BARs as normal cacheable memory and... I'm not sure the M1 supports that.

And then if you don't do that you run into problems with apps mapping GPU memory and doing unaligned accesses (plus it's a performance problem).


drm likes uncacheable write-combining as an optimization… but that's disabled on arm64 https://patchwork.kernel.org/project/linux-arm-kernel/patch/... because it can cause image corruption glitches (saw them myself, had to find and cherry-pick that commit into FreeBSD's drm back then)

Generally I don't think "normal" is necessary? In FreeBSD/aarch64 we interpret most ioremaps (all other than WC and WB) as "device": https://reviews.freebsd.org/D20789

and there doesn't seem to be a performance problem. Well, I haven't scientifically tested the performance but SuperTuxKart can do >100fps at 4K on an RX 480 :)


With Device memory you can't do unaligned accesses, so userspace apps that map GPU memory and expect that to work (as it does on x86 and on ARM if you can do a Normal mapping) will break.
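A tiny illustration of the failure mode (illustrative only; `buf` is assumed to point into a BAR the kernel mapped as Device memory):

    #include <stdint.h>

    /* On arm64, if buf points into a Device-nGnRE mapping, this
       unaligned 32-bit load raises an alignment fault (the app gets
       SIGBUS); with a Normal mapping the CPU handles it fine. This is
       exactly the access pattern apps use on x86 without thinking. */
    uint32_t read_unaligned(const uint8_t *buf)
    {
        return *(const uint32_t *)(buf + 1); /* not 4-byte aligned */
    }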


Looking at amdgpu_ttm, for '"On-card" video ram' it sets TTM_PL_FLAG_WC, so that would use ioremap_wc - that's not Device; we have it as WRITE_COMBINING, which is actually WRITE_THROUGH. So yeah, Normal mappings might be used somewhere.

Where does that limitation on the M1 come from anyway?


The M1 bus fabric is very picky about access modes. We had to add a whole new ioremap_np() to the kernel, because it requires nGnRnE mappings for on-die peripherals, while Linux used nGnRE for everything on ARM64 until now. For PCIe BARs it wants nGnRE instead, and I'll be very surprised if it'll take a normal mapping...
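A minimal sketch of how the distinction looks in driver code (the on-die address and size are made up; ioremap_np() is the non-posted variant):

    #include <linux/io.h>
    #include <linux/pci.h>

    static int map_example(struct pci_dev *pdev)
    {
        /* On-die peripheral: must be mapped nGnRnE (non-posted). */
        void __iomem *soc_regs = ioremap_np(0x23b700000ULL, 0x4000);

        /* PCIe BAR: the fabric wants the usual nGnRE mapping here. */
        void __iomem *bar_regs = ioremap(pci_resource_start(pdev, 0),
                                         pci_resource_len(pdev, 0));

        return (soc_regs && bar_regs) ? 0 : -ENOMEM;
    }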


I was initially critical of this endeavor before it started. However, the issues I had in mind have been addressed, and this effort is truly impressive. I really appreciate that marcan is committed to going above and beyond to do this the right way.


I cannot wait for this to come to fruition. Thanks for all the hard work, I'll be installing it as soon as sound and GPU acceleration hit.

Amazing work :-)


Amazing progress. However I don’t imagine GPU driver progress will be fast, or ready in the near future.


Reminder that GPU userspace is passing 90% or so of the GLES2 tests. It's just missing kernelspace, which is arguably the easier part.


That’s absolutely exciting. Thanks for all your efforts and the team’s! I hope to contribute to the project somehow, but you guys are on another, "lower" level :) Cheers


Mind blowing progress! So impressed with the Asahi team.


Meta, but does the name have any relation to, or maybe pay homage to, the other Asahi?

https://en.wikipedia.org/wiki/Asahi_Pentax


There are lots of things called Asahi (a beer, a newspaper, an ISP, ...), but our project is specifically named after the Asahi apple, which is the Japanese name for the McIntosh Apple.

https://asahilinux.org/about/


That's ridiculously clever.


Their about page only says:

> Asahi means “rising sun” in Japanese, and it is also the name of an apple cultivar. 旭りんご (asahi ringo) is what we know as the McIntosh Apple, the apple variety that gave the Mac its name.


There are tons of "other Asahis" - among other things, it's a beer (Super Dry), a baseball team, and a ramen noodle moniker.


I wonder if it would be legal for a veteran of Apple who has worked on porting macOS to the M1 to take that knowledge of the hardware and assist with a BSD or Linux port? On the one hand it's just porting to a piece of hardware, but on the other hand she may be using proprietary knowledge of a closed interface.

Anyone have experience with doing this for Apple hardware? Does the company come after you if you reveal some of this knowledge to the wider public?

Not asking for myself, BTW, as I have had no affiliation with Apple.


This is a truly amazing project. I'm contributing financially (as I don't have the time to contribute code) to help Alyssa and the rest of the team. If you can, you should too.


Anyone know if the *BSDs are following this work in parallel or plan on doing the same? It would be nice to have some choice - yeah, I know macOS is Darwin plus the BSD userland, but I like the idea of throwing FreeBSD on an old M1 when I get sick of Linux (even if I eventually go back to a distro…)


So far NetBSD and OpenBSD are working on the M1 port.

https://wiki.netbsd.org/ports/evbarm/apple/ for NetBSD install instructions.

I don't see a focus on the FreeBSD side (yet?) however.


Wow, they've made a ton of progress -- I wish this work got more press. Not to take away from the Asahi Linux folks' work, but I had no idea that NetBSD, for example, was this far along.


Mark Kettenis has been working on the OpenBSD and U-Boot ports with us, and we'll be relying on U-Boot for our end user installs :)

We're also dual licensing all our bespoke drivers, so the BSDs can take code from there (particularly important for the GPU driver, as there is already a lot of shared code in that subsystem).

I'm actually thinking I'm going to rewrite the WiFi support patch for Linux from scratch, based on the OpenBSD version, just because they did a great job distilling what matters out of the original messy PoC patch that Corellium dumped earlier this year.


amazing -- godspeed!


So good to read Marcan’s comments on this thread, so much context. Thank you!


really naive question:

why not run Parallels, since it now uses the native macOS Hypervisor.framework, and then run Linux / Haiku / FreeBSD / whatever?

I thought of that, as it somewhat outsources the device driver digressions.


Because virtual hardware will never work as well as real hardware. You're layering one OS on top of another; that's not free. No GPU acceleration, etc.

(Plus, we actually support the M1's vGIC which Hypervisor.framework does not yet, so VMs running on Linux should perform better than VMs running on macOS! Yes, we beat Apple at supporting some parts of the M1 already.)


Because then you still don't own the hardware - Apple does.

Support can be pulled out from underneath you at any point and you're limited to the exposed hardware interfaces.

Throw on top that (at least for me) I have zero desire to maintain a macOS machine.


The hypervisor option to run macOS is a good way out for testing and actual use. Great job!


[flagged]


You broke the site guidelines badly with this flamewar. We ban accounts that behave that way, so please don't do anything like that again. More explanation here: https://news.ycombinator.com/item?id=28822026.

We detached this subthread from https://news.ycombinator.com/item?id=28763252.


Our project's goal is not to produce an "unsupported hack"; it's to make these machines work at least as well as, if not better than, any other machine with "actual Linux support" ;-)


That's cool. Some Macs rank near the top in terms of Linux support and openness. E.g., the MacBook 2,1 is one of the very few laptops supported by Libreboot. Others are totally unusable.

I like Apple hardware (and ThinkPads), but I prefer the openness of Linux (and BSD).

Current MacBook Airs are pretty competitive in terms of price, and the fanless CPU is very appealing. Is it much of a gamble to purchase one now expecting to run mainline Linux in a year or so?

Aside from the new architecture, some components Apple has been using are quite unfriendly. Broadcom wireless cards tend to perform really poorly with open source drivers.


The Broadcom fullmac cards should be well supported in Linux. I've seen people complain about poor WiFi range and the like on other Macs, but that is almost certainly caused by wrong/mismatched firmware and NVRAM distributed with linux-firmware for those machines. We're going to be using Apple's blobs exactly the same way macOS does, which are tuned to each specific machine and module variant, so radio performance should be identical to macOS.

I wouldn't buy an M1 right now... just because the next generation is likely around the corner :-). But yes, I can't promise all the polish or that every single thing will be upstream, but things should be solidly in the "daily driver" category a year from now. I'd be surprised if there was any non-optional hardware (i.e. excluding accelerators - no idea if anyone cares about the Neural Engine yet...) left without usable support by then. GPU is a big question mark that should become clearer in the next month or two as I poke at the kernel side, but honestly I expect solid OpenGL a year from now, at the very least.


For the user-mode part of the neural engine, https://github.com/geohot/tinygrad/tree/master/accel/ane helps quite a bit today. There is also no generic neural engine support infrastructure at all in user-mode on Linux. :-(

For the kernel mode part, ANE is behind an ASC.


It's really just a matter of someone caring. I don't see finding myself with enough time to work on that any time soon, but if someone has a good use case for it and motivation to get it done I'm sure it'll happen (and I'll gladly help).

Some things are dodgier - e.g. though supporting the AMX CPU extensions is quite viable in a kernel fork, I'm not sure if that kind of thing would fly upstream. Same with security features like SPRR - not likely to happen. But these aren't really things the average end user has to care about; they're bonus features, not anything core.


Just curious: if someone was inclined to build a set of good (correct style, well-documented, etc) patches to support those extensions, why would the kernel refuse them?

(Put another way: I'm wondering whether you think the difficulty is technical or political.)


It's a bit of both. It basically boils down to it being an invasive change to core kernel code that doesn't have demonstrable benefit to users, and is specific to one platform. If we can point at a specific application and say "look, this speeds up 300% with AMX" then that might help convince people, but there would definitely be quite some political discussion, not in the least because what Apple did is a violation of the architecture.

I'm hoping we can at least push through a prctl to turn on TSO mode for x86 emulation. I think that one will be simple enough and have enough benefits to convince people.
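A purely hypothetical sketch of what that could look like from an emulator's point of view (PR_SET_TSO is an invented name and value; no such prctl option exists today):

    #include <sys/prctl.h>

    /* Invented constant, for illustration only. */
    #ifndef PR_SET_TSO
    #define PR_SET_TSO 0x4d315453
    #endif

    /* An x86 emulator would opt the calling thread into the M1's TSO
       memory-ordering mode before running translated code. */
    static int enable_tso(void)
    {
        return prctl(PR_SET_TSO, 1, 0, 0, 0);
    }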


Getting proper AMX support on _x86_ Linux merged is nasty because of state size issues. I know nothing about ARM64 extended state, but some of the same issues may exist.

(The x86 xstate design is horrible. I doubt that ARM64 is anywhere near as bad.)


All I can say is I wish you luck!


[flagged]


Apple spent countless development hours writing an entire BootPolicy framework to allow other OSes (and self-built macOS kernels) to run on these machines. Not only that, their secure boot implementation supports per-OS security modes, which means you can dual-boot Linux with a fully secure macOS, including running iOS apps and Netflix in 4K. Can't do that with LineageOS!

Linux on M1 Macs is "supported" exactly the same as Linux on any random PC without manufacturer support. Sure, it's more work to get running, because it's a new undocumented platform... but we've signed up to do the work. If Linux on these machines is an "unsupported hack" then so is Linux on the vast majority of desktop/laptop computers people use.


The only reason this project is getting so much PR is because of Apple. Apple's PR machine is eager to show that developers are fully behind its new Apple Silicon platform, and thus works behind the scenes to give this project the publicity it needs. (Everybody knows what happens to platforms that don't see developer support.)

But more than that, it is also to spread the FUD that the M1 supports alternative OSes, whereas the reality is that it will never support a fully featured OS. Apple's business and design goal with the M1 was to create a locked desktop that can only run macOS and only allows other OSes to run via virtualisation on macOS, so that the user's data can always be mined. And they have successfully done that.

The developers behind this project are naive to believe that Apple will let them sabotage its efforts to lock down the macOS platform like the iOS platform that earns them billions of dollars. The locked-down M1 desktops / laptops are just one of the last steps towards this goal. As I said elsewhere, just look at what has been happening to the Mac Mini:

1. The first few Intel Mac Minis allowed you some level of customisation of both the hardware (change RAM or HDD / SSD) and software (install other full featured OS).

2. Then came the Mac Minis with soldered RAM and SSD. You could no longer customise the hardware. Software was still customisable and you could still install other OSes. (Recall that Apple even offered free drivers for another OS, i.e. Windows).

3. The current generation of M1 Mini doesn't allow you to customise either the hardware (everything is soldered) or the software. Technically you can install other OSes, but the reality is that currently only crippled versions of Linux and xBSD are available, and practically the only full-featured OS available for it is macOS.

These are clear indicators of how Apple has been working slowly to lock down the Mac platform like their iOS platforms.

The frog is still slowly boiling - https://en.wikipedia.org/wiki/Boiling_frog - to keep you in denial.

Now the only thing that remains is for the bootloader to be locked, and that will be the last nail in the coffin of the M1 desktop platform. And it will happen in the near future, once the M1 platform reaches a critical adoption point.

(I can understand why the project developers are hostile here to critics like me - the good publicity has been bringing them money, and that's quite rare for an open source project.)


[flagged]


And yet here I am, with Linux on an Intel laptop that can't play video without tearing. Our M1 DCP driver already does that properly.

Support from chip manufacturers doesn't mean you're actually going to end up with a better user experience.


[flagged]


Apple are absolutely not going to do that. That would be illegal. Sony lost a class action over this, and Apple have a lot more to lose in the PR realm than them. Please don't spread FUD like this. Apple have never, not once, locked down a Mac, post facto or otherwise. It always has been a platform open to third party OSes.


They did it on iOS countless times to prevent rooting the device. Not sure how that's supposed to be different as the tech converges, but I never owned a Mac so that's all I know of Apple.


iOS devices never advertised support for booting whatever you want; Macs always did. Very, very different product lines.


So the 15" M1 laptop will let you custom boot if you manage to decipher its undocumented internals, but the 10" M1 tablet will actively shut down attempts on the next update. Great.


Yep. This seems to be the point the "FUD! FUD!" criers are ignoring.

Apple has a long, continuous history of locking down hardware: never having a bootloader-unlockable iDevice, unibody construction, glue, soldered-on components that are traditionally user-replaceable, soldering in displays and backs when people try to repair batteries, serializing components so you can't even swap official parts.

Apple has never gone in the reverse direction to make components more easily user-serviceable. It's always been in the more restrictive direction, more proprietary with every iteration. Now it's the CPU. Bootloader-locking an M1 MacBook is the next logical step.


If that is the next logical step why would they go out of their way to design a whole mechanism for running custom kernels?

They could've just released a locked down Macbook in the first place if that was the goal.


I'm sympathetic to marcan's points and I don't think Apple is likely to do anything like this, personally, but this argument isn't very convincing. All this means is that they can't release a firmware update blocking use of a third party OS on existing hardware (contra techrat).

OK, great, but we're talking about a manufacturer that makes products designed to go in the rubbish bin 2-4 years after manufacture (witness the half dozen comments by users in this thread saying they have already put their M1 computers in a drawer somewhere!). Apple are constantly iterating, and the goal of a project like Asahi is to stay ahead of the game and continue running on new hardware.

The proper and correct point that techrat should have made is that the second something like Asahi is a threat to Apple for any reason, Apple can release new hardware that's locked down similar to iOS. You'll even have people on this site defending the changes too, in the name of security or something.


If people have "already put their M1 computers in a drawer somewhere", that's hardly because Apple "designed [the product] to go in the rubbish bin 2-4 years after manufacture". Apple has a fairly long arm of support for its products. The MacBook Pros made in 2014 only this year stopped receiving new OS updates (and are still receiving security updates). The iPhone 6s got iOS 15 this year, and that's a 6-year-old device. If people are chucking their devices after 2 years, that's on them.


The bigger point that I've been trying to make (regardless of whether or not you think Apple would try to lock down an M1 Macbook) is that I don't trust Apple because of their history.

They have a LONG history of making decisions that ultimately are hostile to the user. Apple also has a history of explicitly locking out Linux users.

https://www.phoronix.com/scan.php?page=news_item&px=Apple-T2...

> It looks like even if disabling the Secure Boot functionality, the T2 chip is reportedly still blocking operating systems aside from macOS and Windows 10.

This is further than Microsoft has ever gone with hardware.

Apple has been dipping their toes into the telemetry and ad tracking waters for a while now. If they see Asahi cutting into those margins because people are buying Apple hardware and installing an OS that prevents them from collecting user data, I can believe they'd see them as a threat.

Why? Again, history. Apple has shut down numerous developers who made "competing" features for iOS via apps in the App Store... even though those developers had the app first before Apple decided to add the features into iOS.

But yeah, something something security something...


Lots of companies have a history of making decisions that are hostile to users. If you avoided all of them, I dare say you'd have a hard time using the internet. This isn't good, but it is the state of the world.

Just buy the hardware you want to use and don't assume it will get better over time. Even if Apple suddenly decided they wanted to lock out other OS's (they won't), they can't remotely update the bootloader if you're running Linux.


To my knowledge all jailbreaks on iOS are literally exploiting security vulnerabilities that root your phone. Lord knows what these things installed...


The early ones were often open source, but between competition in the scene and making it harder for Apple to patch them, a lot of the newer ones are not, which makes it a lot harder to trust them.


[flagged]


Of course Apple advertise running your own kernel code on these machines. Here's the official documentation on custom kernel extensions (which are functionally equivalent to allowing other OSes):

https://developer.apple.com/documentation/apple-silicon/inst...

And here's the kmutil manpage that explicitly describes the option they added to support fully custom kernels, which is what we use:

https://keith.github.io/xcode-man-pages/kmutil.8.html

And here's a blog by the head of XNU development at Apple explaining how to build and run your own XNU kernel for these machines:

https://kernelshaman.blogspot.com/2021/02/building-xnu-for-m...

This isn't an accident. This is a documented, explicitly advertised feature that Apple put a significant amount of engineering time into developing.


[flagged]


Are you familiar with Boot Camp, an explicitly advertised feature to boot into Windows (https://support.apple.com/boot-camp)? They've advertised it quite a bit. They maintain Windows drivers for their hardware.

Re the M1 specifically, Apple suggests they'd be totally on board should Microsoft put in the effort:

https://www.macrumors.com/2021/09/14/arm-windows-m1-macs-not...

> Apple's software engineering chief Craig Federighi last year said that Windows coming to M1 Macs is "up to Microsoft." The M1 chip contains the core technologies needed to run Windows, but Microsoft has to decide whether to license its Arm version of Windows to Mac users.

I imagine that once Windows on Arm is supported, they'd be totally on board.

Perhaps if you weren't so hostile in your replies, you'd be down-voted less. You come across as angry.


Boot Camp does not allow someone to fully boot any OS on the device. It is still a walled garden that limits hardware access, preventing the user from getting full performance out of the device.

> Perhaps if you weren't so hostile in your replies, you'd be down-voted less. You come across as angry.

You get what you read into it.

The majority of my posts are statements of fact, yet they get downvoted because there's no placating the Apple horde with the truth.


No, you're getting downvoted because you're making combative statements and picking fights over something that doesn't matter. Why do you care so much about this? Even if marcan and the rest of the Asahi folks are wasting their time, it's their time to waste, and in the meantime they're clearly deriving some amount of happiness from doing this work.

Even if you're right that Apple deliberately locks things down so the 4th or 5th or whatever generation of this hardware can't run anything but macOS, this project still has value. Maybe not to you, but it will to lots of people. And that's really all that matters. Your negativity here is a waste of time and is frankly boring and off topic.


> It is still a walled garden that limits hardware access, preventing the user from getting full performance out of the device.

That's not been my experience, but I guess in such circumstances your mileage may vary.


I understand you see it that way, but...

> ... there's no placating the Apple horde with the truth.

Pretty much summarizes what I was suggesting as hostile.


> It is still a walled garden that limits hardware access, preventing the user from getting full performance out of the device.

What limits are you referring to, and by what performance metrics is Boot Camp unable to make full use of the hardware capabilities?


And Microsoft could ask the OEMs to push a UEFI update that enforces secure boot with only the Windows keys. I don’t think they will though.


> And Microsoft could ask the OEMs to push a UEFI update that enforces secure boot with only the Windows keys. I don’t think they will though.

Well, let's be fair: They could try, but again, that'd require those OEMs to cooperate. And you'd still end up with folks like System76 and Framework that'd go their own way because nothing can stop a vendor from creating an open x86-based design. They just wouldn't be able to ship Windows pre-installed.

Apple, by contrast, could just do it by fiat because they own the entire ecosystem, and no one could do a damn thing about it.

That's the difference between an open, interoperable ecosystem and a closed one.

I genuinely don't understand the debate, here. Are you seriously trying to claim that the Apple hardware platform and the x86 ecosystem are equally open/closed?


You're right, we should just give up and devote our time to other things.


What I'm saying is that I feel it's a better use of open source advocates' man-hours to write code for hardware from companies that support OSS, not for one that is viciously hostile to its users.


I'd much rather reverse engineer proprietary hardware; I get to have fun doing that and I get to play the upstreaming game better than any of those companies that "support open source" that you love so much (have you seen the mess AMD made? They have hundreds of megabytes of autogenerated headers in the Linux tree now!), while working with a team of motivated and experienced developers and working to make everyone's life as pleasant as possible, minimizing churn, red tape, and inefficiency.

Seriously, there's no way I'd be half as happy working at Intel on Linux drivers as I am working on the M1. And it's not like they support external developers either - most of the chip documentation for Intel/AMD stuff is not public. You can't actually build working graphics drivers from their public docs, not even remotely close. Heck, I had to reverse engineer AMD's GPU microcode to get the variant in the PS4 to work. Totally undocumented stuff.


I think other operating systems could also benefit from this reverse engineering work as it shows how to create the drivers needed.

If Apple had provided a bunch of binary blobs for Linux, that would have been great - for Linux only.

It will be ironic if the Mac mini becomes the most open hardware platform for alternative systems.


[flagged]


I didn't even own any modern Apple hardware until last year. My rationale for supporting these machines is that they're awesome machines, better than anything with Intel/AMD chips, and people should be able to run Linux on them.

Of course, if that doesn't convince you, we can always fall on the fact that you aren't the person writing the code, and in the world of free software you don't get to tell others what they ought to spend their time on.


[flagged]


Well, they do run kooler and quieter than literally every x86 machine I've ever owned, yes ;)


> And yet here I am, with Linux on an Intel laptop that can't play video without tearing.

Odd, because here I am with an Intel laptop running Linux that manages that just fine.

> Support from chip manufacturers doesn't mean you're actually going to end up with a better user experience.

Honestly, this is just ridiculous. Go ask the Nouveau folks how their reverse engineering project is going.

I love the drive you guys have, and I wish you the best of luck. But Apple could pull the rug out from under you any time they want in a future hardware rev (either deliberately or accidentally), and all you can do is keep on reverse engineering in the hopes of keeping up.

Edit: Updated pronouns since apparently the OP is one of the project maintainers.


> Odd, because here I am with an Intel laptop running Linux that manages that just fine.

Then it's not an Ivy Bridge running multiple external displays on KDE.

> Honestly, this is just ridiculous. Go ask the Nouveau folks how their reverse engineering project is going.

Nvidia created an actively hostile firmware situation. We already have the firmware story worked out. Plus having to support a bazillion GPU generations with incompatible interfaces burns people out. We're starting with one, and Apple has a much better track record of incremental change and avoiding hardware churn. This is unique, and we've literally heard from Apple employees that this is an explicit goal. It shows all throughout the SoC design.

> and all you can do is keep on reverse engineering in the hopes of keeping up.

And we will keep on reverse engineering, and we'll keep up :-)


> Then it's not an Ivy Bridge running multiple external displays on KDE.

Tiger Lake (whose microarchitecture I just learned is Willow Cove) running GNOME via Wayland (damn HiDPI...), single external display via a USB-C dock plus the onboard panel.

More fully, previously I was running a Lenovo X1C5 with Intel integrated graphics. Again, no issues there. In that case I ran Xorg with TearFree enabled.
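(For reference, that was just the stock option for the intel DDX driver in a drop-in config; the file path and Identifier below are arbitrary choices of mine:)

    # e.g. /etc/X11/xorg.conf.d/20-intel.conf
    Section "Device"
        Identifier "Intel Graphics"
        Driver     "intel"
        Option     "TearFree" "true"
    EndSection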

I'm currently running a Framework sporting an i7-1165G7 with the aforementioned setup.

It was genuinely a little shocking, having spent my earlier days manually hacking modelines in X, to see this hardware come up flawlessly on first boot with Ubuntu 21.04... amazing what a reasonably open hardware ecosystem will enable. jab jab ;)

> We're starting with one, and Apple has a much better track record of incremental change and avoiding hardware churn. This is unique, and we've literally heard from Apple employees that this is an explicit goal. It shows all throughout the SoC design.

Absolutely a fair point! I concede that the reverse engineering story with Apple may very well be less painful than it is with other vendors. Honestly, I hope for your sanity that it is! :D


I think it's a little bizarre that you seem to think that just because you have no issues with video tearing on a few different Linux-based setups you've used, no one else possibly could on their (likely pretty different) setup.


I think you're being incredibly reductive with your claims about screen tearing. I've run a 1080p display off my horribly outdated i5 520m, and I never noticed any tearing on it. Same goes for my Skylake notebook, desktop Haswell iGPU and even my old Raspberry Pi from 2014. You may have just gotten extremely unlucky with your hardware.


I feel like the "everything works great on my machine, so if it doesn't on yours then you must be really unlucky" argument should have been retired years ago.


On the contrary, I'd argue it's become increasingly relevant in the age of ARM laptops and Windows 11 hardware requirements.


*cough* You may as well say “you” instead of “these guys”, btw. Perhaps a careless mistake, but you were speaking to the lead developer of the Asahi effort, and I would hazard a guess that his views on the Linux kernel and vendor support are a tad more nuanced than “ridiculous”.


LOL, just because they're running the project doesn't mean their opinions are sacrosanct or unquestionable.

The statement they made is ridiculous, and I stand by my decision to label it as such.

If you want to explain to me how I'm wrong instead of simply appealing to authority, you're obviously free to do so.


You made an absolutist statement that support from a manufacturer will always result in a better user experience than community support only.

That sounds entirely ridiculous and unsupportable to me.

And please don't trot out the "appealing to authority" bit. That absolutely doesn't apply here; when it comes to an opinion about some hardware/software project, I would 100% take the word of the lead developer on that project over a rando on HN who has for some inexplicable reason decided to pick a fight about something they appear to not have any domain knowledge about.


By your definition, a lot of the hardware that works on Linux is an "unsupported hack". I don't see why this even matters; if someone -- even if they are not the original manufacturer -- wants to support something, then by definition it is not "unsupported". Whether or not it's a "hack" or not isn't really relevant, and it really sounds like you don't understand the output of a good reverse engineering effort. If it works, then it works.

Hell, most Lenovo, Dell, IBM, etc. laptops don't have official Linux support from their manufacturers. They're also "unsupported". I guess that means everyone should just buy System76, Framework, or Purism, or one of the scant few models that a small minority of the big manufacturers actually support Linux on, right?

> Compare this to where people make a LineageOS release for their mobile device. Still unsupported by the manufacturer, but GPL requirements often mean there's still device code to work with, if not at least the device ROM being released so that its images can be used to boot another OS.

This is a pretty bad example considering that many phones with LineageOS support still require closed-source binary blobs since the manufacturers don't provide source for everything. If anything, LineageOS is more of a "hack" than the M1 effort, which aims to produce open-source drivers for everything.

> Apple is still one of the only hardware companies who have gone out of their way to prevent Linux from being run on their hardware.

I think you're ascribing to malice that which is driven by indifference. The things Apple does that incidentally make it hard to run Linux on their hardware are probably driven more by lack of caring than any deliberate effort. As marcan pointed out in reply to one of my other comments here, Apple does things because they need to in order to get the features, performance, battery life, cost, and security they want. If the "Linux-friendly" way of doing things doesn't jibe with that, they'll go their own way.

> As such, I cannot see why open source advocates would continue to try to get any distro running on Apple hardware

Perhaps you just lack imagination? Your motivations (or lack thereof) are not others'. "Open source advocates" do what they do for a wide variety of reasons. And the people who are working on this might not even be "advocates"; maybe they just like Linux and don't like macOS, but love Apple's hardware.


> By your definition, a lot of the hardware that works on Linux is an "unsupported hack". I don't see why this even matters; if someone -- even if they are not the original manufacturer -- wants to support something, then by definition it is not "unsupported". Whether or not it's a "hack" or not isn't really relevant, and it really sounds like you don't understand the output of a good reverse engineering effort. If it works, then it works.

See also: the entire PC industry exists because IBM's BIOS was reverse engineered!


Don't let the downvotes here deter you from posting these thoughts again on HN - it's just an attempt to silence you here and create an echo chamber in which BigTech does no wrong and merely works as the "market" dictates. Your criticisms of Apple are well deserved and factual.


I understand the subtlety involved in your argument that Apple may and probably will make closed decisions that strictly benefit their own ecosystem even if at cost of making life difficult for those running Linux on this hardware.

However, “Unsupported hack, at best” is pretty dismissive of the engineering effort going on with this. People running Linux on “Windows branded PCs” of the not so distant past could have easily said the same thing, because, say, Dell and Microsoft would be commercially motivated to only keep Windows running on their hardware. However, the ecosystem around Linux made things happen such that it is almost a no-brainer that most systems you buy _will_ run, say, Ubuntu with more than reasonable success.

With enough motivation, and the efforts of engineers such as those listed in this article, I bet we might even see better use made of Apple hardware than macOS manages. Apple might optimise for their stuff, but Linux can bring the power built for other needs (HPC, real-time systems, etc.) to the table and can leap over macOS given the right abstractions. The future is exciting for the hopeful.


> People running Linux on “Windows branded PCs” of the not so distant past could have easily said the same thing, because, say, Dell and Microsoft would be commercially motivated to only keep Windows running on their hardware.

I don't think this needs qualification. It's still true today.


We recently purchased a cheap (<$500) Dell Inspiron, and it advertises support for Ubuntu in the specs. Not even one of those fancy XPS "developer edition" laptops or whatever, just an ordinary Inspiron.

It's fine to worry about it (and UEFI was a real threat until Microsoft mandated support for other-OS booting), but support for Linux in the PC world has never been as fragile as support for it on Macs. You're entirely at the mercy of Apple's whims; I support the Asahi project, but that's just the truth.


How did that Inspiron get its Linux support? Principally, it was from hard-working hackers reverse engineering the components and getting drivers working through painstaking dev and test work. Dell then built a machine with hardware they knew worked thanks to all that effort and slapped a “Works with Ubuntu” sticker on it. That’s the sad reality. The idea that the PC platform and its peripherals are somehow an open platform paradise is a delusion.


Exactly. It may actually be in Apple's interest to not actively hinder the hacker community from doing wonders with Apple hardware. After all, a successful ecosystem there will lead to more purchases from Apple!


In fact the Asahi team has said Apple has done a lot of engineering work making sure M1 can and does support third party OSes. They're not directly contributing to or assisting those projects, but the characterisation that they are actively obstructing them seems false, according to those projects themselves.


Linux on basically every computer platform at least started out as, if not still is, "an unsupported hack". Even on x86 it's pretty questionable to call it supported on the vast majority of PCs out there.


I disagree.

Intel and AMD both contribute heavily to the Linux kernel. This means that on the vast majority of Intel or AMD based systems, you can boot from USB and have a working system.

Apple doesn't even release documentation on their SOCs.

Apple doesn't use standard, open boot methods to allow other OSes to simply boot from USB. No BIOS, no standard UEFI.

There is a huge chasm between what you can do on the vast majority of PCs out there... and what you have to do to get anything else running on Apple hardware. If you can get anything running at all (e.g., locked bootloaders on iDevices).


Intel and AMD don't make the computer (most of the time). They make the CPU. ARM is as open a CPU platform as x86 is at any rate, so if you're gonna pick your line only based on the CPU there's literally no difference there. But you obviously know that's not all there is.

Most computers anyone can buy are full of components that only have drivers or support in the Linux kernel because people reverse engineered them. Most are also full of components that Linux only works with because of firmware blobs extracted from other systems.

EFI is more or less an open boot system (which... you know Intel Macs used that, right?), though with plenty of proprietary extensions and alterations in most booting computers. But the BIOS boot that predated it? Yeah, you know that was a proprietary system, right? Linux booted off it because people reverse engineered and cloned it until it was forced to be an 'open' thing. It didn't just magically happen one day, and IBM even sued about it back in the day.


For Linux and BSD, everything has been reverse engineered and hacked from the beginning. Linux has been ported to hundreds of unsupported computing devices. If everyone had your mindset, Linux and BSD would not even exist; only proprietary operating systems would. You just don't get it. Linux on M1 is a worthy challenge for the most talented and creative software engineers. If we all just waited passively for manufacturers to provide official support, then literally none of the free / open source operating systems would have ever existed.


As a bit of meta-commentary, the fact that anyone is even arguing with the points you're making is deeply disappointing to me.

It seems like there's an entire generation of folks out there who can no longer tell the difference between an open technology ecosystem and a closed one...


Hi, I grew up with a TRS-80 and a Tandy 1000. I'm not sure what generation you think I'm from, but I suspect you're wrong.

What disappoints me is a "generation of folks" who can't tell that openness is a spectrum and that every bit of openness they enjoy was hard-fought for by pushing the boundaries of the platforms that existed beyond their original intentions, and now treat an effort to do the same on new platforms as some kind of windmill tilting.


> What disappoints me is a "generation of folks" who can't tell that openness is a spectrum

And Apple is very far on the closed end of that spectrum relative to their competitors, and always has been.

There is simply no arguing this point.

> every bit of openness they enjoy was hard-fought for by pushing the boundaries of the platforms that existed beyond their original intentions, and now treat an effort to do the same on new platforms as some kind of windmill tilting.

Let's be real: Asahi isn't gonna change Apple's corporate culture. It's a super cool technical project and I absolutely applaud these sorts of efforts simply because they're cool.

But you're not the Jedi fighting the empire here. You're not gonna blow open the M1 and suddenly change Apple hearts and minds.

The PC industry became more open for one reason and one reason only: Money.

The IBM monopoly was destroyed because clones were cheap, not because of ragtag freedom fighters.

Linux won out in the embedded space because it doesn't cost anything, not because folks suddenly got stars in their eyes over the GPL.

Apple will only change their practices when economics force them to. But it's very clear that they view a walled garden of hardware and software as key to their financial success, and everything they're doing is intended to build those walls just a little bit higher.

So until you can change that financial equation, nothing in the Apple ecosystem will change.

But, honestly, keep plugging away! Have fun! I personally love repurposing hardware to make it do cool things for which it was never intended. When I see projects like this, I think of Everest: We do it because it's there.

But let's not pretend that you're somehow going to change Apple just because you get the Linux kernel booting on the M1.


Fundamentally, if people with your attitude, and that of the person who spawned this thread, had won the day, we wouldn't have any of the things this thread is allegedly about loving so much. That's my main point here.

Also, I am not involved in or affiliated with or even a likely user of the Asahi project, to be clear. I just find this attitude incredibly frustrating, and I'm definitely bristling at being accused of being a young'un or some shit for believing people can actually do surprising things. I am not the dewy-eyed idealist you seem to think I am; I believe what I'm saying here because I've seen it happen over and over and over again.


> I just find this attitude incredibly frustrating

What attitude? The attitude that Apple's hardware is closed and Apple should be criticized for building a walled garden? That we should, where possible, invest in open hardware and technology and put our money where our mouths are, so as to create the kinds of economic incentives that ensure that technology remains open in the future?

Do you really disagree with that?

> I am not the dewy-eyed idealist you seem to think I am, I believe what I'm saying here because I've seen it happen over and over and over again.

What "it" are we talking about here?

What do you think the end game is going to be?

I genuinely have no idea what your argument is, other than to tell me how I'm wrong without providing any specifics regarding how I'm wrong.

Please, enlighten me, I'm happy to listen!


The attitude that pushing on technology that seems hard isn't worth it.

We can do both. We should do both.

At any rate, I'd suggest you go read marcan_'s replies and the OP submitted to HN because the picture you paint and the picture they paint of this platform are not quite the same to begin with. I think the people doing the work deserve a little more credit to describe the platform they're working on.


> We can do both. We should do both.

So, other than for the intellectual curiosity (which is absolutely a great reason!), I have a simple question: why?

Edit:

By the way, thinking about it, I have my own answer to this question: So that this hardware can continue to remain useful long after Apple has ended support for it.

Of course, I continue to believe that it's better, now, to simply buy hardware that doesn't need this kind of reverse engineering to ensure longevity, and I steer my hardware purchases accordingly.

But the hardware exists, and people are buying it, so this project is a way to keep it alive after Apple eventually abandons it (which, granted, could be a long time; while I have a lot of problems with Apple, they are very good about continuing to support old gear).


> Of course, I continue to believe that it's better, now, to simply buy hardware that doesn't need this kind of reverse engineering to ensure longevity, and I steer my hardware purchases accordingly.

For Linux and BSD, ALL OF IT was hacked and reverse engineered from the beginning. With your stupid mentality we would never have any free operating systems; we would only have proprietary operating systems. This is how it's done. New hardware is released, someone hacks it, then it's supported. If we are forced to wait like children for manufacturers' permission or assistance (which is not actually required), then Linux, BSD, and other free / open source operating systems would never exist.

Linux has been ported to every modern computing device on the planet and this M1 port is par for the course. Only an actual idiot would recommend Linux devs should ignore the M1.


[flagged]


Not a "his" thanks.

And no. That is not even remotely my argument.


[flagged]


Just gonna jump in here and say: not cool. Yours is, to say the least, a disproportionate response and pretty unnecessary.


Support can be effective even if it doesn’t come from a first party. As long as someone is signed up to fix your problems, it’s not an unsupported hack.


It’s honestly pretty disrespectful to the people who have put a bunch of absolutely amazing work into this project to dismiss it as an “unsupported hack”.

It’s true that we are never going to see first-party support for Linux from Apple. That kind of sucks, and it would be much better for everyone if they had a more open and documented approach to this new platform. But it does at least have explicit support for booting third-party operating systems, and the attitude of the team behind the project is clearly not “hack together a prototype and we are done”.

I guess the point is that it’s not black and white, and there’s a large spectrum between “first-party support” and “unsupported hack”.


Perhaps you should expand your definition of what 'unsupported hack' means instead of taking it as an insult.

I support the work of anyone who gets Linux running on any hardware; I condemn the way Apple unnecessarily restricts people from doing what they want with the hardware they own.

It is an unsupported hack because Apple doesn't provide the means (such as documentation or code) for people to run what they want. So anything you can do will always end up being a method that Apple may decide to close up with later software updates, and everything has to be reverse engineered from scratch.

Contrast this with AMD and Intel code in the Linux Kernel contributed by those companies themselves and being able to just boot from USB using a user accessible BIOS/UEFI loader.

It's Apple's restrictions that make this necessary to be a hack.


Makes sense, even as someone who wishes I had an M1 mini lying about (all non-trivial purchases need home CFO approval) I would be hesitant to install Linux on it as I'm not a kernel hacker.


Well, this kinda pisses on the work whose progress report is posted here...


No, it doesn't. It pisses on Apple for being an incredibly and unnecessarily restrictive company.


That's absolutely no fun at all!

I've dropped Apple pretty hard (to the point where I've gone back to a flip phone - my objections to Android are significant as well), and I've accepted that if I want to use computers, it's probably best to use weird configurations that are often broken, because it discourages me from spending too much time on them.

I mean, I feel like even using x86 Linux boxes is lazy. :/ This box, currently, is an ODroid N2+ that's working fine. I've got a Raspberry Pi 4 over on the other side of my office (the "solar shed" posted yesterday, for some reason), and I've made that into a nice little desktop too. Still working on Spotify support, and I was actually quite surprised to find out that I can watch something on YouTube today...

My laptop is a PineBook Pro running a custom kernel (I really should push the sleep/resume patches I wrote for the sound card upstream... one of these days...).

Unsupported hacks are fun! They're challenging, and also reduce my dependence on computers, because odds are good that one or more simply don't work at any given moment.

And it's not just computers. The closest thing I have to a "daily driver" (other than the family car, which my wife and kids have priority for) is a 2005 Ural sidecar motorcycle, an evolution of a late-1930s BMW design, quite literally the most vile-handling thing you'll ever encounter on the road. It works, I work on it, and I get places eventually, just no longer at the speed limit.

Anyway, the very insanity of bare metal Linux on the M1 actually appeals an awful lot to me.


[flagged]


Yikes, you broke the site guidelines incredibly badly in this thread, which unfortunately I didn't see at the time. Please don't post like this to HN again! That means not fulminating, not name-calling, not posting insinuations, and most of all not flaming. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

We detached this subthread from https://news.ycombinator.com/item?id=28764935.


I don't think this really makes sense. I agree that you could characterize things like soldering on RAM and storage, or gluing the battery in, as user-hostile (because that means the user can't repair or replace or upgrade), but changing the interface used by a component that's already integrated just won't matter to any user (at least one that is staying in the Apple ecosystem). This sort of thing only matters to people who want to run Linux or Windows (or something else) on Apple hardware.

> The fanboys have arrived. Factual discussion will not be tolerated. Downvotes shall commence without comment!

Please don't do this. Complaining about downvotes is a waste of time, and ultimately detracts further from what you're trying to say.


I don’t get the glued battery complaint though. I’ve replaced the battery on 4 iPhones of 3 different models and the glue is just a sticky blob holding the battery in place. It was trivial to prize the battery off and press a new one into place.


When glued batteries first started showing up, they were showing up in devices with security screws for which bits were not available, and where there was an expectation of being able to swap batteries out at will (MacBooks vs. regular laptops of the day).

The glue was so strong that you could not avoid damaging the battery when attempting to remove it.

You also could not safely remove the battery if it had started to bloat/become swollen.

This was, of course, about 10 years before pull tabs and solvent-sensitive adhesives came into use.


I have a lot of issues with the things Apple chooses to do, but I think "user hostility" is an incorrect interpretation.

They have a very specific customer set in mind and they optimize for that really well. It's unfortunate that they don't cater to every market, but they definitely aren't "hostile" to their primary market.


Forcing people who have broken their new iPhone's screen to trade it in for a refurb because no third party repair shop can swap out a serialized display isn't user hostile?


You are saying that the "why" of Apple doing these things is purely "user hostility", which is highly implausible.

A company does not make decisions based on a pure "will to be evil".

They probably think that the reputation hit from not allowing repair is less damaging than the reputation hit from users dissatisfied with repairs. Other design choices can be for cost cutting in design or production.

So sure, it is not nice for the user, but the reason is not a desire to spite users. They likely simply think the additional costs, tangible and intangible, of being repair-friendly are not worth it.


[flagged]


I really do think you're ascribing emotion and malice to decisions where it doesn't make sense to do so.

> Shutting down iOS emulation projects seems spiteful.

No, it's because Apple "pays" for OS development by selling hardware. Emulation projects cut into that. It's not spiteful; it's a logical business decision, regardless of whether or not we like it.


> Adding code and microchips to enforce serial pairing of batteries, screens and camera modules to the device they were originally installed in... is a cost cutting measure?

This absolutely can be the case. What is the engineering cost of supporting third party components?

What happens to Apple's image if a crucial calibration is not performed after swapping a component? They might lose that customer forever because “Face ID doesn’t work”.

What is the cost if your PR image is damaged as a result of third party battery fires? “iPhones explode” would be the headline and third-party batteries would be the footnote.



