Asahi Linux for Apple M1 progress report, August 2021 (asahilinux.org)
596 points by fanf2 on Aug 14, 2021 | 183 comments



This kind of brilliant work makes me feel very very tiny as an engineer. I struggle after work to find time to learn basic Rust. I’m totally in awe of folks who can do this kind of stuff. It’s just so impressive, and maybe one day I can benefit from all this awesome work.


I feel you. Sometimes I browse projects on GitHub and I'm astonished by what people can do and I can't. For example, OpenCore[0], a famous bootloader in the Hackintosh scene. How do people even start to code something like this? Awesome work, awesome people.

[0]: https://github.com/acidanthera/OpenCorePkg


Preface: I can't do this (specifically). But I have done many types of software development across two decades. My journey began with LAMP-style web work, took me to C++ and the desktop (apps, GUI toolkits, browser engines), then to embedded - from smart TVs to smart speakers, to network protocols for drone systems and beefy car infotainment ECUs and lower-level microcontroller/borderline electronics work.

My conclusion: You can get into just about anything, and for the most part the difficulty level is fairly uniform. But there's simply a vast sea of domain-specific spec knowledge out there. It doesn't mean that it's too hard for you or you can't learn it. Anything that is done at scale will fundamentally be approachable by most developers. Just be prepared you'll need to put in the time to acquire a familiarity with the local standards/specs and ways of doing things. Knowledge turns seemingly hard things into easy things, and if it's been done before chances are it's documented somewhere.

The truly hard stuff is innovating/doing things that haven't been done before. Novel development happens rarely.


Yeah, this is my conclusion too. I moved from the Oracle platform to embedded, scientific stuff like reading out custom electronics for IR cameras. And now I'm into iOS apps. It's more a question of what part of the stack feels interesting and doable to you, at a certain period in your professional life.


> acquire a familiarity with the local standards/specs

And with the bugs. Especially with bootloaders because you're in a preboot environment.


I've done a BitTorrent client, a 3D rendering engine on the PS3, and a printer driver for fun, and while clearly not at the level of a bootloader, they earn me a few cookie points in interviews for originality.

What I learned starting these daunting tasks (especially the PS3, which had closed specs) is that it's all still done by humans, following familiar patterns. Most of the time you bang your head against unclear docs or go through successive layers of abstraction (to do a PS3 rendering engine, you'd better know your basic OpenGL on PC, which is only possible if you know matrix and vector geometry), but EVERYTHING is reachable at a workable level (expert level, I feel, comes with money incentives, teammates, and repetition). I spent 2 years on Japanese, 3 hours a day, and could take in the meaning of haikus at the end.

I think the only true talent you must have is insane stubbornness. To push through, one doc at a time. Usually after the first insane challenge of your life (for me: learning English at a near-native level, reading literature or discussing politics), you understand it's all pretty much time investment.


Did you use the PSL1GHT SDK to develop the rendering engine on the PS3?


It can be daunting to look at a popular project in its current state. Thankfully, with open source projects you can go back in time and see how the code evolved.

Here's [1] the first few commits to OpenCore. Much more approachable and inspiring.

[1] https://github.com/acidanthera/OpenCorePkg/commits/master?af...


Re: finding time. I think this is true for all of life; as we get older, our risk and reward profiles change. As we are encumbered by more responsibility, family, mortgage, etc., we stop taking the same risks and start playing life more "safely" or "securely". Of course it doesn't have to be that way, and I personally advocate for a debt-free lifestyle for this reason. Too many times have we heard the story of the midlife crisis: the false deity of security and comfort that robs the well-paid executive of the majority of his or her life. It's a sad story. So my advice to people is to work less for others and work more on your own projects and passions; life is simply too short to give your years to someone else, even if it pays well.


I generally agree with you, but a totally debt-free life is suckery IMO when you're paid 90% by other people's debt. It's better to take on debt and profit from it (so invest, don't consume, at least in the current economic climate).


Same. People who can easily straddle the lowest-level bits of a machine, also write performant, clean C code, and then on top of all that reverse engineer things: amazing.


Building useful and nonporous layers of abstraction, and being able to quickly shift between them, seems to be a key skill. When each layer leverages another, you can rapidly build something impressive.

This in turn implies that, apart from technical skill, charting a good course from layer to layer can make a big difference, versus meandering without quite knowing where you're going.


> The silver lining of using this complicated DCP interface is that DCP does in fact run a huge amount of code – the DCP firmware is over 7MB! It implements complicated algorithms like DisplayPort link training, real-time memory bandwidth calculations, handling the DisplayPort to HDMI converter in the Mac mini, enumerating valid video modes and performing mode switching, and more. It would be a huge ordeal to reverse engineer all of this and implement it as a raw hardware driver, so it is quite likely that, in the end, we will save time by using this higher-level interface and leaving all the dirty work to the DCP. In particular, it is likely that newer Apple Silicon chips will use a shared DCP codebase and the same interface for any given macOS version, which gives us support for newer chips “for free”, with only comparatively minor DCP firmware ABI updates.

How interesting: by moving some of the driver to firmware, Apple has effectively made it easier for OSs like Linux to have good support for the hardware, because the OS<->hardware protocol operates at a higher level of abstraction and may work across multiple chips.

The trade-off is that the protocol is not stable and so Linux has to decide which versions it will support, and make sure the OS driver always matches the version of the hardware firmware:

> As a further twist, the DCP interface is not stable and changes every macOS version! [...] Don’t fret: this doesn’t mean you won’t be able to upgrade macOS. This firmware is per-OS, not per-system, and thus Linux can use a different firmware bundle from any sibling macOS installations.


> How interesting: by moving some of the driver to firmware, Apple has effectively made it easier for OSs like Linux to have good support for the hardware, because the OS<->hardware protocol operates at a higher level of abstraction and may work across multiple chips.

If I am remembering right, this was the original idea behind UEFI drivers too — you could in theory write drivers that the firmware would load and they would present a simpler/class-compatible interface to whatever operating system was loaded after that. I think Apple did pretty much this on Intel Macs for a number of things.


Even legacy BIOS effectively served a similar purpose before the functionality it provided ended up being too limited. The name BIOS, Basic Input/Output System, in fact emphasizes this role of serving I/O function calls more than initializing/booting the system. Of course, these days a modern OS just ignores the BIOS and drives the hardware itself once booted (at least partially because its design predates protected mode).


> If I am remembering right, this was the original idea behind UEFI drivers too — you could in theory write drivers that the firmware would load and they would present a simpler/class-compatible interface to whatever operating system was loaded after that. I think Apple did pretty much this on Intel Macs for a number of things.

Not really; pretty much every UEFI driver shuts itself down when you call ExitBootServices, which is necessary before you can do virtual addressing (as required by every OS).

(My memory is fuzzy, but I think the one exception is UNDI/the network driver; you never want to use UNDI to drive your network card except out of desperation.)


If I recall correctly, high-end hard drives in the 90s used a protocol called SCSI that was mostly implemented in hardware.


Somewhat related: SSDs run an entire microcontroller on board to translate I/O requests from the OS into proper use of the flash hardware.


If I recall correctly that’s not the case on Apple platforms. They do the translation themselves across raw NAND.


No, there's firmware on the Apple NVMe drives doing FTL, etc. and exposing a traditional block device.


It's on the main SoC on M1. (ANS2 block, with the FW being loaded by iBoot1)

NVMe is exposed as an MMIO device directly, not as a PCIe one.


My knowledge was about the first iPhone since I swear I sat next to the guy implementing it (I was an intern at the time). Maybe Apple was just writing the FW themselves and I’ve misremembered? Apple devices definitely have better I/O performance than any other mobile device from what I recall.


I remember hearing about how the M1 Macs were very quick to get displays running when they were plugged in and to change the display configuration. I guess the DCP is why.


No, moving it to a coprocessor doesn't really have an impact here. That's just down to their software stack being good.


The iOS software stack? Because the way I heard it, the M1 Macs are noticeably faster than other Macs at these tasks.


Their display controller driver, regardless of what CPU it runs on.


> Apple has effectively made it easier for OSs like Linux to have good support for the hardware

Wow, that's a tall stretch of a conclusion, bordering on ignorance, when the reality is that the M1 only supports macOS! Apple fans shouldn't be blind to the fact that developers are trying to reverse engineer the M1 to run Linux on it (and they have a long way to go before they succeed). Apple has not even published any literature that can help system developers create and run alternative OSes on it. And this is by design - the M1 (and all Arm-processor-based Macs) are designed to be a closed box like their other iDevices.

(Honestly, except for an academic interest, I believe open source developers are wasting their time trying to support such a useless "closed" piece of hardware, and instead should be boycotting it - or we'll end up losing control over our desktop computers as other manufacturers move towards this model too.)

> On most mobile SoCs, the display controller is just a piece of hardware with simple registers. While this is true on the M1 as well, Apple decided to give it a twist. They added a coprocessor to the display engine (called DCP), which runs its own firmware (initialized by the system bootloader), and moved most of the display driver into the coprocessor. But instead of doing it at a natural driver boundary… they took half of their macOS C++ driver, moved it into the DCP, and created a remote procedure call interface so that each half can call methods on C++ objects on the other CPU! Talk about overcomplicating things… Reverse engineering this is a huge challenge ....


Oh boy if we didn't even try to reverse engineer things to bring closed systems to the open world ... Where would we even be? I don't think we'd have gotten past UNIX.


Absolutely. I’m currently reading “Open” by Rod Canion, the story about how Compaq reverse engineered the IBM PC’s BIOS. Great read so far.

Think about where we might be if that never happened... no clones? No open hardware standards? I mean, I guess if Compaq didn't do it, somebody else would have (maybe?), but yes, thank god for the brilliant engineers of the world who have the focus, diligence, and skill to be able to do these sorts of things.


When studying history, though, we shouldn't lose track of the fact that it needs to be analysed based on the period in which it happened. From fighting closed and patented *nix systems, Linux has emerged as the current torch-bearer of open systems in our era. It has made such strides that we are even thinking in terms of open hardware today (and it is a reality too, even if not yet successful).

Today's computing world is no longer like the past. Open source is an accepted reality in both the corporate world and among the general population.

You are right that in the past, reverse engineering helped clone hardware and make it more open. It was necessary then. Perhaps I am ignorant here, and I would like insights on this, but I don't understand the practicality of trying to reverse-engineer Apple's custom GPU when it is not freely available and cannot be used in any custom hardware configuration. Especially when alternative GPUs with open source drivers already exist (and need work). And Apple can thwart such efforts very easily at any time with a firmware update.

My main concern is that Big Tech is again closing ranks to fight against the open movement - today, we are at the cusp of losing our computing freedom again with the aggressive push towards cloud computing and locked-down systems like the Apple M1. Apple is at the forefront here, as it has built its business on closed-box systems and on resisting right-to-repair legislation. Fighting and avoiding open systems is in their DNA.

That is why I believe porting Linux to the M1 is not only unnecessary but also a waste of resources and talent. Supporting existing and new open systems (if necessary by reverse engineering them) is more practical in the long run.


It's ancient history but Compaq only reverse-engineered the IBM PC BIOS for their own use.

Some other clone companies directly copied the IBM BIOS and got sued, so it was Phoenix that provided the first 3rd party PC-compatible provably clean-room BIOS implementation.


Ah, didn't know that. Thanks for the extra tidbit!


And Science is just reverse engineering reality.


That's a fair point. I am thinking from the other end: if these efforts were instead redirected at supporting open hardware systems, how much more beneficial would that be for the open movement?

By reverse engineering closed systems, you are adding value to them but not changing their nature - you are not really making them "open" (the next iteration will just "close" them again). Meanwhile, Apple just uses these reverse engineering efforts cleverly as PR for the M1 through social media marketing. In effect, you are just helping a trillion-dollar company design better closed systems while helping them make more money (by attracting naive Linux users to spend money on their hardware).

There are open systems that are struggling because they don't have talented developers working on their projects. I wish these developers would instead work on those projects - Sailfish OS, PineOS, Librem/Purism, etc. For example, there are a lot of Intel / Arm tablets that only run Windows or Android. It would be nice to run one of these Linux-based mobile OS distros on them, and create a genuine alternative to the Android / iOS duopoly on tablets, right?


I don't think (and experimentally, I know I'm correct) it's as zero-sum as you think it is. These efforts you think would be better spent on open hardware are actually adding value to open systems by making transitions from closed systems easier.

M1 is a beast and needs no further PR to attract buyers. You can ignore its advantages and yell out "our system sucks but at least it's open!", or you can work on bringing the advantages of the product of a trillion-dollar company's R&D to the open world.


I can understand an academic interest in reverse engineering such devices. But yes, I really don't see any advantages in owning a closed and limiting system like the Apple M1 when better and more open alternatives already exist. The limitations of such a hyped-up closed system negate any short-term temporary benefits it offers.

And regardless of what you think, the M1 does need constant marketing as its limitations become obvious. Apple is betting their future on such closed systems and desperately needs them to be a success. That is why Apple marketing is desperate to spread PR fluff about it to Apple fans, who ignorantly parrot it on social media in threads like these:

- Apple has effectively made it easier for OSs like Linux to have good support for the hardware

- Ok, but they've done the opposite - helping Linux projects get Linux on M1 Macs... (https://news.ycombinator.com/item?id=27943196)

... and similar statements like these that try to convey that Apple is actually helping developers port Linux to their platform.

Obviously, any posts (like mine) that point out the obvious lie in these statements are downvoted / vote-brigaded because it is in Apple's interest to do so.


> Obviously, any posts (like mine) that point out the obvious lie in these statements are downvoted / vote-brigaded because it is in Apple's interest to do so.

No, quit the Dunning-Kruger, you're being downvoted because most people here disagree with you. Myself included: I don't particularly like Apple, I'm not paid by Apple, I'm a life-long proponent of FOSS and open systems, and I downvoted you because your posts are uninformed and either you haven't thought this through, or you don't have enough context to know what you're talking about.

Insinuations of shilling/astroturfing are against the guidelines, FYI.


Insinuations against particular individuals are against the guidelines; there is no blanket ban on mentioning that astroturfing / shilling is an accepted reality of today's social media as part of social media marketing. And I recognize that sometimes what is said is out of ignorance on the part of the poster (which is what I have highlighted).

That said, I have no hesitation in pointing out that calling my posts "uninformed", without providing any actual context, is just a lazy and aggressive way of avoiding a discussion.

Or you can engage with me and point out where my reasoning is flawed. Here's my stated position and reasoning for what I have said so far, and you can chime in with your thoughts:

First, I started by pointing out that the belief of some Apple fans that Apple has made it easy, and/or is actually helping developers, to port Linux to the M1 is pure ignorance - Apple is doing no such thing, but it is in their interest to let the public believe that. What is uninformed about this, in your opinion?

Second, I voiced the opinion that "open source developers are wasting their time trying to support such a useless "closed" piece of hardware, and instead should be boycotting it". The reasoning behind it is obvious - supporting a closed black box system, like the M1, and the proliferation of such devices will mean we will lose more of our computing freedom / right-to-repair on the computers we own.

Is it "uninformed" to point out that the M1 is a closed system that limits us, compared to Intel / AMD alternatives available? Or that the efforts of the developers will be a waste when Apple can, at any time, thwart or derail such reverse engineering attempts on the M1 by updating its firmware?

Third, porting Linux to the M1 will make it attractive and help its sales. But it is a short-sighted and regressive move that will not help the open-source movement, and will backfire on us as it helps make such closed-box systems popular. The better approach for the open source community is to publicly boycott and disparage the M1, so that Apple is forced to open it up. Porting Linux to it then would not only be easier, but would also make more sense. What is "uninformed" about this way of thinking?

Fourth, while I do appreciate the strides made by Apple in developing their own custom ARM processors, I do genuinely believe that the technical gains (more computing power) are competitively temporary - both AMD and Intel systems have at some point always shown such leads. And it is not far-fetched to say that this technical lead is even negated by its closed-box nature. Do you think I am wrong in concluding that AMD / Intel will catch up with, or even overtake, Apple ARM processors?


AMD and Intel are far from open hardware. ARM is not open hardware. Sure, they're better documented and therefore somewhat more open in some sense. You can't just go have a fab build you an AMD64 core or twelve and throw the chip on a custom board and sell your own laptop.

You can't just get a small batch of royalty-free ARM cores and make your own phone. Did you know there's still work going on to reverse engineer the Raspberry Pi systems because Broadcom doesn't release full docs?


> What is uninformed about this, in your opinion?

Nothing, that's not the part I (personally) had a problem with.

> Is it "uninformed" to point out that the M1 is a closed system that limits us, compared to Intel / AMD alternatives available? Or that the efforts of the developers will be a waste when Apple can, at any time, thwart or derail such reverse engineering attempts on the M1 by updating its firmware?

You just described reverse engineering. That's what it always looks like. It's not in Apple's best interest to thwart those efforts, and if they do, it becomes a game of whack-a-mole.

What I believe is uninformed about your posts, or at least your arguments, is the lack of understanding of both how we got to this point, and how progress can happen.

We are not at a point in the world where open hardware is winning. Not just CPUs: open hardware in general. This won't change by ignoring the progress in closed systems. Reverse engineering for compatibility makes the progress more public and more accessible to open systems. There are fundamental issues that need to be addressed before we can start relying on open-by-default progress in hardware.

To answer your question about what is "uninformed" about it: maybe a better word would be "pointlessly idealistic". You won't address the issues above with a boycott, and it doesn't matter, because a boycott won't happen anyway (if you can't convince a die-hard supporter of open source like me, you won't convince the general public). So it's all a bit of a waste of time; I like progress to not just be theoretical. Reverse engineering for compatibility is a practical way of progressing.

> Do you think I am wrong in concluding that AMD / Intel will catch up with, or even overtake, Apple ARM processors?

I think it's a good bet, but I don't think any of us can say so with certainty and definitely not "when" it will happen. Intel has been spiraling lately, who knows if they will catch up at all?


> By reverse engineering closed systems, you are adding value to them but not changing their nature (the next iteration will just "close" them again).

That argument would totally hold for iOS devices: you might find a bug and create a jailbreak on a particular iOS version, but that "hole" closes by the time the next major version comes out. Apple really doesn't want to allow other operating systems (or even root access).

But reverse-engineering macOS hardware is very different. A solid Linux port like Asahi fundamentally changes the nature of Apple M1 computers: they can be great Linux machines (and given m1n1's versatility, maybe even Windows machines!), not just macOS machines. Sure, Apple might "close" things again in the future, but that future is a long way off. They have given no indication of hostility to Asahi, so it's most likely that Apple computers will run Linux without any trouble until they release their next brand-new architecture. If history is any indication, that transition is likely more than 15 years in the future (68k was 1984-1994, PowerPC was 1994-2006, x86 was 2006-2021).

> There are open systems that are struggling because they don't have talented developers working on their projects. I wish these developers would instead work on those projects - Sailfish OS, PineOS, Librem/Purism, etc. For example, there are a lot of Intel / Arm tablets that only run Windows or Android. It would be nice to run one of these Linux-based mobile OS distros on them, and create a genuine alternative to the Android / iOS duopoly on tablets, right?

I agree, that would be great in many ways. But honestly, all other things being equal, given a choice between running Linux on a Qualcomm ARM64 chip, an Apple M1, or an Intel/AMD x64 chip, which one is the most desirable? Which will likely be the most desirable after the M1X or M2 comes out?

At the most fundamental level, it's desirability that drives demand, which in turn decides which systems will be the most well-resourced.


Show me an open source hardware system as performant and power efficient as M1 and I'll throw some support (time, money, press, something) at it and forego using Asahi Linux. If, however, I can run an open software system on the M1 and also still run an open hardware system right beside it on the same desk while the hardware in the open hardware system plays catch-up, I'd rather do that. If everyone forwent open software because it didn't run on open hardware, we wouldn't have Linux, BSD, Darwin, FreeDOS, ReactOS, Haiku, OpenIndiana, X, Wayland, MySQL, Postgresql, KDE, Gnome, XFCE, i3, LibreOffice, Apache, Nginx, HAProxy, Kafka, Pulsar, gcc, clang, rust, Julia, Perl, Python, Ruby, Node, Chromium, Konqueror, Safari, Firefox, bash ... any open software at all.

Your ideal taken to its conclusion is that we wait for RISC-V or OpenMIPS or something to catch up to ARM before we write or use any software. That's simply untenable. What software would we even have to port to such things if we couldn't develop open software on closed hardware? What use would RISC-V be outside of embedded projects without QNX, Linux, BSD, RiscOS, or something else developed originally on closed hardware to run on it?


Many Windows tablets can actually run Linux just fine these days. It's a bit of a pain to install and the specs can be quite underwhelming, but the basic hardware support is there at last.


I have an M1 Macbook Pro and it beats any other laptop I've seen/had when it comes to performance and battery life. I'd love to also have Linux run on it and apparently enough people do as well.

Yes, the hardware is peculiar in many ways and Apple didn't bother to document it (it would have cost them some resources for no clear benefit to their business model), but they also didn't bother to completely lock it down like they did with the iPhone and iPad. This is what allows other OSes to be ported to it, if people are able/willing to reverse engineer it and develop support for it.

This reverse engineering work was done so far by just a handful of people who would also like to have Linux run on it, partly sponsored by a few hundreds that share this desire, or just doing it voluntarily as a challenge/interesting project.

The m1n1 bootloader/hypervisor built by the team allows running macOS and logging all the hardware access.

Once the reverse engineering is done for a given component, all the other OSes can implement support for it.

The fact that a lot of the functionality is implemented in firmware blobs makes it easier to support at the OS level, essentially each OS driver only needs to give high level instructions to the firmware, without having to implement all the minutiae done in the firmware.


Since there are clearly a lot of people here who have been following Asahi development in detail, I would like to hear your takes on this quote from their FAQ:

> No, Apple still controls the boot process and, for example, the firmware that runs on the Secure Enclave Processor. However, no modern device is “fully open” - no usable computer exists today with completely open software and hardware (as much as some companies want to market themselves as such).

I think they're aiming at Purism here, but might have forgotten about the MNT Reform, even though it is currently specced at the lower end of "usable".


There's also the Raptor Computing Talos line, which is interesting (all open firmware and buses), but even more expensive and less practical, unfortunately. https://www.raptorcs.com/TALOSII/


Open firmware, but who says the silicon isn't backdoored? And why is open firmware more important for your freedom than open silicon? What about on-chip ROMs? :-)

In the end, you're always drawing arbitrary lines in the sand. If you really want to go all the way to actually reach a tangible goal, I'm only aware of one project that actually offers trust guarantees backed by hard engineering arguments: Precursor

https://www.crowdsupply.com/sutajio-kosagi/precursor

(TL;DR on the trick is that, by using an FPGA, you make it nearly impossible to backdoor, because it would take a huge amount of compute power to engineer a backdoor into the FPGA silicon that can analyze arbitrary randomized FPGA designs to backdoor them).

For more practical computing, I find M1s have a very solid secureboot design. We can retain that security even after putting m1n1/Asahi on there; doing so requires physical presence assertion and admin credentials and locks you into a single m1n1 version without repeating the process, so we can use that to root our own chain of trust. Similarly we can continue to use Apple's SEP (in the same way one would use a TPM, HSM, or a YubiKey for that matter; sure, it's a proprietary black box, but it also can't hurt you except where you let it) just like macOS does, for things like encryption and storing SSH secrets. And all the coprocessors have IOMMUs, and the whole thing is very logically put together (unlike the giant mess that are Intel CPUs; e.g. we know they have a JTAG backdoor in production chips after authenticating with keys only they have, never mind things like ME) and designed such that the main OS does not have to trust all the auxiliary firmwares.

I'd love to have a more open system competing in the same space, but I have no problem trusting Apple's machines much more than I do the x86 PCs I use these days anyway, and it's hard to find things that compete in the same performance/power space that are more open than either of those, unfortunately.


This thread isn't about OpenPOWER, but as someone typing this reply on a Raptor Talos II with Fedora 34 (and with a Blackbird in the home theatre), it actually is a more open system than the M1 or x86_64 that is in the same performance ballpark. If you want performance per watt, well, the M1 will spank everything right now, but I don't feel like this machine is at all underpowered (which, sadly, is still the case for RISC-V).

I'm no Stallmanesque ideologue; I'm attracted to the M1 because of its low power usage and I've used Macs for years and for my next mobile system I may end up picking up an Air. But as a desktop workstation POWER9 systems like this really are more open and credible alternatives, and I felt it was well worth the price.


> Open firmware, but who says the silicon isn't backdoored? And why is open firmware more important for your freedom than open silicon? What about on-chip ROMs? :-)

Safety is not binary.


Having to trust a particular entity is binary. I have to trust Apple not to give the government a physical-access backdoor to my machine, just like you'd have to trust your silicon vendor not to do the same (e.g. the Intel JTAG password stuff I mentioned).

If you have an open system that does not do robust secureboot (i.e. secureboot that can withstand a physical access attack), then it doesn't matter that it's open; a physical attacker can still break into your machine and backdoor it. If it does do robust secureboot, it is pretty much guaranteed to rely on a proprietary ROM, which will likely have backdoors or exploitable bugs (see e.g. Nvidia Tegra machines; yes you can burn in your own secureboot keys on systems that come unlocked like the Jetsons, but their ROM is a mess and all their security is now moot as you can exploit it). Apple have a lot of experience with this, having long been one of the prime targets, and have put a lot of effort into hardening their boot chain, so I trust them to do a better job than pretty much any other silicon/firmware vendor (and I say this as someone who has looked at the security of many vendors).

People like to conflate openness with trustability, but they do not go together. Most open systems I see being developed explicitly choose not to enable or support this kind of security, which makes them less trustable systems than an Apple machine: I don't care that the whole boot process is open and I can inspect it, if anyone else who steals my laptop or messes with it while I'm away can steal my data. You need to rely on maintaining physical control. With an Apple machine, you don't: only Apple can break its security, nobody else, so your data is safer as a result. The machine is designed so I do not have to trust Apple's components to be reasonably confident that my machine is safe once it boots into my own OS, thanks to the IOMMUs and general design of the ARM architecture (e.g. a transparent malicious hypervisor isn't really possible). And none of this has anything to do with remote access backdoors (there is nothing like ME on these machines), so the entire story is about what you can do with physical access.

That is not to say there wouldn't be advantages to fully open systems; for example, it would allow for a simpler and more user-friendly installation process than what we have to do with Apple's framework, and it would make the DCP thing much less of a mess. There are definitely practical, real advantages to having a more open system. It's just that all too often I see the trust/security argument being thrown around and... that just isn't true in most cases, when you look at the details.


The MNT Reform still needs proprietary blobs.


Really? For which components?


DDR4 memory training (Synopsys PHY) and HDMI. It is basically the same problem the Librem 5 has: https://www.devever.net/~hl/imx8 . See also: https://mntre.com/reform-irc-logs/2019-04-17.log.html


That RPC interface looks very nice for reverse engineering, but what kind of horror is this going to be in the kernel implementing KMS with JSON^2 and pretend-C++-functions?


Oh yes. We're aware, we've had and continue to have debates about how to tackle the monster... :-)

The C++ stuff isn't that bad, in the end you can just pretend it's messages with fixed layout structures. But yes, the serialized dict-of-arrays-of-dicts type stuff can be approached in a few ways, none of which are particularly beautiful.


> But yes, the serialized dict-of-arrays-of-dicts type stuff can be approached in a few ways, none of which are particularly beautiful.

For what it's worth, this sounds somewhat similar to protobuf (which also supports dicts, arrays, etc).

After spending many years trying to figure out the smallest, fastest, and simplest way to implement protobuf in https://github.com/protocolbuffers/upb, the single best improvement I found was to make the entire memory management model arena-based.

When you parse an incoming request, all the little objects (messages, arrays, maps, etc) are allocated on the arena. When you are done with it, you just free the arena.

In my experience this results in code that is both simpler and faster than trying to memory-manage all of the sub-objects independently. It also integrates nicely with existing memory-management schemes: I've been able to adapt the arena model to both Ruby (tracing GC) and PHP (refcounting) runtimes. You just have to make sure that the arena itself outlives any reference to any of the objects within.

(Protobuf C++ also supports arenas, that's actually where the idea of using arenas for protobuf was first introduced. But Protobuf C++ also has to stay compatible with its pre-existing API based on unique ownership, so the whole API and implementation are complicated by the fact that it needs to support both memory management styles).
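
As a minimal sketch of the arena idea (assumptions: all names here are invented, this is not upb's actual API, and a real arena grows by chaining new blocks instead of failing):

    // Sketch only: every parse-time object comes from one bump allocator,
    // and freeing the arena frees all of them at once.
    #include <cstddef>
    #include <cstdlib>
    #include <new>

    class Arena {
        char  *block_;
        size_t used_ = 0, cap_;
    public:
        explicit Arena(size_t cap)
            : block_(static_cast<char *>(std::malloc(cap))), cap_(cap) {}
        ~Arena() { std::free(block_); }   // one free() releases everything

        void *alloc(size_t n) {
            n = (n + 7) & ~size_t(7);               // keep 8-byte alignment
            if (used_ + n > cap_) return nullptr;   // real arenas chain a new block
            void *p = block_ + used_;
            used_ += n;
            return p;
        }

        // Destructors are never run, so only trivially destructible node
        // types (messages, arrays, maps as plain structs) belong here.
        template <class T> T *make() {
            void *p = alloc(sizeof(T));
            return p ? new (p) T{} : nullptr;
        }
    };

The win is exactly what's described above: no per-object ownership tracking during parsing, and teardown is a single deallocation.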


For JSON I settled on no transformation to a different in-memory representation, just inline on-demand parsing of a JSON string buffer. It works nicely and you don't need to manage memory much at all.

https://megous.com/git/megatools/tree/lib/sjson.h
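
As a toy illustration of the no-intermediate-tree idea (this is not sjson's API; a real on-demand parser also handles strings, escapes, and nesting):

    // Toy: walk the buffer once and pull out a single integer field,
    // never building any tree or allocating any nodes.
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // Finds "key":<number> in a flat JSON object. Assumes the key text
    // does not also occur inside a string value.
    long get_int_field(const char *json, const char *key) {
        char pat[64];
        std::snprintf(pat, sizeof(pat), "\"%s\"", key);
        const char *p = std::strstr(json, pat);
        if (!p) return -1;
        p = std::strchr(p + std::strlen(pat), ':');
        return p ? std::strtol(p + 1, nullptr, 10) : -1;
    }

    int main() {
        const char *doc = "{\"width\":1920,\"height\":1080}";
        std::printf("%ld\n", get_int_field(doc, "height"));  // prints 1080
    }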


Any idea why they did that as opposed to something like RPMsg?


I’m guessing that what Apple did here was:

1. Probably years ago: refactor the relevant components (display driver, etc.) to run as Mach components with Mach IPC, in the form of:

  - One Xcode project, with
  - an “IPC client” build target
  - an “IPC server” build target
  - a “shared datatypes library” with well-defined IPC serialization semantics, static-linked into both the IPC client and server
  - a single test suite that tests everything
2. Possibly when designing the M1, or possibly years ago for iOS: split the IPC server build target in two — one build target that builds an IPC-server firmware blob (RTOS unikernel), and another that builds a regular on-CPU kernel-daemon IPC server;

3. Basically do nothing in the IPC client build target — it should now be oblivious to whether its Mach messages are going to an on-CPU kernel daemon or to an RTOS-coprocessor-hosted daemon, as the Mach abstraction layer is taking care of the message routing. (Kinda like the location-obliviousness you get in Erlang when sending messages to PIDs.)

This seems likely to me because there was (and still is!) both an Intel and an Apple Silicon macOS release; and they would want to share as much driver code between those releases as possible. So I think it’s very likely that they’ve written drivers structured to be “split across” Apple Silicon, while running “locally” on their Intel devices, in such a way that the differences between these approaches is effectively transparent to the kernel.

To achieve #3 — and especially to achieve it while having only one shared test suite — the firmware would have to be speaking the same wire protocol to the IPC client that the in-kernel IPC daemon speaks.

And, given that the in-kernel IPC daemons were designed to presume a reliable C++ interface, access to shared-memory copies of low-level frameworks like IOKit, etc., the firmware would need to provide this same environment to get the pre-existing code to run.


It's not Mach IPC, it's a bespoke thing only for the main DCP endpoint (they have a whole separate IPC framework for sub-drivers that have more sanely fully migrated to the DCP). It also has nothing to do with Intel, since these display controllers are only in Apple SoCs.

They do have a non-DCP driver they're still shipping, that runs all the same code entirely within the macOS kext. I'm not sure exactly what it's for.


Re: this bespoke protocol, could it be something that emulates the same abstraction as Mach IPC (without being the same protocol), such that the kernel API of this protocol exposes functions similar-enough to Mach IPC functions to the driver that it would basically be a matter of an #ifdef to switch between the two? The tagged-64bit-messages thing sounds very reminiscent, is why I’m thinking along those lines.

> They do have a non-DCP driver they're still shipping, that runs all the same code entirely within the macOS kext. I'm not sure exactly what it's for.

Presumably, they didn’t always have the coprocessor as part of the design, especially at the prototype phase. Imagine what type of workstations were built to prototype the Apple Silicon release of macOS — they probably didn’t have any Apple-specific coprocessors at first (and then they were likely gradually added for testing by sticking them on a PCIe card or something.)


It's much simpler than Mach IPC, and at the same time a strange beast in that they literally chopped the driver down the middle. The same objects exist on both sides, and certain methods call over to the other side. This is accomplished by having thunks on one side that marshal the arguments into a buffer and call the RPC function, and then the other side un-marshals and calls the real method on the object, then return data goes back in the opposite direction. I assume those thunks are autogenerated from some sort of IDL definition they have (they mostly use C++ templates on the argument types to perform the marshaling).
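
For illustration, here's a self-contained sketch of that thunk pattern in C++ (everything here is invented for the example: the method ID, names, and layouts; the real ones are what gets reverse engineered from the firmware):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    struct SwapRequest { uint32_t surface_id; uint32_t flags; };

    // "Firmware" side: unmarshal the buffer and call the real method.
    struct DCPService {
        uint32_t swap_begin(uint32_t surface, uint32_t flags) {
            std::printf("fw: swap_begin(%u, %u)\n", surface, flags);
            return 0;  // success
        }
    };
    DCPService g_service;

    // Stand-in for the shared-memory mailbox between the two CPUs.
    std::vector<uint8_t> rpc_transport(uint32_t method, const void *args,
                                       size_t len) {
        uint32_t ret = 0xffffffff;
        if (method == 1 && len == sizeof(SwapRequest)) {
            SwapRequest req;
            std::memcpy(&req, args, sizeof(req));
            ret = g_service.swap_begin(req.surface_id, req.flags);
        }
        std::vector<uint8_t> reply(sizeof(ret));
        std::memcpy(reply.data(), &ret, sizeof(ret));
        return reply;
    }

    // Kernel-side thunk: looks like an ordinary method call, but actually
    // marshals the arguments and ships the call to the other side.
    uint32_t dcp_swap_begin(uint32_t surface, uint32_t flags) {
        SwapRequest req{surface, flags};
        auto reply = rpc_transport(/*method=*/1, &req, sizeof(req));
        uint32_t ret;
        std::memcpy(&ret, reply.data(), sizeof(ret));
        return ret;
    }

    int main() { return static_cast<int>(dcp_swap_begin(3, 0)); }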

As for the non-DCP driver, the odd thing is that it's specifically for the same chip generation (H13, i.e. A14 and M1), not an older one or any weird prototype thing.

No weird "workstations" were built to prototype the Apple Silicon release of macOS; Apple Silicon has existed for many years now, coprocessors like these included, in iPhones and iPads. The M1 is an iPad chip, the A14X, that they beefed up just enough to be able to stick it in laptops, and then marketing rebranded it as M1. DCP specifically is relatively recent, though, I think it only showed up in a relatively recent Ax silicon generation.


> DCP specifically is relatively recent, though, I think it only showed up in a relatively recent Ax silicon generation.

This was more my point.

I find it unlikely that Apple were doing most of the testing of macOS-on-ARM (which probably occurred for years prior to the M1 announcement, and prior to the A14X being created) directly using the iOS device architecture. Doing that wouldn’t have allowed them to develop AS support for regular PCIe devices attached through Thunderbolt, for example, since there’s nothing like a PCIe lane in that architecture.

Instead, to test things like that, I suspect Apple would have needed some kind of testbench that allowed them to run macOS on an ARM CPU, while attaching arbitrary existing peripherals into said ARM CPU’s address space, without having to fab a new one-off board for it every time they tweaked the proposed architecture.

I would guess that their approach, then, would have been very similar to the approach used in prototype bring-up in the game-console industry:

- Use a regular Intel machine as a host / hypervisor (in Apple’s case, probably one of the internal mATX-testbench-layout Intel Mac Pros)

- Put whatever recent-generation ARM CPU they made, onto a PCIe card

- Build a bespoke hypervisor to drive that CPU, which presents to the CPU a virtualized chipset matching the current proposed architecture (e.g. a DCP or not, a Thunderbolt controller, etc.)

- Have the hypervisor configure the host’s IOMMU to present both virtual peripherals (for bring-up), and arbitrary host peripherals, to the CPU

It’s not like Apple are unfamiliar with “SoCs used to accelerate a host-run emulator”; decades ago, they put an Apple II on an accelerator card and drove it through a host hypervisor just like this :)


> I find it unlikely that Apple were doing most of the testing of macOS-on-ARM (which probably occurred for years prior to the M1 announcement, and prior to the A14X being created) directly using the iOS device architecture.

Of course they would be doing it like that, since the macOS kernel was already ported to ARM for iOS. They even handed out developer kits that literally used an iPad chip.

https://en.wikipedia.org/wiki/Developer_Transition_Kit_(2020...

macOS on ARM is very clearly a descendant of the way iOS works. It's just macOS userspace on top of an ARM XNU kernel, which was already a thing. The way the boot process works, etc. is clearly iOS plus their new fancy Boot Policy stuff for supporting multiple OSes and custom kernels.

> Doing that wouldn’t have allowed them to develop AS support for regular PCIe devices attached through Thunderbolt, for example, since there’s nothing like a PCIe lane in that architecture.

iPhones and iPads use PCIe! How do you think WiFi/storage are attached? PCIe isn't only a desktop thing, it's in most phones these days, the Nintendo Switch, Raspberry Pi 4, etc. Most modern ARM embedded systems use PCIe for something. You'd be hard pressed to find a high-end ARM SoC without PCIe lanes.

They didn't have Thunderbolt, but since Apple rolled their own bespoke controller too, there is absolutely nothing to be gained by bringing that up on Intel first. Instead they probably did the same thing everyone does: bring it up on FPGAs. Possibly as PCIe add-ons for existing iPad chips, possibly hosting a whole (reduced) SoC; both approaches would probably be mixed at different design stages.

I'm sure they also had at least one unreleased silicon spin/design before the M1, during this project. You never quite get these things right the first time. No idea if that would've made it close to tape-out or not, but I'm sure there was something quite a bit different from the M1 in design at some point.

> Use a regular Intel machine as a host

> Put whatever recent-generation ARM CPU they made, onto a PCIe card

You can buy one of those, it's called a T2 Mac. Indeed some Apple Silicon technologies came from there (e.g. how SEP external storage is managed), but not in the way you describe; most of the T2<->Intel interface was thrown away for Apple Silicon, and now the OS sees things the same way as BridgeOS did natively on the T2 on those Macs.


Thanks for all the corrections! I'm learning a lot from this conversation. :)

> there is absolutely nothing to be gained by bringing that up on Intel first

If you're an OSdev fan, then "self-hosting" (i.e. being able to do your kernel development for your new system on the system itself, making your testbench your new workstation) is usually considered a valuable property, to be achieved as soon as is feasible.

Of course, the M1 effort probably had a lot of involvement from the iOS kernel teams; and the iOS kernel folks are a lot less like OS-enthusiast hackers and a lot more like game-console platform-toolchain developers, thoroughly used to a development paradigm for putting out new iOS devices that focuses on non-self-hosted bring-up via tethered serial debugging / whatever the Lightning-protocol equivalent of ADB is. So they probably don't really care about self-hosting. (Are they even there now? Can you iterate on XNU kernel drivers effectively on an M1 Mac?)


The kernel tree was already shared; pre-M1 releases of the XNU kernel for Intel macOS already had (only partially censored) support for Ax line CPUs on iOS devices :)

You can build a completely open source XNU kernel core for M1 and boot it, and iterate kernel drivers, yes. All of that is fully supported, they have the KDKs, everything. It's been self hosted since release (well, things were a bit rocky the first month or two in the public version as far as custom kernels).


> they have the KDKs

Correction: they sometimes have the KDKs ;)


> I'm sure they also had at least one unreleased silicon spin/design before the M1, during this project. You never quite get these things right the first time. No idea if that would've made it close to tape-out or not, but I'm sure there was something quite a bit different from the M1 in design at some point.

Not sure how different it was, but there was a t8028 at some point that never got released.


That's the kind of theory I was looking for, thanks!


Apple have no reason to use existing standards when they can roll their own version tailored to their needs. It's for internal consumption, after all. This is very much a theme throughout their design.


I'm aware of that, but that's not my question. Oftentimes there is a reason why they choose to ignore standards. They added a simple C++ (not full C++) layer on top of their driver code when they took FreeBSD drivers and integrated them into their OS. But there was a benefit to doing so, making the drivers arguably easier to compose.

In this case the answer might be just that they were too time constrained to design something better. But I was just curious if the RPMsg framework has too many issues that would make it unsuitable.


Taking a quick look at RPMsg, it seems it's from 2016. Apple have been doing coprocessors longer than that; Apple had their Secure Enclave in 2013, and that was already using their mailbox system, which is fundamentally different from the IRQs-only RPMsg that keeps all message passing entirely in shared memory.

In general, this kind of message passing thing is easy to reinvent and everyone does it differently. You'll find tons of mailbox drivers in Linux already, all different.


RPMsg isn't really a standard, is it? I think it was added in Linux 4.11, in April 2017, way after this stuff was introduced in Apple devices.


Absolutely amazing progress, I might even give it an install once the USB drivers mentioned at the end are working.

https://www.patreon.com/marcan/
https://github.com/sponsors/marcan


If you can choose, please use GitHub Sponsors instead of Patreon :). It's better percentage-wise than Patreon.


Can anyone comment on the possibility of going in the opposite direction?

I'm considering buying the Framework laptop, daily-driving Pop!_OS, and then virtualizing macOS on it for the few macOS programs I use (this supports Framework and not Apple).

It looks like it may be fairly easy and possible with a 10-20% performance hit?

https://github.com/foxlet/macOS-Simple-KVM

From what I can tell, you can pass through a single GPU with a bit of work, even an iGPU? Is that correct?


Virtualizing macOS on non-macOS hosts works ok, with a couple of caveats:

* macOS lacks drivers for GPUs commonly emulated by VM software, and as such runs absolutely horribly without graphics acceleration because it assumes the presence of a competent GPU at all times - software rasterization seems to be intended to be used only as an absolute last-resort fallback. As such, passthrough of a supported GPU (generally AMD, though Intel iGPUs might work) is basically a requirement.

* It's technically against the macOS EULA. Apple hasn't gone after anybody for it to date, but it doesn't mean they won't. This is particularly relevant for devs building for Apple platforms.


For macOS on arm64, the software renderer fallback is gone.

No GPU? WindowServer crashes and you'll never see the GUI.


I'm also considering this. The software renderer should be a non-issue if you successfully pass through the iGPU. Pop!_OS won't have any display device, but you can still access it through SSH from the macOS VM. Look into /r/vfio and the Discord.


I’ve never passed a GPU to a mac guest but I have done Xcode work in a VM and it worked fine on Arch. If I had the cash I would too get a framework laptop!


I've used this quite a bit for iOS development with Xcode. It required a patched VMware player or workstation instance. Not sure about other hypervisors. Never had any issues and was running fine on a Ubuntu laptop with 16GB / Intel i7-4720HQ


This and VirtualBox work well for Mac emulation on Linux.


macOS doesn't work on non-Apple hardware, including under VMs. Unless you've patched it somehow?


Have you been living under a rock? Do you know what a Hackintosh is? It's been going on for so long that current bootloaders allow running it completely unmodified, with working updates, etc. macOS also runs under some VMs, with some config tweaks.


I'm familiar with that; that's why I mentioned "patched" in my above comment. It's still a tenuous position to be in, unless you enjoy diagnosing errors rather than working.


Actually, because macOS natively supports VMware (including ESXi), you can use that quite easily on non-Apple hardware!

Now, this too doesn't quite work out of the box, because VMware's products have a completely artificial check wherein they'll refuse to boot macOS on non-Apple hardware by default. To get around this, VMware needs to be patched, but macOS runs entirely unmodified! And because it's ultimately the same VMware codebase (quite literally the same binary in the case of ESXi), it runs just as well as it would on a Mac.

Native Hackintosh setups, when done properly, also work much better than many non-Hackintosh users assume. The initial setup process can be quite time-consuming and fiddly, but if and when you get everything working, you can leave it untouched for years and it will be just like an actual Mac. Even across updates! You absolutely don't need to "enjoy diagnosing errors rather than working," except at the start.


This is false. OSX runs fine on non-Apple hardware without patching.

https://github.com/kholia/OSX-KVM


This uses OpenCore, which does quite a bit of in-memory kernel patching.

Now, the job of a boot-loader is to set up a friendly environment for the OS, so I'm not sure where the line is. But OpenCore goes pretty far!


Re: the installation procedure, why does Asahi Linux have to have its own separate APFS container, rather than being an APFS volume in the pre-existing APFS container (and so sharing the Recovery volume with the pre-existing macOS install, not requiring you to resize the pre-existing APFS container smaller, etc.)?


I actually haven't tried doing multiple installs in the same container, but there's no reason why it wouldn't work as far as I know. It would be easy to change the installer to let you do that.

Though there isn't much of a difference for Linux; you can't actually share the Recovery image (since it's per-OS, even within a single Recovery volume), and since you need to repartition to create standard Linux partitions anyway (since linux-apfs isn't quite at the point where you'd want to use it as your root filesystem as far as I know...) it doesn't make much of a difference if you create a 2.5G stub container too, and makes it much easier to blow away the entire Linux install without affecting macOS. Perhaps in the future, when linux-apfs is solid enough, it would be interesting to support this in the installer in order to have a fully space-sharing dual-boot macOS/linux system, without partitioning.

Also, U-Boot would have to grow APFS support in order to have a truly all-APFS system :) (we plan to use U-Boot as a layer for UEFI services and such, so you don't have to burn kernels into the m1n1 image, which would suck for upgrades).


> and since you need to repartition to create standard Linux partitions anyway

If it's possible to do this in APFS (not sure), the installer could do something similar for APFS to what WUBI does for NTFS: fallocate(2) a contiguous-disk-block single-extent file inside the APFS volume, write the rootfs into it, and then configure the bootloader entry's kernel params with the APFS container+volume and the path to the file within the container.

In this setup, linux-apfs would then only need to work well enough to expose that file's single extent's raw-block-device offset during early boot; then the initramfs script could unmount the linux-apfs volume and mount the ext4 rootfs directly from the block device at that offset.

Yes, it's almost the same as resizing the disk smaller — but it's at least a less-"permanent" change, in the sense that deleting the rootfs file from macOS would give the space back, without needing to know to re-grow your macOS APFS container.
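
For what it's worth, the find-the-extent step at early boot would be mechanical on the Linux side. A hypothetical sketch using the FIEMAP ioctl (this assumes the filesystem driver, here linux-apfs, implements FIEMAP; whether APFS can actually pin the file contiguously is exactly the open question):

    #include <cstdio>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/fiemap.h>
    #include <linux/fs.h>

    // Returns the file's on-device byte offset, or -1 if the file is
    // fragmented (more than one extent) or FIEMAP is unsupported.
    long long backing_file_offset(const char *path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;

        union {  // fiemap header plus room for exactly one extent record
            struct fiemap fm;
            char bytes[sizeof(struct fiemap) + sizeof(struct fiemap_extent)];
        } req = {};
        req.fm.fm_length = ~0ULL;    // map the whole file
        req.fm.fm_extent_count = 1;  // we only expect a single extent

        int rc = ioctl(fd, FS_IOC_FIEMAP, &req.fm);
        close(fd);
        // The one extent must also be flagged as the last one, otherwise
        // the file is actually fragmented and this scheme can't be used.
        if (rc < 0 || req.fm.fm_mapped_extents != 1 ||
            !(req.fm.fm_extents[0].fe_flags & FIEMAP_EXTENT_LAST))
            return -1;
        return (long long)req.fm.fm_extents[0].fe_physical;
    }

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        std::printf("%lld\n", backing_file_offset(argv[1]));
        return 0;
    }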


I wrote a tool to do this many years ago on the Xbox 1 (the original) to avoid having to use loopmounted files, but it's not the kind of thing you want to rely on for a modern filesystem like APFS. There's no way to guarantee that the OS won't mess with your data, move it elsewhere, etc (and if it doesn't today, it might in tomorrow's macOS update). I would never trust this approach enough to recommend it for our users, to be honest.


> There's no way to guarantee that the OS won't mess with your data, move it elsewhere, etc

I mean, "moving it elsewhere" is why you'd call into linux-apfs on every boot — to find out where the rootfs backing file's extent is living today. It might move around between boots, but it won't move while macOS isn't even running.

And this approach presumes that there exists an explicit file attribute for keeping a file single-extent / contiguous-on-disk (as far as the host-NVMe interface is concerned, at least.) Such an attribute does exist on NTFS, which is why WUBI continued to work there. I've never heard about such an attribute existing on APFS, which is why I said I'm not sure whether APFS "supports" this.

Usually there is such an attribute in most any filesystem, though - it exists for swap files, so that kernels can address the disk-block range underlying the swap file directly rather than routing every read/write through the filesystem layer.

I know that macOS uses an APFS volume for swap (the "VM" volume), which is kind of interesting. Maybe in APFS, there isn't a per-file attribute for contiguous allocation, but rather a per-volume one?

In that case, it might be possible for the Linux rootfs to be an APFS volume, in the same way that the macOS VM volume is an APFS volume.


> I've never heard about such an attribute existing on APFS, which is why I said I'm not sure whether APFS "supports" this.

Ah, yeah, I've never heard of anything like that. I didn't realize NTFS had this. I have no idea how APFS handles the VM volume, though.


Amazing work, and any developer should work on whatever they please. But there is also something in me that wishes development would focus on something more open. What if Apple decides tomorrow that only signed kernels are allowed to boot (to protect the children)? I know there currently is no ARM laptop equivalent to the M1, and ARM booting is a minefield, so I get that if the M1 becomes a viable Linux target that would be very interesting.


I think Apple specifically announced at WWDC that ARM Macs won't be locked down? I've also read somewhere that they have written some utilities specifically to allow booting arbitrary kernels — they could very well have not done that, so it has to be very intentional.

Apple engineers are also probably reading all this and are very impressed.


Would love to see a link to that announcement. I'm sure Apple engineers appreciate the skills that go into this, but they have little to no say in Apple's strategic direction which is ever tighter control of the platform. Don't get me wrong, I would love Apple to publicly endorse running arbitrary kernels on M1 hardware.


Kind of unrelated, but I'd like to tap the knowledge of people hanging out here: does anybody know how good and how usable Linux is on Intel MacBooks? Is there a specific Intel MacBook that's known to be particularly well supported by Linux? I've been searching but couldn't find much more than vague information and lists of Linux distributions I could install on a MacBook.


Depends on the age of the macbook. Older models have reasonable support, newer models are (at best) a pain in the ass.

I generally hit the Arch wiki pages for specific models for the best information.

This GitHub repo also does a good job laying out current support for the 2016 models: https://github.com/Dunedan/mbp-2016-linux

Frankly, I haven't tried on a newer model than that - I don't buy Apple hardware anymore.


I have a 2017 MacBook Pro with Touch Bar and have Manjaro installed on it. It works fine, but it is not perfect: no reliable Bluetooth, and I never got the Touch Bar working (so no Esc or function keys). It is not my main OS (I use Windows for work), but I use it for studying and trying new things. I wouldn't risk installing it without leaving a macOS partition.


15-inch MacBook Pros from 2015 are regarded as the best Macs for Linux and hackers in general. The ports are a big plus too.


I use Arch as a daily driver on my 2013 MacBook. Their wiki has everything I needed to tweak to get it running perfectly. It needs a few extra steps (I had to pass some Mac-specific kernel parameters to turn off Thunderbolt, for instance), but it's leaps and bounds better than macOS.


Thanks for working on this! It would be great if making Asahi generally usable were not blocked by the GPU driver. Currently the Apple M1 is the only generally-available ARM hardware that can be used for testing other software for compatibility with ARM. So an installable version without desktop support would be very appreciated.


It's not blocked on the GPU driver; you can already boot a desktop on the boot-time framebuffer (this has worked for months). The issue right now is that as I mentioned at the end, things like USB and storage work but aren't quite there yet and are spread around various kernel branches, so our next step is indeed going to be to get that all in shape so that there is one known good kernel branch for people to use. That will give you Ethernet, USB, NVMe, etc. Then we'll tackle the GPU after that.

https://twitter.com/alyssarzg/status/1419469011734073347


That's great to hear, thanks!


I'm typing this from an AArch64 device that's not an M1 and runs Linux. What are you talking about.


Well come on, what is it :)


A Pinephone haha. I post here often enough on it that I'm thinking about adding hackernews to /etc/hosts or deleting my account.


Raspberry Pi?


Just testing non-desktop software on ARM – you could use a Raspberry Pi 4 for that. Or a Pinebook Pro. Or just an EC2 instance!

For "real workstation" class hardware (i.e. working PCIe/XHCI/AHCI on standard ACPI+UEFI systems with mostly FOSS firmware) nothing beats SolidRun boards (MACCHIATObin/HoneyComb LX2K). Yeah yeah the Cortex-A72 cores are really showing their age, oh well, on the other hand the SoCs are not embedded cursed crap :)


Avantek also sells a few workstations that are basically Ampere ARM server boards stuffed into desktop cases. Very powerful CPUs with up to 80 Neoverse N1 cores, lots of PCIe and RAM slots, very expensive.

https://store.avantek.co.uk/arm-desktops.html


What makes all of the other ARM hardware available unsuitable?


Especially given that, for example, a Jetson AGX Xavier with 32GB of RAM is available at the same price point as an M1 Mac mini.

For $50 more, you can get this: https://www.solid-run.com/arm-servers-networking-platforms/h... (you just add your own DRAM sticks).


If everything weren't so built around CUDA, the M1 GPU would actually give you twice the performance.


Source?


Look at any online benchmark? It runs at a similar clock and has twice the tensor cores; the AGX is a 2-year-old 12nm chip. Why would anyone be surprised if the M1 were faster?


3 years on the market, actually.

And since then: 12nm -> 10nm -> 7nm -> 5nm


Why is this the kind of ARM processor you need, and not a Raspberry Pi?


I find Marcan's live streams on this fascinating/mesmerising, and I'm not a programmer.


Agreed. I have even learned some things casually watching his streams over the last month or so.


here's a (possibly dumb) question - assuming a 100% complete and successful "Asahi Linux", what does this mean for distros?

Is this a kernel replacement, as in I could run Manjaro Gnome, and just load the Asahi Kernel and it all Just Works?

Or will Asahi Linux need to be a full distro of its own, in order to be useful?


It's several things (yes, we know it's confusing). Asahi Linux is the overall project.

m1n1 is our bootloader, which once installed will present a standard(ish) U-Boot/UEFI boot environment. From there you will be able to boot any distro or USB installer, in principle, once the changes trickle upstream.
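
Roughly, the planned chain looks like this (simplified; details may shift as things land):

    Apple firmware (iBoot) -> m1n1 -> U-Boot (providing UEFI) -> distro bootloader (e.g. GRUB) -> Linux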

Our installer will install m1n1 and then give you the choice of also installing a distro image. We'll be providing our own based on Arch Linux ARM, but we're open to other distros providing images so we can add them to the list. We already have interest from Fedora, for example.

So we expect that people who want to run Linux (or in fact openbsd or anything else) on these Macs will first use our bootloader installer, then either pick one of the built-in images or use their own USB installation media from whatever distro.


They are working on creating a full distro, based on Arch Linux ARM, which will be an easy-to-use package for end-users with bleeding-edge versions of the software they develop.[1]

That said, all components will be upstreamed into their respective projects such that other distributions should eventually be able to add support for Apple's line of desktop computers with Apple Silicon.

[1] https://asahilinux.org/about/#is-this-a-linux-distribution


It looks like installing a Linux distro on an M1 Mac requires a special installation package, rather than a standard ISO. So Asahi is going to be a distro packaged/distributed specifically for M1 Macs, but changes to the kernel etc. are being upstreamed, so other distributions should be able to build their own installation media for M1 Macs too in the future.


We need lawmakers to force Apple and other companies to open up documentation that would enable writing drivers etc. Given Apple's anti-privacy stance, I can see why they don't want a platform they can't control on their hardware. This should be illegal.


Wow, that's some progress. This looks pretty close to usable, it seems. Is there some kind of expected timeline? Like: in 6 months, keyboard, WiFi, etc. work; in 1 year we expect the GPU to work?


Unofficially, I still have it as a bit of a personal goal to get the GPU going by the end of the year. We already have the userspace side passing >90% of the GLES2 tests on macOS; it's just the kernel side that's missing, so it's not as far off as it might seem.


That would truly be amazing! I don't really expect anything official, I mean software is hard to estimate as it is. Let alone when you are reverse engineering where you never know what kind of insanity you will find.

So for the GPU we would need 3 pieces? Alyssa's work on reverse engineering the protocol and writing a userspace driver that generates the correct commands for the GPU; a working DCP driver to handle actually displaying rendered frames; and finally a kernel driver to actually submit those commands to the GPU. And work on the last of those has not started yet, right? It's hard for me to piece together how everything fits.


There's a whole different coprocessor tied to the GPU for the rendering (AGX), though the generic RTKit/mailbox stuff is the same as it is for DCP. That will largely involve command submission, memory management, and (a relatively big unknown) GPU preemption. In principle, the GPU kernel side should be quite a bit simpler than DCP, since display management is much hairier than the (relatively) simple GPU operations; most of the complexity of the GPU is in the userspace side. But we won't know until we get there.


How exactly do you test your own GPU "rendering" under macOS?


Essentially by replacing the macOS shader -> machine code compiler with one of your own (or, rather, running it alongside).


And the rest of Metal. We're talking directly to the macOS kernel driver.


This is so cool, and the Asahi Linux team has done amazing work.

I can’t wait to use Linux as a daily driver on my M1.


What are the licensing implications of using DCP firmware from macOS installations - is just having the hardware and an OS license enough to use it within Linux?


All Apple Silicon machine owners are licensed to use macOS (and all of its components), among other things because it is actually a critical part of the system firmware (as part of system recovery; even the machine boot picker is itself macOS).

Apple provides restore images for this purpose (downloadable without any authentication). This is how the installer works, so we don't have to redistribute any firmware ourselves. Redistribution is the usual thorny firmware problem; since we don't have to do that, we're in a pretty good position.

Whether Apple's EULA applies to macOS obtained in this way, without actually going through a normal boot for that instance and thus the click-through EULA, is legally unclear. That said, here's a quick analysis:

2A says you can only install and run a single copy of the software. That might seem to imply that you wouldn't be able to dual-boot macOS and Asahi Linux, but this clause is pretty dumb.

By design, all of these machines with one instance of macOS have at least two copies of part of the software: the recovery image - is that a violation? What about the fact that the bootability process creates a copy of all system firmware? You end up with 4+ copies of some components. What about installing macOS multiple times? Multiple versions of macOS? Do APFS CoW clones count as copies? If so that's another two copies of the recovery image. If they don't, can we just CoW clone stuff from the currently installed macOS version within the same container instead of doing a from-scratch install? Once Apple's updater breaks the CoW link, does that mean they are causing you to violate the EULA at that point?

These are all things Apple obviously intended for people to do with these machines, regardless of what the lawyers say, so I don't think we have to worry too much about it. Legal texts are often completely at odds with the technical realities involved, and in the end nobody cares.

2M explicitly allows you to replace the Open Source components of macOS with your own version. This is exactly what m1n1 is: it is a replacement for xnu, the open source macOS kernel. So this clause explicitly puts at least our original full-fat macOS install process in the clear. The clause also says their warranty does not cover damage caused by such modifications, but does not say your warranty is void by merely doing so (so they'd have to prove any damage was caused by it).

2N says no reverse engineering, but you don't care about that, we do. I'm pretty confident I'm not going to get a C&D from Apple's lawyers for working on Asahi Linux, and if I do, I know how to defend myself and what organizations can help with that. It would be terribly bad PR for them to do that.

4A says you can transfer your Apple software, but 4B says you can't do that if you've modified it under 2M. That might mean that, on paper, you can't give away or sell a Mac that has Asahi Linux installed, without first uninstalling the bootloader and having the next owner reinstall it. Again though, this is a silly technicality. What Apple means is they don't want you patching macOS and distributing it separately. They also say the macOS components are a bundle and cannot be distributed separately, but our installer doesn't distribute anything, it pulls specific components from Apple themselves, which they are distributing.

4C says that copies made available for restorative purposes may only be used for those purposes and may not be resold or transferred. That might put our usage of ipsw restore images in the installer out of compliance, though the "restorative purposes" thing is a bit unclear. Does it count as "restoring" Apple's boot components into a blank OS partition? Obviously we are not reselling or transferring anything, so that part is not an issue.

This is an EULA, and to what extent these clauses are legally enforceable is of course unclear. We aren't doing the big copyright no-no of redistribution, and Apple themselves provide the software for free download, and it is a fact that every Mac user is licensed to use it. So in practice it would be quite questionable to try to go after people for technicalities here.

The better question is whether Apple's lawyers will care, and I'm pretty sure the answer is no. The EULA is there to give them legal ammo against companies trying to do things like provide virtual macOS as a service or installing it on non-Apple machines; they don't really care what end-users do (they don't even care about hackintoshes as long as you aren't selling them, which is a blatant EULA violation). The general idea of running Linux with Apple's firmware is completely kosher per 2M (if done on a single macOS install, e.g. just replacing xnu with m1n1 without a separate OS instance), so there is no legal basis for Apple to block our project entirely. It would be extremely silly if they went after our users to block our specific install process just to annoy them or get them to replace macOS with Linux instead of dual-booting, and it would also put in jeopardy things that regular macOS developers do, like having multiple OS installs.


Here's a copy of the TOS if anyone else needs it to follow along! https://www.apple.com/legal/sla/docs/macOSBigSur.pdf


Thanks for the thorough explanation, Hector! Good to hear it's not a legal grey area - at least not anything that would result in a cease and desist!


When the project is done will Linux running on an M1 have any advantages over just running Linux on a regular x86_64 system?


yes, completely silent for one (no CPU fan)

also the M1 is pretty quick


The Pro has a CPU fan.

Anyway, the ISA does not necessarily imply higher or lower energy consumption.


Pardon my ignorance, but what does the m1n1 hypervisor mean? Does this mean that it's a single-core hypervisor VM?


m1n1 is our bootloader that bridges the Apple world and the Linux world together; it is also a hardware reverse engineering platform (remote controlled from a host) and now also a hypervisor.

The hypervisor feature is indeed single core, but that's just because I haven't bothered implementing SMP (would need a bunch of extra locking, but it's doable, just not really necessary for us).


I'm not an early adopter for this kind of thing, but it's great to see progress all the same.


Damn, this is so cool. I can tell I'm going to be sucked back into trying Linux yet again...


Are the ASC firmware blobs encrypted too? I.e., are they easily available for static analysis?


Only the SEP firmware is encrypted; all the other copro firmwares are plaintext and you can analyze them to your heart's content.


A dumb question: what are they trying to achieve by encrypting the SEP firmware? If they want to prevent people from modifying it, wouldn't signing it be enough? Otherwise this smells of security through obscurity, which, strictly speaking, isn't really security, and Apple engineers are most likely aware of that.


It's not security through obscurity, it's security plus obscurity.

It's a very real fact that if attackers can't find your bugs, they are less likely to exploit them. Apple are well versed in writing secure code, but they slip up sometimes. It seems their calculus at this time is that the time gained by not allowing attackers easy visibility into their code is worth it, in case bugs are found. I don't fully agree with that choice, but I can see why they made it; it is not meaningless.


Security through obscurity can absolutely be an important layer of a defense in depth strategy. It's just useless as the only layer.


There's also the HDCP and some other DRM keys in there... but then, they encrypt iBoot too, so that can't be all of it.


Yeah because clearly DRM has never been broken and none of those keys are known to the public already.


They are regular Mach-Os, with some symbols still available too.


Hector Martin is a hacker machine.


This is genius at work. If I was 5% as smart as these guys I'd be doing well.


Interesting. What's the project leader's day job, if you don't mind me asking?


From https://marcan.st/about/

"Hello! I’m Hector Martin and like to go by the nickname “marcan”. I currently live in Tokyo, Japan as an IT/security consultant by day and a hacker by night. For some definition of day and night, anyway."

Hacker for hire, basically. This project was started by crowdfunding enough monthly sponsors (which happened overnight, thanks to his prior reputation and interest in the project), so as a result it's somewhat half day job, half hacking at night out of personal interest.


Hugely nifty work! Makes me wish I had an M1 system even more so I could help out.


Linux on M1 mini is looking like a dream machine more and more.


On one level yes. I just can't imagine (anymore) pumping money into an ecosystem that purposefully makes it so difficult to run free software.

I admit that excludes more and more modern hardware, but Apple is the most expensive and profitable.


The Mini is not expensive, at least in its base configuration. It's a pity you can't upgrade it with cheap RAM and disks anymore.


It's unfortunate in a lot of ways that this is all getting written before Rust support is in the kernel. If that were the case, all these new drivers could be written in Rust instead. Oh well.


As described, the crux of this work seems to be learning how to interface with Apple’s proprietary hardware. If Rust enthusiasts want to go back later and reimplement, they’ll have a working open source reference implementation.


Asking people to use an experimental feature, which probably won't be ready for a couple of years, to write stuff today is an extremely bizarre thing to do.


Great work, but as for the installer, use at your own risk or on another spare machine.

Unless you want to risk losing your files or your entire machine. Maybe it would void your warranty if you do this. Who knows.


Installing a custom kernel is an official feature on these machines and won't void your warranty.

I've never had the installer do anything crazy, nor does it support any resize/deletion operations yet, so it's pretty unlikely that it'll wipe your data (it's much more likely to just fail to complete the installation properly).

More likely is that if you boot Linux, a driver bug will cause data corruption. We've had some reports of filesystem issues, though as I said the kernel tree situation is haphazard right now, so we'll see if they still happen once we put together a tree with properly vetted patches :)


> More likely is that if you boot Linux, a driver bug will cause data corruption. We've had some reports of filesystem issues, though as I said the kernel tree situation is haphazard right now, so we'll see if they still happen once we put together a tree with properly vetted patches :)

Well, that is another notable risk, isn't it? If something goes wrong at the filesystem level (as you admitted can happen), the worst case is that the OS and your important files somehow become corrupted and it's back to recovery once again; if that doesn't work, they'll need Apple Configurator, which requires another Mac.

Regardless, I would most certainly use another machine to test this, hence why I said 'use at your own risk', especially when the installation script is pre-alpha.

What is wrong with such a disclaimer?


Do you really care if Linux corrupts its APFS container on machine 2 instead of machine 1? Just don't go mounting your macOS APFS container, and it's no different.

Regardless, I think most of the problem was with defaulting to fear-mongering, particularly about the warranty, rather than gaining an understanding of what the risks are before recommending action to avoid said risks.


How is it fear-mongering to say 'use at your own risk' for installing an OS that is not fully supported, when even the installation script is pre-alpha?

Fine for those who have a spare M1 machine but the added risk comes when it is their only machine with important work and files on it.

Do you want to be liable when someone tries this script out and somehow ends up losing their important work because of those recently admitted 'filesystem issues' on the only machine they have? That is my point and why I said: "I would most certainly use another machine to test this"

I don't see lots of general users racing to install this right now, since they probably know that it is still incomplete with lots of bugs and is still unusable. Unless you want to install it right now on your work machine with important files perhaps? If not then wait.


You can DFU these machines from Linux with idevicerestore, no need for Apple Configurator.


Can you currently do a full, official restore with idevicerestore on all M1 Macs from either Windows or Linux? Is there any source for this?

Is that functionality in idevicerestore ready to use in place of Apple Configurator?


I don't know about Windows, but it runs on Linux, yes, and it works on at least the M1 Mac Mini. See: https://github.com/libimobiledevice/idevicerestore/pull/406

There's some bug with the laptops, but it should be easy to fix once someone takes a good look at it. I might spend some time on that; I definitely want idevicerestore to be in good shape before we start encouraging wider usage of Asahi. (https://github.com/libimobiledevice/idevicerestore/issues/41...)


The common feature of these embedded ARM chipsets is that a faulty kernel can very much hard-brick your machine. There's nothing like the number of failsafes you get with ordinary PC-compatible hardware. I wouldn't want to rely on any promise that "installing a custom kernel won't void your warranty" - in practice, it very much will if you don't know what you're doing.


You're assuming Apple's ARM chipsets are like other embedded ARM chipsets. They aren't. Things are much cleaner and quite well designed.

Storage-wise, you can't brick the machine by wiping NVMe. You can always recover via DFU mode, which you can also use from another Linux machine by using idevicerestore (no need for macOS). This is very much unlike your typical Android phone, which is indeed a flimsy mess that will brick itself at the slightest touch of the wrong partition in eMMC.

Hardware-wise, there's the usual "you can technically do ugly things with I/O pins", but in practice that's quite unlikely to cause damage. The lowest level, dangerous things are handled by SMC, not the main OS. We barely have to deal with any of that. And this is a thing on PCs too, just read your chipset's user manual and you'll find out the registers to flip random I/O pins.

Firmware-wise, I do know of one way of causing damage to these machines that cannot be recovered by the user: wiping NOR Flash. This isn't because it'll make DFU unusable, but rather because it contains things like system identity information and calibration data that only Apple can put back properly. But again, this isn't much different on PCs; you can brick a PC by wiping the NOR Flash from your OS too. In fact there have been BIOSes so broken that you can brick them by doing a `rm -rf /`, which descends into efivars and the UEFI implementation crashes when it finds no EFI variables.

In order to avoid the NOR Flash bricking risk, I do not intend to instantiate the NOR device in our device tree at all by default. I do not believe we need it for regular operation. Flash memories have specific unlock sequences for writes/erases, so there is practically zero risk that a crazy random bug (i.e. scribbling over random I/O memory) could trigger such an operation, and without the NOR driver bound, the code that actually knows how to cause damage will never run.
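
(To illustrate why a stray write is so unlikely to trigger this: typical JEDEC-style SPI NOR parts require a discrete Write Enable command before every program/erase operation, something like the sketch below. The spi_xfer() helper and the opcodes are generic examples, not Apple-specific:)

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical host-side SPI transfer helper, stubbed to log bytes. */
    static void spi_xfer(const uint8_t *tx, int len)
    {
        printf("SPI out:");
        for (int i = 0; i < len; i++)
            printf(" %02x", tx[i]);
        printf("\n");
    }

    #define CMD_WREN 0x06  /* Write Enable: sets the volatile WEL latch */
    #define CMD_SE   0x20  /* 4 KiB Sector Erase */

    /* A destructive operation takes two independent transactions:
     * arming the WEL latch, then the erase itself. WEL self-clears
     * when the operation completes, so a single stray transfer can
     * never erase anything on its own. */
    static void nor_sector_erase(uint32_t addr)
    {
        const uint8_t wren = CMD_WREN;
        spi_xfer(&wren, 1);

        const uint8_t cmd[4] = { CMD_SE, addr >> 16, addr >> 8, addr };
        spi_xfer(cmd, 4);
    }

    int main(void)
    {
        nor_sector_erase(0x1000);
        return 0;
    }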

For what it's worth, I have quite some experience not bricking people's machines, even when dealing with unofficial exploits/hacks/jailbreaks. To my knowledge, not a single user out of the probably >10m people running The Homebrew Channel / BootMii on their Wii, which I helped develop, has bricked their system due to a problem traceable to our software. Nintendo has a much worse track record :-)

https://marcan.st/2011/01/safe-hacking/


> system identity information and calibration data that only Apple can put back properly

Ha. This reminds me of a Motorola Linux phone I bricked as a kid by trying to write a package manager in C shell (lol), which inevitably led to rm -rf / being executed, wiping a unique per-device security partition required for the boot process.

Why did Apple make the same choice of putting critical data onto an OS-writable flash chip? Chromebooks do it right – everything critical lives in the Security Chip's own on-chip flash, and the list of commands it would take from the host system is very limited by default (most things are only available to the debug cable).


You might be able to restore the NOR with 'Apple Configurator 2', which seems to be a public version of an internal tool, though it's limited to taking an otherwise unbootable machine to a fully-installed-OS machine.

Plug the bricked machine into another Mac using a TB3 cable in a particular slot (they tell you which) and follow the instructions. Worked for me.


It's an SPI flash that isn't mapped to /dev.

As far as I know, fully wiping it is recoverable, but involves putting the Mac into device firmware update mode, and then recovering from another machine.


AIUI if you actually wipe NOR flash entirely, DFU mode won't save you, because the Mac won't even know who it is (MAC addresses, serial number, etc.).

However, I can't claim to have tried it, so there may well be additional safeguards. I'm just not particularly interested in risking it for our users either way :)


Same for x86 machines, really; the number of UEFI firmwares that hard-bricked the machine if you dared to delete all NVRAM variables...

(and many more other issues)


That number is not that big, IIRC just a few specific laptops. I haven't heard of any desktop mainboards with that issue.


I remember some Lenovo server boards being bricked by a UEFI variable storage snafu, which was painful to handle.


> For the initial kernel DCP support, we expect to require the firmware released with macOS 12 “Monterey” (which is currently in public beta); perhaps 12.0 or a newer point release, depending on the timing. We will add new supported firmware versions as we find it necessary to take advantage of bugfixes and to support new hardware.

Please don't do this! Monterey is an OS I will never install, given that it will include the Apple-provided spyware. This shouldn't be the earliest supported version! I suggest instead finalizing on a current/later release of macOS 11, as updates for it will slow down once Monterey is released. A lot of people won't be updating to Monterey.


You only need the Monterey firmware. You can just do a side install once (to update your system firmware, which needs to be at least as new as the newest OS bundle) into another volume, nuke it, then install Asahi with the Monterey firmware bundle option, and keep your main macOS install at whatever version you want.


> A lot of people won't be updating to Monterey.

I’m quite sure this won’t be true, HN threads full of uninformed complaints notwithstanding.



