Running GUI Linux in a virtual machine on a Mac (developer.apple.com)
345 points by pram on Oct 9, 2022 | hide | past | favorite | 201 comments



It seems there is some confusion about what this page is about or why it might be relevant. This is not an end-user product for desktop virtualisation; it is a page for developers in the Apple ecosystem to programmatically make use of OS-native virtualisation to create virtual machines with graphics output.

The value in this is not so much that "it is possible", because there are plenty of implementations of this possibility; it's more about the vendor-native support, vendor-native documentation and examples, and the signal it gives about the current state of support.

In the wild, an end-user (that includes developers; it's not a derogatory term, it simply means the person the product is intended to be used by) might see applications like UTM, Veertu, Q and others use this under the hood.

VirtualBox doesn't work on ARM, by the way; they only release for x86(_64). So it can only virtualise, and only on x86 hosts for x86 guests.


> VirtualBox doesn't work on ARM by the way, they only release for x86(_64).

Because VirtualBox is not a CPU emulator, unlike QEMU, MAME/MESS, and other emulators. Virtualization software such as Fusion and Parallels run VMs of operating systems that match the architecture of the host OS. Intel VMs like those in VirtualBox will not run on Apple Silicon, and Apple Silicon VMs won't run on Intel. The CPU instruction sets are not compatible, nor can Rosetta 2 solve this.

But an OS is arguably a tool used to run architecture-compatible software, and much software will compile on different architectures. So while it may not be possible to run VirtualBox on Apple Silicon, it may be possible to recompile whatever software is running in the virtual OS natively for Apple Silicon. This is true for tens of thousands of ports managed by MacPorts and much of the code available on GitHub, but it probably isn't true for proprietary software where the source code is unavailable.


I added some clarification. There is no VirtualBox on the ARM platform. So you can't virtualise ARM on ARM with VirtualBox, because they don't make ARM-compatible releases.

But both VMWare and Parallels for example have made ARM on ARM virtualisation releases of their existing x86 on x86 products.

So if you need something like a sandbox or IR workstation, you can have plenty of virtual environments, just not with VirtualBox as the supporting software and type-2 hypervisor.


> But both VMWare and Parallels for example have made ARM on ARM virtualisation releases of their existing x86 on x86 products.

Right, but the VMs themselves are not cross-platform. ARM Linux won't work in Intel VMWare, and x86 Linux won't work in ARM VMWare. I am sure you knew, of course, but as you pointed out there is confusion below about VirtualBox working on Apple Silicon.


That is correct. But let's leave the VMs for now. The hypervisors are where the problem is: if a hypervisor has no release for a specific platform, you cannot use it, period. That is the problem with VirtualBox. There is no VirtualBox for ARM, only VirtualBox for x86. If you have an ARM host and you wish to run an ARM guest, you cannot do that with VirtualBox, because there is no VirtualBox for ARM.


> ARM Linux won't work in Intel VMWare

Newsflash: a program for one processor's instruction set won't work on another processor.


If only it were that clear cut.

As you well know, with Rosetta and other technologies (heck, also Wine) that's just not true. With Rosetta you can take any (or most) Mac programs compiled for x86 and just run them on M1 ARM. Support goes even deeper: with the Rosetta update you will be able to run x86 binaries even inside a hosted ARM virtual machine (!).

So it makes sense for people to be confused about what's actually supported, and to add the clarification when this is not the case, because some might also expect it to work with, say, VirtualBox.


> Well, with Rosetta and other technologies (heck, also Wine) that's just not true.

Rosetta takes an instruction stream from one processor and translates it into an instruction stream for the host processor. That's exactly what I mean: it needs translation.


Sure, and I know you know it.

But it's exactly the availability of said translation that makes the claim "program for one processor's instruction set won't work on another processor" less clear cut than the "news flash" sarcasm implies.

Doubly so since this translation is often automagical and transparent to the one using it (so they expect it to just work in all cases).


Doesn't AArch64 support the same virtualisation primitives that VirtualBox uses on AMD64?


Yes and no. Similar in functionality, different in implementation. So you do get things like an IOMMU etc., but you need a different software implementation to make use of it, even if only to handle the different ISA and setup procedures.

Imagine it like this:

    ┌─────────────────────────────────────┐
    │higher-level software to abstract it │
    ├─────────────▲───────────▼───────────┤
    │low-level software to make use of it │
    ├─────────────▲───────────▼───────────┤
    │hardware with virtualisation support │
    └─────────────────────────────────────┘

Even within the same architecture you might need different low-level interfacing software to make use of it. Even a type-1 hypervisor like Xen would need to know a bit about the specifics of the hardware to know what features exist and how to use them. Then you get some higher level abstractions that allow you to use a virtualisation interface to make use of the system without having to re-invent that interface every time some new hardware gets released.

So if you have libvirt, you have a high-level abstraction for multiple hypervisors (like KVM and Xen) which in turn know how to interact with various hardware implementations to actually make the hardware behave the way we want.
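A minimal sketch of what that abstraction buys you in practice, assuming libvirt's virsh CLI and a hypothetical guest.xml domain definition (names and paths here are illustrative, not from the thread):

```shell
# The hypervisor URI selects the low-level driver;
# the domain XML and the commands stay the same.
virsh -c qemu:///system define guest.xml   # register the guest with the KVM/QEMU driver
virsh -c xen:///system define guest.xml    # or with the Xen driver, same abstraction
virsh -c qemu:///system start myguest      # boot the domain
virsh -c qemu:///system list --all         # show defined and running domains
```

The point is that the high-level interface survives hardware and hypervisor changes underneath it.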

On macOS there were essentially two low-level implementations, one for ARM and one for Intel, and a higher-level abstraction, Hypervisor.framework, that lets you simply "create" virtual machine instances, which then make use of the various virtualisation features depending on the underlying hardware.

In the early days, the AMD and Intel implementations were so different that you often needed specific builds for one or the other, and you couldn't have both loaded at the same time (even with only one active and the others idle).


Hypervisor has a core API that is shared across Intel and Apple silicon (e.g. hv_vcpu_run) and then each platform has its own additions on top of this (for example, on Intel platforms you can prod the VMCS). On top of this is the Virtualization framework, which provides a largely unified API to run VMs (dealing with things like screens, pointing devices, etc.) and treats the actual virtualized processor as an implementation detail, aside from you having to provide an image that matches the host architecture. The linked code here uses Virtualization.



For those not in tune with VirtualBox: this beta lets you run 32-bit x86 guests on aarch64 macOS.


Sadly it does not appear to work that well.

Quote:

There has been no port to M1/M2. All there has been is early work that does nothing useful but somehow escaped the stables. You can safely forget all about it for now.

[0] https://forums.virtualbox.org/viewtopic.php?f=15&t=106919


Yeah, it's a shame; it still looks promising.


Just want to point out that QEMU can use your CPU's hardware virtualization extensions for VMs of the same architecture as your host, it's not exclusively an emulator.
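For illustration, assuming hypothetical disk images, the same QEMU front end can run hardware-accelerated or fully emulated depending on the accelerator you pick:

```shell
# Same-architecture guest on an Apple Silicon host:
# hand the heavy lifting to Hypervisor.framework via the hvf accelerator.
qemu-system-aarch64 \
    -machine virt -accel hvf -cpu host \
    -m 4096 -drive file=arm64-disk.qcow2,if=virtio

# Foreign-architecture guest: no hardware assist possible,
# so fall back to QEMU's TCG emulator (much slower, but runs anywhere).
qemu-system-x86_64 \
    -machine q35 -accel tcg \
    -m 4096 -drive file=amd64-disk.qcow2,if=virtio
```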


UTM works great. It can emulate x86, AMD64, or run natively on Apple Silicon. It's essentially a GUI on top of QEMU.

https://mac.getutm.app/


I really like UTM's business model: identical free software if you want it, or 10 bucks if you want automatic updates and to fund UTM development.


I didn't realize UTM was open source, so I followed a few guides for setting up VMs directly with libvirt: https://n8henrie.com/2022/09/linux-vms-on-my-m1-mac/

Having since learned that they are open source I think I'll look into UTM again, but even as a novice I didn't find it too difficult to use libvirt.


The only problem seems to be that it won't allow you to run an earlier OS X that can still run 32-bit programs.


The comment you replied to was about UTM, which does allow running earlier versions of macOS/OS X on Apple silicon hosts:

Virtualizing OpenCore and x86 macOS on Apple Silicon (and even iOS!) https://khronokernel.github.io/apple/silicon/2021/01/17/QEMU...

Run Tiger, Leopard, or any Mac OS X PowerPC version on M1 https://tinyapps.org/docs/tiger-on-m1.html


Their website says differently: "Note that macOS VM support is limited to ARM based Macs running macOS Monterey or higher." https://mac.getutm.app/


> Their website says differently Note that macOS VM support is limited to ARM based Macs running macOS Monterey or higher.

While ARM virtual machines are limited to Monterey and higher on Apple silicon[0], emulation works just fine for x86, PowerPC, etc.

Here's a video I cobbled together of UTM running Tiger on an Apple silicon Big Sur host: https://tinyapps.org/screenshots/tiger-on-m1.mp4

and another of the same host running Windows XP: https://tinyapps.org/screenshots/20210522-utm-xp.mp4

[0] https://kb.parallels.com/125561 "To run a macOS Monterey VM on Mac computers with Apple M1 chips, Parallels Desktop 17 uses new technology introduced in macOS Monterey, that's why it is not possible to run earlier versions of macOS on a Mac with Apple M1 chips."


That means that the host has to be on Monterey or higher. The guest can be any macOS version.


No, the guest (if using Virtualization) must be Monterey or higher as well.


Those earlier macOS versions don't have the frameworks to run virtualisation with. There are some hacks (Hackintosh-ish) that allow you to run very old Intel releases on Intel hosts; those virtualise the CPU but emulate nearly everything else.


Another problem is that it's not programmatic (e.g., like Vagrant), so it's tiresome to spin up/down a bunch of VMs.


There are several tools to create VMs from the CLI or scripts, like QEMU, docker machine, and podman machine.
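For example, podman machine scripts the whole VM lifecycle from the shell; a sketch using podman's standard CLI:

```shell
# Stand up a Linux VM that hosts containers on macOS, use it, tear it down.
podman machine init --cpus 2 --memory 4096   # fetch an image and define the VM
podman machine start                         # boot it
podman run --rm alpine uname -m              # containers now run inside that VM
podman machine stop
podman machine rm --force                    # remove the VM again
```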


Wow, you can use Rosetta to run x86 Linux binaries inside the VM on ARM? That seems pretty awesome. https://developer.apple.com/documentation/virtualization/run...


I tried using this a while back, soon after Apple released the Ventura beta to developers.

I can say that the documentation (and examples) are quite nice and easy to follow; I managed to start a Linux VM with Rosetta support, and SSH into it, within 1-2 hours.

Geekbench numbers were quite close between running in the VM (via Rosetta) and natively; the VM was only a few percent slower, so it stands to reason that computation-heavy workloads would run great.

However, using development tools regularly resulted in segmentation faults or other crashes (e.g. when doing `npm install`). I didn't dig deeper to figure out the reasons for this.

Additionally, real-world (from a developer's point of view) workloads such as running web servers, building software, or running tests were quite slow compared to running in an ARM VM.


I found that the Rosetta/Linux support worked best to run x86_64 containerized workloads, and I never experienced crashes working this way. In my experience, this is far more reliable than using something like debian/ubuntu multiarch support to install both aarch64 and x86_64 into the base system.
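A sketch of that workflow, assuming a stock Ubuntu image: you pin the container's platform, and the guest kernel hands foreign binaries to whichever translator is registered (Rosetta or QEMU):

```shell
# On an ARM host, explicitly request the x86_64 variant of an image.
docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m
# prints: x86_64
```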

I ran x86_64 geekbench in a Linux/aarch64 VM using both Rosetta and QEMU binfmt_misc support and found that Rosetta outperformed QEMU by about 3x. The extra speed is welcome, but I do hope that all the dirty tricks Rosetta uses are eventually able to be used by both non-apple implementations of x86_64 emulation and on non-apple ARM CPUs.


Maybe I'm missing something, what are the actual commands for running a Linux x64 binary on macOS ARM?


You don't run the Linux x64 binary on macOS ARM. You run the Linux x64 binary on Linux ARM inside a VM on macOS ARM.


It seems overly complicated. Is there any benefit over using Docker?


I'm not sure what you think Docker on a Mac does. Docker is a Linux-specific piece of software, and the only way to use it within macOS is to run a Linux virtual machine, with Docker running inside the Linux VM.

If the containers you want to run with Docker are x86 software, that Linux VM either needs to be an x86 Linux distro running in qemu emulation of a full x86 machine, or an ARM Linux distro using the new (not yet released in stable macOS) Rosetta for Linux translation.


Well, kind of. Docker is a product, with official support for Linux containers on Mac (and Windows containers on Windows!). Docker for Mac comes with a Linux VM as a feature of the product; you don't need to install it yourself inside a VM (though that works, too).

It does sound like adding Rosetta binfmt_misc support would allow Docker for Mac to ship an ARM64 kernel/VM image instead of an amd64 one and benefit from some performance boost, but potentially at the risk of reliability/fidelity. The entire idea of Docker is that the kernel ABI is a (supposedly) stable interface, and even if your userspace changed around it, a Docker container would have its own userspace and wouldn't care. Running a different-architecture kernel and dynamically translating it necessarily means that there will be visible differences in the kernel ABI. Sure, you can translate those differences, but that gets you farther from the promise.


Sure, there is a VM in Docker as well. I'm sorry if it wasn't clear, I'm looking for the most straightforward way. With Docker it's a single command, it helps with documenting the steps for other coworkers. It feels slow though. I'm wondering if the new way with Rosetta is going to be better in performance.


The most straightforward way is to use a product. This article documents the APIs for those building those products.


> Docker is a Linux-specific piece of software, and the only way to use it within macOS is to run a Linux virtual machine, with Docker running inside the Linux VM.

That is entirely untrue: https://docs.docker.com/desktop/install/mac-install/

Hell, it’s even on Windows.


In all cases, the Docker daemon is running under Linux. The Mac and Windows versions are merely bundling up a Linux VM containing Docker with a frontend that's as transparent as possible, but still with Linux as a hard requirement.

Pretending that the Mac and Windows versions somehow aren't using Linux VMs behind the scenes is of no use to anyone. It's a convenience for users when they can get by with ignoring the VM layer, but a detriment when we see people start talking as though Docker for Mac is functionally different from a Linux VM running Docker, and start assuming that enhancements to running Linux VMs under macOS would be inapplicable to and incompatible with "Docker for Mac".


> In all cases, the Docker daemon is running under Linux. The Mac and Windows versions are merely bundling up a Linux VM containing Docker with a frontend that's as transparent as possible, but still with Linux as a hard requirement.

I really doubt that's the case if you run native Windows containers on Windows.


Thanks for pointing that out. I hadn't realized Microsoft had jumped on the Docker bandwagon to that extent; it's far enough from the topic at hand and from anything I'd ever use that I overlooked it.

So while there is in fact an exception to my previous generalization, there's still no cross-platform compatibility magic to Docker aside from that of virtual machines. If the container OS is different from the host OS (or a different version of the OS, for Windows containers), then using Docker is an instance of using VMs, not an alternative to VMs.


Docker could benefit from it by running 1000x faster if they were able to leverage Rosetta.


In my benchmarks running Geekbench via docker, Rosetta performed about 3x faster than QEMU's binfmt_misc translation. On certain microbenchmarks it was 10x-50x faster.


Docker is equally complicated under the hood.


Yes, but it's a single command to run, without the need to learn the internals, if the use case is just to run the binary.


This is developer documentation, it's for people who want the internals.

For example, docker-desktop on an ARM Mac can run x86_64 images by leveraging binfmt-qemu within Docker's VM. Changing that to use binfmt-rosetta should yield huge gains (as there's specific hardware support to enable Rosetta).

Yes, it's easier to use Docker. Hopefully this trickles down so that you get the benefits of this without changing your usage at all.


You just run them after you/your distro registers a handler for them. It uses binfmt behind the scenes.

https://en.wikipedia.org/wiki/Binfmt_misc

(To be clear, this is for running Linux x86 inside Linux arm64 VMs on an Apple Silicon host)
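A sketch of that registration inside the ARM64 guest, following the pattern Apple documents for Rosetta (the /media/rosetta mount point and the binary name at the end are assumptions of this example):

```shell
# Register Rosetta as the binfmt_misc handler for x86-64 ELF binaries.
# The magic/mask match the header of a 64-bit x86 ELF executable.
sudo /usr/sbin/update-binfmts --install rosetta /media/rosetta/rosetta \
    --magic "\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00" \
    --mask "\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
    --credentials yes --preserve no --fix-binary yes

# After that, x86-64 binaries run transparently:
./some-x86_64-program   # hypothetical binary name
```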


You can do this in UTM, it's a GUI that wraps QEMU.


UTM can use both native Apple Virtualisation and QEMU virtualisation+emulation. The QEMU one can't make use of the M1/M2 Intel memory translation acceleration the same way native virtualisation can, but it can make use of acceleration on ARM64 workloads.


Has anyone seen a guide to installing OpenBSD on M1 Mac using UTM?


No, but given that UTM is a graphical wrapper around QEMU would this be a worthy substitute? https://codeofconnor.com/running-an-arm64-openbsd-virtual-ma...


Thanks. I’ll give it a go.


Only on Ventura, which isn't out yet (and quite frankly, the pre-release version is still a huge mess)


Also, the main page https://developer.apple.com/documentation/virtualization shows support for running virtualized macOS.

This is pretty exciting, and appears to be a macOS answer to libvirt


This link is also interesting:

Running Intel Binaries in Linux VMs with Rosetta

https://developer.apple.com/documentation/virtualization/run...


I can’t help but feel UTM pushed Apple to deliver this polished of a solution? Pulling this 100% out of thin air though.


They’ve been providing these APIs for a while for existing VM solutions in order to get rid of the kernel extensions previously needed. They keep evolving to support that use case.

I’ve long thought it would be interesting to release Linux native apps in this sort of wrapper to the App Store.


I doubt you could. You need special entitlements to sign a binary using these APIs; I don't think a wrapper would be enough to grant you that entitlement.


UTM implements VF, and QEMU implements HVF as an accelerator. I don't know why there's so much circling the wagons about them ITT; these are complementary technologies. VF and HVF are not competition or a replacement.


Isn't UTM a frontend for these APIs?


No. It uses qemu.


Pretty sure I downloaded it and it let me set up a machine with or without QEMU. Asked if I want virtualization or emulation.


The one I set up uses virtualization and definitely still uses qemu.

If you're seeing it use the hypervisor framework, it's possible that's changed since I set mine up and they just haven't seen fit to advertise that change on the web site, which still claims that it's "based on qemu."

(I'm using UTM 3.2.4... just because that's what I used to migrate my data from my old linux laptop to my new mac, and I haven't needed to change it.)


QEMU can do both virtualization (through Hypervisor.framework) and emulation. When you select "Virtualization" in UTM you have a checkbox "Use Apple Virtualization" which makes UTM use Virtualization.framework instead of QEMU. For "Emulation" it always uses QEMU.


Yes, these and QEMU


With Apple, it is all about exerting control and crippling software that doesn't align with their goals. Don't be surprised if Apple suddenly forces developers to only use their virtualisation API on macOS, just like they did for application firewalls (which are no longer allowed to have their own custom kernel extensions for "security" and "stability"). They are slowly squeezing macOS to make it more and more like iOS.


>are no longer allowed to have their own custom kernel extensions for "security" and "stability"

Why the "scare quotes"? Sounds totally justifiable. Third party kernel extensions have always had issues with both security and stability.


It's not "scare quotes"; it's to convey sarcasm and justified scepticism. Sure, poorly coded third-party kernel extensions can have security and/or stability issues. That doesn't mean that Apple is the only one who knows how to write bug-free or secure system software. In fact, in the past most of the popular application firewalls (and VM software) on macOS came with their own kernel extensions, and millions of users have used such applications without any issues. Now that Apple is forcing macOS developers to use their APIs, these software developers are at the mercy of Apple, which is exactly what Apple wants: they will now be forced to negotiate with Apple for any changes or new features they want. Apple can now cripple a developer's ability to provide the kind of system software they want to offer to users. The end result is that Mac users and developers both lose.


This isn’t the way Apple wanted it to work. In a perfect world vendors would write good kernel extensions that were stable and wouldn’t crash a system.

Instead, in the real world we got vendors that wrote horrible and unstable code that was required by IT departments. For example, instead of implementing a firewall using Apple technologies, in a stable manner, we got an “enterprise” firewall kext that would crash whenever a USB network card was plugged in. And I mean crash the computer. My USB-C dock was absolutely worthless because of this exact issue.

Vendors forced this on Apple. IT departments that were primarily Windows shops would just blame Apple instead of the vendor. Because their software worked for Windows, so why was it only a problem for Apple? I know the IT department at my $WORK had this mentality.

Now at least I have a stable system and I don’t miss not being able to add my own kexts.


There are also examples of popular applications with kernel extensions that have been used by thousands or millions of users without any issues for years now, on various macOS versions. Why should such developers be punished with an inferior alternative, especially when it also serves Apple's interest by being anti-competitive too!?


> That doesn't mean that Apple is the only one who knows how to write such bug-free or secure system software. In fact, in the past most of the popular application firewalls (and VM softwares) on macOS came with their own kernel extensions and have worked fine without issues, and millions of users have used such applications without any issues.

Apple definitely cannot write bug-free code, but your claim that third-party software worked fine without issues is factually incorrect. Third party kernel extensions have historically been a major source of instability and crashes.


> but your claim that third-party software worked fine without issues is factually incorrect.

That's a misunderstanding of what I was trying to convey. To clarify - I was specifically talking about existing application firewalls and VM applications for macOS that use / used kernel extensions and already have a large userbase on macOS. Their popularity, and relative stability of the kernel extensions these applications used, is proof that Apple's black and white approach of "all non-Apple kernel extensions are bad" is rubbish. Especially when we consider how such a move is also anti-competitive and anti-consumer, and thus helps Apple's business.

Forcing developers to make an expensive migration from their own tried-and-tested and stable codebase, over which they have full control, to Apple's OS APIs is a developer-hostile move. Even more so when you consider that a developer will now have to rely on Apple to fix bugs or add new features, which Apple may have no inclination to do if it feels it isn't aligned with their own business interest (thus anti-competitive).

I personally have been using some of these applications that install custom kernel extensions, and my macOS (3 versions of macOS over a period of a few years) has never crashed or become unstable because of them, so far. If an application that used a kernel extension made my system unstable or crashed it, I obviously wouldn't use it and would uninstall it. But completely taking away my choice as a user to customise my OS with kernel extensions, and reducing my choices (as a customer) by taking away the ability of developers to offer a non-Apple custom software solution that is better, is also obviously anti-consumer.


> Their popularity, and relative stability of the kernel extensions these applications used, is proof that Apple's black and white approach of "all non-Apple kernel extensions are bad" is rubbish.

That's not what they are saying, though. They are saying: when we have an official userspace API, we will remove access to the old kernelspace API.

That's not to say the kext is bad or evil. It is to say you won't be able to accomplish it in a kext using public APIs anymore.

Were there badly written kexts before? Were there kexts with bad interactions? Were there kernel APIs and whole subsystems kept in for years because some popular enterprise product refused to update?

Absolutely. I'll refrain from mentioning the one I had in mind which hit all three checkboxes.


>It's not "scare quotes" - it's to convey sarcasm and justified scepticism. Sure, poorly coded third party kernel extensions can have security and / or stability issue. That doesn't mean that Apple is the only one who knows how to write such bug-free or secure system software.

They don't have to be "the only one who knows how to write such bug-free or secure system software".

They just have to disallow a class of potentially buggy and insecure software running with kernel-level access, and that's enough to keep tons of additional attack vectors off the OS.

I'd rather trust just Apple on kernel-space (which I have to anyway, as they make the kernel), than Apple + any random third party software that wants to run as a kernel extension.


> I'd rather trust just Apple on kernel-space (which I have to anyway, as they make the kernel), than Apple + any random third party software that wants to run as a kernel extension.

Which is all OK till you consider that experienced developers who make good system software can offer better software than Apple, which often has no incentive to support software that isn't aligned with its business interest. Application firewalls are an example, because they can also prevent Apple software and services from data-mining a user's personal data. (You may trust the Apple codebase on which all application firewalls on macOS are forced to run, but what if I don't and want an alternative? And if you say, "Then don't use macOS", note that's a very anti-consumer view and dismissive of everyone's consumer rights.) Virtualisation is another example where non-Apple options can be better than Apple's, as Apple would prefer that you do everything on their OS rather than use alternate ones. Or when you don't want to use Apple software because of the terms you have to accept to use it.


>and if you say, "Then don't use macOS", note that's a very anti-consumer view

Not much more "anti-consumer" than "if you prefer formal attire, then just don't buy Levi's", as opposed to insisting they make black suits and bow ties.

Though I understand the convenience - I'd like to have the option myself in some cases.

What I don't like is third-party companies forcing you to install a kext if you want to use their product, for non-essential reasons (that could be handled in userland). At least this forces them to work with userland APIs (looking at the "haxie" peddlers out there).


> Don't be surprised if Apple suddenly forces developers to only use their virtualisation API on macOS

As far as I can tell, nobody has made a third party virtualization solution for Apple silicon nor is anyone really looking to do so. I don't think this conclusion is particularly surprising.


What about UTM (https://mac.getutm.app/), based on QEMU? VM applications (Parallels, VMWare, VirtualBox, etc.) have long been available on macOS, so it's not as if nobody suddenly wants them just because their macOS is running on an ARM SoC. There can be many reasons why they haven't yet migrated to Apple Silicon Macs: the smaller userbase, the cost of migrating to a new platform, lack of literature or support from Apple, Apple not allowing them to offer their virtualising application using their own custom kernel extensions, or Apple asking them to wait and use their virtualisation API under development.

To be clear, I don't believe there is anything wrong with Apple offering built-in OS API solution for applications like firewall, virtualisation, graphic engines etc. The problem is when they say you can only use those and don't allow competition.

When a particular type of software on macOS is forced to use just one particular API, it stunts all such software using the API, as they can't really offer any major distinction and can only offer the limited features the API provides. Worse still is when such practices are actually meant to restrict competition to protect their business interest.

For example, it is clear that Apple is pivoting more and more into commercialising user privacy (monetised through its services) and advertising (by data-mining personal data through those same services). In such a case, existing software that offers privacy protections (like application firewalls, and even alternate operating systems on Macs) is competing system software that impedes this goal. Obviously Apple cannot directly ban this software without negative user backlash. But it can ensure that it is crippled by forcing developers to use limited APIs and/or by limiting the operating environment it runs on. (The frog is slowly boiling - https://en.wikipedia.org/wiki/Boiling_frog - and we developers and users are slowly losing more and more control over our Apple computers and devices.)


"Virtualization" in this context is not the framework but the ability to write a custom hypervisor rather than use Apple's, as some vendors did on Intel Macs. I agree with your points about this limiting what you can do, but the replacement API is actually fairly decent in this case; there are some things that could be improved, but overall it's not that bad.


I do believe most of the current VMM apps (VirtualBox, Parallels, VMWare) that run on Intel Macs offer their own custom hypervisor solution, mostly tapping into Intel's virtualization extensions ( https://www.intel.com/content/www/us/en/virtualization/virtu... ) through kernel extensions and/or the macOS Hypervisor framework (which I think avoids kernel extensions). This is possible because Intel is more "open" and readily shares its hardware literature / SDKs with developers, and that is why we have had many custom, decent and performant hypervisor solutions on Intel Macs. But Apple has been loath to release similar literature for their ARM SoCs because they don't want independent competitive alternatives to their system software (which is OK up to a point to protect their business, but not if it crosses into illegal anti-competitive behaviour).

Apple has great developers and they can surely build a decent hypervisor API solution. But we'll never know if it is the best if no custom alternates are allowed (either due to lack of hardware literature or due to Apple policies restricting such custom alternatives).


> This is possible because Intel is more "open" and readily shares their hardware literature / SDKs with developers and that is why we have had many custom, decent and performant hypervisor solution on Intel Macs. But Apple has been loathe to release similar literature for their ARM SoCs because they don't want independent competitive alternatives to their system softwares.

Are you looking for https://developer.arm.com/documentation/100942/0100/AArch64-... ?

Access to virtualization features is provided through the Hypervisor framework for both Intel and Apple Silicon products. If you have an existing Intel-architecture product you might be upset that Apple is going to disable your ability to use virtualization directly via a kext.

But, I have a hard time imagining someone really wanting to use their own hand crafted hypervisor rather than the one Apple built on Apple Silicon for running on Apple operating systems - especially since Apple has committed to providing additional hard things like paravirtualized hardware.


There are reasons one might want to do this; for example Apple does not provide access to all hardware performance counters in the guests they set up. This can be important for certain usecases. That said, beyond this there aren't really any good reasons why you should eschew Apple's solution, so once (if?) this is fixed the existing solutions should be pretty solid.


Fails to build for me on macOS Monterey. It seems like you have to wait until macOS Ventura.


I so, so want to be able to spin up linux VMs on iOS devices.


The opposite would be much more interesting and finally make cross-platform CI bearable without relying on closed source third-parties.


I guess that’d be cool. I just want to take advantage of this phone that I carry around in my pocket that’s more powerful than my laptop.


I'm still waiting for a Wine-equivalent where iOS takes the place of Windows.


Not iOS yet, but there is WINE-like middleware for macOS: https://www.darlinghq.org/


You'd still need a Mac with xcode for the build, no?


Not really, I'm already doing Rust builds with macOS SDK on Linux for M1/M2 which works... Fine I guess, but it's hacky as hell.

It's testing that's the problem really, where you really need the hardware.



The most recent hardware actually does support ARM hypervisor extensions, so you can actually run raw Linux on iOS hosts :)

https://worthdoingbadly.com/hv/


There is iSH -- it's just a single VM, but it's pretty sweet nonetheless.


iSH is an emulator coupled with a Wine-like translation layer, rather than a VM.


I'm interested to see if this includes an example of harnessing Rosetta-based x86 translation from within the Linux VM. Here's the full documentation https://developer.apple.com/documentation/virtualization/run...
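For context, the guest-side setup Apple describes is roughly the following. This is a hedged sketch: the virtio-fs tag `rosetta` and the mount point are assumptions that depend on how the host-side configuration was written, and the binfmt_misc magic/mask string is deliberately elided rather than guessed.

```shell
# Inside the Linux guest, assuming the host attached a Rosetta directory
# share over virtio-fs with the tag "rosetta" (tag name is an assumption):
sudo mkdir -p /media/rosetta
sudo mount -t virtiofs rosetta /media/rosetta

# Then register the translator with binfmt_misc so x86-64 ELF binaries are
# transparently handed to Rosetta. The exact magic/mask string is given in
# Apple's documentation and is elided here rather than reproduced from memory:
# echo ':rosetta:M::<magic>:<mask>:/media/rosetta/rosetta:F' \
#   | sudo tee /proc/sys/fs/binfmt_misc/register
```

After registration, running an x86_64 binary in the arm64 guest should invoke the translator automatically.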


It's 2022 and we are still running installers in VMs? Give me a pre-installed VM image with cloud-init; it'll be fine, thank you.
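For reference, a minimal sketch of that pre-installed route, assuming a Linux host or a machine with `cloud-localds` (from the cloud-image-utils package) available, an Ubuntu arm64 cloud image, and Homebrew's QEMU; file names and the firmware path are placeholders:

```shell
# Build a NoCloud seed image from a cloud-init user-data file
cat > user-data <<'EOF'
#cloud-config
users:
  - name: dev
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...  # your public key here
EOF
cloud-localds seed.img user-data

# Boot the pre-installed cloud image directly; on an ARM Mac, -accel hvf
# uses the macOS Hypervisor framework instead of pure emulation
qemu-system-aarch64 -M virt -accel hvf -cpu host -m 2048 \
  -bios /opt/homebrew/share/qemu/edk2-aarch64-code.fd \
  -drive if=virtio,file=jammy-server-cloudimg-arm64.img \
  -drive if=virtio,format=raw,file=seed.img \
  -nographic
```

First boot picks up the seed, creates the user, and installs the key; no installer involved.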


What is the current state of vagrant on Mac Virtualization? Vagrant is the thing that I really miss on ARM.


Get a Parallels license and use it as the vagrant provider. That still seems to be the only real option. https://www.parallels.com/


With UTM, I can already run GUI Linux arm64 on an M1 mac - on macOS 12.x. Works fine at least for 2d desktop stuff.

I think UTM's arm64-macos guest support on arm64-macos12 hosts already use the macOS virtualization framework? Or is it hypervisor framework? Either way performance is very good.


UTM uses Virtualization for this.


I would prefer to run Linux GUI applications inside Docker containers, but that’s something.


It takes some work, but you can pass in an XQuartz display to a Docker container and have it render to that.


Doesn't Monterey still run X11 - so you could forward X11 over ssh from the host into Docker?


I don't think MacOS has shipped with an X11 server for a while (https://www.macrumors.com/2012/02/17/apple-removes-x11-in-os...).


Ah, apparently it's possible to install, though:

https://www.xquartz.org/


Yes, in case it wasn't clear, that's what I was referring to. You can install XQuartz, configure it a little bit, and pass the display into a Docker container.
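For the curious, the usual recipe looks roughly like this. It assumes XQuartz has "Allow connections from network clients" enabled in its preferences (and has been restarted afterwards); the image and app names are placeholders:

```shell
# Install XQuartz, then enable network clients in its preferences and restart it
brew install --cask xquartz

# Allow connections to the X server from localhost
xhost + 127.0.0.1

# Inside Docker Desktop, host.docker.internal resolves to the host, so the
# container's DISPLAY can point back at XQuartz on the Mac
docker run --rm -e DISPLAY=host.docker.internal:0 some-x11-image xeyes
```

Performance is fine for simple X11 apps; anything compositing-heavy will feel the indirection.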


Link?


The closest thing I’ve seen to a comprehensive write-up is this: https://medium.com/@mreichelt/how-to-show-x11-windows-within... (medium, unfortunately)

We’ve been using it experimentally to run an ephemeral instance of Firefox that we can programmatically inject DNS and CA certs into, for localhost HTTPS development. It’s not documented yet, but the config is here: https://github.com/drifting-in-space/plane/pull/201


This is how I do most of my work.

Mac and Windows frequently perform non-consensual changes to my workstation. On the other hand, GNU/Linux is challenging for me to get working 100% on bare metal.

This setup is reasonably fast for my work, which is mostly text editing.


Nice, there is official Apple documentation on running GUI Linux in VMs on a Mac. So does that mean it's going to be an Asahi Linux competitor?


As I understand it, Asahi can run on bare metal of an Mx machine, including custom hardware/firmware due to their reverse engineering efforts. Running aarch64 Linux within a VM only needs to work with the simplified virtual hardware exposed by the VM.


Are there any good, free desktop hypervisor apps for ARM macOS at this point? I always used VirtualBox on Intel but they don't support ARM guests on ARM hosts so I've been stuck using qemu, which is not so great performance-wise. I know I can use the APIs linked here but I would rather not build my own hypervisor client if at all possible haha


UTM. It's a GUI for both QEMU and the native macOS hypervisor framework
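If you'd rather drive QEMU directly from the command line, the performance complaint largely goes away once you use the Hypervisor framework accelerator (`-accel hvf`) for arm64-on-arm64 guests instead of pure emulation. A sketch, assuming Homebrew's QEMU and an existing arm64 disk image; the firmware path and image name are assumptions:

```shell
# arm64 guest on an arm64 Mac: hvf gives near-native CPU performance
qemu-system-aarch64 \
  -M virt -accel hvf -cpu host \
  -smp 4 -m 4096 \
  -bios /opt/homebrew/share/qemu/edk2-aarch64-code.fd \
  -device virtio-gpu-pci \
  -device qemu-xhci -device usb-kbd -device usb-tablet \
  -drive if=virtio,file=disk.qcow2 \
  -nic user,model=virtio-net-pci
```

UTM is essentially a friendly wrapper around configurations like this (or around the native Virtualization framework).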


Does it expose the GPU?


When you virtualise macOS it does expose the GPU, but only Metal is supported. However perhaps Mesa will be able to support OpenGL via paravirtualised Metal, for Linux and perhaps macOS and Windows.


Yes, through virtio-gpu.


Does this differ from previously posted solutions that needed to pre-allocate a chunk of RAM to the Linux VM?

This is indeed cool, but RAM on Apple Silicon machines is relatively limited (only 8 GB in some models); if I allocate more than 2 GB to a VM, the base desktop will struggle.


Would not be surprised if they use the open source GPU driver for running Linux: https://news.ycombinator.com/item?id=33019316

Or if this was a push to release the Linux support.


Not really - https://developer.apple.com/documentation/paravirtualizedgra... . They expose a display object which works with Metal primitives.

Supporting the native GPU would mean they would have to get guests to update for each new GPU. I'm sure Apple loves the idea that they could completely rearchitect their GPU for future Apple Silicon and it would break no preexisting macOS/iOS software.


I haven't touched OS X in years, but I do remember xquartz being pretty neat. I'm sure if you really wanted, you could forward the virtual machine's xserver.

If only you could use a real window manager in OS X....


So we can finally have docker for Mac that doesn't run egregiously slow?


Our solution was to run Linux native on all dev machines. Docker really is a Linux tool and it works much better under Linux. We were running JetBrains IDEs and doing mostly Go, Python and JS development, so the switch was pretty easy and painless.

The switch also dramatically reduced developer time spent solving Mac/Linux incompatibilities in tooling, and let us focus more on our product. Overall we accelerated development by about one day, per developer, per week. I was kind of surprised it was that much time, but looking back, there really are quite a few differences between the GNU userspace and MacOS.


> there really are quite a few differences between the GNU userspace and MacOS.

This is why GNU/XNU OS distros should be a thing. I think it would satisfy most people who want a fully functional "Linux" on Apple Silicon: entirely OSS, with the ability to achieve full UNIX-specification compliance in practice, if not as a registered certification.


GNU Coreutils - https://ports.macports.org/port/coreutils/ - partly helps with this on macOS, but yes, custom distros are much better solutions.


GNU coreutils are essential on macOS, and Apple should just include them by default. \o/


Um… GNU’s license is incompatible with Apple; that’s why Bash 3.2 is the current version for macOS.


Not incompatible, except with Apple's business model; Apple has been removing GNU software for more than a decade:

https://news.ycombinator.com/item?id=3559990


I should have been more clear: Apple has and continues to ship GPLv2 software but not GPLv3, since one interpretation of that license is that by including GPLv3 code in macOS, they’d be required to make the source code to at least the entire Darwin (the Unix layer underneath the GUI) operating system available, which is something they probably don’t want to do.

Which is why GNU’s core utilities don’t ship in macOS and why the outdated Bash 3.2 (GPLv2) still ships in the latest versions of macOS.

It’s also why ZSH, which uses a variation of the MIT license, is now the default shell when Bash was the default for many years.


> Darwin (the Unix layer underneath the GUI) operating system available, which is something they probably don’t want to do.

Darwin is open source, and Apple did make it available[1][2], but it takes some skill, experience, and understanding to put it all together and build it, and many parts have become closed source since Apple first released it. Also, there are basically no drivers for anything.

At least two projects derived from it, OpenDarwin and PureDarwin, made it far easier to install and boot. PureDarwin[3] had a couple of bootable releases, but I haven't heard much about it since their last release years ago; their site says they've been working on drivers and a modified XNU that works without Apple's closed-source portions.

IIRC, way back during the end of PPC, I was able to install and boot Darwin CLI on an x86 (probably 486 or Pentium something), but without any drivers, I couldn't do anything with it other than boot it and explore the filesystem. I couldn't even get it to network, so I couldn't install MacPorts, lost interest pretty quickly. It looked exactly like booting Mac OS X without Quartz, Apple's pretty command line.

But the PureDarwin release had X11, and by the screenshots, looked pretty neat. I intended to try PureDarwin's Xmas release in a vm back in the late 2000s, but I was jerked around a lot back then with living arrangements and never got back around to it. PureDarwin's later releases had a lot more functionality.

[1] https://opensource.apple.com/source/xnu/xnu-2050.18.24/

[2] https://opensource.apple.com/release/macos-10124.html

[3] http://www.puredarwin.org/


I did say at least Darwin… the rest of macOS would probably have to be open sourced if it included GPLv3-licensed code.


This is not true.


What hardware and distro+kernel version are you using? I am trying to do this on a 12th-gen Intel Dell XPS, and to be honest the hardware support isn’t great. I had to install an upstream 5.19 kernel (rather than the 5.15 that ships with Ubuntu 22.04). Still, at least 2-3 times per week the machine fails to wake from sleep and has to be hard rebooted. Meanwhile my M1 Pro 14” MBP routinely goes 3-4 weeks between reboots, and reboots only for OS updates. What am I missing?


Anything in the logs? It just works for me on xps (fedora since 28 till latest). Fwiw, my mac has issues with sleep, so there seems to be some luck involved either way.


My last Intel MBP had sleep issues (2019 i9), but this M1 Pro has been bulletproof.

I will look in the logs. I am also planning to “upgrade” to Ubuntu 22.10 when it comes out since it will ship with an Ubuntu-patched 5.19 kernel. Failing that, if I don’t find anything in the logs or don’t have time to find a root cause (more likely), I may have to give up and switch to Windows 11 (sigh).


I just switched from an XPS 15 to an LG Gram. In both cases, I thought I had problems, until I bit the bullet and set up Ubuntu (well, in my case, Kubuntu) to do secure boot. Then everything just worked, except for the fingerprint reader.


I swear I'm not trying to be facetious, not trying to stir the same old tired "hurrr year of the linux desktop" shit...

I have to ask: you accelerate development by 1 day/week, but do your devs lose any time with the accumulation of tiny annoyance such as sorting out bluetooth/audio/hibernation/battery drain?


> I have to ask: you accelerate development by 1 day/week, but do your devs lose any time with the accumulation of tiny annoyance such as sorting out bluetooth/audio/hibernation/battery drain?

No. All of the above have been fine - save hibernate. Standard kit is a multidevice logitech bluetooth keyboard and mouse... no issues. A couple of the devs on my team use bluetooth headsets, and that too is no problem. The LG Gram gets about 16-19 hours on a battery, so we've not had drain issues. Audio hasn't been a problem. We did have issues with hibernation when we switched, and solved it by disabling hibernate, and setting the machine to sleep when the lid shuts. I've left my laptop in the bag, sleeping all weekend and had 70% battery on Monday morning. Incidentally, the battery is really not critical... 65W USB3 chargers are cheap so we mostly run plugged in.

Your mileage may vary. Incidentally, LG is making a great machine right now. Huge 17" display, metal case, keyboard has a numeric keypad, great battery life, and it weighs 1 oz more than a 13" Macbook Pro. Oh, and lots of ports.


Yeah I could see maybe going full Linux if everybody were working on custom built towers where maximum compatibility for each component can be assured and sleep won't be a problem, but I wouldn't want to be stuck with supporting a fleet of laptops running Linux. My Tiger Lake ThinkPad which is a couple generations old (no longer cutting edge) and about as standardized/boring as it gets (Intel for most components, no fussy discrete GPU) runs Linux ok but has sleep issues that hamper its usefulness as a laptop.


Plenty of tiny (and big) annoyances on any platform. For me personally Linux has by far the least.


I get the impression that sleep doesn’t work properly on any current Intel laptop, regardless of OS.

I’d love to hear of a counter example. Use case is close it and stick it in my bag most nights, leave it plugged in and suspended others.

Acceptance criteria: < 5% battery drain per day suspended. Zero wakeups in bag, less than one failed resume from suspend per year (> 99.9% resume reliability). After resume, network / vpn reconnects without intervention in under 10 seconds, and no apps are janky.

Bonus: pressing the power button once when it is off must cause an led or screen to turn on within 1 second every single time.


My criteria was simple... if I put it to sleep, it must wake up.


The things you mentioned can be problematic, unless you buy hardware specifically designed for Linux - then they should work flawlessly (they do for me).


I'm seeing many more folks with Docker centric workflows take the same approach.

It's been mostly painless for me, minus the usual audio/Bluetooth snafu. Even those don't happen enough for me to be too concerned.


You can do this now. I use Multipass to get rapid linux VMs with access to my files.

For Kubernetes, I use Rancher Desktop (which can also do some docker magic)
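For reference, the basic Multipass flow is roughly the following (instance name and sizes are placeholders; note the memory flag is `--mem` in older Multipass releases and `--memory` in newer ones):

```shell
# Launch an Ubuntu VM (Multipass uses the platform's native hypervisor
# or QEMU under the hood on macOS)
multipass launch --name dev --cpus 4 --mem 8G --disk 40G

# Mount a host directory into the guest for file access
multipass mount ~/src dev:/home/ubuntu/src

# Open a shell inside the VM
multipass shell dev
```

`multipass delete dev && multipass purge` tears the instance down again when you're done.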


VirtioFS made it usable for me. Before that, I used Mutagen.


Lima is not too bad (but I'd use Linux if the support was there)


This looks absolutely fantastic. Does it run at native speed, though?


I wonder why they would publish this when Asahi is already quite usable, and Intel macs can run Linux natively too

Is it THAT much more convenient to do everything in a VM?


Going forward, Apple won't officially support dual boot or alternative operating systems. Unofficially, they'll try their best to leave the options open for third parties like Asahi, but it will never be something they endorse as an option to macOS on their devices.

For example, there will not be an ARM version of Boot Camp published by Apple. If you want to run Windows on an Apple Silicon Mac, it'll likely be running virtualized under macOS or in an emulator under macOS.

If someone wants to write a bunch of drivers and hack a boot loader to get windows running natively - Apple won't stop them, but thats a heck of a lot of work.


If you want to use Messages and Linux simultaneously, a VM is your only option, right?


Does this have to be this complex? Can't I just fire up VirtualBox as I do on Linux and Windows and go ahead?


You sure can. This is developer sample code, rather than something a typical user would use.


I think this code is for people who make VirtualBox or similar management software, not end users. It basically demonstrates how to use macOS's API to launch a virtual machine.


You can, but running Linux with a graphical desktop on VirtualBox under MacOS has become extremely sluggish in my experience. I don't know whether newer releases of MacOS or VirtualBox itself are to blame. It was much snappier 8 years ago even though I had a significantly slower laptop then.


>You can, but running Linux with a graphical desktop on VirtualBox under MacOS has become extremely sluggish in my experience.

Huh? It's actually faster than ever, and a near-native experience.

And the post is not about "how to run" a virtual machine as an end user or dev (for that, the way the parent describes is still the suggested way).

It's about how to implement it under the hood, if you're a developer who wants to build something like a program that manages virtual machines, or one that hosts and runs them.


I'm only referring to the VirtualBox experience asked about by qwerty456127. Running e.g. Debian with Gnome under MacOS became extremely sluggish for me. The screen re-renders slowly, like running a remote desktop over a slow network link. I did a lot of searching and setting-tweaking when I encountered the problem earlier this year. I never found a satisfactory resolution. I tried this fix:

https://mkyong.com/mac/virtualbox-running-slow-and-lag-on-ma...

as well as others suggested on the VirtualBox forums, but couldn't get it running smoothly. I didn't have any trouble with a similar setup several years ago.


Ah, sorry, glossed over the VirtualBox part - thought it was generally about running a VM on a Mac.

I used to use VirtualBox on an x86 Mac until 3-4 years ago; I don't know if it has changed - but then again, I used it for headless Linux only.


> It's about how to program running it under the hood if you're a developer who wants to develop something like a program that manages virtual machines

Has somebody already done that perhaps? Is there a reasonable alternative to VirtualBox/VMware using the optimal way to run a VM on a Mac? I don't mind paying if it's worth it.

I have zero experience of Mac development, but I am going to need to run Windows 7 (to use IE6 with old crypto to access an outdated state-run system) on an M2 MacBook next week. What's the best way to achieve that?


> Has somebody already done that perhaps?

Yes. And there have been several examples of small or independent developers cobbling together VM applications based on Apple’s frameworks in the last year or so. I don't know how many will actually survive, but there is quite a bit of developer interest.

> Is there a reasonable alternative to VirtualBox/WMWare using the optimal way to run a VM on a Mac? I don't mind paying if it's worth it.

I don’t know about VMWare, but Parallels does this (for money, I am not a fan of their subscription model). AFAIK the best free equivalent is UTM, which can use either Apple’s frameworks or QEMU: https://mac.getutm.app/ .

> I have zero experience of Mac development but I am going to need to run Windows 7 (to use IE6 with old crypto to access an outdated state-ran system) on an M2 MacBook next week. What's the best way to achieve that?

I would try UTM, personally.


This documentation is more for the people who make VirtualBox, rather than end users. All this is stuff that VirtualBox etc. already do on Intel; you just don’t see it because you’re the end user.


github.com/cirruslabs/tart is what I use, because it is a command-line tool. also, death to oracle.


Not on Apple Silicon arm64 macs


Anyone think this might work with openbsd?


There is no Virtualization template for BSD, but you can run it under the Hypervisor framework.


are darwin-native containers on Apple's road map at all?

previously: https://news.ycombinator.com/item?id=28430196


Apple unfortunately does not talk about future product plans. We won't know until hypothetically there is something real.

Given that they focus on units of apps rather than of system containers, I would _suspect_ it would be low priority to support a libcontainer API.

Are you actually wanting a Darwin BSD container layer, or just a more efficient way to run Linux containers?


Now all we need is something like this for the iPad.


What's so different compared to VMware on Mac?


This is the mechanism something like VMware would use to support a Linux guest, not something comparable with the full application experience of VMware.


The year of desktop Linux has arrived, where all major desktop platforms run Linux distributions inside virtual machines.


The only problem: becoming root doesn't mean you now control the device (you paid for).


We've sort of been at this point for a long time, though, the abstraction bar is just being shifted up.

Previously, the bar was at the firmware level, where you will never fully control the black boxes governing the security chips, DRM implementations and real time OSes running on the microcontrollers inside the CPU.

Now the OS (especially on Apple devices) is almost becoming like the firmware layer itself, so deeply integrated and locked to the machine, but with the freedom to run other OSes/environments on top.

At least you can (usually) still unlock the OS level...


I was sort of OK with the firmware being inaccessible when it couldn't talk to the internet. But times changed.


I feel like I pretty much have full control over my Mac laptop. Why would you expect to fully control a machine from inside a VM anyway? Sounds like a massive security risk too.


Security is entirely orthogonal here. It's like saying you're ok with not being able to enter your own house because of security.


If I lose the key, sure, getting inside the house will be more difficult, isn’t that the entire point? Still, I don’t understand the relevance of the analogy. I have access to the entire system and can pretty much tweak things as I want. I also happen to prefer the default macOS security model with the sealed system volume. Where exactly is control taken away from me?


> I feel like I pretty much have full control over my Mac laptop.

https://news.ycombinator.com/item?id=25074959


Oh please. You are quoting an issue that happened two years ago and Apple has already acknowledged that their algorithms were crappy and promised to reengineer this stuff. This is like claiming that Linux does not allow you to install software because a repository happens to be down.


So where is the redesign? This is like saying that a repository is down for two years, but it's fine, since they promised to fix that. Also, it's much worse, because Apple knows every app you run, including, e.g., Tor Browser. Do you think it's good for your privacy? Can you yourself fix that on "your" computer, which you "control"?


Apple has since implemented encryption for the OCSP protocol they were using (addressing privacy concerns) and most likely improved the infrastructure to avoid similar outages. They also promised that users could opt out of certificate revocation checking, but so far they have not delivered, this much is true.

I am not at all concerned about the privacy of Apple occasionally verifying the developer certificates I use on my computer. I think they have much better things to do than log this information and try to link it back to me. I mean, if they want to know which software I run they have many more — and much better — options to collect my private data than using OCSP. In the end, this is about trust more than about anything else. A Linux software repository could also be collecting data about which packages you download. Second, OCSP is a standard protocol for certificate checking on the web. I mean, Firefox is using it. So is Firefox bad for privacy now? Besides, I think it's entirely silly to be concerned about OCSP privacy when we have things like unencrypted DNS lookup still being the norm.

Look, I fully understand that some people dislike the idea of their OS periodically checking certificates on their computer. Fortunately, there are plenty of systems on the market that can cater to different needs and expectations. You decide what is a concern for you and what is not. If you are privacy-oriented to a fault, you probably want to use some secure version of Linux with IP anonymisers, manually installed software and of course everything strictly audited. Most people don't care about this stuff.


(Which they failed to follow up on.)


You can install Asahi Linux on an Apple Silicon Mac. What control do you feel you lack?


Basically just like container based distributions and cloud infrastructure. :)


I've been running Windows and macOS virtualized on my Arch Linux desktop for years.


Compiling curl today on Linux, I see Apple has its own SSL/TLS backend:

    curl-7.81.0$ ./configure --help | grep -i apple
    --with-secure-transport enable Apple OS native SSL/TLS
MSFT is adding new signing keys to build and boot Ubuntu. Perhaps Apple wants to play too, and add their own signing keys to build and boot Linux, OpenSSL and libssh.

Let no crucial technology fall into the hands of the un-corporate?


SecureTransport is a system library on Apple operating systems that implements TLS. It doesn’t exist on Linux. The configure help output is showing it on Linux because it doesn’t filter the list of options based on the OS you’re running on. (Nor should it, since you might hypothetically want to cross-compile a binary for a different operating system than the one you’re running on.)


The Microsoft equivalent of this is Schannel, and none of this has to do with signing keys or booting.


The fact that Apple is working on virtualisation shows that there is user demand for running alternative OSes on Mac desktops. And yet, instead of opening up their hardware just a bit (by making some hardware literature on their ARM SoCs available to system developers), they'd rather offer a more convoluted way of running other OSes, only on their terms and under their control. It's all about having control over your personal data, folks - with Apple's ARM chips, Apple can now ensure that the "best" way to run alternative OSes on Apple Silicon Mac hardware is only on top of macOS with virtualisation. This way, even if you use another OS (through virtualisation), macOS can still continue to mine your personal data.


I believe the bulk of the demand is coming primarily from a need to run Linux containers (e.g. docker images) than out of a desire to boot Linux directly.

While there is demand for the latter (as demonstrated by the herculean efforts going into Asahi Linux), most people buying Macs are probably there at least partially for macOS and the ecosystem it fits into, as well as for the ability to run commercial software packages without a compatibility layer like WINE. All things considered it's probably generally easier to make headless Linux containers run acceptably on non-Linux hosts than it is to get complex UI software running acceptably under WINE or a Windows VM.


I would say people don't care too much and take the path of least resistance. If it were an out-of-the-box option, they would definitely love a system with good hardware and a configurable OS.

When I spent some time and installed Linux on my MacBook Pro 2015 some people came asking for instructions (and were mostly deterred by the compiling needed to build a driver for the webcam).

MacBooks are unrivalled for their hardware (pretty much any other laptop sucks); the software is meh and keeps getting worse.


Apple isn’t “starting to work on virtualization”; it’s been around for several years.

You can also run FreeBSD: https://dan.langille.org/2018/10/02/running-freebsd-on-osx-u...


There is already much user demand & vendor support for running virtual machines (including for Windows) on Apple hardware, through Parallels, VMWare and Qemu.

Like there are options for containers, it is great to have another option for virtual machines.


Options are good. But given the way Apple operates to exert control and cripples software that doesn't align with its goals, don't be surprised if Apple suddenly forces developers to use only its virtualisation API on macOS, just like it did for application firewalls (which are no longer allowed to ship their own custom kernel extensions, for "security" and "stability"). They are slowly squeezing macOS to make it more and more like iOS.


They had the opportunity to do this when introducing the M-series Macs, because under the hood those work very similarly to iPhones and iPads. They could've done a direct copy-paste from the iDevice boot process and called it a day, but they didn't… they went out of their way to develop support for booting third-party operating systems, complete with a path for painless long term support with the flexibility of allowing the OS to choose which version of firmware to run on the hardware (so Apple can deploy compatibility breaking firmware updates for use with macOS without stepping on the toes of e.g. Asahi Linux).

In macOS itself the main goal is to get third parties out of the kernel to the greatest extent possible, which makes perfect sense. Third parties, ideally, should be operating solely in userland, because otherwise you get pointlessly insecure nonsense like cloud file storage apps installing kernel extensions (like Dropbox used to on macOS).


> They had the opportunity to do this when introducing the M-series Macs

No, they didn't, because a Mac that is fully locked down like the iDevices wouldn't have been popular and would have meant a lot of bad publicity for the M1 Macs. Apple Silicon M1/M2 Macs can only run crippled versions of other OSes, and macOS is slowly being converted to be more and more like iOS. It's the boiling frog strategy - https://en.wikipedia.org/wiki/Boiling_frog - to ensure that they don't scare away their users. With soldered hardware, no viable alternate OSes, and developer options being taken away from macOS, all Apple Silicon M1/M2 Macs are now just a few steps away from becoming like the iDevices.


What do you mean by “crippled”? As far as I can tell the only limiting factor on the capabilities of other OSes on Apple Silicon is the state of hardware support, which is marching forward at a brisk pace and it doesn’t look like there’s anything stopping third parties from fully leveraging the capabilities of that hardware.


> No, they didn't because a Mac computer that is fully locked like the iDevices wouldn't have been popular and would have meant a lot of bad publicity for the M1 mac desktops.

Approximately nobody would have passed on a Mac because it could not boot Windows or Linux. This has been more or less the state of Macs since the release of the first M1 devices, and they sell rather well. There is demonstrably quite a lot of interest for these devices running macOS.

> Apple Silicon M1 / M2 macs can only run crippled versions of other OSes

How so? What do they do to cripple other OSes?

> It's the Boiling Frog strategy

Quoting Wikipedia for common phrases does not make you more credible. Again, people have been saying that for more than a decade. It is not inconceivable that it could happen in the future, but then anything could happen in the future. And in the meantime you still sound like a broken clock.

> to ensure that they don't scare away their users.

The whole history of iOS demonstrates the opposite. As the parent comment said, they've had opportunities to actually go in that direction. Nobody was expecting third-party OS support.


> that are no longer allowed to have their own custom kernel extensions for "security" and "stability"

You keep implying that there are ulterior motives without a shred of justification. Why do you think that? What power do they get by doing this?

Also, it does not prevent things like Little Snitch or the various Objective-See tools from being developed. There are no fewer firewall apps than there used to be. There are no fewer virtualisation applications either.

What you don’t get is that they started from the position that kernel extensions were an attack vector and a factor of instability. Then, they started addressing each use case for kernel extensions by providing user-level facilities. You probably set the cursor at a different location on the security spectrum, but it does not mean that you are any more right than they are.

> They are slowly squeezing macOS to make it more and more like ios.

Some people have been making that tired claim for more than a decade now. You've had years to see how it works and where they are going, and it keeps not happening. Why do you think this is more credible now?


> And yet, instead of opening up their hardware just a bit (by making available some hardware literature on their ARM SoC) to system developers, they'd rather offer a more convoluted way of running other OSes only on their terms and control. It's all about having control over your personal data folks

I think the better argument is that it's about what Apple says it's about: security. For the same reasons, we don't see Windows source code, nor are internally discovered Linux kernel security bugs advertised until they're fixed.

"I personally consider security bugs to be just 'normal bugs'. I don't cover them up, but I also don't have any reason what-so-ever to think it's a good idea to track them and announce them as something special... one reason I refuse to bother with the whole security circus is that I think it glorifies—and thus encourages—the wrong behavior. It makes 'heroes' out of security people, as if the people who don't just fix normal bugs aren't as important. In fact, all the boring normal bugs are way more important, just because there's[sic] a lot more of them. I don't think some spectacular security hole should be glorified or cared about as being any more 'special' than a random spectacular crash due to bad locking." -Linus Torvalds, 2008

Instead of rolling over and vulnerably exposing their soft underbelly, armadillos fold in half, tuck in their legs and head, curl their tail beside the head, and pull in tight, so that only their leathery armor shell is exposed. If they did what you wanted them to do, their species wouldn't last very long.

