This is the first time I've read about this. How on earth is this not an April Fools' joke?
Apple allows you to run a maximum of 2 virtual machines on Apple hardware.
Someone is going to get sued by Apple for going against this part of the license agreement and I must say I really like that thought because the result of that trial will give clarity to the rest of us.
We had to deal with a gray area in our project as well. DeepL does not allow use of their translator in critical infrastructure. Our client is in the transportation business and has been using DeepL for years to translate emails. They wanted to integrate DeepL into a customer portal that we are developing for them. When we told them that we would not be complicit in breaking DeepL's license agreement, they weren't too happy.
Apple has largely left the Hackintosh community alone. It seems like they're not interested in going after anything but those trying to commercialise it: https://en.wikipedia.org/wiki/Psystar_Corporation
Don't underestimate the Hackintosh community --- if they could create an SSSE3 emulator to allow installing macOS on CPUs that didn't even have the required instructions, and completely rebuild kernels too, I don't think getting macOS to run on whatever other non-Apple ARM system happens to become popular is out of the question.
It’s really quite fun. Maddening at times, but fun.
For me, back in the day, it was about power: I wanted a Trash Can Pro for development (I had a fully-specced retina MacBook Pro) but I didn’t want to pay Apple tax; I already had a gaming rig. So I dual booted.
At some point “Because it’s technically possible” is a huge motivator for some reason. I was doing Hackintosh circa 2014-2015 and even then, barely knowledgeable about software, I was able to hack through all sorts of esoteric kernel and kext issues to get my machine running. A decade later as a software engineer I still can’t explain half of the issues I encountered and overcame back then. Mostly due to incredibly intelligent hackers and other enthusiasts like myself sharing issues and how they resolved them on forums.
Can't speak for others, but at the beginning of all this (late 2000s?) I used to build Hackintoshes and briefly sort of made it my life's purpose to turn all my friends' Windows PCs into dual boots.
For myself, I needed a lot of parallel processing, and the Mac Pro back then only got you so far.
For friends, it was because I’m a Mac shill. This was my way of getting them to buy a Mac when the future of the platform was still a little uncertain.
I still have a “MacBook Nano”, which was an ASUS netbook that dual booted.
iOS development has to be done exclusively under macOS. Effectively, many iOS developers not only don't want to pay for Apple hardware, which is usually grossly overpriced for its capabilities, but also strongly want to use top-of-the-line non-Apple hardware, way more powerful than what Apple offers, for their daily macOS development workstations.
... And hacking entire operating systems under virtual machines is just fun :).
> do not want to pay for Apple hardware that is usually grossly overpriced for its capabilities
This is the myth that just won't die even though every fair comparison of Mac hardware to available PC hardware, carefully matching feature for feature, reveals prices within $100 of each other. What you don't want to pay for is the features that you're not interested in, such as perhaps Thunderbolt ports.
Apple still makes a 40-50% margin on every iPhone sold. The iPhone could be the cheapest smartphone around, and it would still be generally agreed that Apple has enormous profit margins.
Try a 119% markup. It costs Apple $501 to make an iPhone 14 Pro Max, and the company sells it at a base price of $1,099. But that is the iPhone, not the Mac. Apple's net profit margin as of September 30, 2022 is 25.31%, and that is due to their razor-thin margins selling Macintosh hardware.
I don't know about the cost effectiveness of Apple laptops, since I never use small-screen devices for daily work, but if you are willing to put together a desktop workstation with your own hands from components you hunt down individually at online sales and install Linux for free, you will not spend even half the cost of a Mac Pro, which Apple has not even upgraded since 2019. The 2023 Mac Pro is expected to cost from $6K up to an insane $10K.
> if you are willing to put together a desktop workstation with your own hands from components you hunt down individually at online sales and install the free Linux, you will not spend even half of the cost of a Mac Pro
Show me this put together beige box, and I will show you a machine with missing features found in the Mac Pro. Then you will say, "I didn't want that stuff, I would never use it," but when added in, the prices will be within a close margin.
> when [all Apple-chosen hardware features] [are] added in, the prices will be within a close margin
I do see your point, but even if that's so, the problem of Apple hardware being inaccessible to the average computer buyer isn't "resolved" or "unavoidable"; it merely pivots from "Apple forcing high profit margins on hardware buyers" to "Apple forcing a set of unwanted, yet costly, hardware features on hardware buyers" (through the lack of appropriately varied hardware models and configuration options).
Which is too bad, because if Apple hardware was appropriately inexpensive while delivering the features that I do care about, I would probably consider switching to their hardware and macOS as my primary O/S. (... But then again, maybe not - I really like the latest Linux KDE/Plasma.)
The margin on Macintosh hardware is notorious for being slim. Where they seem to make their money is in the options for more RAM and larger storage. But Apple doesn't make much profit on the base configurations of any Mac, and until they started charging a premium for more RAM or storage, Apple didn't make much profit on any Mac configuration. Apple's history bears this out, as Apple only started growing wildly with the introduction of the iPod and iTunes Store, and again with the introduction of the iPhone and iPad.
To me, because those expensive extra features on a Mac are not optional, I still consider it way overpriced. The only reason many of those little extra boosts are so expensive is because they simply aren't broadly in demand.
And of course the physical characteristics - yes, that ASUS laptop has the same or a slightly better technical spec than the MacBook Pro that I want, but it's also made out of plastic and it's half-an-inch thick!
I think there is a different perspective: MacBooks are great development machines, pretty much everything builds without modification, and if you need a true Linux environment you can always spin one up with Docker. I'd bet that running macOS in VMs leads people to eventually run macOS on a shiny new MacBook.
If you had told me thirty years ago that one day I'd be developing on a Mac I would have laughed, but once I had one I realized developing under Windows was a pain: everything is unnecessarily difficult because Microsoft wants you to develop FOR Windows, and they want you giving money to all their partners, who forever have their hands out, and they make everything else hard.
It’s like a pain that never goes away - you can learn to ignore it but at some level it’s always there.
I hate to give Apple any ideas... but I've always wondered why macOS/iOS support falling back to software rendering, when all of their devices ship with a known GPU configuration. Without that fallback, getting the OS running on non-Apple hardware would take a lot more work.
yeah, "a Docker" is incomplete and incorrect. and annoying. It could have been a character limitation, I guess, but edit it down so it makes sense instead of just ending the sentence prematurely.
It is and isn't. Just like the state of an electron. To truly know if it is sarcasm or not, it will cost you one cat.
However, it's no less serious than the GGP's arms in the air upsettedness about calling the 'gram insta. oops, see, i called it two different shortened names back-to-back. uh-oh, somebody's going to be upset!! ;-)
If you don't pronounce INXS as INXS, you're doing it wrong.
Edit: the name I just gave up trying to understand is Musk's kid's name. talking about a name that a kid will grow up to resent their parents about.
like, a whole bunch: olling n he loor aughing y ss ff to spell it out. obviously, some letters are used more than once, so my math teacher will deduct points for not reducing the fraction.
Oh wow I didn’t know about that. That’s huge. I remember people some time ago buying dozens of Macs to create their own CI/CD with racks of half open MacBooks running 24/365.
The general consensus I've seen is that 1) projects/code that enable macOS to be emulated are legal, but 2) running macOS VMs on non-Apple hardware is a violation of the EULA you agree to when installing macOS.
So if it's on an Apple device (even one not running macOS as the host) it's fine; otherwise it's a violation of the EULA, and something a business probably doesn't want to get tangled in.
I recently wanted to build Python wheels for a whole bunch of Python versions and packages on macOS arm64. I ended up using OSX-KVM, and then using cibuildwheel to cross-build to arm64. Pretty dang easy.
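For anyone curious, a rough sketch of that workflow (the env var names `CIBW_ARCHS_MACOS` and `CIBW_BUILD` are real cibuildwheel settings; the Python version selection here is just an example, and this obviously needs a macOS environment with Xcode tooling to actually run):

```shell
# Install cibuildwheel, then build arm64 wheels for the project
# in the current directory
pip install cibuildwheel

# Target only arm64; CIBW_BUILD narrows which Python versions get wheels
export CIBW_ARCHS_MACOS="arm64"
export CIBW_BUILD="cp39-* cp310-* cp311-*"

cibuildwheel --platform macos --output-dir wheelhouse
```

Since this is a command fragment tied to a macOS toolchain rather than a self-contained script, check the cibuildwheel docs for the current option names before relying on it.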
I set up OSX-KVM (https://github.com/kholia/OSX-KVM) and it worked really well. Having Arch Linux is a huge plus for this (it's why the Docker image uses it), as qemu is super simple to set up and get running. I was surprised how easy it was, but given how much effort has gone into Hackintosh bootloaders over recent years, qemu users can just yoink one of those bootloaders and use it.
IIRC, when compiling any iOS apps, etc. you need a Mac machine. I remember in the past, when I was creating CI/CD pipelines we had to ensure that there was a Mac Mini or some other machine that we had access to in order to automate this.
Seems like this solution would make it much simpler, as we used Docker for everything else.
I've set up some tests that require OS X and usually set up remote VMs for that. This could be a nice alternative to run those using regular (and cheaper) linux instances.
I know you can do the same with KVM but nothing beats a one liner command.
This is a regular VM. Specifically, it's qemu-kvm. The difference is that this is running qemu-kvm inside a container (remember, containers are mostly just processes). The advantage this gives you is that your run command is much simpler than the alternative.
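To illustrate, the "one liner" looks roughly like this (based on my reading of the Docker-OSX README; the image name `sickcodes/docker-osx` and the `/dev/kvm` passthrough are the essential parts, but check the repo for the current flags):

```shell
# Boot macOS in a container; requires KVM support on the host (/dev/kvm)
docker run -it \
    --device /dev/kvm \
    -p 50922:10022 \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e "DISPLAY=${DISPLAY:-:0.0}" \
    sickcodes/docker-osx:latest

# Once the guest has booted, SSH into it via the forwarded port:
# ssh user@localhost -p 50922
```

Compare that to hand-writing the equivalent qemu invocation with the right machine type, OVMF firmware, and OpenCore bootloader arguments, and the appeal is obvious.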
Because you don't know how to configure KVM or don't want to bother, I guess. This project just wraps another project for setting up normal KVM guests running macOS.
It's hardware-accelerated paravirtualization. You can't just use that inside an arbitrary VM. Every layer (hardware, host OS, every guest OS but the last in the chain) has to support 'nested virtualization', and you have to enable/configure it.
On Linux, this project doesn't require nested virtualization. On any other operating system, it does. In that way, the host OS ends up mattering.
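For the Linux-host case, here's a quick sketch of checking whether nested virtualization is available and enabling it, in case you do want to run this inside another VM (Intel shown; substitute `kvm_amd` on AMD hosts — these are the standard kernel module parameters, but the file name under /etc/modprobe.d is arbitrary):

```shell
# Check whether nested virtualization is enabled ("Y" or "1" means yes)
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently, then reload the module to apply
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel
```

This only covers the host side; the hypervisor running the intermediate guest also has to expose the virtualization extensions to it.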
This is a pet peeve of mine: just "macOS" is infinitely more searchable than all the ways you can spell Mac OS X, macosx, osx, OS X, mac-os-x; I've seen them all at least once. You either end up with too many results, because "os" and "x" match too many things, or too few, because an exact match on "Mac OS X" excludes all the other ways people write it.
Anyone building, signing and notarizing osx software inside this? I've got two machines dedicated for that and am terrified if they fail I'll never remember how to set them up. This could be game changing.
To my knowledge, there are two ways to get accelerated graphics in a macOS VM: GPU passthrough, and paravirtualized GPU.
GPU passthrough from a Linux host to a Mac VM is already possible. Even with Docker in the sandwich. Then it’s a question of Docker-OSX being furnished for it.
I believe the GPU passthrough foundation is already there:
”[…] this PR introduces GPU passthrough support. This requires extensive configuration on the host”
Then there's “paravirtualized” graphics. I don't get the name; as far as I know, this is when the guest OS has a virtual graphics device and a driver for it (with acceleration), and the virtual graphics device is actually served and accelerated, like any other graphics-accelerated software, by a GPU on the host.
As mentioned elsewhere, paravirtualized graphics are available on mac-host-to-mac-guest, and to my knowledge only mac-inside-mac.
(But the “outside” mac can probably be a virtual machine as long as it has a GPU. Even if it’s a passthrough device. Should be technically possible at least and I would guess that it just works.)
Finally, un-accelerated graphics in a macOS VM are surprisingly fast these days. At least on my Linux machine in qemu-kvm, recent versions of kernel / qemu etc. It’s noticeably slower than having properly accelerated graphics but muuuuch faster than I was expecting from macOS VMs I’ve used before.
And headless access to virtualized macOS is really fast. Working on a mac VM through mosh is excellent.
Some people need to run macOS-only software, this potentially makes it a bit less painful than the alternatives for the same reason containerized versions of other finicky setups do.
In the README there's a blurb with:
> Run Mac OS X in Docker with near-native performance! X11 Forwarding! iMessage security research! iPhone USB working! macOS in a Docker container!
> Conduct Security Research on macOS using both Linux & Windows!
I can imagine this being useful for CI builds, for instance. On the other hand, this is probably against Apple’s license agreement, so businesses would want to stay away from it.
On the other hand, this has a lot of stars and has been up since June 2020, so Apple doesn't seem too eager to cease-and-desist these kinds of efforts. Maybe that's out of PR considerations, though.
Great question, but pretty much a completely different use case than the OP. The whole point of this project is to make it easy to run macOS on foreign host operating systems, presumably because the intended user does not own a Mac.
If I had to venture a guess as to why macOS doesn't support its own containers in any way, though, I'd say:
The core use case for containers is deploying apps to servers, and macOS isn't really used on the server. Apple themselves abandoned supporting macOS as a server operating system many years ago.
Sure, but Apple evidently doesn't view that use case as central to macOS' purpose, and the likely reason for that is that enterprises show little interest in running macOS as a general purpose server operating system. People are not running their web apps or game servers or storage networks or transcoders or scientific supercomputers or Kubernetes clusters or videogame lobbies on macOS servers.
This isn't new, and Apple has rolled with it; they gave up making server hardware more than 10 years ago. They finally completely EOL'd the software package they called 'macOS server' like a year or two ago.
Compare all the macOS changes from a 2 year period to the equivalent changelogs to the Linux kernel. It's very clear that macOS is neither widely used as a server operating system nor developed as one.
If macOS were widely deployed as a general-purpose server operating system, Apple would have implemented some kind of container support years ago.
This isn't the only server-centric area in which macOS lags compared to other operating systems. APFS' featureset is anemic for such a recent filesystem with a copy-on-write design.
The core use case of containers is central neither to existing macOS usage nor to Apple's vision for macOS. If there's a question here, it has to be why that is so, not the fact that it is.
Which is why we should vote with our wallets and not buy Apple's hardware if they keep neglecting software features that developers demand.
https://www.vice.com/en/article/akdmb8/open-source-app-lets-...