When will Apple silicon natively support OSes such as Linux? Apple is seemingly reluctant to release a detailed technical reference manual for the M-series SoCs, which makes running Linux natively on Apple silicon challenging.
Right. Same goes for macOS and all of its convenient software services. Apple might stand to sell more units with a friendlier stance toward Linux, but unless it sells more Apple One subscriptions or increases hardware margins on the Mac, I doubt Cook would consider it.
If you sit around expecting selflessness from Apple you will waste an enormous amount of time, trust me.
As I replied elsewhere here, I do not run any Apple services on my Mac hardware. I do on my iDevices, but that's a different topic. Again, I could be the edge case.
But if you're being pedantic: I meant Apple SaaS requiring monthly payments, or any other arrangement where I give Apple money beyond the purchase of their hardware.
If you're talking about background services that are part of macOS, then you're being intentionally obtuse about the point, and you know it.
All seven of them. I kid; I have a lot of sympathy for that position, but as a practical matter, running Linux VMs on an M4 works great, and you even get GPU acceleration.
That’s what’s weird to me too. It’s not like they would lose sales of macOS, since it is given away with the hardware. So if someone wants to buy Apple hardware to run Linux, it does not have a negative effect on AAPL.
I have Mac hardware and have spent $0 through the Mac App Store. I do not use iCloud on it either. I do on iDevices, though. I must be an edge case.
All of us on HN are basically edge cases. The main target market of Macs is super dependent on Apple service subscriptions.
Maybe that's why they ship with insultingly small SSDs by default, so that as people's photo libraries, Desktop and Documents folders fill up, Apple can "fix your problem" for you by selling you the iCloud/Apple One plan to offload most of the stuff so it lives only in iCloud.
Either they spend the $400 up front to go two notches up on the SSD upgrade, to match what a reasonable device would come with, or they spend that $400 at $10 a month over the computer's likely 40-month lifetime. Apple wins either way.
Of course this is the reason.
And this is why Apple has become so bad for tech enthusiasts: no matter how good the OS/software may be, you have to pay a tax that is way too big, even though you already have the competence that should let you bypass it.
It's like learning to grow vegetables in your garden, but then having to pay much more for the seeds because you actually know how to produce value with them.
The philosophy at Apple has changed from premium tools for professionals to luxury devices for normies, who are made to pay for their incompetence.
You also lose out on developers. The more macOS users there are, the more attractive the platform is to develop for. Supporting Linux would be a loss for the macOS ecosystem, and we all know what that leads to.
There are a large number of macOS users who are not app developers. There's a large base of creative users who couldn't code their way out of a wet paper bag, yet spend lots of money on Mac hardware.
This forum loses track of the world outside its echo chamber.
I’m among them, even if creative works aren’t my bread and butter (I’m a dev with a bit of an artistic bent).
That said, attracting creative users also adds value to the platform by creating demand for creative software for macOS, which keeps existing packages for macOS maintained and brings new ones on board every so often.
I'm a mix of both; however, my dev time does not go into macOS or iDevice apps. My dev work is still focused on creative/media workflows, and I still get photo/video work. I don't even use Xcode beyond running the CLI command that installs the tools needed to make the command line useful.
While I don't think Apple wants to change course from its services-oriented profit model, surely someone within Apple has run the calculations for a server-oriented M3/M4 device. They're not far behind server CPUs in terms of performance while running a lot cooler AND having accelerated amd64 support, which Ampere lacks.
Whatever the profit margin on a Mac Studio is these days, surely improving non-consumer options becomes profitable at some point if you start selling them by the thousands to data centers.
> So if someone wants to buy Apple hardware to run Linux, it does not have a negative effect on AAPL
It does. Support costs. How do you prove it's a hardware failure or software? What should they do? Say it "unofficially" supports Linux? People would still try to get support. Eventually they'd have to test it themselves etc.
Apple has already been in this spot. With the trash can Mac Pro, there was an issue with DaVinci Resolve under OS X at the time where the GPU was causing render errors. If you then rebooted into Windows with Boot Camp on the exact same hardware and opened the exact same Resolve project with the exact same footage, the render errors disappeared. Apple blamed Resolve. DaVinci blamed the GPU drivers. The GPU vendor blamed Apple.
I don't think Darwin has been directly distributed in bootable binary format for many years now. And, as far as I know, it has never been made available in that format for Apple silicon.
UML runs only on Linux, and only on x86, amd64 and powerpc. Which is a real shame; otherwise you could run a full Linux kernel on all these ARM Android devices.
UML and "as little overhead as possible" probably shouldn't appear in the same train of thought. I remember it from the very earliest Linux VPS providers, IIRC it only got semi-usable with some custom work (google "skas3 patch"), prior to which it depended on a single process calling ptrace() on all the usermode processes to implement the containerization. And there's another keyword that should never appear alongside low overhead in the same train of thought
The page-grained mappings UML does make for tons of overhead. AFAIR Linux even considered a specialized reverse-page-mapping structure just to accelerate those, but ultimately dropped it because of memory overhead and code complexity.
Realistically, the overhead is never going to be lower than hardware virtualization's unless one goes for an API proxy à la Wine and WSL1, but that's tons of work.
Your scenario makes me suspect that it is the Xilinx-flavored Yocto causing the problem. I think removing some unused Xilinx-specific layers/recipes could reduce the prologue and epilogue execution time.
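For example, a trimmed conf/bblayers.conf might look something like this (hypothetical paths and layer selection; what can safely go depends on what your image actually pulls in):

    # conf/bblayers.conf -- hypothetical trimmed layer set; paths depend on your checkout
    BBLAYERS ?= " \
      ${TOPDIR}/../sources/poky/meta \
      ${TOPDIR}/../sources/poky/meta-poky \
      ${TOPDIR}/../sources/meta-openembedded/meta-oe \
      ${TOPDIR}/../sources/meta-xilinx/meta-xilinx-core \
      "
    # Layers such as meta-xilinx-tools, meta-petalinux or meta-openamp can often be
    # dropped if nothing in the image depends on them; fewer layers means fewer
    # recipes for bitbake to parse on every invocation.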
Well, the Xilinx layers are definitely not the most lightweight ones, though they still shouldn't cause parsing/task init to take minutes. By any chance, are you using WSL? I have heard some complaints about storage I/O performance when it comes to this...
Nope, mine runs on a native Linux box with an Intel i7-1370P and 32 GB of RAM. Maybe Xilinx has some tuning that makes Yocto slow at parsing the recipe dependencies and the like.
I've tried asking the Xilinx community, and got only a reply saying that there is a database in Yocto which limits the scalability.
Out of curiosity, is BPF now capable of capturing all context-switch events, including ones triggered by CPU traps? (The ordinary scheduler-switch case is sketched below.)
Also, if the overhead is negligible, maybe the author can try to merge this into mainline, using a static key to make the incurred overhead switchable (see the static-key sketch below). Even with a static key, the degree of the accompanying interference with the cache and branch predictor might be an intriguing topic, though.
Edit: Perhaps an alternative approach would be to attach probes to the relevant (precise) PMU events. There's also this prototype of adding breakpoint/watchpoint support to eBPF [1]. But actually doing stuff within this context may get complicated very fast, so it would need to be severely limited, if feasible at all.
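For the ordinary scheduler-switch case, sched_switch is exposed as a regular tracepoint, so something like this minimal libbpf-style program already works (the map and function names are mine, and the userspace loader is omitted); traps are the part this does not cover:

    /* Minimal BPF-side sketch (libbpf style): count sched_switch events.
       Names are made up and the loader side is omitted. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
    } switch_count SEC(".maps");

    SEC("tracepoint/sched/sched_switch")
    int count_switches(void *ctx)
    {
        __u32 key = 0;
        __u64 *val = bpf_map_lookup_elem(&switch_count, &key);

        if (val)
            __sync_fetch_and_add(val, 1);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";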
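And for the static-key idea, the usual mainline pattern looks roughly like this (a sketch with made-up names, not an actual patch); when the key is disabled the hook costs only a patched-out branch, though the enabled path still has the cache/branch-predictor footprint mentioned above:

    /* Sketch of gating an instrumentation hook behind a static key; every name
       here (ctx_trace_enabled, do_trace_switch, ctx_trace_set) is made up. */
    #include <linux/jump_label.h>
    #include <linux/sched.h>
    #include <linux/types.h>

    static DEFINE_STATIC_KEY_FALSE(ctx_trace_enabled);

    static void do_trace_switch(struct task_struct *prev, struct task_struct *next)
    {
        /* the actual instrumentation would go here */
    }

    void maybe_trace_switch(struct task_struct *prev, struct task_struct *next)
    {
        if (static_branch_unlikely(&ctx_trace_enabled))
            do_trace_switch(prev, next);
    }

    /* toggled from e.g. a sysctl or debugfs knob */
    void ctx_trace_set(bool on)
    {
        if (on)
            static_branch_enable(&ctx_trace_enabled);
        else
            static_branch_disable(&ctx_trace_enabled);
    }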
In the near future, people will be able to control appliances purely with their own consciousness; the only prerequisite is a minimum consciousness level, which is reachable for most human beings. Lastly, we usually think of people who lived in the Stone Age as primitive. Are they?
Perhaps it's like my car. It doesn't know when it turns off the engine that I actually need it in a second, so shutting down at a light ends up being brief and inconvenient.