At the same time, not being part of the Apple ecosystem, should I be worried about the closed nature of this? I have been using Linux for over two decades now, and Intel seems to be falling behind.
(I do realize Linux runs on the M1. But it's mostly a hobby project, the GPU is not well supported, and the M1/M2 will never(?) be available with open hardware.)
I've not tried the webcam and microphone, which I guess I could test from Firefox.
Battery life is less than it will be once the drivers evolve further, because I think sleep mode handling is still imperfect.
The distro is Asahi Linux, which is based on Arch Linux ARM. All ARM binaries.
If you follow the Asahi Linux page, it updates super frequently, as drivers get tuned and so on.
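Keeping it current is just the normal Arch workflow, something like this (assuming the default Asahi setup):

  sudo pacman -Syu   # pulls the latest Asahi kernel and driver packages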
not the person you are responding to but i was looking into it today. webcam/mic/speakers don't work but bluetooth does. there are x86/x86_64-to-arm translation tools akin to rosetta 2, but they have a lot of warts and are not well supported yet. the most promising one in my opinion is called fex.
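for reference, using fex looks roughly like this once you've set up an x86-64 rootfs (command names are from memory, so treat this as a sketch):

  FEXBash                    # shell where x86/x86-64 binaries run under emulation
  FEXLoader ./some_x86_app   # or run a single x86-64 binary directly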
Not in the public release yet. An experimental Mesa driver is running (https://rosenzweig.io/blog/asahi-gpu-part-6.html), but the kernel driver is still a work in progress (though it's making quick progress!). The demo at the end of the article is a proof-of-concept with the M1 acting as an eGPU for another computer; not something usable for a desktop environment yet.
in my case i could use it as a daily driver, since i just need a fast browser and Linux with compilers etc. but i've been using macos as a daily driver despite loathing (since the dawn of time) its font rendering.
>despite loathing (since the dawn of time) its font rendering.
Could you expand on that a little please? I've always found the Mac's fonts & font rendering to be most pleasing so I'm interested to hear a different opinion - what annoys you about it?
i've got super sharp vision fortunately, so i see the half shading and such, and it strains my eyes which, otherwise, "expect" to bring edges into sharp contrast.
My eyes love the rendering engine on Win11, or whatever trickery they're using for fonts, and similarly on ArchLinux.
Oh I see, I thought it was the actual font rendering you disliked, I hadn't considered the smoothing being an issue! Between Apple's high DPI displays and my own eyesight I don't notice it (although I do remember when I was younger hating subpixel anti-aliasing when it was still in use, because of the rainbow around characters)
For anyone who's interested, in macOS you can disable this font smoothing with:
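(From memory; the exact defaults key has moved around between macOS versions, so treat these as a sketch. You usually need to log out and back in for it to take effect.)

  # on recent macOS releases:
  defaults -currentHost write -g AppleFontSmoothing -int 0
  # or the Mojave/Catalina-era key:
  defaults write -g CGFontRenderingFontSmoothingDisabled -bool YES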
It's great for you. But some of us are using Linux in industrial applications. You can't really put an Apple laptop inside e.g. an MRI machine. It may run highly specialized software, needs specific acceleration hardware, etc.
It's going to be a very sad day when consumer electronics win over industrial applications.
Apple hardware has never been about non-consumer, server or industrial applications, outside of some film and music studios using Mac Pros and the Xserve a long time ago.
And if you're making an MRI machine or other industrial equipment that consumes a huge amount of power, the fact that your attached computer uses 300W vs 600W doesn't really seem like much of a big deal.
Apple has a head start with their ARM machines, but I'm not really worried; the rest of the industry will catch up within a few years. You can only really pull off the new-architecture trick once or twice, and being a leader has a way of inspiring competitors.
Apple's software and OS are also horrible to use in server applications; you only do it if you need to, such as for iOS CI, device testing and the like. Otherwise you avoid it as much as you can.
What are you on about? A €5M MRI machine will have whatever computer its manufacturer will want to support. Which will probably be something like a Core 2 running Windows XP.
None of these machines have used Macs, ever. Why would anything Apple does affect this market?
I don’t think you need to worry about that, those are completely different use-cases and markets. ARM CPUs will be available and widespread in other applications soon enough, and Linux support is already strong in that regard.
No. This is not a good universal solution. What if the machine needs more processing power than one laptop can provide?
Do you want to put a rack of laptops inside the machine, waste several screens and keyboards? Log into every laptop with your AppleID before you can start your machine? It's such an inelegant solution.
Instead, the x86/Linux environment lets you put multiple mainboards in a machine, or you can choose a mainboard with more processors; it is a much more flexible solution in industrial settings.
It would be a gimmick given that real-time workloads can't be offloaded via some serial connection to consumer laptops. You'd still need hardware and software capable of driving and operating the machines embedded in the machines themselves.
No. You want the computer running the thing to be as simple, known, and predictable as possible. So that is necessarily going to be a computer provided by the manufacturer, and not whatever a random doctor feels like using. Consumer devices are completely irrelevant for that use case.
While MRIs don't use ionizing radiation like the Therac-25 did, I can think of a few bad outcomes from someone finding a 0-day on anything that can control the machine. And of course if it's read only it still has sensitive medical info we wouldn't want leaked.
You are probably right. But computers were not like that for the last 40 years. I wonder about alternative history without IBM PC Compatible. Maybe we just hit the performance wall and now the only way forward is system on chip. Anyway, better move on and start thinking about your computer as an appliance.
Power use tends to scale non-linearly past a point - disabling turbo modes would likely significantly reduce the peak power use, and an ~18% performance difference is a pretty big buffer to lose.
The 6850U also beats it rather comprehensively according to those same results, and that's only 18-25W.
Really, you'd need everything power normalized, and even the rest of the hardware and software used normalized to compare "just" the CPU, which is pretty much impossible due to Apple and their vertical integration - which is often a strength in tests like this.
The 6850U is comparable in power use and still has a big perf gap against the M2 in most tests. Though there are some tests where the M2 leads by a big gap too, so maybe it comes down to software in a lot of these. Still, it seems to me like Apple is not leading.
>Unfortunately for testing, as mentioned, right now there is no Linux driver exposing the M2 SoC power consumption under Linux. Hopefully this will be addressed in time but unfortunately meant not being able to deliver any accurate performance-per-Watt / power consumption benchmarks in this article. But when such support does come, it will likely show the M2 indeed delivering much better power efficiency than the Intel and AMD laptops tested. Even under demanding multi-threaded workloads, the M2 MacBook Air was not nearly as warm as the other laptops tested. It's a night and day difference of the M2 MacBook Air still being cool to warm compared to the likes of other notebooks like especially Dell XPS laptops that get outright hot under load. The power consumption metrics should also be more useful/relevant once Linux has working M1/M2 GPU support in place too.
I mean, you shove a fan on the M2 and it beats itself...
According to this it should be an average of 19.3 Watts with a peak of 31.85 Watts.
Apple also exceeds the stated TDP during peaks, but we don't have that information atm. And remember there's a 14% perf gap between the two.
My purpose isn't really to say AMD is definitely better, since Apple still probably takes the win on overall product (I think the MBA is thinner, and that's important to me). It's to show that x86 isn't behind in performance and that you're not making sacrifices in that department to maintain software compatibility with the x86 ecosystem.
That average is over the entire benchmarking suite, including single thread tests and when tests are loading from disk or otherwise not fully saturating the CPU. Some of those benchmarks in that power consumption number are GPU only!
> And remember there's a 14% perf gap between the two
Like I said, power is not equal at all.
> x86 isn't behind in performance and that you're not making sacrifices in that department to maintain software compatibility with the x86 ecosystem
Comparing the lowest end chip from one vendor to the highest end chip from another is not exactly a great look. Especially when the Arm chip is basically matching the x86 one while having only a few years of software optimization work.
I do think the M2 is more power efficient, but it seems close enough to me. The ThinkPad has very good battery life in real-use testing, 15 hours or so doing regular work. I just don't share the perspective that the fact it can scale up in power should be held against it. It's pretty typical to be plugged in when you're doing some super computationally expensive processing, while it's the casual email etc. that has to have great battery life.
> Comparing the lowest end chip from one vendor to the highest end chip from another is not exactly a great look.
Is it anyone else's fault that Apple only has one SKU? The M2 is a 20 billion transistor chip while Rembrandt is a 13 billion transistor chip; I'd argue that the M2 is the higher-end one. The laptops compared (MBA/ThinkPad) are the same price.
> Especially when the Arm chip is basically matching the x86 one while having only a few years of software optimization work.
So we agree it matches, lol? That's what I was arguing for. Nowhere did I say Apple sucks. I default to using Apple products and have been for almost all my life. I was just trying to make the case that x86 is good enough hardware-wise too.
One could argue that Ryzen's biggest pitfall is that it hasn't adopted a big.LITTLE configuration yet. Alder Lake keeps its thirsty TDPs while staying relatively respectful of your temps and battery life. It's not quite as granular as Apple's core clusters, but the work with Thread Director is a promising start. Seeing AMD push heterogeneous systems so far down the roadmap virtually guarantees that they won't get Apple-level power efficiency for a while.
On the bright side, AMD has carte blanche to design whatever they want. Not only can they one-up Intel by implementing core clusters, but they could also one-up Apple by adding power management per logical core, or some weird chiplet optimizations. The sky is the limit, really.
Alder Lake is much worse temperature-wise. Look at the new Dell XPS design. They literally had to remove the F keys to make room for an additional heatsink to get the newer Alder Lake CPUs to work in a reasonable way.
Those Dell XPS machines are no better than an Intel MacBook; they're designed by people who can't put function before form, and they consistently screw up their hardware design badly enough to be worth avoiding like the plague. I'm not the least bit surprised they didn't pick the right chip for the job; two years ago it was Dell sending out emails to XPS owners warning them not to leave it asleep in a bag for risk of permanent damage...
I've tried a few Alder Lake laptops now (and daily-drive a 12700K desktop), and I don't really have any complaints about the thermals. Gaming, music production, video editing, none of it seems to push the CPU past 40°C under extended load. It's a solid chip that stands toe-to-toe with its contemporaries, and I reckon it's going to get scarily good once Intel transitions it from 10nm++ to 5nm.
I agree, but that still doesn't invalidate my point. They had to significantly overhaul the thermal system for Alder Lake, which undercuts the claim that it uses less power than the prior Intel gens.
As the other reply mentioned, they are testing against the M2, and they are also testing the lower-powered AMD part, the 6850U, which does best the M2 in some tests.
Not sure why you came out so strong with such a false statement.
Me too. I really wish I could buy a Samsung Galaxy Book Go 360, which is ARM and has amazing battery life, and install Ubuntu on it, but I don't think there's a known way to do so.
I really want a competent, high-end ARM Ubuntu laptop to happen. The Pinebook Pro has shitty specs and looks like shit with the 90s-sized inset bezels and 1080p screen.
I just spent a lot of time looking around for a laptop that had good battery life to develop on (i.e. ssh).
I eventually went with the MacBook Pro with the M2 because *it actually is amazing*.
It lasts for like 3-5 days of all-day use on a single charge, in my typical vim/Firefox usage.
I debated going with a System76, Falcon Northwest TLX, etc. for more power and x86, so that Arch Linux would be more compatible, but most laptops with x86 processors only get ~1-2 hours with a dGPU, or maybe <10 hours with Windows as the OS (and that typically drops significantly with Linux).
It's unfortunate, but x86 is really awful in this area - so I went for ARM, and the best ARM-based computer I could find (aluminum chassis / great durability) is the M2-based MacBook Pro (slightly larger battery than the Air).
What's nice is it completely beat my expectations. I have a nice and fairly new desktop with an i7 on Arch. My desktop takes 12 minutes to compile DuckDB.
The M2? 6 minutes.
Color me impressed.
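(For anyone who wants to try the same comparison, a stock DuckDB release build is roughly this; exact flags may differ, and timings obviously depend on core count, RAM, and storage:

  git clone https://github.com/duckdb/duckdb.git
  cd duckdb
  make   # release build; ~12 min on my i7 desktop, ~6 min on the M2
)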
Just got it recently, and I'm looking forward to installing Asahi Linux on it tomorrow.
Along with Linus' recent push to the Linux kernel from an M2, I think it's likely that a very large portion of Linux users will be using Apple silicon soon.
Yeah this doesn't work for me. I develop with a lot of sensors and hardware and drivers are a pain in the ass.
I have a box of 30 cameras and exactly zero work on Mac.
Also, fuck Mac keyboards, I can't develop with them, and the constant quacking noises and spinning beachballs that I have to meditate to while I have absolutely NO idea what's causing the delay.
Even Alt+Tab doesn't work correctly, tiling shortcuts don't work consistently, and sending bytes over USB HID API to switch logitech devices isn't reliable either.
(I own zero Macs, all of my personal machines are Linux, I was given a Mac M1 for work and it's inefficient as hell, productivity-wise.)
Likewise, I'd be on that for sure. Right now I'm using older MacBook Airs running Ubuntu as my daily drivers, and a big Dell at the home office for other work.
Longer battery life and something like the Galaxy Book Go would definitely make me happy.
Apple isn't going to somehow make 64-bit ARM into something proprietary. Sure, they have their own special instructions for stuff like high-performance x86 emulation, but aarch64 on Apple is only going to mean more stuff is optimized for ARM, which is good not only for Linux but for other open source OSes like the BSDs.
It happens to be a standard ARM instruction, Apple pushes some bits into ACTLR_EL1 (Auxiliary Control Register, EL1, "Provides IMPLEMENTATION DEFINED configuration and control options for execution at EL1 and EL0") in the kernel on context switch. The DTKs used a proprietary register and touched it using msr, but again, no custom instructions.
Apple does in fact ship custom instructions on their silicon, but where those are used, how they work, and how ARM lets them get away with it is a story for another day :)
https://aws.amazon.com/pm/ec2-graviton/ is an indication that Amazon cares about Linux support for the arm64 architecture. So the question is how much the M1 diverges from that.
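(To illustrate, an arm64 Graviton instance launches like any other EC2 box; a rough sketch, where the AMI id and key name are placeholders:

  # t4g.micro is one of the Graviton2 (arm64) instance types
  aws ec2 run-instances --instance-type t4g.micro \
      --image-id ami-xxxxxxxxxxxx --key-name my-key --count 1

The point being that mainline Linux on arm64 servers is already routine.)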
x86 processors will be produced on the same nodes. Many ARM SoCs require binary blobs or otherwise closed source software, so they are not the best choice to run Linux on if you're approaching it from a longevity and stability perspective.
I think the concern is there is currently no 'IBM-compatible'-like hardware ecosystem around ARM. Raspberry Pi is closest, but nothing mainstream yet. And it looks like RISC-V will have a better chance than ARM.
RISC-V barely has any end-user visible deployment yet. Despite that, it has strong platform standardization (OS-A Profile, RVA22 and a standardized boot process through the SBI and UEFI specs).
This is all just in time for the VisionFive 2, just announced. I suspect it will ship in large quantities.
Linux support is about much more than instruction set support. Most ARM chips are shipped on SoCs which can take a lot of work to get Linux running on, and even then it might not run well.