The step down from 32GB to 24GB of unified memory is interesting. Theories? Perhaps they decided M4 allowed too much memory in the standard chip and they want to create a larger differential with Pro/Max chips?
Update: I am thinking the 24GB for M5 is a typo. I see on Apple's site the 14 inch MBP can be configured optionally with 32GB of RAM.
I had the same question, but I can only speculate at the moment.
The cynical part of me thinks in a similar line: create an artificial differentiation and push people to upgrade.
If anyone has any real clues that they can share pseudonymously, that would be great. Not sure which department drove that change.
They definitely do that. You could get 64GB of RAM without going up to the top-spec Max-tier CPU in the M1 and M2 generations, but with the M4 Pro you can only do 24 or 48GB, while on the lower-spec M4 Max you can only do 36GB and nothing else; only the absolute best CPU can do 64GB. So if you were otherwise going to get the 48GB M4 Pro, you'd have to spend another ~$1200 USD to get another 16GB of RAM, if all you cared about was RAM.
There may be a technical explanation for it, but incentives are incentives.
You can get 64GB on the mini with the M4 Pro, which lends credence to there being no technical reason. But at the same time, if the business reason was strong, why allow it on the mini but not in a MacBook? I think this is equally likely to be about reducing SKUs or something, e.g. they found that most people buying 64GB of RAM also buy the upgraded processor.
Ya, what you're talking about did spread a bit on the various forums when it became clear they were aggressively segmenting that market.
> e.g. they found that most people buying 64GB of RAM also buy the upgraded processor.
It seems like the way they've divided them, there's at least one more SKU than there would otherwise be, because of that base M4 Max with only 36GB of RAM (you can't get it with 24, 48, 64, or 96). So if you want the extra few cores, you now have to go to the maxed-out Max to get any more RAM.
It took me a while to commit to the purchase, because I felt like an idiot implicitly telling them I'm okay with that BS pricing ladder, but at least I didn't overextend and go for the Max. They already charge comically high prices for RAM and storage.
I could be wrong about this but, if I had a guess, I'd say the 24GB M5 chips/systems exist due to binning.
Apple is designing and manufacturing a chip/chipset/system with 32GB of integrated memory. During QA, parts that have one non-conformant 8GB internal module out of four are reused in a cheaper (but still functional) 24GB product line rather than thrown away.
Market segmentation also has its hand in how the final products are priced and sold, but my strong guess is that, if Apple could produce 32GB systems with perfect yield, they would, and the 24GB system would not exist.
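To put rough numbers on my guess: if each 8GB module passed QA independently (a toy assumption on my part, not anything from Apple), the split would look like this:

```python
# Toy yield math for the binning theory above; the per-module pass
# rate is a made-up assumption, not anything from Apple.
from math import comb

p = 0.95  # assumed probability a single 8GB module is good
all_four_good = p ** 4
exactly_one_bad = comb(4, 1) * (1 - p) * p ** 3
print(f"sellable as 32GB: {all_four_good:.1%}")       # ~81.5%
print(f"salvageable as 24GB: {exactly_one_bad:.1%}")  # ~17.1%
```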
The memory is not on-die, it's separate (completely standard) LPDDR memory chips, either LPDDR4X or LPDDR5 depending on which M-series CPU you're looking at. So binning doesn't really apply.
Seems like there's a misunderstanding on my part here. <reads more>
Ah, the memory is integrated in the same package (the "chip" that gets soldered onto the motherboard) as the integrated CPU/GPU, and I had understood that correctly. However, I had incorrectly surmised that it was built into the same silicon die.
Thanks for the correction!
Lesson: TIL about the difference between System-in-Package (SiP) and System-on-Chip (SoC), and how I had misunderstood the Apple Silicon M-series processors to be SoCs when they're SiPs.
No worries! It’s made more difficult to understand by 1) Apple’s marketing, which does a great job of tricking people into thinking that the memory is actually integrated into the die without actually saying so, and 2) the fast-and-loose use of the SoC and SiP terms, which are often used interchangeably, including by Apple in official marketing materials [1].
M1 MBPs are still great laptops. In fact there are even Intel models from 2019 that are still officially supported. Apple is pretty much the last company it makes sense to accuse of planning obsolescence.
Yup, but only on the hardware side. On the software side, you are entirely at their mercy. Unlike Windows, which goes to utterly ridiculous lengths to keep software dating back to the Windows 95 era running on top-notch Windows 11 systems, Mac developers are all too used to having to constantly keep up with whatever crap Apple has changed and moved around this time.
I've tried running old Civ2 on a recent Windows machine, no dice.
I'm sure it's possible to do that, but the backwards compatibility on Windows is definitely not as good as you say.
That said, I'm also currently, as a fun personal project, converting a game originally intended to work on 68k Macs, which still has parts explicitly labelled as resource-fork data, and I've lived through (and done work on) 68k, PPC, Intel, and M-series hardware, plus all the software changes, so I agree with you about Apple.
This gave me a flashback of me as a kid messing around with the "resource fork" of Mac applications. I felt like a major hackerman back then. During the era of "free" dialup ISPs, I would effectively remove the giant ad banners they all had.
That doesn't really have anything to do with planned obsolescence. Causing churn for developers is not intended to make people buy more Macs before they should need to, which is what planned obsolescence means.
A piece of software I got in 1995 (Earth Siege) is reasonably playable on a modern PC, no VM, no emulator, it just works (albeit requiring compatibility mode).
No piece of Mac software anyone bought in the late PPC era can run (!) at all natively on a modern Mac, and even early Intel Mac software won't run on the last Intel generation, ever since macOS dropped 32-bit support in userspace entirely. You need to pay the developers for a new version; that's obsolescence by definition. I'm particularly still pissed about the 32-bit removal, as that also killed off WINE running 32-bit apps, which, as you can probably guess, include many games that never got a 64-bit Windows binary because they were developed long before Windows x64 became mainstream (or even existed).
I do love Apple for high-quality hardware, but I'll give them the finger till the day I die for killing off WINE during the Intel era for no good reason at all.
I understand all that. Nevertheless, it has nothing to do with planned obsolescence.
> You need to pay the developers for a new version; that's obsolescence by definition
Sure, but you don't have to pay Apple.
The entire point of the idea of planned obsolescence is companies intentionally making their products last less time than they should, so you have to pay that company more money.
This is a company making it so you might have to pay other companies more money, because backwards compatibility isn't a priority for them. You can be annoyed by that, sure, but it is not the same thing, and is not obviously corrupt like planned obsolescence is.
The churn means software eventually stops working on whatever macOS version your hardware EOL'd on. For example, new builds of Firefox and Chrome come to require newer macOS APIs, so they can't run on older versions of macOS. This eventually happens to everything, including Homebrew.
There are tons of applications that simply don't run from even the Windows 7 era. Some games work after a couple hours of looking up fixes and patches, some not even then. Interestingly, I've been pretty successful with Wine/Proton on those.
Microsoft pulled shenanigans w.r.t. TPM requirements for Windows 10 and 11. They're actively trying to make sure people log in with a Microsoft Account, and making it hard to use local accounts.
> Mac developers are all too used to having to constantly keep up with whatever crap Apple has changed and moved around this time.
Mmm...
Win16 API
Win32 API (including variants like GoodLuckSystemCallExExEx2W(...))
MFC
ATL
.NET WinForms
.NET Avalon/WPF
Silverlight
MAUI
...
The thing is, MFC/ATL are _still_ supported, with the last release in October 2024. And the Win32 API is so stable that people joke it's the only stable ABI on Linux.
.NET technologies... Yeah, MS dropped the ball there.
For what it's worth, I'm running Mac mostly, outside of ham radio stuff, because there's just so much there that's only available on Windows.
The thing with all the mentioned APIs is that, excluding the 16-bit stuff (which got yeeted in 64-bit Windows, but if you did need it you could run 32-bit Win7), you can still run software using them without too much hassle, and you most probably can compile it if you need to fix a bug.
In contrast, good luck trying to get a Mac game from the 90s running natively on any modern Mac without an emulator/VM.
It does feel like planned obsolescence when companies like Apple limit software support for older hardware; Ubuntu runs smoothly on much older devices. They could certainly do better by extending support and focusing on sustainability.
I see this criticism of Apple all the time and it’s completely at odds with my experience.
Our family iPad Pro is older than my 8-year-old son, and still gets security patches. My wife's phone is an XS Max, launched in 2018; iOS 26 is the first release that doesn't support it - it will continue to receive security patches for the foreseeable future. My son's school laptop is my old 8GB 2020 M1 Air, which continues to have stellar performance and battery life and could run Tahoe if I were crazy enough to want to upgrade it. My work machine is a 2021 M1 Pro that runs just as great as the day I bought it (thanks, Al Dente!). My 3 Apple TV 4Ks are I-have-no-idea-how-old, but they are still being updated and just get out of the way like a TV box should.
I have no particular love for Apple (or any other company), but they’ve always treated me well as a customer. I can’t really think of another tech co that seems to make people as irrationally angry. Is it their marketing? I hate their marketing too. But their products and support are great.
Same. I have an M1 Max Studio and it's just laughing at the little workloads I throw at it (pro photo editing, music production, software dev, generally all at the same time).
It just never sweats AT ALL - it feels like a decade from obsolescence based on what I'm doing now.
It would have to be an order of magnitude faster for me to even notice at this point.
When they stop releasing security patches for that OS version 2 years later, it becomes more risky to connect the thing to a network. Or take in any data from the outside, really, whether it's via Bluetooth, or USB drive.
And then there's 3rd party software that will stop supporting that old OS version, in part because Apple's dev tools make that difficult.
Eventually, Apple's own services will stop supporting that OS - no convenient iCloud support.
Finally, the root CA certs bundled with the OS will become too out of date to use.
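For the cert problem specifically, one stopgap is pointing individual programs at a fresh CA bundle instead of the stale OS trust store. A Python sketch, assuming certifi can still be installed on that machine:

```python
import ssl
import urllib.request

import certifi  # third-party: pip install certifi

# Use certifi's up-to-date CA bundle rather than the OS one.
ctx = ssl.create_default_context(cafile=certifi.where())
with urllib.request.urlopen("https://example.com", context=ctx) as resp:
    print(resp.status)  # 200 if the handshake succeeded
```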
I'm planning on putting Linux on my Intel Mac Mini soon. But when a M3+ Mini goes out of support, will we have that option?
I’ve got a 2010 MBP that’s still perfectly suitable, but without OS updates, I can’t get a browser that websites will load cleanly on, can’t use Xcode, bunch of the Apple services the company hooks you on don’t work, etc. Used OpenCore bootloader to extend its life into newer macOSes, but that’s getting hard to keep up with. What a (e)waste.
Hadn't thought of doing that - I'm not a natural Linux person myself and I'm repurposing it for an 11yo. But maybe it's not so different from their school Chromebook for what they need. Just removes some of the nice Apple family features and the apps they'd be inheriting, but that's what I get for not paying the tax with new hardware purchases.
I’ve got a “late 2008” MacBook Pro that connects to sites ok in Firefox. That seems to be the browser that does the best at long-term support for old Macs.
Both those machines will run the latest Ubuntu just fine, and the latest Chrome (or Firefox) on it.
Just copy the LiveCD image onto a USB stick, insert, boot holding down the Option key, and you can try it without actually installing it (i.e. leaving your MacOS untouched).
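If you'd rather script the copy than reach for dd, here's a rough Python equivalent; the ISO name and device path are placeholders, so triple-check the device with `diskutil list` first:

```python
import shutil

ISO = "ubuntu-desktop-amd64.iso"  # placeholder: your downloaded live image
DEV = "/dev/rdisk4"               # placeholder: verify before running!

# Stream the image raw onto the device, dd-style. Destroys its contents.
with open(ISO, "rb") as src, open(DEV, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks
```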
Sure. But my needs haven't exceeded that RAM. I just want to keep doing the things I was doing for years on it happily, but security updates, broken services and website bloat have intervened.
Just switch to Linux and it should just work. There are distros that use very little RAM, and they stay updated. NoScript can help you block JavaScript on websites.
A 15-year-old device can still be as capable as a Raspberry Pi, and those work fine for modern computing now.
Depends on whether you use Xcode or not... I still have my 12-inch MacBook; for work use it is amazing, but I can't run the latest Xcode, making it defunct for some of my uses. It would be fine running Xcode, weak as it is, I'm sure. Liquid Glass might have killed it, though.
Patches for old OS versions unfortunately don't cover 100% of security issues. Apple often argues that vulns can only be fixed in actively supported versions.
You're clearly running low-intensity tasks (pro photo editing, music production, software dev, generally all at the same time) instead of highly-demanding ones (1 jira tab)
Obsolescence comes when Apple conveniently "optimizes" a new architecture in the OS for a new chip... that conveniently, ironically, somehow severely de-optimizes things for the old chips... and suddenly that shiny new OS feels slow and sluggish and clunky and "damn, I need to upgrade my computer!" They'll whitewash it not as planned obsolescence but as optimization for new products. It doesn't have to be that way, and shouldn't be, but it's incredibly profitable.
Maybe by that time ARM linux on this platform will be excellent and we can migrate to it for old gear. I still have a 2011 MBP running Linux on my electronics workbench and it is just fine.
You should wait until next fall if you don't really need to replace your M1 Max. Rumors say that Apple's going to redesign the MacBook Pros next year with an OLED screen.
I would rather buy the last refresh of the old design. Waiting for a redesign is risky, as some redesigns are just bad (like the touchbar MBP). And Apple is opinionated enough that it often refuses to admit its mistakes and sticks with them for years.
The butterfly switches break easily and replacing the entire keyboard because of it is a pain. I held on to my 2015 intel MBP for ages waiting for them to address that.
I had one for a few years. The keyboard was bad, and there was no physical escape key. There were a lot of accidental clicks with the touchbar, as it had a different logic (touch to use rather than press to use) than the other keys, or the function keys on every other keyboard. And I was using USB-A and HDMI adapters all the time, as the laptop lacked essential ports.
The first M1 MacBook Pros had both the touchbar and a decent keyboard. I love mine, so long as the driver running the touchbar doesn't crash, which it does sometimes, necessitating a reboot. My main problem is how few programs ever made good use (not just some use) of the touchbar.
As for the dongle issue, that went away when I upgraded to a USB-C monitor at home and USB-C equipment at work. I can dock to a monitor or plug into a projector to give a presentation and charge with the same cable. At this point I don't want an HDMI port, and I'm kind of sad that the next laptop will probably have a dedicated charging cable.
I travel quite a bit. HDMI remains useful, as most monitors / TVs / projectors I encounter still don't have USB-C input. USB-A is also somewhat useful, as I charge various devices from my laptop to avoid dealing with too many international power adapters.
The most common ports I need are roughly: 1. USB-C; 2. HDMI; 3. USB-A; 4. second USB-C; 5. third USB-C; 6. second USB-A; 7. DisplayPort; 8. fourth USB-C.
I still have both 13" and 15" Touch Bar MacBook Pros from 2016, and the keyboard is hands down my favorite laptop keyboard to type on since the Lenovo X220. The new ones aren't _bad_ but not as nice. The physical escape key doesn't matter to me, I have had it mapped to caps lock forever.
I also used to use the Touch Bar for a status display for things like tests, it was honestly great. Do not miss the battery life and performance compared to my subsequent Apple Silicon laptops, but definitely miss the keyboard.
I think it's because of the non-optionality of it. If you could have gotten everything either with or without the touch bar, people could have simply made their choices based on preference.
In the end they reverted because they were not willing to make it optional. They also never released a touch bar keyboard for desktop, which would perhaps have made it more useful.
My 2019 MBP has a touch bar and a physical escape key, so at least some models did have one. I agree not having it would make the touch bar way worse. As it is I don't mind it.
I also found that weird when I got it, but I got used to it quickly. It's not my main work machine, but I use it for a couple of hours every evening and stopped thinking about it. I do sometimes accidentally bring up Siri when I mean to hit the backspace key.
As someone who went all in on the 2019 i9 Intel MBP months before Apple announced the M1 MBP, I can tell you this strategy is not always optimal. Years of managing overheating, and underperformance due to said overheating, have not been fun. Especially when I found out about the benchmarks showing those M1s were running circles around the laptop I purchased, for a fraction of the price.
I grabbed a broken 2019 i9 and repaired it. I thought I had fucked up the repair because it kept thermal throttling, but after researching a bit and eventually comparing it to a known-good machine, it appears I did fine and no, it just does that.
Apple has had missteps of course, but you can usually buy last year’s model, right?
OLED is much better than other display technology, and they’ve done other OLED screen devices. It would be quite surprising to see them screw this up—not impossible, sure. They could screw up some other design element for example. But, it would be somewhat surprising, right? And OLED is a big change so maybe they won’t also feel the need to mess with other stuff.
From everything I recently researched about display technologies, mini-LED has no image retention/burn-in issues and renders fonts better than OLED. It seems you want OLED for media (and mobile, since you often alternate entire screens), IPS for work, and mini-LED as a more expensive compromise without burn-in that does text as well as IPS and media almost as well as OLED. I wonder why they would even want to use OLED on work screens with lots of static content. Did something major change about the tech such that it doesn't suffer these issues anymore?
I think OLED burn in has been mitigated fairly well recently. At least, I have a Linux laptop from 2021 that I use for work as well as fun, no particular care taken to avoid it, but no burn-in so far.
Font rendering, hard to say, I think it’s just preference.
Terminals look very nice with actual-black backgrounds.
I have a Samsung QD-OLED monitor from 2023 which has very noticeable burn-in at low brightness levels. This is from the era of "OLED burn-in has been solved," and it's soured me on OLED monitors since I do photography as a hobby and don't want burn-in affecting how I see images on my screen. I think it's fine for televisions, but I don't like it for PC use where I have static windows on my screen for a long time. I even used dark mode and still got burn-in pretty quickly, for example where it draws the border between side-by-side windows (so, a vertical line down the middle of my screen). Once I noticed that, I started resizing my side-by-side windows so their border isn't in the same place every day, but the damage is done.
Comments like yours make me feel justified in sticking with an IPS panel, over potential burn-in issues, when I purchased a new monitor earlier this year.
My monitors have lasted me 5-7 years in the past, and I only upgraded for size (once) and G-Sync (also once).
I don't want to be forced to buy another one just because of burn-in.
Interesting. Since I use the pretty barebones Linux config (i3wm) and haven’t tried to avoid static elements, I have a lot on my screen. But, I tend to keep my screen fairly dark just for comfort. It is also 1080p, and not super high dpi, I wonder if bigger pixels are less fragile.
The notch is bigger than it should be for sure, I would've loved for it to be narrower. But I don't really mind the trade-off it represents.
You could add half an inch of screen bezel and make the machine bigger, just to fit the webcam. Or you could remove half an inch of screen, essentially making the "notch" stretch across the whole top of the laptop. Or you could find some compromised place to put the camera, like those Dell laptops which put the camera near the hinge. Or you can let the screen fill the whole lid of the laptop, with a cut-out for the camera, and design the GUI such that the menu bar fills the part of the screen that's interrupted by the notch.
I personally don't mind that last option. For my needs, it might very well be the best alternative. If I needed a bigger below-the-notch area, I could get the 16" option instead of the 14" option.
I don't have a problem with the notch; I have a problem with the icons not showing in the status bar and there not being a *** way to show them. Is it so difficult to add an overflow button that shows the hidden icons?
My REDMAGIC Android phone is like this too, and I love not having a stupid notch cut out of the screen. I've hated them since the very first time I saw an iPhone X. Can't believe such a ridiculous design defect infected MacBooks too :/
It's not visible at all. The camera is just placed behind the screen.
OLED screens are inherently transparent; there is just a light-emitting layer in them. You put your camera behind the screen, and either make the few pixels on top of the lens go black when it's on, or you use a lot of software to remove the light that comes from the screen and clean up the picture.
They have the webcam-near-the-hinge solution that I mentioned. I had a couple of Dell XPS laptops like that. It's fine if the webcam is really just an afterthought for you, but it does mean the webcam has a very unflattering angle that's looking up your nostrils.
I use my webcam enough these days, taking part in video meetings, that it'd be a pretty big problem for me.
Check out the Dell XPS 13 9345: the webcam is on top but with thinner bezels than a MacBook, it's got a Snapdragon ARM processor for good battery life, an OLED screen, up to 64GB of RAM, and it's smaller and lighter than a MacBook Air.
The Snapdragon X Elite 2 processor will be out next year for the refreshed model.
You're looking at the wrong laptop, the Dell XPS 13 9345 has a ~88.6% screen to body ratio, the Macbook Pro 14 M4 2024 has a ~84.6% screen to body ratio.
The weight is the big one for me - only 2.5 lbs vs 3.4 lbs
Remember the Dell has an 18-month-old processor; the X Elite 2 is coming out next year.
On the contrary; now might be a good time to get an M1 Max laptop. A second-hand one, ex-corporate, in good condition, with 64GB RAM, is pretty good value compared to new laptops at the same price. It's still a fantastic CPU.
At your own risk — one place is eBay sellers with a large number of positive reviews (and not many negative), who are selling lots of the same type of MacBook Pro. My assumption is they've got a bunch of corporate laptops to sell off.
Honestly the only Apple Silicon e-waste has been their 8GB models. And even those are still perfectly good for most people so long as they use Safari rather than Chrome.
I finally replaced my M1 mini because of memory capacity (16GB doesn't cut it for me and jumping to 64 was worth it), but I'm having the same feeling about my M1 Pro MBP with 32GB. It just still works so well for nearly everything I do.
Personal workloads that benefit from an upgrade: running a Python script that's CPU-limited, aligning genomes in parallel on all cores. It's common that I need to wait 2 min for those tasks to complete. Shaving off 30 s for a faster iteration loop is meaningful.
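The workload is shaped roughly like this (a minimal sketch with a dummy CPU-bound function standing in for my real alignment code):

```python
from multiprocessing import Pool
import os
import time

def align(task_id: int) -> int:
    # Stand-in for one CPU-heavy pairwise alignment.
    acc = 0
    for i in range(10_000_000):
        acc = (acc + i * i) % 1_000_003
    return acc

if __name__ == "__main__":
    start = time.perf_counter()
    # Fan the CPU-bound tasks out across every core.
    with Pool(os.cpu_count()) as pool:
        results = pool.map(align, range(32))
    print(f"{len(results)} tasks in {time.perf_counter() - start:.1f}s")
```

Since the tasks are independent, faster or more cores cut the wall-clock time almost linearly, which is exactly where that 2 min vs. 90 s difference comes from.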
I am in the same boat, as my Rust compile times are substantial. I'm good for now, but with the M4 Max twice as fast, the M5 Max next year could be a tempting upgrade.
I have M1 Max 32GB and I think I'll go with M5 Max simply because I need more RAM. I am constantly swapping about 16GB. I don't feel it that much, but it bothers me.
I do a lot with VMs and other memory-intensive things, so I went with 128GB of RAM. I'm hoping for a laptop with 256GB+ in a few generations, and one with more or less double the oomph would be nice. Everything can be faster, bring it on!
Did an M1 Max (32 GiB, 1 TB -> 64 GiB, 4 TB - Z14X000HR) upgrade in early 2024 for ~$1800 USD with ~20 battery cycles and 99% battery health. Avoiding *os 28 because I refuse unusable, battery-wasting bling.
Rumor has it the M6 Pro will be a total redesign. Whether that's a good or bad thing depends on how much you trust Apple to nail a next-gen design on the first try again.
My M1 Max works just fine. Everything is as snappy as it was the day I bought it. I don't see any reason it might need a replacement any time soon. (The fact that I don't install major system updates unless absolutely necessary probably helps too)
I was thinking similar thoughts about my M2 Max MBP. I look at the newer chips and wonder at what point the base M chip will outperform my M2 Max (or has it happened already?). I'll probably hold onto it a while anyway -- I think it will be a while before I find 96GB limiting or the CPU slow enough for my purposes, but I'd still like to know how things are progressing.
Serious questions. How is Asahi these days? Is it ready as a daily driver? Is it getting support from Apple or are they hostile to it? Are there missing features? And can I run KDE on it?
Much less active than it used to be when it was run by Hector Martin. The core development is a lot slower. Although the graphics stack, for instance, has reached a very mature state recently.
> Is it ready as a daily driver?
It depends. Only M1 and M2 devices are reasonably well supported. There is no support for power-efficient sleep, DisplayPort, Thunderbolt, video decoding or encoding, or Touch ID. The speakers overheat and turn off momentarily when playing loud for a longer period of time. The audio stack in general had to be built from the ground up, and it seems to me like there are bits and pieces still missing or configured sub-optimally.
> Is it getting support from Apple?
Not that I am aware of.
> are they (Apple) hostile to it?
Not to my knowledge.
> Are there missing features?
Plenty, as described above. There has been some work done recently on Thunderbolt / DisplayPort. Quite a few other features are listed as WIP on their feature support page.
> Can I run KDE on it?
Of course. KDE Plasma on Fedora is Asahi Linux's "flagship" desktop environment.
"power-efficient sleep" refers to discharging 1-2% battery over night rather than 10-20%. I.e. there's room for improvement, but the device can still be used without worrying much about battery life regardless (especially given how far a full charge gets you even without sleep).
> DisplayPort, Thunderbolt
Big item indeed, but it's actively worked on and getting there (as you mentioned).
> video decoding or encoding
Hurts battery life, but otherwise I never noticed any effect. YMMV for 4K content.
> Touch ID
Annoying indeed, and no one has worked on this AFAIK.
> The speakers overheat and turn off momentarily when playing loud for a longer period of time. The audio stack in general had to be built from the ground up, and it seems to me like there are bits and pieces still missing or configured sub-optimally.
Sad to hear since I thought the audio heat model was robust enough to handle all supported devices. On my M1 Air I've never seen anything like this, but perhaps devices with more powerful speakers are more prone to it?
My experience is also based on an M1 MacBook Air. I have repeatedly experienced sudden muting of the speakers for a second or two while playing conversations at a high volume.
I only assume it is caused by thermal management of the speakers, but I did not actually verify it.
Perhaps check if there are any log files in /var/lib/speakersafetyd/blackbox. The fdr files in particular contain human-readable error reasons. If there are no log files, it's probably something else.
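Something like this will dump whatever's there (path as above; adjust if your distro lays it out differently):

```python
from pathlib import Path

# Print any speaker-safety blackbox reports; the .fdr files carry
# human-readable error reasons.
for f in sorted(Path("/var/lib/speakersafetyd/blackbox").glob("*.fdr")):
    print(f"== {f.name} ==")
    print(f.read_text(errors="replace"))
```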
Am I misrepresenting the situation, or did the whole project seemingly fall apart over an argument between Hector and Linus Torvalds on the mailing list about getting some driver merged?
I would consider that to be a misinterpretation. The whole project did not fall apart because Hector Martin left. But as with any project where the leaders depart, it definitely got slower.
The argument was originally about merging some Rust code into some parts of the Linux kernel if I remember correctly. It did not involve Linus Torvalds directly. Rather, the respective maintainers of those specific parts were unwilling to merge some Rust code, mostly because they did not know Rust well and they did not want to acquire the responsibility to maintain such code.
Asahi will probably only ever be feasible for years-old hardware. macOS is a total non-starter for me, so maybe one day I’ll end up with one of these, but only as some kind of tertiary / retro machine.
Not the OP, but it's a non-starter for me because I _was_ a Mac guy for 10 years or so, but I changed jobs to one that required I use Windows for game dev, and I discovered how locked in I was, and how painful it was to change. I'm not going back, no matter how nice the hardware is.
Yeah, given all the people with passion/ability for low-level reverse engineering have left the project, I don’t think we should ever expect to get greater than M2 support from Asahi. Maybe one day another project will pick up the ideas, but for anyone not wanting to use years old hardware, the dream of Linux almost natively existing on modern Apple silicon remains just that: a dream.
On the M1 MacBook Air, Asahi is pretty usable when it comes to hardware support, and it has been usable for at least a year.
Though either Fedora itself, how it's built for Asahi, or just running it with little disk space ends up freezing on boot after random updates. Twice, once without even RPM Fusion enabled. Either some weird btrfs issue or I don't know what.
I've been a Linux dude for two decades and don't do anything fancy, so this is weird. Switched to Ubuntu Asahi on ext4 and it's working great so far.
Interesting to see that over 5 years (M1 was 2020), the benchmark performance has not quite doubled. Is this an indictment of Moore's law, or just Apple over-speccing the M1 and slowly decreasing that over time?
Moore's law has never been an absolute, and it's also about the number of transistors per mm^2 ... not speed. Sometimes progress is a little faster and sometimes it's a little slower.
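For a rough annualized number, using the Geekbench 6 single-core figures quoted further down the thread (M1 ~2346 in 2020, M5 ~4133 in 2025):

```python
# Implied annual single-core improvement, M1 (2020) -> M5 (2025),
# from the Geekbench 6 numbers quoted elsewhere in the thread.
m1, m5, years = 2346, 4133, 5
cagr = (m5 / m1) ** (1 / years) - 1
print(f"~{cagr:.1%} per year")  # ~12% per year
```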
Thank you. Looking at replacing an Intel MacBook Air, I hope there are price drops on the "outdated" M4s (although an M2 phased out early this year would do well enough...)
And the fastest M4 Max was already the fastest single- and multi-core CPU by a decent margin, while the fastest non-Apple CPUs each specialized in either single- or multi-core, not both.
Single-thread performance for modern high-performance CPUs is all very close. Apple's latest usually has a small advantage because they're the first to use TSMC's latest nodes, which is good for something like 15-20%.
The fastest multicore CPUs are the ones with a lot of cores, e.g. 64+ core Threadrippers. These have approximately the same single-core performance as everything else from the same generation because single-core performance isn't affected much by number of cores or TDP, and they use the same cores.
That article points out that GB5 and GB6 test multi-core differently. The author notes that GB6 is supposed to approach performance the way most consumer programs actually work. GB5 is better suited for testing things like servers where every core is running independent tasks.
The only “evidence” they give that GB6 is “trash” is that it doesn’t show increasing performance with more and more cores with certain tests. The obvious rejoinder is that GB6 is working perfectly well in testing that use case and those high core processors do not provide any benefit in that scenario.
If you’re going to use synthetic benchmarks it’s important to use the one that reflects your actual use case. Sounds like GB6 is a good general purpose benchmark for most people. It doesn’t make any sense for server use, maybe it also isn’t useful for other use cases but GB6 isn’t trash.
> The only “evidence” they give that GB6 is “trash” is that it doesn’t show increasing performance with more and more cores with certain tests. The obvious rejoinder is that GB6 is working perfectly well in testing that use case and those high core processors do not provide any benefit in that scenario.
The problem with this rejoinder is, of course, that you are then testing applications that don't use more cores while calling it a "multi-core" test. That's the purpose of the single core test.
Meanwhile "most consumer programs" do use multiple cores, especially the ones you'd actually be waiting on. 7zip, encryption, Blender, video and photo editing, code compiles, etc. all use many cores. Even the demon scourge JavaScript has had thread pools for a while now and on top of that browsers give each tab its own process.
It also ignores how people actually use computers. You're listening to music with 30 browser tabs open while playing a video game and the OS is doing updates in the background. Even if the game would only use 6 cores by itself, that's not what's happening.
OK, I had time to read through this, and yeah, I agree: a multicore test should not be waiting on so much shared state.
There are examples of programs that aren't totally parallel or serial; they'll scale to maybe 6 cores on a 32-core machine. But there's so much variation in that, idk how you'd pick the right amount of sharing, so the only reasonable thing to test is something embarrassingly parallel or close. Geekbench 6's scaling curve is way too flat.
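For intuition, Amdahl's law shows how fast the curve flattens for different serial fractions (the fractions here are my own toy assumptions, not Geekbench's actual mix):

```python
# Ideal Amdahl's-law speedup for a few assumed serial fractions.
def speedup(n_cores: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for s in (0.0, 0.1, 0.5):
    row = ", ".join(f"{n}c: {speedup(n, s):4.1f}x" for n in (1, 2, 8, 32))
    print(f"serial={s:.1f} -> {row}")
```

Even 10% serial work caps a 32-core chip at ~7.8x, so a flat curve on its own doesn't tell you whether the benchmark or the workload mix is at fault.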
The purpose of a multi-core benchmark is that if you throw a lot of threads at something, it can move where the bottleneck is. With one thread neither a desktop nor HEDT processor is limited by memory bandwidth, with max threads maybe the first one is and the second one isn't. With one thread everything is running at the boost clock, with max threads everything may be running at the base clock. So the point of distinguishing them is that you want to see to what extent a particular chip stumbles when it's fully maxed out.
But tanking the performance with shared state will load up the chip without getting anything in return, which isn't even representative of the real workloads that use an in-between number of threads. The 6-thread consumer app isn't burning max threads on useless lock contention, it just only has 6 active threads. If you have something with 32 cores and 64 threads and it has a 5GHz boost clock and a 2GHz base clock, it's going to be running near the boost clock if you only put 6 threads on it.
It's basically measuring the performance you'd get from a small number of active threads at the level of resource contention you'd have when using all the threads, which is the thing that almost never happens in real-world cases because they're typically alternatives to each other rather than things that happen at the same time.
It is worse. The use case of many threads, resource contention, diminishing and eventually negative returns does exist and I've run into it, but it's not common at all for regular users and not even that interesting to me. I want to know how the CPU responds to full util (not being able to do full turbo like you said).
It's not trash - it's quite nice for its niche. It's just not very scalable with cores, so it's best interpreted as a benchmark of lightly threaded workloads - like lots of typical consumer workloads are (gaming, web browsing, light office work). Then again, it's not hard to find workloads that scale much better, and Geekbench 6 doesn't really have a benchmark for those.
For the first 8 threads or so, it's fine. Once you hit 20 or so it's questionable, or at least that's my impression.
I get how even for multithreaded workloads, having a few fast cores is often better than the equivalent many slow cores. Or NUMA. There can be value in a test like 8 threads full load regardless of how many cores there are. But Geekbench 6 isn't that either, at least according to the chart showing sharply diminishing returns after 2 cores.
Yep. Still, I think it's a pretty decent benchmark in the sense that it's fairly short, quite repeatable, has quite a few subtests, and it's not horribly different from the nebulous concept that is "typical workloads". It's suspiciously memory-latency bound, perhaps more than most workloads, but that's a quibble. If they'd simply labelled it "lightly threaded" instead of "multithreaded", it would have been fine.
As it is, it's just clearly misleading to people that haven't somehow figured out that it's not really a great test of multithreaded throughput.
They're going to have a hard time selling the M5 when compared to the M4 Pro. Geekbench for that chip is 3843/22332, which is slightly slower for single-core but better for multi, and it also has Thunderbolt 5 instead of 4.
Fortunately they will be selling the M5 Pro against the M4 Pro (and more likely, their expectation is no one with the current Pro is going to upgrade for one generation) so it will be easier.
- M1 | 5 nm | 8 cores (4P+4E) | 7–8-core GPU | 16-core Neural | Memory bandwidth: 68.25 GB/s | Unified memory: 16 GB | Geekbench 6: ~2346 / 8346
- M2 | 5 nm (G2) | 8 cores (4P+4E) | 8–10-core GPU | 16-core Neural | Memory bandwidth: 100 GB/s | Unified memory: 24 GB | Geekbench 6: ~2586 / 9672
- M3 | 3 nm (first-gen) | 8 cores (4P+4E) | 8–10-core GPU | 16-core Neural | Memory bandwidth: 100 GB/s | Unified memory: 24 GB | Geekbench 6: ~2965 / 11565
- M4 | 3 nm (second-gen) | 10 cores (4P+6E) | 8–10-core GPU | 16-core Neural | Memory bandwidth: 120 GB/s | Unified memory: 32 GB | Geekbench 6: ~3822 / 15031
- M5 | 3 nm (third-gen) | 10 cores (4P+6E) | 10-core GPU | 16-core Neural | Memory bandwidth: 153 GB/s | Unified memory: up to 32 GB | Geekbench 6: ~4133 / 15437 (9-core sample)