One thing which I don't think has been commented on is the marketing effort behind the M1.
It's getting a lot of prominence and its use across lots of computers means that there can be a consistent message that "M1 is great - get a new Mac and get an M1".
It also provides an opportunity to distinguish between M1 and the next generation M2 (presumably).
I've always thought Intel's marketing was a bit confused - i7 stays the same over 10+ years with only the obscure (to the general public) suffix changing from generation to generation.
I am continually astounded that the quad-core Sandy Bridge Core i7 I bought in January 2011 is still completely serviceable for pretty much all tasks outside of gaming. The bezel and overall form factor are clownishly large by today's standards, and the screen is starting to look a bit faded, but as a portable but mostly stationary PC it still works great alongside the main laptop I use for work.
I knew back in 2011 I was buying something that was going to be pretty future proof, but 10 years... I would have laughed at you.
I think that’s because 99% of regular computer tasks don’t need 24 cores operating at 5 billion cycles per second.
The differences between an i7-4xxx and an i7-11xxx are more about hardware acceleration, e.g. HEVC and SIMD. That's why the newer chips look so much better in benchmarks designed with that in mind.
My main desktop is an unRAID server that hosts multiple VMs. Only yesterday I discovered that I accidentally only gave Windows four (of my 16) cores. Literally 3/4 of the processor was unused for months, and I didn’t notice any slow performance the entire time.
My new M1 constantly freezes, and I’m pretty sure it’s because of chrome. Or the secret SSD thrashing design flaw that Apple is hoping nobody talks about till they quietly fix it.
Was a bit disappointing. I was like, finally! A new thing! I’m freeeeeee..... wait no it still freezes, I miss sandy bridge, etc.
Heavy load in my case is a few dozen terminal tabs, six of which are running TPU training jobs, 40 chrome tabs, a few of which are playing twitch streams and YouTube videos, a half dozen instances of pycharm, one webstorm, one clion, around 35 macvim windows, and a few dozen PDFs open in Preview. I’m not exactly representative, but I think the stddev of usage patterns is high across developers.
The unfortunate part is, when it freezes, it beachballs for a full minute before I can successfully execute a “switch to Activity Monitor / click topmost process / force quit / confirm” sequence. Each step takes around 45 seconds to complete. I assume Apple QA never tested the extreme edges of workloads.
One poor guy on Twitter is already up to 18% of his SSD lifetime, in the first three months of having an M1. We had a heart to heart about how silly it was that the drive will die within a year, that Apple better replace it under warranty, and so on. But it’s a coin toss whether he’ll just have to eat the cost and buy a new one: https://twitter.com/_wli/status/1364934834229977090?s=21
This is heavy load for any machine unless it's configured as a bona fide workstation. Any machine, any OS (with M1 MacBook Air's HW configuration) would trash its SSD while swapping in that case, IMHO.
> I’m not exactly representative, but I think the stddev of usage patterns is high across developers.
When I'm in development mode with all cylinders firing, I have a single Eclipse window in CDT perspective, a couple of terminal tabs, Zeal (or Dash) for documentation and a couple of firefox tabs for documentation not available in Zeal.
Music is either supplied by a single YouTube tab or Spotify, and that's it.
The most unfortunate component in my case is the CPU. I try to squeeze every single bit of performance out of it while running the application. The memory controller comes second due to transfer operations, but I neither max out the RAM nor cause the system to do heavy swapping.
I'm always kind of conscious about the hardware resources of the system and minimize my resource usage habitually, without killing productivity.
Unfortunately, and somewhat sadly, and with some melancholy, I must report that I've been on OS version 11.2.3 (the latest) and run almost zero Rosetta apps: KeepassX, Beyond Compare, and Optimal Layout; of those, Optimal Layout has read 100GB, which is almost nothing.
In comparison, my kernel_task process is at 43TB written, and WindowServer is at 8TB read.
So, as with cancer, there are multiple contributing factors, and hopefully Apple will ship an update to fix it... someday.
Swamping an SSD with write requests and forcing it to wear-level aggressively is trashing an SSD, IMHO, since it both silently grinds to fulfill the requests and loses a lot of lifespan in the process.
Apple doesn't and shouldn't care about optimizing for your use-case. At some point, the user should be responsible for stopping processes when they're not using them anymore.
I was going to say, there's no humanly possible way you can watch and absorb the content from 5+ videos at the same time, right? Like, what's even the point? (I presume tabs with videos on them that aren't playing still perform around the same as tabs without videos at all, since there's no need to re-paint frames.)
I’m reminded of the podcast ATP and the great episode ‘The Windows of Siracusa County’. John’s window management system is both baffling and amazing. It’s a good listen.
I understand the idea of integrating everything to reduce the electronic footprint, but some stuff like RAM and storage should be serviceable without replacing the entire board.
Is that the 8GB model? I had the same issue with my workload and found that it was always under high memory pressure; upgrading to a 16GB model fixed it for me.
It’s the 16GB. The memory pressure is indeed the issue. I find it hilarious that Apple accidentally inverted their LRU into an MRU and constantly swaps for no reason, yet it’s still so fast that nobody even noticed during QA. Mine has already read/written 48TB in the first three weeks; at this rate the SSD will be dead in three years.
(If you haven’t looked into SSDGate yet, be sure to check your smartmon to see whether you’re affected.)
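For anyone who wants to check their own machine, here's a rough sketch of the idea in C++ (purely illustrative: it assumes smartmontools is installed, e.g. via Homebrew, that the internal SSD shows up as /dev/disk0, and that the drive reports the standard NVMe "Percentage Used" and "Data Units Written" fields - running smartctl -a /dev/disk0 by hand gets you the same information):

    // Illustrative sketch: shell out to smartctl and pull the NVMe wear fields.
    // Assumes smartmontools is installed and /dev/disk0 is the internal SSD.
    #include <array>
    #include <cstdio>
    #include <iostream>
    #include <string>

    int main() {
        // -a prints all SMART info; on NVMe drives that includes the
        // "Percentage Used" and "Data Units Written" health attributes.
        FILE* pipe = popen("smartctl -a /dev/disk0", "r");
        if (!pipe) {
            std::cerr << "failed to run smartctl (is smartmontools installed?)\n";
            return 1;
        }
        std::array<char, 4096> buf{};
        while (fgets(buf.data(), static_cast<int>(buf.size()), pipe)) {
            std::string line(buf.data());
            if (line.find("Percentage Used") != std::string::npos ||
                line.find("Data Units Written") != std::string::npos) {
                std::cout << line;  // e.g. "Percentage Used: 3%"
            }
        }
        pclose(pipe);
        return 0;
    }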
Emulated apps don’t use more memory. It’s more likely that Activity Monitor calculates their usage wrong; it doesn’t show very accurate numbers in the first place.
Chrome is by far the worst-performing browser on my (Intel-based) Mac with many tabs open. It doesn’t come anywhere close to keeping up with my browsing style, wasting tons of system resources, glitching, crashing, etc.... and then oh wow the battery drain.
Safari is best, but Firefox is also pretty decent nowadays.
I have the same issue with freezing regularly. Using an emacs aarch64 build and clang built from llvm sources (or the OS X native clang, either one), compiling in an emacs window (M-x compile), it will stop, hang for about 10 seconds, and then reboot. Rosetta isn’t in this chain. Haven’t looked into why yet, but if I task switch to Apple News while I’m waiting I can see the build in emacs halt for a while. The task switch works after a while, but sometimes it doesn’t and a watchdog seems to kick in.
I actually have an orthogonal issue, but I wonder if it's related. When I have an external monitor connected via a Thunderbolt dock, the device takes forever to wake from hibernation (that is to say, when I've left it asleep for several hours), i.e. 10+ seconds, which is kind of ridiculous in 2021. This doesn't happen when the monitor is not connected. I also have a similar amount of things open as you described.
That's a perfectly valid question. Lots of people use Chrome because they've used Chrome, and that's the whole reason. Some people need Chrome because it supports some site or tool that they require. I recommend that everyone else at least try Safari and see what they think of it.
Last time I tried it (couple of months ago) there were practically no extensions and Bitwarden didn't work in incognito. And Chrome's UI/UX is simply great. And dev tools as well.
Sure, I've never used Safari enough myself to have an opinion (since it's never been an option for me due to various things), it might be equal to Chrome or even better (I like Safari's clean UI). I'm mostly comparing it to Firefox which is quite clunky in my opinion (I've given it many chances).
And I'm actually typing this on Safari now and it looks like the issue has been fixed - Bitwarden now works in private mode. Maybe it's time to give it yet another shot. :)
Don't know if the numbers are comparable but in Chrome's Task Manager the numbers are 224 MB and 121 MB respectively.
Edit: This might be an old (unfixed) issue https://discussions.apple.com/thread/6640430 ... This sucks because I have a specific doc and the calendar that I always keep open in pinned tabs. There are some other sites with seemingly quite high RAM usage as well. Well at least nothing is lagging so I guess I'll just chug on and see what happens.
Edit2: This behavior is just insane. I closed those two tabs and suddenly Gmail shoots up from out of the blue to 1.22 GB?! Then I reopen the two previous tabs (doc/calendar) again and Gmail stays the same while Calendar goes on a diet and sits at 187 MB (doc at 983). This is super weird. I'll just keep them open and see how it behaves overall, might just be wonky numbers?
I recently got an M1 mac myself and tried Safari out again -- I seldom used it on my 2016 Pro (preferred Firefox) but it's supposed to have stellar battery life, so why not give it a go?
Nice that they've got built-in tracking protection now, I guess, but I left Twitter running in a tab overnight and when I woke up Safari was reporting over a GB of memory use for that tab alone. Something in how they cache or handle JS for some of these long-running services, or maybe something to do with service workers? Safari just seems to consume a lot of memory. It's not a huge issue I guess, with the way the M1 never seems to have any memory pressure issues, but it's part of why I remember switching off Safari in the first place.
If anyone knows why Safari seems to consume more memory with these long-running processes, I'd be real curious to know why...
Battery life has been the reason for me trying it in the past as well.
And yeah I mean even though Activity Monitor is reporting high RAM usage I don't seem to have any memory pressure issues so it might not be that big of a deal (MBP 2015 w/ 8 GB RAM).
I'm actually considering getting an M1 myself next week. Have been waiting for the initial kinks to get sorted out and most apps ported to it and I think it's time now. :) SSD wear seems to have been resolved with the latest Big Sur versions and fewer Rosetta apps and now even ST4 beta has an M1 build.
My justifications are that I'll get an excuse to upgrade to 16 GB RAM and I'll get touch ID so I can have a more secure password and not have to type it all the time. And with the stellar performance and battery life of the M1 I can just keep using Chrome (Brave). :P
New tabs are opening in weird places, really faaaar to the right skipping between 6-7 (unrelated) tabs. Really cumbersome to navigate.
Horrible selection of extensions still. Really missing searching for any tab with Cmd+Shift+K (I often have ~100 tabs open across multiple windows) that I get with the Tab Switcher extension.
There's a Noscript equivalent extension but it costs $3...
And I can't initiate a search with "url bar -> you[tab]<searchterm>[enter]" (for a Youtube search).
Meh. Safari just doesn't work for me. Maybe if I sat down for a week or two and wrote a bunch of extensions myself but I don't see the value in that.
On the plus side Safari does seem snappy but then again I only have ~30 tabs open (across 6 windows) currently. So I might give it a win on speed but it definitely loses in overall usability.
Will try again in a year or so.
Edit: OH SHIT this is a definite dealbreaker - apparently I can't select multiple tabs by Shift-clicking to pull them out into a new window. That completely kills my workflow of separating topics into different windows, and/or closing tabs in bulk.
Edit2: Wtf, how didn't I notice this earlier - I cannot see the URL when hovering over a link. How does anyone consider this within the realm of good UX?
This is perhaps the real genius of the M1. It's a great chip and all. But when you make it the only choice, people are finding the only choice is more than sufficient. And now Apple only has to produce one piece of silicon for their iMacs, iPad Pros, and laptops. What a boon for logistics.
The success of the M1 is a boon for Apple, but I'm not sure that only having to produce one CPU across all devices is where the optimization lies, especially considering that Apple still sells the iPhone SE with an older CPU rather than simplifying the lineup by dropping it.
Just because it's being sold today doesn't mean it's still actively manufactured. They could have stockpiled the older processors or have spare inventory to continue meeting demand.
I have a 15" retina MacBook Pro from 2012, and another from 2018. The one from 2012 is fine for everyday use. But the difference in single core performance, and also IO, is substantial.
That's really important when it comes to a lot of typical development work. For example Rails starts at least 2x faster on the newer laptop.
My gaming PC is almost 10 years old and has an i7-3770. The only upgrade was a GTX 980 five or six years ago.
That thing flies for anything and everything I put it through.
One day I'll retire it as a gaming machine and convert it to a Linux server or a casual workstation and I am sure it's going to be even faster than Win10.
My primary desktop is a 3770K running Linux.
That CPU's an unexpected beast, just churning through whatever you throw at it with incredible speed, even today. I process RAW photos, encode some audio and develop high performance scientific software on it (testing it lowers my heating bills, somewhat).
I want to try one of the new Ryzen CPUs, but I have no reason to upgrade.
I’ve been doing big C++ builds (think gcc, LLVM-sized projects) on my Ryzen 9 5950X and I wouldn’t have it any other way. This thing saves me so much time every day. It’s so nice not to have to do remote builds to get a quick prototype.
The best part is that it was a drop in replacement for my 2700X. I already had a great Noctua air cooler so I was all set.
Ah, Ryzen 5000 and Threadripper 3000 series are absolute beasts, no doubt about it.
I want to work with Rust more and when I get to that point I'll likely start gathering money for a TR Pro, but until then, with me mostly working with dynamic languages, even old-ish i7 CPUs are more than adequate.
Yeah, I'd like to upgrade my gaming system (4790K and a GTX 1080 Ti), but past wanting to play at 4K and experience ray tracing there is almost no reason to do so.
Now, if the 3080 was actually available at MSRP I'd probably have upgraded, but that's at least a year away from happening.
IMO go big or go home in this case. For a gaming machine I'd aim for one of the Ryzen 5000s (likely even the 5700 is overkill) and an RTX 3090 so I can play 3440x1440 @ 144FPS (or maybe even 4K @ 120FPS).
As it is, I almost have no reason to upgrade my gaming machine right now. Playing on a 21:9 wide display (2560x1080) and never go below 70-80 FPS in any game. Usually it's 100-120 FPS.
Yeah, I have two thinkpads -- one from 2010 and one from 2012. Both of them have enough grunt to get through most any task, the heaviest of which is compiling large codebases. One of them was my daily driver for eight years; I only upgraded to a Ryzen because I wanted MOAR COARS and more RAM.
It's funny how today's computers could theoretically handle a user's workload for decades, yet Apple builds them to fall apart after four years. There was a Hacker News story recently about a guy faffing around with old 68k Macs from the late 80s; he said that those old Macs were rated by Apple to last for fifteen years. And yet they come from an era of much more rapid utility decay for computer hardware, when a computer would be utterly obsolete after just a few years.
I think you might be going to the other extreme just by a tiiiiiiiny bit. :D
It's true that a lot of people are just fine with a 10-year-old i5 or i7. Hell, a lot of store owners and front-end offices grumble at an i3 CPU because it's still too powerful for the daily activities of the staff.
But when it comes to routinely compiling C / C++ / Rust then you need all the power and hardware innovation you can get.
This would be more convincing if I wasn't reading it on a 2013 Macbook. But yes, Big Sur is the last major update available and in a few years it's going to be obsolete.
I built my own gaming PC back in Nov 2010 with an i7-970 and 24GB of RAM. While I have upgraded the GPU twice and moved to larger SSDs as prices have come down, I am still using the same machine for gaming. Not that what I am playing is very intense or has hardcore graphics. I can comfortably play all the Blizzard titles, modern FPS like whatever COD is newest, etc. Maybe not on max settings anymore.
Still, I am impressed by how much I have gotten out of my machine.
Yep, and take a computer from 2001 and run it in 2011 and you wouldn't have been nearly as happy with the performance. There have been incremental upgrades since, but mostly just in core count, which is great for handling server stuff but not nearly as noticeable as clock speed increases.
The other main improvement in the past 10 years is the power draw. You can get more than 10 hours of battery life in a machine that weighs less than 3 lbs. That would have been unthinkable a decade ago.
CPU improvements are only part of the story in making that happen, but they are certainly far ahead of where things were in the not so distant past.
The point is that the technology for long life laptops has been ready for a long time. It's just that Intel marketing calls all the shots and they only wanted to sell multicore space heaters with space heater memory to sweeten the deal. With this burden on chipset side, it was impossible to compensate with extra amp hours and maintain portable weight.
> The point is that the technology for long life laptops has been ready for a long time.
You're desperately moving the goal post to ridiculous places. It's like claiming that walking on the moon is supposed to be considered normal just because the technology has been ready for a long time.
That claim is irrelevant, isn't it? I mean, what good does a "technically it's possible" claim do if a) no, it's clearly not possible outside exceptionally rare circumstances b) the average consumer device is way behind any of those outlandish claims.
If you took the small 2010 macbook air and doubled the size of its battery you'd be extremely close to 10 hours and 3 pounds, so I wouldn't go with "unthinkable".
My desktop is 2010 vintage 6-core Phenom II 1100T with 16GB RAM, the motherboard is officially maxed out, but I've been told it can take 32GB just fine. It has been upgraded with an SSD and more recently a Radeon RX560, and it really does everything I need. Newer games do need lower settings, but it'll motor through most games at 720p. I added a PCIe USB-C interface card, so I'm not even stuck at USB 2.0 anymore.
I have considered replacing it with something smaller, now that all of my files are on a NAS and I don't need room for 5-6 drives in this machine. But I just don't feel like I would be getting enough of an improvement over what I already have, even with hefty new parts.
My laptop is a 2011 vintage X220i, and that is starting to feel a little bit behind, mostly thanks to the i3 inside. But it connects to 5GHz WiFi and is fast enough for browsing and video playback, so for now it keeps on truckin'.
Actually the best upgrade would be a decently powerful 12-14" laptop paired with an eGPU dock. That setup would handily replace both machines, but eGPUs just aren't a mature configuration yet, especially combined with Linux. Maybe in a few years.
Late-2012 ivy bridge i5 Mac Mini here. For a dual core machine it's really not all that bad of a daily driver experience. Makes me sad that I have to spend a crazy amount of money to get 16GB of ram and 1TB of SSD in a M1 Mini to match what I currently use. (although I understand the 1TB of SSD in the M1 is vastly superior)
Just for the fun of it, I tried to play WoW classic on my late-2012 mac mini and it ran like a toaster oven, but it worked just fine. Pretty fun. After putting the SSD in it, of course.
I used to play WoW on a G4 PowerBook! To be honest, it was decent, but the resolution was not great. I remember vividly having to look at the ground and zoom in all the way just to be able to go from the bank to the AH at IF. Hopefully whatever Intel GPU you have in your Mini is better than the Radeon 9700 in my old laptop.
Yeah, smcFanControl is a must. I keep the fan around ~3000rpm during moderate use and bump it up to 5000rpm or max it out if I'm going to game or render something. I've been tempted to look into TB2 to TB3 adapters and eGPUs, but it's a janky solution to extending the life of an 8+ year old machine.
Those things are beasts and have an incredible ergonomic profile. I just bought one last year, with an i7 and a 1TB SSD. Love it almost as much (for other reasons, obviously) as my $23,000 Mac Pro music rig purchased about the same time.
I have a near death grip on my late 2012 Mac mini i7 (upgraded to 16GB and two 1TB SSDs). It has stood the test of time and to me is an engineering marvel.
Aside from gaming it can last another 10 years.
For longevity, I keep it on its side and replaced the bottom lid with an aluminum mesh lid. This has made it near silent for 80% of my tasks (programmer centric). I also regularly clean out the fan.
Same here - I was tempted to upgrade when the newer Intel Mac models came out, but the soldered RAM / SSD bullshit kept me off them. I've heard that the 2012 Mac mini can theoretically support 32GB RAM (as other devices with similar i7 processors can) - I think I'll try that 5 years down the line for the heck of it. For now, 16GB RAM and an SSD make it a very capable machine that can (as you rightly pointed out) last for another decade.
I got a 2009 Mac Pro with dual sockets at a state auction and put in a couple of inexpensive hexcore Xeons. The sum total after 2 graphics cards and 48 GB of RAM was less than $750 and it runs everything pretty well.
If it weren’t for the fact that it’s not good at being a laptop (chunky, heavy, hot), I’d be perfectly happy using my old QX9300 (Core 2 Quad) workstation laptop for day to day. With an SSD and the Bluetooth+Wifi upgraded to Intel AX200 it’s only marginally slower than my newer x86 boxes at most things, and its 15.4” 1920x1200 display is still nicer than what ships on a lot of laptops today. Wouldn’t want to use it as a dev machine, but any less demanding usage would be fine.
Two years ago when I got the Lenovo e590 (Intel Core i5-8265U), I benchmarked running the unit tests of my project on it. (though I didn't really intend on using the e590 for development)
I compared against my old desktop from 2010 (1st-gen Core i5-750).
Turns out that the old desktop won by a solid 10% margin! A whole decade of new Intel generations still can't make up for the disadvantage of using a mobile CPU.
P.S. The e590 barely survived 2 years and is dead now (broken USB-C charging port).
I'm also staring at a Mid-2011 Sandy Bridge quad-core iMac that's been on my desk since May 2011.
It'll be replaced come August-ish with a 32-core Zen 3 Threadripper (with 256GB of ECC). Though in a few years I'll likely put an M3 Mac Mini alongside it to figure out when I'll feel comfortable going to ARM full time, as all my customers (and their applications) are still native x86 only... for now.
There is only one other popular task that a decade-old CPU can’t handle acceptably well: running a web browser. Outside of that, computers have been good enough ever since HiDPI/Retina screens became standard.
I have a quad-core i7 from 2013 and the laptop still runs just as fast. I had to replace the hard drive, and the RAM is maxed at 16GB (not good for virtualization), but it runs like new except for some odd screen recording hiccups with Microsoft's Xbox thing; really, all of the performance problems seem to just be driver bugs related to Dell's way of doing things.
> I think that is because the performance has been the same for the past 10+ years.
Ah yes, the exaggerated funny phrase. On a serious note, the performance difference from an i7 7th generation (7700k) to an i7 10th generation (10700k) felt quite impressive.
Not a surprise. Stronger cores and double the cores and threads. And that's not a theoretical difference, they had a performance uptick of up to 100% even in games [0].
The 7700K and 10700K use virtually the same Skylake architecture cores with very minor tweaks between them. The only CPU performance gains Intel made in the 2015-2020 period were clock speeds, core counts and support for higher memory speeds. The 7700K and 10700K even share the same integrated graphics architecture.
I think that's correct. But it is all true at the same time. The cores are stronger because of the higher clock they can achieve and how the turbo clock works, and the higher core count does make a big difference. Higher memory speed also helps (though in practice, on a Z board, the difference should just be a matter of what was typically on the market). The Intel UHD 630 is even usually a tiny bit faster than the Intel HD 630, even if that's also only from the clock difference - only now with the UHD 730 and 750 did they improve things there.
Most people have no need to compare CPUs between mobile and non-mobile, or across generations. They just need to know whether the laptop they're looking at has the high end / middle end / low end CPU. You'd be comparing a new laptop with latest(-ish) CPUs.
The i3/i5/i7 makes it plain & simple to know that.
You don't need to figure out what the "middle end" CPU is called this year. It's the i5, problem solved.
Except you go to the store and there are laptops with three generations of CPUs in there, plus the 10nm/14nm split for tenth gen, so it's entirely possible to run into a laptop with an i7 that's worse than the i5 in the laptop next to it on the shelf or store webpage.
This reminds me of when the iPhone first came out. Back then, most phones from other manufacturers had an assortment of random names and model numbers that carried no hint as to which one was better or newer. Meanwhile, iPhone N+1 was obviously better than iPhone N.
Samsung quickly learned to play the same trick with their yearly Galaxy S and Galaxy Note releases. That decision probably contributed to the market share they've been enjoying.
The confusing names can be good for big box stores because they prevent comparison shopping, if every store gets their own SKU then they can offer to beat competitors' prices without ever having to do it. Although for phones, they were mostly handed out by carriers at the time and people didn't shop on their own.
It's even worse these days. There are letters in the middle of the model name, e.g. i7-1165G7, not just at the beginning and end! That thing doesn't even sort. You have no idea where to position it without seeing the spec sheet and Passmark scores.
We consumers have it easy, though. Go find some Xeon or EPYC model numbers and try to figure out which digit means what. They make absolutely no sense.
The decision that people think about most is when to upgrade (by definition it stretches over a longer time period than the single point purchasing decision).
That's why we have iPhone 7/8/9...13 and similarly for Samsung etc.
If your i3/i5/i7 designation gives no clues then that's not helpful.
Yes, this point is so obvious, I feel like I've got to be missing something. I am a professional programmer. I am vaguely aware that Intel chips have X-Lake based codenames. I have no idea how to compare the chip in my current laptop to a new laptop based on Intel's marketing names. To convince myself to upgrade, I just go to Geekbench, because nothing else in the marketing tells me the new chip is better. Seems like a very fundamental marketing failure that could be resolved by exposing the X-Lake names to the public in some friendly form.
X = 3/5/7/9 = market positioning bucket. Higher is better
Y = generation, currently at 11. This is basically a numeric form of the "* Lake" naming, except for 10th gen where Ice Lake (10nm, better power efficiency) and Comet Lake (14nm, higher max perf) co-existed in laptops
ZZZ = position within that generation, higher is better, more detailed than the i3/i5/i7/i9 bucketing.
Suffixes:
H = "High Performance" - laptop CPUs with higher perf and higher energy usage. The fastest laptop chips in a given gen previously got branded "HQ" (originally the Q meant quad core), but otherwise HQ = H.
U = "Ultra portable" - laptop CPUs with lower perf and energy usage. They can usually boost well for short tasks, so you might not notice the difference if the most intensive thing you do is compiling code, but they fall flat for longer workloads such as gaming.
Y = Lowest power usage - These are all garbage, to be honest. You might have one in your windows tablet or netbook.
M = "Mobile" - dead these days, as the H chips replaced them; there were a couple of gens where H and M co-existed with H > M > U.
G_N_ - Integrated graphics rating. All G3 CPUs in the same gen will have the same iGPU. This only exists on tenth-gen/eleventh-gen 10nm chips. These chips are more efficient than H/U series chips at the same perf, but don't yet reach the performance peaks of the H series due to lower clock speeds.
K - Unlocked overclockable CPUs, mostly desktop chips, but HK chips for laptops exist too.
F - No integrated GPU. Mostly desktop only.
T - Power efficient desktop CPU (thanks mehlmao)
These suffixes can be combined, e.g. HK for overclockable laptop cpus or KF for desktop cpus with overclocking and no IGPU.
---
So the rule of thumb version for a laptop buyer: H = high power, U/Gx = better battery life; within that, pick the highest perf ranking number (ZZZ) within your budget. Deduct 1-2 positions per generation out of date. If you need to compare cross-gen or vs AMD, you need to go look at reviews.
The G numbers in particular are pretty meaningless to consumers. Basically all Intel iGPUs fall into a bucket that is "good enough for Windows or esports titles, not good enough for new or recent AAA games".
Other features are more of interest to DIY builders, nobody is going to sell you a laptop with no igpu and no dgpu, and if you're not interested enough to read up on this, you certainly don't care about overclocking.
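To make that decoding concrete, here's a rough sketch of a parser for the scheme described above (illustrative only: the example model names are arbitrary, the two-digit-generation heuristic is simplified, and it ignores older 3-digit model numbers, Pentium/Celeron/Xeon parts, and some suffix combinations):

    // Illustrative only: decodes strings like "i7-1165G7" or "i5-8265U"
    // into bucket / generation / SKU / suffix, per the rules above.
    #include <iostream>
    #include <regex>
    #include <string>

    struct IntelModel {
        int bucket = 0;        // 3/5/7/9 market positioning
        int generation = 0;    // e.g. 8, 10, 11
        std::string sku;       // position within the generation
        std::string suffix;    // U, H, HK, K, KF, G7, ...
    };

    bool parse(const std::string& name, IntelModel& out) {
        // iX-<digits><optional letters, possibly a trailing digit for Gx>
        static const std::regex re(R"(i([3579])-(\d{4,5})([A-Z]*\d?))");
        std::smatch m;
        if (!std::regex_match(name, m, re)) return false;
        out.bucket = std::stoi(m[1].str());
        const std::string num = m[2].str();
        // Rough heuristic: 10th gen and later use a two-digit generation prefix.
        const size_t genDigits =
            (num.size() == 5 || num.rfind("10", 0) == 0 || num.rfind("11", 0) == 0) ? 2 : 1;
        out.generation = std::stoi(num.substr(0, genDigits));
        out.sku = num.substr(genDigits);
        out.suffix = m[3].str();
        return true;
    }

    int main() {
        for (const std::string s : {"i7-1165G7", "i5-8265U", "i7-10710U", "i9-9980HK"}) {
            IntelModel p;
            if (parse(s, p))
                std::cout << s << " -> i" << p.bucket << ", gen " << p.generation
                          << ", sku " << p.sku << ", suffix " << p.suffix << "\n";
        }
        return 0;
    }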
Oh wow, this is great Macha! You've provided a concise solution to a real puzzle whose importance sits precisely in the space between "important enough to know" and "not important enough to research myself". Intel should put your explanation in a PDF and distribute it widely. (Or better, review their marketing approach.)
I think Intel's marketing may have fallen down because:
- At one point everyone upgraded their PC every few years anyway.
- Just having Intel on the box was sufficiently impressive to make the sale.
Now they really need to push their latest CPUs as the latest and greatest.
I like the Lake names, but I've never understood them from a marketing perspective (what have chips got to do with lakes - were they designed near these lakes?); they feel more like code names that accidentally got leaked.
Many years ago Intel got sued for using code names that were things like "Hendrix" so they started using code names that were geographic features that could not be "owned" by others...
Thing is, unless one is doing AAA games, surviving Android, Rust or C++ builds, or trying to see how many Docker instances one can stuff into the machine, any 10-year-old computer will do, let alone a randomly chosen one at the computer store down the street.
Even ARM, if it wants to displace Intel, will mostly be selling cloud pizza boxes or phones to replace those that get stolen or broken.
I am not arguing that the product naming is perfect, only that it is not completely absurd, as others claimed.
Comparing the performance of laptop and desktop is a niche case. Most people know whether they want a laptop or a desktop and then choose among those categories.
> Comparing the performance of laptop and desktop is a niche case. Most people know whether they want a laptop or a desktop and then choose among those categories.
I think you will find many people thinking that is a very useful thing to compare.
That's already too low-level for most people. They care about the overall device model, not the components inside. This is why Apple has been so successful, everything has a simple model and version number or year attached, with generally more performance as numbers (storage, ram) get bigger.
When you can walk in to a store and be presented with laptops with "Core i5" that are 1 or more generations apart, and where a newer processor isn't always better than an older generation (different architecture, cores caches etc), I think having more than "it's an new laptop and it has an i5" is quite helpful. I think with the Apple chips, there is always a progression so far. If they keep that up, the marketing is on point and you'd always know that what you are buying is "one better than last year". I appreciate we can't guess what they will actually do, and the reality might be as diluted as the ia32/X64 marketplace is now.
But if you're thinking "Should I upgrade my 6 year old i7 system?" or "Will I notice the difference from spending an extra $200 on my CPU?" these days it isn't easy to come up with a sure answer.
For me, the buying strategy these days is to decide on the features that are most important (screen, keyboard, RAM, battery life, etc.) Find a machine that satisfies those key elements and then pick a price point and buy. Doing anything more leads to the paralysis you mention.
The performance difference between i7s of 10 years ago versus today is more than 2x per core, and the number of cores have also more than doubled in that time period.
That's over a 4x performance difference in 10 years, it's not quite the good old days of Moore's law, but it's still an exponential improvement.
That sounds like an S-curve, not exponential improvement. You can make any monotonically increasing function look exponential if you only have two points (f(n+x)/f(n)) and you get to pick both n and x. "It doubled over the past fifty years!"
If looking at all the points shows the time it takes to double is way longer than it used to be you're probably on the top of the S.
Which, ok, we know we're headed to an S-curve instead. There are limitations on what we can do on silicon, right?
It's just that the CISC/Intel S-curve is way lower than the ARM/M1 S-curve. Intel got used to leaving a margin of performance sitting on the table because it'll be faster next year. They got lazy. As chip gains have slowed those margins have started to add up.
Not really. I had this exact case where a modern i7 laptop kicked the butt of my 10-year-old desktop speed-wise with the same number of cores. Not sure how this trend would go if it started now, though.
The overall performance of Intel's range of processors may not have changed much, but what they called an i3, i5 or i7 sure has lately. For example, the popular i7-7700k is most similar in terms of core count, clocks etc to the current i3 chips, and I don't think there was any real equivalent to the current i5 or i7 chips back then. If I remember rightly, the generations prior to that were still seeing actual performance improvements generation-on-generation too.
Yeah the marketing effort is immense, and it's really surprising to me how many eagerly lap up the angle Apple is pushing without much reflection, even on HN. Like the good old Apple hype days.
Now to be fair, the M1 is an impressive iteration on the A* chip line.
But the Ryzen 5000 mobile chips really don't look bad at all in comparison, and a 5nm version of those would level the field.
> But the Ryzen 5000 mobile chips really don't look bad at all in comparison, and a 5nm version of those would level the field.
"Don't look too bad" is the best way to put it.
Even if we add a one node advantage, we still have a ~15 watt chip making 20W-30W chips sweat.
If you downclock Zen 3 to match a 15W TDP, you may well get the M1 beating it by a double-digit margin.
M1 is simply a way more efficient chip than any X86 chip can be because of 40 years of architectural advantage.
- the latest x86 chips have patently gigantic decoders metastasising into the backend. They are very tightly coupled to pretty much every other piece of logic.
- the x86 memory model and blockage behaviour cost a huge transistor count to work around
- better SMP efficiency because of the laxer memory model, and SMP logic being integrated more deeply into the cores rather than being an afterthought
- generally better register utilisation at a lower transistor count, and an upstream software ecosystem with a historically better understanding of how to work with a large register count
- prefetching on x86 cores, both from main memory and between caches, is more expensive and less efficient because it has to rely on more complex logic and do more guesswork than on ARM.
The list can go on for a few more screens.
Remember: even if the M1 is a SoC, it still manages to beat Ryzen, which barely has anything besides cores, a memory controller, PCIe, and USB. If you give a one-node handicap to Ryzen and only compare cores, you will still get the M1 having almost 3 to 4 times the advantage in performance per square mm.
It's really sad to see X86 development digging itself into a ditch with "40 years binary compatibility at any cost."
With all due respect to Lisa Su, I believe they very much understand all that, but nevertheless still signed on to the idea that the X86 market will never go anywhere.
I believe it's trivial for both AMD and Intel to slash X86 transistor counts while increasing performance by double digits if they allow themselves to break with X86 ISA convention on just the few most egregious anachronisms.
Trust me, it does not have 40 years of architectural advantage, just an advantage. It's not going to take Intel "40 years to catch up", give me a break. I have an M1 mini and it's a great little desktop machine for browsing and light development, but it's not 40 years ahead of Intel.
40 years of architectural advantage is X86 electing to forgo the results of 40 years of advances and improvements that every other sane ISA had, and instead trying to add them through increasingly complex "workarounds."
Giant transistor counts go toward allowing X86 cores not to break ISA compatibility with a 40-year-old chip while trying to make new ISA features live alongside it.
The 5800U (Zen 3 mobile) and the M1 are actually very close multithread-perf-wise, despite one being an 8-core part and the other a 4+4 part. And with more power use for Zen 3, too.
What hurts Zen 3 there is while the M1 maintains the full clock with all cores busy, Zen 3 has to downclock away from its max turbo clocks.
Ok, but that's not what I'm talking about. I'm talking about the 5800U@15W.
On Cinebench, the 5800U gets much more than the M1 in multicore and slightly more in single core. It even edges out the M1 on Geekbench, though Gb is a poor benchmark.
Cinebench is a rendering load that isn’t that much optimised. (Doesn’t even use the newer AVX levels when available, and isn’t properly optimised for Arm either).
Cinema 4D, the program that Cinebench benchmarks, normally does the renders on NVIDIA GPUs, not CPUs. As such, it’s a very poor benchmark.
And those 5800U results are at probably much higher than 15W. (Because that’s the base config, OEMs are free to ship with higher TDPs)
The examples I took were 15W. The M1 also runs at much more than 15W in some models.
Geekbench in general is very heavily biased toward ARM, as it does not run the same code on both. Cinebench doesn't have this issue. And while a lot of rendering is done on GPUs for C4D nowadays, the class of programs is path tracers, and they are often run on CPUs for many scenes.
Geekbench 5 runs the same tests on both platforms, and always did. What you describe as heavy bias toward Arm comes with no evidence.
> The M1 is also ran at much more than 15W in some models.
Nope, it's the same M1 for both, same voltage/frequency curves and top clocks. It's the whole point of having a chip that's named the same.
No modern laptop CPU has a headline power use number. They all try to use the headroom that they've been given.
Cinebench is very far from being the end-all of benchmarking that you say it is here. And there are far more optimised renderers around if you want to benchmark that. (RTX makes the matter moot nowadays anyway)
It does not use AVX-512, or older AVX levels much for that matter, for a workload that is SIMD-friendly. For Arm, they leave lots of perf on the table too. (Cinebench uses Intel's Embree renderer, with AVX-512 disabled)
Geekbench 5 is designed to be a composite index of multiple benchmarks, to be somewhat more realistic than using just one. You can also access the scores of the subtests.
Area isn't what affects power usage, switching activity is. [1] Besides, the cache size doesn't negatively influence or constrain the design of the core logic, but highly complex instruction decoding absolutely does.
[1] Well, mostly. There is such a thing as static power draw, but I suspect that the transistors in the L3 cache are optimized to have lower static leakage than the transistors in the core logic, which are optimized to be fast.
Leakage is actually a significant part of idle power consumption, especially at smaller process sizes, so much that some CPUs have the ability to turn off parts of caches when idle.
I repeat my stance that x86 instruction decoding is a tiny part of a processor, and things like vector units (which are also often powered down when idle) and reordering logic take far more power.
There's a paper about this that compares the efficiency of different ISAs, and basically concludes that ARM and x86 are no different in that respect. Only MIPS is an awful outlier.
That paper doesn't show what you think it shows. There are too many variables between these CPU implementations to support the conclusion that the ISA doesn't matter for power consumption.
I don’t think “impressive iteration” begins to describe it. I am very rarely impressed by new tech these days, and the M1 is the first piece of new hardware I’ve seen in years that seems downright magical.
My previous laptop was a 2016 MBP with an i7, and the M1 destroys it. I can build large projects like LLVM or WebKit fast and it doesn’t even get hot.
I’ve also been running Linux VMs with the new Parallels and they feel native speed.
Rosetta 2 is incredible, Intel apps are very responsive on M1.
My work computer is a high end 16” 2019 MBP with 64 GB of RAM and honestly if it weren’t for needing the extra RAM or the occasional x86 VM I’d trade it in for an M1 in a heartbeat.
I’ll agree the Apple hype is unrealistic at times, but the M1 is one that absolutely deserves it.
You're certainly correct for, say, highly controlled benchmarks, but don't discount subjective user experience. I've heard _far_ more M1 users talk about the subjective feel compared to even the previous-generation MacBook hardware, whereas it's been a long time since I've heard that about an Intel-to-Intel upgrade (basically since the Core -> Core Duo period), other than from people with GPU-heavy needs, and that seems like an interesting data point to me.
I do wonder how much of that is just getting a nice new laptop? If the average user was handed the equivalent laptop with an Intel CPU but told it was an M1 would they notice? Would they also think it was nice and fast?
Here is why I think your hypothesis is not true: people were buying new Macs before the M1 too, but it did not generate the same reactions. So newness is not the cause of this, or at least it is not the only cause.
Did Apple not market new hardware for a decade prior to M1? I don’t think this is sufficient to explain the difference, especially given the supporting benchmarks.
Possibly, but it feels like that should have happened, and yet it did not happen to anything like this degree, when people were getting shiny new Intel CPUs after the early 2010s.
Which Ryzen laptop currently competes with the latest MacBooks in terms of form factor, performance, battery life and build quality? I'm honestly asking; I've been thinking about getting a Linux laptop for a while and I would be curious to see what the playing field is like.
Have you used a recent MacBook? The MacBook Pro keyboard is unique in that it is the only time in my life I have ever experienced a keyboard breaking.
I think people are a little too nostalgic when talking about Apple build quality. There's a 2011 MBP here that still works well. But recent releases... no better than the competition really.
AMEN to that. I still use my fully repairable 2010 MBP for web surfing and light tasks (with 8GB and an SSD it's fine). On the keyboard note, I bought an expensive Eurocom (Clevo) Xeon laptop that had keyboard issues after a few years. It's not only Apple that shipped bad keyboards on expensive products (fortunately I could repair it myself for $50). Today's Apple products should ship with Rossmann videos and an SMD workstation if you plan to use them for more than 3 years ;)
I considered that keyboard to be designed to fail. I'm not interested in gambling that another component won't fail the same way. It's easier to use products that have not demonstrated such designed failures.
The good news however is that due to this garbage tier engineering I've discovered that desktop Linux is more stable for me than post-Catalina OSX - I'd have probably never found this out otherwise.
Designed to fail is preposterous. Apple lost a ton of money due to that problem, both directly in paying for repairs, and indirectly in damage to their stock price.
It's pretty clear what happened, and it's unfortunate but banal: someone went too far with making the key mechanism as thin as possible, and all plastic to make it easier to manufacture. Ends up the plastic wasn't strong enough for the little pins in the mechanism.
That's it. Just an ordinary design mistake. Embarrassing but no conspiracy.
FWIW they've resolved it. The keyboard on this M1 MBA is perfectly fine.
> It's pretty clear what happened, and it's unfortunate but banal: someone went too far with making the key mechanism as thin as possible
I see it as a symptom of the Jony Ive ideology, no longer tethered to reality by Jobs, poisoning Apple in the early 2010s. The iOS 7 flat UI, over-thin devices, keyboards.
Signs point to Apple recovering from this (the new Apple TV remote).
I don't see any evidence that Apple's stock price was affected. Apple can ship any garbage (they shipped a keyboard they knew was defective for three years) and people will buy it because they're locked in to Apple's services, another reason they're pushing services so hard at the expense of the rest of the company.
The new keyboard is what they should have had the last few years. A refinement of the scissor switch keyboard from the glory days of the MacBook. It's great.
> Have you used a recent MacBook? The MacBook Pro keyboard is unique in that it is the only time in my life I have ever experienced a keyboard breaking.
Given that you know the keyboard has been fixed, is there a reason why you didn’t make it clear that you were talking about older machines and not current ones?
You make it sound like the current machines have the flawed keyboard, which is not true as far as I am aware.
I'm typing on a current-gen MacBook, and the keyboard is fine. Butterfly keyboards were flawed, but I think it's also a bit disingenuous of the parent comment to imply that this is a reflection on overall Mac build quality. Those generations of Macs were bad products for several reasons, but the general build quality of those machines, and of everything in Apple's line, is quite high compared to competitors.
I prefer the touchpad on my Surface Book 2 to the one on my Macbook Pro (2019). The one on my Macbook is comically huge, and I also am not in love with the feel/click noise compared to the Surface Book. YMMV.
I have both an XPS 13 Developer Edition from 2018 and two MacBooks from 2019. The touchpads on both 2019s are not good. They both have difficulty with inertial scrolling, and the touch rejection is far too sensitive.
Coming from lifelong Linux-land, it's actually comical reflecting on how poorly these things operate day to day. The most prominent issue for me is how poorly they work with the USB-C dock I use for multi-monitor.
No idea, it's very new so the driver side is probably flaky. That being said, Lenovo is shipping it on ThinkPads and the like, so Linux support shouldn't be too far-fetched.
I have a dell XPS and it's a dream. Probably the closest thing to a good trackpad. That being said, Apples keyboards, even on the MBP are below par. They don't feel like they're supposed to be put through their paces. At work they make us use Macs and iMacs and to be honest, I damn near threw the wireless keyboard in the trash but Apple will win the screen wars till Jesus comes back though.
I'm probably not the person to ask, I only use the touchpad as a last resort.
(Honestly I never even noticed that the apple touchpad was better than any other touchpad, but I understand that touchpad users will notice things I didn't.)
Test it for what exactly? All ultrabook keyboards are garbage, but they shouldn't out-right fail. "Testing" one isn't going to tell me if it will fail, it's just going to feel as crappy as the other keyboards.
If a product fails within the first 10% of expected lifespan I'm no longer interested in that product. If it fails _AND_ the manufacturer refuses to take responsibility until months of bad press and class action lawsuits then I'd have to be insane to trust them with another device.
The point is that the keyboard on a 2018 MacBook, when Jony Ive was allowed to push his nonsense, is just not the same keyboard as on current MacBooks.
It's a different keyboard. Different mechanism, very different feel, larger key travel, much more comfortable.
I have a 2018 i7 MBP and a 2020 M1. The keyboards feel incredibly different. On top of that the 2020 M1 has a real Escape key which is a huge upgrade for anyone in unix land/developer land, etc..
The 2019+ keyboard has had no controversy or reputation for failing.
My 2018 one does feel like it's on its last legs, FWIW. It's a work machine... just hoping to get a refresh before it dies.
Dave2D just reviewed the new surface laptop comparing it to the new MacBook Air: https://www.youtube.com/watch?v=WKN9nvXTGHE ...and he got pretty good battery life (11 hours on his own benchmark).
edit: form factor, performance, build quality.. is also up there, I just didn't mention it, because I think the battery life is the killer feature of the M1.
It is good battery life for a PC laptop, but it is still significantly behind M1-based Macs while also having significantly lower single-core benchmark scores than the M1. And the performance difference is even bigger when both laptops are on battery power.
Microsoft is almost always a generation behind on tech in Surface products, which is ridiculous because they look so appealing otherwise. I just feel like I'm buying last year's model on launch.
That is really a problem only if you watch tech news. In reality people use computers for years, so having a 4 or 5 year old tech doesn't really matter that much.
Developers should be upgrading on 2-3 year cycles max - the leap in performance over two generations is noticeable, and unless you're doing some really trivial stuff, performance matters.
Check out Chrome build time benchmarks between 3/4xxx and 5xxx series (or any dev tool related benchmarks). 10-20% off your iteration cycle is noticeable - and that's a single generation jump.
Increasing iteration speed is a big productivity boost; seeing professionals with 5+ year old devices is just ridiculous to me. You spend 8+ hours a day on that device - investing $2,000 in an upgrade every two years is not that much to ask.
Developers may have a reason to upgrade (many don't as ssh isn't going to get any faster-er with a fancy new CPU), but they constitute the part of people that read tech news. This is a minuscule part of general population.
Compiling Chrome is a nice benchmark but in reality dev cycles are quite smaller (or you might want to optimize your toolchain).
Finally, it's quite common for developers to forget the kind of hardware their users are running on. Then they produce software that only runs well on the next gen hardware. If you develop on a beast, then at least test whatever you are producing on a 3-4 year old machine to see how it will work for the clients.
Personally I'd like if developers stayed a couple of generations behind on their computers, so the rest of us aren't forced to upgrade regularly to keep software running at a decent speed.
Sadly, with the current trends I think the device will more likely be killed by an irreparable hardware failure before it reaches end of life due to speed.
I have a 2014 retina iMac. At some point in the next few years, I’ll probably replace it with an M series iMac, and then put Linux on it. It’s hard to imagine that it will ever really reach end of life due to speed at that point.
I haven’t really felt this way with the Surface Book. The first gen was rough, but I’ve been using an SB3 for several months now and have nothing but praise for it.
Probably a ThinkPad. But it depends completely on the type of work you want to do. Whatever you end up buying is a long-term investment, so whichever gets your job done and has good build quality (ideally something you can easily upgrade and repair yourself) is the one you should buy.
Not a huge sample size, but the screen on my X1 Nano looks excellent. Competitive with my 2018 11” iPad Pro’s screen visually, if a bit lower PPI.
That’s one of the highest end ThinkPads though, so I guess that’s to be expected. Have heard that getting a good panel on more mainstream models like the T-series can be tough.
Not entirely sure, but form factor is limited by thermal requirements (well, mostly; Apple and other OEMs have gotten into trouble by ignoring this in the past, like the first i9 Macs), and that is influenced by CPU architecture.
I had an Ideapad 5 (4800u) for a few months, it was a great little Ryzen laptop. Battery life was 7-10 hours of "real" use, CPU performance is righteous, and build quality is pretty surprising for sub-$500. Great little Linux laptop!
I think the OP is saying that it could be a mistake to focus on the processor so much when it might turn out that worthy competition is just around the corner. Apple has never focused on the processor like this before, instead opting to focus on the whole product, fit and finish, etc., which arguably very few rival them on even today.
Yes competitors may have worthy opponents to the M1 "just around the corner".
But do you think that Apple stopped innovating after releasing the M1 last year? Don't you think the M2 (or M1X whatever it's called) is well into design and prototyping at this point?
The broader point, I think, is that Apple has never really competed on specs before. Not meaningfully, anyway. They’ve competed on whole product. Competing on specs opens them up to new challenges.
While the competition might have M1 competitors (perf/power) "just around the corner", Apple has the M1 out right now. You can go buy an M1 Mac today. If I need to wait until next quarter or the second half of the year to get an M1 competitor that doesn't help me much today.
It's difficult to compare Apple Silicon versus PowerPC, because the new Apple hardware is far more ambitious - the whole platform. I would expect that the memory and SSD controllers are to some degree licensed from other industry players. The CPU architecture is something else again: it issues eight instructions per clock, per core. The M1 core design has more in common with IBM's POWER9 than with a typical x86 core. It's remarkable to see this design in a low-end consumer device. A mobile one, at that.
> There have been just so many genuine tech news articles of the form, "Future Apple Competitor Product Beats Current, Widely Successful Apple Product".
It's the twin brother of all the "Currently, successful Apple product feature technically existed on [not-Apple] Product years ago (even though basically nobody used it)."
People read those articles and conclude it's all marketing hype, but never ask themselves why, again and again, simply checking the box for a specific feature never seems to lead to broad adoption. And if it truly is ONLY marketing hype driving those ecosystem effects, then maybe you should start thinking of marketing hype as a feature?
I can answer why _I_ downvoted the comment. It was this part:
> [...] it's really surprising to me how many eagerly lap up the angle Apple is pushing without much reflection, even on HN. Like the good old Apple hype days.
You can make a point without implying that anybody who doesn't agree with you is incompetent (the "Apple hype" meme is really stale).
But let's say that a 5nm Ryzen comes out ahead for mobile usage. (But not necessarily by very much.)
Chips like the M1 (or ideally, something similar made available to other device manufacturers) are still really interesting. The ARM architecture can scale down better than x86/x64 can, allowing a device maker to hit low-end performance targets at less cost than x86 chips can manage. At the low end, it also tends to provide better power consumption for a fixed performance target than the x86 offerings.
This makes M1-type chips really interesting for any manufacturer who wants to make devices that span a huge range of performance requirements, from slow kiosks up to decent laptops. Keeping everything on the same ISA makes sharing firmware and applications across the lineup easier than having to switch to x86/x64 above some threshold.
It also does not hurt that the M1 means the Apple devices that are really popular with developers are now running on ARM, which helps ensure that more software gets ported to (and/or tested on) ARM devices. ARM's weak memory model, for example, is something that can trip up a lot of software that tries to implement low-lock algorithms but whose designers were only familiar with the x86/x64 memory model.
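As a rough illustration (a minimal C++ sketch with made-up names, not anything from a real codebase): the classic flag-and-payload publish pattern often appears to work on x86 because of its strong ordering, but on ARM the two stores can become visible out of order unless explicit release/acquire ordering is used.

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                     // hypothetical data being published
    std::atomic<bool> ready{false};      // hypothetical "payload is ready" flag

    void writer() {
        payload = 42;
        // A relaxed store here tends to "work" on x86 (TSO keeps the stores in
        // order) but is broken on ARM, where the flag may become visible first.
        ready.store(true, std::memory_order_release);  // correct on both
    }

    void reader() {
        while (!ready.load(std::memory_order_acquire)) {}  // pairs with release
        assert(payload == 42);  // only guaranteed with acquire/release ordering
    }

    int main() {
        std::thread a(writer), b(reader);
        a.join();
        b.join();
    }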
> This makes M1-type chips really interesting for any manufacturer who wants to make devices that span a huge range of performance requirements, from slow kiosks up to decent laptops.
This is purely theoretical. Apple will <<never>> sell its chips to other companies.
So if you were expecting Apple Silicon on its own to start a revolution, that's never going to happen.
The poster stated "M1 type chips". That is not setting an expectation that Apple will sell their chips.
Microsoft already have a Surface product with an ARM chip that didn't get updated in this round, AFAIK. It already has solid ARM performance, but it's not as good as the M1. It was let down by how poorly x86 emulation worked, though there seem to have been improvements since, including x64 support. If they focused on the chip design, they could see improvements along the lines of Apple's. It's a question of whether the investment is made rather than whether it's possible.
While a lot of the M1 advantage over Intel and AMD is process node, the advantage over the rest of the ARM and RISCV ecosystem is substantially more than that.
The team that built the M1 is a collection of top tier ASIC designers that came from several acquisitions, from PA Semi forward. The Qualcomms and Caviums and so on just aren't in the same league. Lord spare me another tarball of garbage and random kernel patches called an "SDK." They don't really have the same bench and don't pay top dollar and it shows.
It is unlikely that there will be an M1 equivalent from any of those guys. Intel? AMD? Absolutely.
Why would they? Apple likes vertical integration, it's not advantageous to them to sell off a differentiating part of their product, the hyped processor, so that others can use it, often competitors.
Back in the 90s, when Jobs was gone, Apple licensed its OS and almost went bankrupt as you could simply get a cheaper clone of Apple's computers with the OS on them. They won't repeat that mistake again.
Not really, in any way other than speculation. No one can use M1 tech unless they steal it. Sure, they can build what they think are similar designs, but they don't know if they're going down the right path. The M1 isn't just another ARM design; it's done solely by Apple.
I think the issue with the comment is that it comes off as biased or distracting (regardless of the intent). The conversation is about Apple Silicon vs Intel, and then it veers off topic with discussion of Ryzen.
Also, saying that people are "lapping up" the Apple marketing suggests that it's more sizzle than steak. But it's completely undeniable that the A chips, and by extension, the M1 is a beast.
Apple Silicon is NOT just an ARM CPU. It's also a deep integration with macOS, and a unified memory architecture (less unnecessary copying on the bus). For example on the M1, certain very common operations for macOS apps, like freeing Objective C pointers, which happens millions of times per second, are optimized at the hardware level. The M1 is not just a CPU.
So no, the Ryzen 5000 with 5nm would NOT level the field.
> For example on the M1, certain very common operations for macOS apps, like freeing Objective C pointers, which happens millions of times per second, are optimized at the hardware level
I mean, this is _kind_ of true, but it's not unique to Objective C, the M1, or even particularly intentional. The _ARM_ memory model is more conducive to reference counting in general than the x86 one. Nothing special about Apple's chip in that respect, tho.
> and a unified memory architecture (less unnecessary copying on the bus).
So does every other Intel & AMD laptop CPU from the last 5+ years. So does every Qualcomm SoC from the last 5+ years. This has been incredibly common for years and years now.
This does a lot less than you think it does, particularly since there's very few consumer workloads that switch between CPU & GPU.
From what I understood, ARC operations are cheaper on the M1 because of weaker memory ordering constraints. But I thought this was an ARM thing, not M1-specific; I could be wrong.
The overwhelming majority of macOS/iOS software is written in Obj-C and Swift, which share the same underlying reference-counting semantics.
That said, ARC doesn't have as great an impact on performance as some people (esp. fans of Java's GC etc.) would have you believe. Most ++/-- operations on the refcount are eliminated by the compiler.
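For the retain/release operations that do survive, a generic C++ sketch shows where the memory-ordering claim comes from (this mirrors the usual shared_ptr-style idiom, not Apple's actual runtime): the retain only needs relaxed ordering and the release needs release/acquire, and on a weakly ordered ISA like ARM those can map to cheaper instructions than on x86, where every locked RMW acts as a full barrier regardless of the ordering requested.

    #include <atomic>

    // Generic intrusive refcount, in the style of shared_ptr's control block.
    struct RefCounted {
        std::atomic<long> refs{1};
        virtual ~RefCounted() = default;
    };

    void retain(RefCounted* p) {
        // Creating a new reference needs no ordering with other memory,
        // so relaxed is enough; on ARM this can avoid barriers entirely.
        p->refs.fetch_add(1, std::memory_order_relaxed);
    }

    void release(RefCounted* p) {
        // The last decrement must make all prior writes to the object
        // visible before destruction, hence release + acquire.
        if (p->refs.fetch_sub(1, std::memory_order_release) == 1) {
            std::atomic_thread_fence(std::memory_order_acquire);
            delete p;
        }
    }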
Wallclock time isn't the only performance metric to care about. ObjC had a GC in the past and lost it in favor of ARC for a reason; it's a poor fit for iPhone-sized devices.
That was the marketing reason; the actual reason was that fitting a GC into a language with the C memory model was never going to work, and there were plenty of crashes and memory corruptions coming out of that.
So they made the only sensible decision, and just like Microsoft did with COM, they added support to the Objective-C compiler to automate retain/release messages in OS X Frameworks.
Naturally, being Apple, that pivot had to be sold as RC simply being better, not as the technical failure to make a tracing GC work in Objective-C while remaining compatible with the C memory model.
Apparently, moving the GC documentation, with all its caveats about possible programming issues, out of the search index also helps keep that story intact.
No, the performance issues are real. GC programs have more page demand and higher peak memory use, which is specifically what the iOS memory model (jetsam and memory compression) can't deal with well. They can have some wins due to compacting.
There is of course a GC language on iOS (JavaScript) but expensive webpages get killed quickly.
And yet Microsoft was able to ship very good 200-euro phones running GC languages with very good performance; maybe Apple should talk to their runtime engineers.
Wasn't most of the system stack in C++, since it was shared with either Windows or Nokia? Either way, they went out of that business, so we can't tell whether they could have kept supporting a five-year-old phone with new OS features.
The worst case performance is very bad though. I have done some playing around with pushing Swift in terms of performance, and it's very hard because of ARC.
It's not about garbage creation, it's about reference cost. With ARC, you pay a penalty every time you pass a reference type, not only when there's memory churn. You can avoid this by using value types, but that often results in a lot of unnecessary copying.
It's not unavoidable, but Swift doesn't give good tools for avoiding ARC-based performance cliffs, so it takes a lot of profiling and effort compared to other languages.
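The same effect is easy to demonstrate in C++ with shared_ptr, which is roughly the analogue of an ARC reference (a sketch of the general cost model, not a claim about Swift's exact codegen): passing the reference-counted handle by value bumps the atomic count on every call, while passing a borrowed reference avoids it.

    #include <cstdio>
    #include <memory>

    struct Node { int value = 0; };

    // Pass-by-value: copies the shared_ptr, which does an atomic increment on
    // entry and an atomic decrement on exit, i.e. the "pay on every pass" cost.
    int readByValue(std::shared_ptr<Node> n) { return n->value; }

    // Pass-by-const-ref: no refcount traffic at all, like a borrowed reference.
    int readByRef(const std::shared_ptr<Node>& n) { return n->value; }

    int main() {
        auto node = std::make_shared<Node>();
        long sum = 0;
        for (int i = 0; i < 1'000'000; ++i) {
            sum += readByValue(node);  // ~2M atomic refcount operations total
            sum += readByRef(node);    // none
        }
        std::printf("%ld\n", sum);
    }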
It's ok to generate garbage if it goes away immediately; that's essentially a stack allocation. Performance problems come up when you have a mix of lifetimes. (that slows GCs down and causes heap fragmentation outside copying GCs)
I can't find the interview now, but I distinctly remember someone at Apple saying that Cocoa apps benefit from hardware-level optimizations of common Objective C operations in the M1.
I didn’t buy the hype until I tested an actual M1. It exceeded the hype. It feels like when I was a kid and went from a 386SX to a Pentium at 4X the speed, except the M1 was also a quarter the power consumption.
It’s the first time in a over a decade I’ve felt a revolutionary step forward in a chip and the only time it has ever come with less power.
Process node is a factor but I find it hard to believe it’s the only factor. I know enough about the X86 legacy tax to know that is not the case.
Apple didn’t do black magic. They just took a cleaner superior CPU architecture and made a desktop class muscular chip with it. Other ARM manufacturers could equal or exceed the M1 if they wanted. X86 has been delivered a death sentence.
IMHO the greatest threat to AMD is ARM not Intel, and vice versa.
> "Apple didn’t do black magic. [...] Other ARM manufacturers could equal or exceed the M1 if they wanted."
I see lots of people saying "If only $company had an ARM device", and I don't understand it. My experience of ARM devices is slow, laggy, janky products - competing smartphones, tablets, router and network device management interfaces - year on year, the pairing is always (ARM + fucking slow).
It's only Apple who have managed to get "desktop class" performance, and at this point I'm more willing to attribute that to Apple doing black magic than to ARM being inherently superior.
Microsoft have piles of money, thousands of developers, they make hardware, they've been working with ARM since the likes of the Compaq iPAQ PDA and Windows CE 21 years ago, and the Surface RT in 2012, and their ARM devices are nothing to speak of. Google, industry gorilla of tech with the finances to match - see the famous Gruber/Daring Fireball pieces about iPhones being far ahead in single-core JavaScript performance year after year after year. Tech giants like Samsung, Qualcomm and Sony have been designing and building chips for decades, with their own drivers, firmware, and customized Android builds, and building their own flagship products.
Apple whomps in from "nowhere" with the fastest ARM device anyone has ever seen, and the critics response is "anyone could do that if they wanted to".
Then why don't they want to?
[Edit: Consider also that Apple spent $278M on acquiring PA Semi chip design company in 2008, and $600M on Dialog Semiconductor in 2018, and Microsoft spent $2.5BN on Minecraft. I say these companies "are tech giants with finances to match", but it's not like Apple had to lean hard into their hundreds of billions cash pile to do what they've done. Instead they had to do something that appears to be "black magic" (i.e. desire + leadership + execution + long term planning + ???)].
Microsoft has to more or less take what Qualcomm will give them. Qualcomm's virtual monopoly on high end non-Apple phones has made it complacent.
> Apple whomps in from "nowhere" with the fastest ARM device anyone has ever seen
Not really from nowhere; they've already had the fastest (non-datacenter) ARM chips anyone has ever seen, fairly consistently, in their phones, for years.
Apple has had a vision for this for at least 12-13 years. Most likely, ever since (before) the transition to Intel.
The first A-series chip they officially released and named was the A4, in March 2010. That is 11 years ago. That is the beginning of the M1. And the reasoning was originally laid out in this amazing Ars Technica article [1]. Apple has always wanted to fully control the entire stack.
Any competitors in the same space lack either the vision, or the products, or the money to do the same thing. In the case of Microsoft and Google, their internal politics don't allow them to do the same thing (Google couldn't care less about hardware because the Web is where Google's money is; MS cares about hardware, but traditionally they have relied on throwing money at partners to make them come up with solutions).
Indeed, which gives two points against "anyone could do it": it took Apple 13+ years to achieve, so "using ARM" isn't enough; and competitors haven't been taken by surprise - they've had 10+ years to mount a response and haven't.
Apple is pretty uniquely positioned to make the architecture change as smooth as it has been. If you look at competing smartphone SOCs it's not hard to believe that a technical M1 competitor could be put together, but getting it adopted would be much, much more difficult.
It seems so obvious to me that Apple's chief strategic strength here is that they can essentially force adoption that it's very strange to me that your comment is the only one I've seen on this whole post mentioning it.
One of the ways we already know that beefy ARM processors can work well (and with better power efficiency than Intel CPUs) for heavier workloads/at higher scale than mobile or embedded devices is that it's already being done in the server space. Is scaling ARM processors up for the desktop radically different? (If so, I'd love to learn a bit about how )
The problem that remains for anyone who wants to sell PCs with something other than x86 is that desktop users want their apps, and for Windows users those apps are distributed as x86 binaries, and users aren't in control of that (especially for unmaintained or legacy software). Software publishers won't build or optimize for a new architecture if they suspect it's just going to be a flash in the pan, and this creates a chicken-and-egg problem.
But Apple can just declare that their new architecture is the only option for you if you're buying a new Mac, and a ton of their users will come with them no matter what. App publishers will just deal with it, knowing that they have to play along if they want their application to have a first-class experience on Mac. This is a huge deal, and not a position any particular PC manufacturer could hope to be in!
I'm sure Apple has done lots of technically impressive stuff that I'm not even competent to really appreciate in order to make this architecture change happen. But other vendors are more capable of technically impressive stuff than they are of forcing adoption for enough users to reach critical mass to get people who sell/distribute proprietary software to come along with them.
Apple is in a position where it can move swiftly and decisively. They have an army of software engineers who can port Apple's OS and associated Apps to a new architecture. They can force their entire ecosystem to move to a new architecture, and are emboldened by the fact that they've done it twice in the past (the first time they were more hesitant).
If Samsung wanted to pull down the latest Qualcomm ARM chip, slap a Macbook-sized heatsink on it, and go head-to-head with Wintel and Apple, they would fail. What are you gonna run? Linux? Some abortive version of Windows on ARM that doesn't support the vast number of x86 apps?
Even ChromeOS is missing barebones necessities like Photoshop, so professionals won't touch it. So the ChromeOS market is doomed (at present) to have "thin & light" and "long battery life" as its only selling points.
I think we'll see M1-style ARM devices come out of the woodwork in 2021-2022, running ARM Windows and Linux and ChromeOS, simply to attempt to copy Apple. But I don't think they'll be as successful, simply because the apps aren't there, and no one can force developers to port to the new architecture as Apple can.
Microsoft built Windows kernel to have different backends (Itanium, ARM) and different front ends (Win32, Windows Subsystem for Linux), they make Hyper-V and did a "run your old apps in a Win 7 VM" for a while, they've pushed .Net since the early 2000s where the intermediate code isn't so tied to hardware. They could plausibly have done something for compatibility like Apple does with Rosetta.
The thing about your comment is that "who would buy it?" assumes it's basically the same and people are indifferent. My griping assumes ARM is generally worse. But if we take this alternate world seriously, then the reason for ARM is superior performance, so the answer to "who would buy it" and "why would developers switch" is driven by that - like when Apple was falling behind with PowerPC - it wouldn't be Microsoft pushing it on an unwilling market, it would be a demanding market pulling it in.
If Samsung could push a better-than-Intel chip, would Microsoft not want a piece of it?
If Samsung Chromebooks were suddenly more powerful than an Intel i7, would developers not sit up and take notice?
My contention is they can't, because ARM isn't that different, and it's Apple's black magic that is the real difference.
> It feels like when I was a kid and went from a 386SX to a Pentium at 4X the speed
I remember the jump from the 486 chips to the Pentium being that dramatic. Heck, 486DX2-66 to P100 brought Linux kernel compile times from 45 minutes down to like 5 minutes.
Back then, I think a big part of these jumps wasn't just CPU design. I think a lot of it was that those CPU design changes were often accompanied by massive improvements to overall system architecture. So the I/O buses, memory, peripherals, all of it got a lot faster as part of the same upgrade.
It's a game changer if you can wholly devote yourself to the ecosystem. I can tell you that even third-party applications with M1 support (e.g. Google Chrome) can be a mixed bag and still run kinda clunky.
Also, you said music production, so verify beforehand that your DAW etc. of choice will work fine. I use Reason, and the visuals in the DAW itself are still laggy, but it isn't M1-native yet.
Other things like the fact that World of Warcraft runs at medium settings on a 1440P monitor feel kinda magical given I can barely feel any heat leaving the M1 Mini.
In the end I returned my M1 Mini (8gb/256gb) because my personal experience still felt half baked at this time. Once we hit the next generation a lot more software will have caught up.
I'd get 16gb. The memory options are disappointing. I am waiting for 32gb to switch my daily driver, and also giving the ecosystem a bit of time to catch up. I have an M1 Mini but that was for us to test our software on and make sure our Mac ARM builds were correct.
Apple's memory and storage options tend to always be on the disappointing side.
From what did you upgrade? Apple had no competitive product with modern processors. Where is the Zen 3 MacBook with which you compared that? Did the laptop you compared with have an equivalently fast SSD?
Top-end Ryzen chips may be as fast or faster. Are they also as low power?
The impressive thing about the M1 isn't just raw performance but performance / watt. There are obviously faster high-end many-core x86 chips, but the power use difference is wider than the speed difference. The M1 destroys mid-range Intel and AMD offerings at a fraction of the power consumption.
The MacBook Air trounces the top-end Mac Pro on single core performance. No this is not the top-end Intel you can get, nor is it as fast as AMD, but it's a MacBook Air and consumes a small fraction of the power.
Again... the power efficiency is the thing that really blows me away. 5nm accounts for only some of that.
On multi-core the same MacBook Air is just below recent Mac Pro models on total computational throughput. To beat the MacBook Air you have to go up to newer-generation Xeon chips with 8 full-speed cores. The Xeon is branded as a server chip and uses as much power as probably eight to ten M1s.
It's just insane. If Apple puts, say, 16 full-speed cores in one of these chips, it's absolutely over for all other vendors. Speculation is that the 16" Pro will end up getting 8 performance cores and 4 low-power cores. I wouldn't be surprised if it ends up beating the top-end Intel Mac Pro.
Just keep in mind that scaling things up will change power efficiency. The Vega GPU architecture showed this clearly: it could be quite power efficient, but the Vega graphics cards released were ridiculously power hungry, it seems because they missed their performance target and thus got overclocked out of their power efficiency range.
I'm not sure that they can easily add more cores to the M1. If they can, that would also raise the TDP. And adding more cores does not increase performance linearly, even if the architecture is made for it.
Also keep in mind that comparing it to 14nm Intel processors with an architecture from a decade ago is a fair comparison when comparing M1 vs prior Apple products, but is not a fair comparison when comparing it against what x86 can do in general. You would need a modern x86 processor targeting the same watt usage to have a completely valid comparison. As discussed in the thread above.
So yeah, it's a great processor for its target. And it likely will also be a very strong processor when scaled up - it bodes well for the future of ARM. But don't extrapolate the performance too linearly, that is likely to be misleading.
Sigh. I'm typing this on an Apple "prior" product which has a two year old microarchitecture and is built on Intel 10nm (roughly the same as TSMC 7nm) - it's absolutely a modern x86 processor and a valid comparison - and it's left in the dust by the M1.
I currently use a MacBook Air with the same 10th generation Intel chip in it and it's not bad. The only difference between the Air and the Pro with this chip is that the Pro can sustain high clocks longer and has a Touch Bar.
I expected the M1 to be like a much lower power and maybe a bit faster version of this chip, and was really blown away by it being far beyond that.
Hm, I'm surprised by that statement. The Ryzen 5000 mobile CPUs were a lot faster than the Intel competition and I am pretty sure Apple does not have them in any laptop. And the very fast SSD is a new addition, isn't it? Did I miss something?
You implied that they only saw a difference because Apple wasn't using "modern processors". You can't exclude everything but Zen 3 from any sensible definition of "modern processor".
The parent comment talked about the implications he thought this had for the architecture, with half of the thread seemingly ignoring that a) there are x86 processors that are more than competitive and b) the big leap he described more likely comes from the faster storage and from comparing against the much older and weaker processors that most people buying an upgrade now had in their old laptops.
Heck, he might have used an old dual core with an HDD from all we know. Of course the new M1 models would make a big difference then. So would all other modern laptops.
But in that context of x86 vs ARM, Zen 3 is absolutely the only sound comparison.
I know what you're trying to say is that you have to do a comparison of CPUs built on similar process nodes - and that maybe the parent has overstated the x86 vs Arm comparison.
I would have some sympathy with that - but that's not the same as discounting the parent's perspective because Apple somehow weren't using "modern processors" previously - because that simply isn't true.
Similar process nodes, or at least, when making general statements about the architecture, comparing the best modern candidates. Otherwise you just can't make statements about the architecture based on that. And Apple just does not have Zen 3 laptops (unsurprisingly), so there is that.
And I still think it's very valid not to forget how old the hardware used for subjective comparisons will be. E.g. the old MacBook Air sold very well all that time.
The processors that come closest are the Zen 3 processors. They beat the M1 in total performance, see https://www.pcworld.com/article/3604597/apple-m1-vs-ryzen-50..., but are made for a higher watt usage. There is no direct performance/watt comparison I am aware of.
It's likely the M1 is better in that category, but it does not look like a huge difference if you factor in the higher performance you get from the Ryzen 9 5980HS.
Single-core Ryzen 7 5800U Geekbench scores are c. 20% lower than the M1's.
Multi-core is better but still lower; not too surprising, as it's 8 large cores vs big.LITTLE 4/4 for the M1, and it's not clear when those cores will start throttling down.
Suspect single core performance is what drives apparent responsiveness so I'd say not too surprising that the M1 gets rave reviews from users.
With my 2400G the benchmark takes 5.5 seconds.
With someones M1 it takes 1.3 seconds.
We still need to verify that it was run at the same settings, but it looks like the M1 is going to be faster than the 5000-series APUs regardless, possibly by a large margin; I'm assuming the 5000s are between 1.4x and 2.0x the speed of the older 2400G.
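To put rough numbers on that assumption: even at the 2.0x upper bound, a 5000-series APU would finish the same benchmark in about 5.5 / 2.0 ≈ 2.8 seconds, and at 1.4x about 3.9 seconds, versus the M1's 1.3 seconds.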
It will still be interesting to see AMD on a 5nm process, and Zen 4, and with 8 cores, but by then Apple will probably have an M2 on a 3nm process. You gotta compare what can be obtained today.
The 2400G is a relatively low end part that is four generations old. In the past AMD did not release high end desktop APUs, preferring customers to buy CPUs + GPUs. This is already announced to change with a 5700G being produced. https://www.amd.com/en/products/apu/amd-ryzen-7-5700g - On paper this is a 5800H in a desktop form factor with a higher max TDP and higher clock speeds.
The 5800H gets 2.5x on cpu benchmark's rating vs the 2400G. This is composed of a 50% single core and 300% multi-core improvement.
>> The 5800H gets 2.5x on cpu benchmark's rating vs the 2400G.
That's in large part because it's 8 cores instead of 4. The test I ran makes good use of multicore, but going from 4 to 8 won't cut the time completely in half. I suspect the M1 will still beat it by a bit.
I take back my comment on availability. We ideally need to compare things on the same process node to see which design team has done a better job ;-)
If it is the same foundry (and it is in this case) you can compare them, and you can assume that 3nm is better than 4nm, which is better than 5nm. Across foundries, not so much.
With the M1 in the latest iPad Pros, I'm not even sure Apple will keep the Ax line, rather than upgrading their low-to-high end with just Mx-series processors.
Isn’t the M1 a bit big to put in a phone? Presumably, both Mx and Ax SoCs are already very similar so the distinction could be mostly marketing. But they’ll probably want to have different names for the lower-power, fewer-cores phone SoC and the things they put in their Macs. If only to avoid the perception that Macs run on “low-performance” phone parts.
What if Apple didn't care about the size and just put in an M SoC? iPhones have been getting thicker, and it no longer seems like their design language demands devices under a certain thickness.
Option two would just be to make the A15 a die shrink only and, surprise! It fits.
No reflection? The iPad Pro running the Axxx SoC was already insanely powerful, while x86 is a monster dragging a huge legacy. Sounds to me more like the usual anti-Apple speak.
The whole article is about not having to choose between performance and power usage. With "5000", which 5000 do you mean? A 100W CPU that costs about as much as a whole Mac mini?
This is the whole point of the article. Apple made a monster move. Next up: Apple Cloud, M1-based cloud computing. It would save them a couple billion per year on AWS, whose contract expires in '22 or '23 IIRC.
I had wondered what their first Apple Silicon powered data center would be.
I thought the company would wait and deploy a later-gen chip. Though the standardization of the M1 across such a wide array of products makes me think it is going to be the M1.
I was a bit surprised to see the M1 in the iMacs. I thought the M1 was going to be a very capable proof of concept. But now we've got it in iMacs, iPads, and MacBooks. So I wouldn't be shocked if they spread it far and wide.
Ya, part of the reason I presumed there would be at least one rev was because there have been reports of system hangs and restarts on some M1 machines.
So I had thought there may be something learned in the mass deployment that would trigger even minor design changes.
Perhaps those are software issues. Or maybe the M1 as a product name should not be taken literally to describe the SoC in the current lineup.
We know the Secure Enclave component appears to have been updated mid-production this past fall for a host of A-series chips.
Perhaps, if only light changes were needed, Apple would not see them as sufficient to warrant a new moniker.
Or perhaps they are, but the iMac and (theoretical) Apple Silicon-based data centers are intended to build consumer confidence in this bold foray.
Given the competition for fab capacity at TSMC, I'm not so sure Apple would want to use that capacity to supply servers. If things were less tight in that department I'd agree the time is right.
I'm not really sure about the answers to your questions, but it vaguely seems like the type of thing they'd want to go big on or not at all, for economies of scale. I also suspect they can get a better margin on each M1-based device selling them in consumer products.
His “as tested” was 11 hours for AMD and 13 for M1. His “max time use” for AMD was 14.5 and 20 for M1. That’s a 38% greater time for the M1 in a best possible outcome and 18% on “average”. Substantial differences that users would perceive noticeably.
I’m not so sure. I think it reaches a meaningful maximum. Can I do an entire days work on the laptop? Yes to both. Can it last an entire flight, even a long one? Yes to both.
It’s not that it doesn’t matter at all, it’s just that it matters a lot less once you’ve checked boxes like that. The effective experience of both would be “you only need to charge it overnight”
For me it's always been "how desperately do I need to find an AC outlet" with my laptops. I've been pretty happy with my M1 MBA in that regard because the answer has been "not desperate at all". I've done a couple weekend trips since getting it and I never even thought of getting the power adapter out of my bag the entire trip. This was despite doing a bit of work.
The same can't be said for the MBPs I've owned over the past decade. They get ok battery life but I always needed to know where the nearest outlet was. While an 11 hour Ryzen might be close to a 13 hour MBA, those extra two hours is the difference between a full weekend worth of work or just a full day.
For anyone wanting long battery life in a laptop those extra couple hours are important.
What the sibling says, but you've also got to take battery degradation into account. Even at 80% of its original battery life, an M1 Macbook Pro/Air will still give you a full day's usage.
Just to admit something odd I do in public. I've started using my lithium powered devices between the 40%-80% charge range. i.e. I plug it in when it's at or around 40% left charged and unplug it when it's 80% charged or around there. I'd read that's what Tesla drivers recommend for maximizing the life of their batteries. Don't know how much it translates to phones/computers. 40% of 20 hours is 8 hours...
It has less battery life and less performance (judging by Cinebench, at least), so it is a fairly substantial difference. And it's quite possible that Apple will remain one TSMC process ahead of AMD for the next few years, even if AMD do put out 5nm chips in the near future.
This doesn't contain any comparison graphs relevant to my question, and couldn't, because it's from November. The Ryzen 5000 mobile chips came out this year.
- With M1, Apple has finally closed the loop and created a fully closed system that it has control over vs an x86-64 platform that is more open and gives us users more choice and control.
- All new devices with the M1 SoC score very poorly on repairability and upgradability vs the x86-64 platform, which still allows you to repair / upgrade RAM, storage, CPUs and other parts in an affordable manner.
- And because of the above two, all M1 devices with low RAM and storage have planned obsolescence of 3-5 years built in, along with a kill switch to totally disable the device.
Apple has found a great fit for their business model with the ARM SoCs. Even when (not if, when) AMD and Intel bring out better processors, Apple will stick with their ARM SoCs, because today's computing power has far outpaced the demands of the common software most people use. And Apple can cater to that common denominator with their own OS + CPU + 8GB + 256 GB NVMe quite well for maybe another decade.
I am happy there is some good tech development and competition in computer systems. But my Intel Mac mini will be the last Mac I buy from Apple. macOS Mojave still runs fine on it. Otherwise there's always Windows, Linux and FreeBSD. That's the real advantage of the x86-64 platform, which people like me will never sacrifice for closed systems like the M1 devices.
> the x86-64 platform, which still allows you to repair / upgrade RAM, storage, CPUs
Have you opened many x86 devices of a similar form factor to the current M1 devices? I haven’t seen a socketed CPU in a laptop in over a decade, and the vast majority of ultrabooks and thin-and-lights have soldered RAM now too. The latest trend has been to start soldering SSDs as well.
True, other manufacturers have been trying to ape Apple and solder RAM and other parts on the x86-64 platform too. Thankfully it hasn't yet fully spread to the desktop platforms - I can still build my own PC. I pin my hopes on government regulation and right to repair to stem this - the EU has already emphasised that it is serious about its "right to repair" legislation. I also like the attempts being made by indie engineers to create more repairable phones and laptops.
Technically, yes, the "right to repair" isn't specific to socketed components. But socketed components do make it easy to repair a device and are the obvious way forward for making devices easy to repair and reducing waste (in fact the EU actually funded a project to create a mobile phone with more reusable components - http://www.puzzlephone.com/ - and some startups are also trying to do the same with laptops - https://frame.work/blog/introducing-the-framework-laptop ). As for soldering things on your own, today's modern electronic manufacturing techniques make that a very difficult task.
In all its iDevices, and mac with T2 security chip (so all the M1 devices now), Apple offers an anti-theft feature called the "Activation Lock" - https://support.apple.com/en-us/HT201365 - once activated, you can use it to erase all data on your iDevice remotely and ensure that nobody can use it without knowing your Apple ID and password. It's a useful feature.
But if a government (or Apple) wants, this can be abused - for example, if your country goes to war with the US tomorrow, the US government could ask Apple to disable all Apple devices in that country, and Apple could do it, whether you like it or not.
(The conspiracy theory part that others are hinting at is that many believe that this can be easily extended to totally cripple the device and make it unusable.)
I mean, technically there's the T chip, which could switch off everything. The parent poster seems to be posting conspiracy theories that Apple will maliciously kill machines in 3-5 years at their whim. I would guess that would not go over well with governments or customers and would be the end of Apple PC-type products, so it's not going to happen.
I like to read news and keep up to date on silicon development but when it comes down to buying a laptop with an Intel CPU I really can't tell how good it is from the name.
I usually end up putting it into cpubenchmark to get any sort of comparable numbers.
Nowadays a chip from 2014 and one from 2019 can both be named i5, and the 2014 one can have a higher clock speed while being half as fast... You just can't tell, and maybe that's the goal.
Can you elaborate? I do remember clockspeed being the main thing you looked at and then being surprised that changed, but didn't really have any insight.
Pipelining was really kicking off in a big way, and suddenly IPC made as big a difference as clock speed, at a time when AMD led in that regard. So AMD started numbering their CPU models with a figure based on the MHz they'd expect an Intel CPU to need in order to match them.
Of course Intel also released new CPUs, and AMD didn't want their newer CPUs to have lower numbers than their old ones, so that number eventually got inflated.
As others mentioned, pipelining resulted in scenarios where other manufacturers' CPUs had a lower clock rate but 'comparable' performance.
There are two main eras of this:
During the P5/P6 days, AMD, Cyrix, and NexGen made CPUs with 'PR' ratings, based on what they felt their CPUs compared to.
Ironically, this first era is probably why folks got so soured on 'PR ratings'. As far as the AMD K5 and Cyrix 6x86 went, these numbers were based more on integer performance; additionally, Intel's P5 had a very novel (at the time) pipeline that some game developers were optimizing for (Quake comes to mind here). For NexGen the situation was even worse, in that some of their models completely lacked an FPU.
All those factors together made consumers a bit more wary of PR ratings for quite a long time.
Thankfully, AMD bought out NexGen, took their arch and made it into the K6, which was very competitive with the P5 clock for clock, and PR ratings went away.
They came back in the days of the Palomino K7s, but what a lot of people might not remember is that the Athlon XP's PR ratings were technically supposed to relate to a Thunderbird core. IOW, an Athlon XP 1800 was supposed to be 'equivalent' to a T-Bird Athlon running at 1800MHz.
But, of course, PR ratings drifted again, as they tended to... and now we are in model number hell.
Just remembered a bit of history: the first (4 bit) Intel microprocessor would have been called the 1202 according to Intel's chip naming convention at that time until Federico Faggin pushed to get it changed to the 4004.
If it's complicated for most customers to choose a processor/PC, then salespeople often help them choose. That's good for those salespeople and good for Intel.
And it's not like Intel is really competing with anybody.
> I've always thought Intel's marketing was a bit confused - i7 stays the same over 10+ years with only the obscure (to the general public) suffix changing from generation to generation.
It's been a mess for a long time. An i7 may be less performant than an i5, which may be less performant than an i3...all depending on which specific processor is chosen.
I ran a gadget shop for years. The number of times people would say 'What I have must be better because it has an i7 and that is an i5'. Then I'd have to explain that their i7 is ten years old and that they aren't even in the same realm. Confusing marketing indeed.
The gadget shops around here turn this to their advantage, trying to sell a five-year-old i7 at premium prices, usually hiding the generation in very small fine print.
Makes it next to impossible to give family members advice on what to look for when buying a new laptop. A simple "MacBook, M1" becomes much easier to recommend. As a bonus you don't have to worry about an underpowered SSD, noisy fans, a weak backlight or whatever else the OEM might have saved on.
I mean... the very first generation of i7s already had a dual-core mobile part; the i7-620M was a dual-core CPU.
So I'd argue you can't really "ruin" something you've just introduced. I do agree that for a while it was standard that a desktop i7 was a quad core and a mobile one was dual core.
I think the lesson here is that Intel's marketing for these chips has been a mess from day one.
Some of it is inevitable; they have too many aspects to fit into a reasonable product name. But letting the names become divorced from any sort of reality helps nobody.
Yes, this was so annoying... We optimized software for multi-core, but the laptop builders label their products as "10th gen i7", which says little about their performance. GPU labels are easier to decode by comparison.
There's a value in that too - people know immediately that i7 is powerful, and everyone [that cares] already knows that processors are renewed frequently. Though yeah, perhaps something like 7i1, 7i2 would've been better. Apple solved this nicely (given the naming is confirmed) with M1X.
That's fine if you're buying new. Is there an i5 that beats an older i7 in performance? When buying a used laptop or desktop, how much extra research do I need to do to figure out which processor I'm actually getting? There's been 13 years of i7's.
>>Is there an i5 that beats an older i7 in performance?
Of course. I got a new laptop last year with a Core i5-10300H, and it easily beats my desktop i7-4790K, despite the same number of cores, in every workload.
UserBenchmark is your friend for this kind of thing really:
It makes sense for software release versions, but I think this lacks marketing prowess.
Consider: “hey man check out my new M1 machine!” versus “hey man check out my M1 2021.04”.
Even with the Core series it’s short and sweet.
“10th gen. Core i7” still sounds futuristic and always will.
Ryzen 7 also.
Honestly I think folks here are overthinking this stuff. It's easy enough to compare the specs on a 10th gen CPU vs an 8th gen. I think that's easier than comparing different product names.
For instance, just from the names, Pentium 5 versus Core i5 isn't quite clear unless folks are old enough to remember Pentiums. Whereas everyone knows the current generation has better processing than the previous generation.
No one would ever say it like that. They would simply say:
“hey man check out my new M1 machine!” And if it happened to come at a generation shift the context would make it obvious if it was the old generation at a bargain or the brand new generation. (or that the person saying it couldn't care less about specs and generations)
Now if you were buying a used computer, or wanted to compare your current i7 to the latest i7, then you would take notice. How much difference is there between an i5 2021.04 and an i7 2016.8?
That is dead simple to google and reason about.
Compare that to: "So what computer do you have?" "Oh, it's an i7 with 16 GB of RAM." I have no idea what decade that machine is from. And no one remembers the specific version, and if they do, I have no idea how to parse it anyway.
Speaking of Ryzen, I see everyone attacking Intel for their naming especially now that the generation is 2 digits, but AMD's situation is a mess too. The Ryzen 7 5800x is built on Zen 3 architecture. I know the numbering scheme and still sometimes mix up the generation of processor with the generation of architecture.
Also, given that AMD copied Intel's 3/5/7/9 numbering, I'll be interested to see if in a few years they're selling a 10800x.
Exactly. This proves the point I'm making. No one, not even Apple with the best supply chain management in the industry, wants the version scheme you suggest.
Vendors want version numbers to push upgrades to consumers. They don't want to deal with demand forecasting for something with a limited shelf life, though.
Nobody said that they have to make that versioning public and shout it from the rooftops.
Users do use the informal versioning that I mentioned, because otherwise they wouldn't be able to differentiate the products. I've heard "MacBook Pro 2015" a million times.
And a date <<is>> a version number. It also nicely auto-increments to prove to your customers that the latest thing is better than the old one.
Doodad Pro, 2021 edition. It's not that hard, as I was saying.
The alternative is the garbage that everyone is doing. WH-1000XM3, really?!?
Cars do this. The BMW 3-series or VW Golf has existed with the same name since forever, despite getting smaller or generational upgrades every year. Enthusiasts refer to them by model year. Of course you also have the engine size as the final variable.
Nah. BMW used to have sane numbering, as did Mercedes. Nowadays it's all made-up bullshit. 28i? You might think straight-six 2.8 litre; well, it's a 2.0 turbo. M550d? 3 litre. Merc E63? 6.2 litre. Don't even get me started on Porsche's non-turbo/electric Turbos, or Mercedes' 4-door "coupes".
This was a situation I ran into a lot when buying simple Intel-based NUCs for family. It got more complex when the difference between an i5 and an i7 got really close due to thermal constraints.
But then you have different i7 variants where one has a different letter in the name that makes it only half as powerful, because it's an extra-energy-conserving mobile CPU.
I'm not into hardware, but I have a rough overview. Still, whenever I'm supposed to help someone choose a laptop, I just see a page with dozens of products where the only differences are a few digits in the CPU model number and some other weird product names, and I simply have no idea what to say. I fully understand people who just go "I'll take this one because it has a nice color" - a layperson simply has no chance of understanding the difference, and good luck finding a store where the employees know more.
I've never owned an Apple computer and I can still tell you on the spot which category of MacBook is better and more expensive, and why. Might have to do with the devices having normal names instead of alphanumeric strings that probably came from a password generator.
It is very confusing because typically the previous generation's i7 performs the same as the current gen's i5 within the same TDP tier, and i5s from a higher TDP class can vastly outperform i7s from a lower one within the same generation. At any one time there are typically two or three processor generations in laptops actually being sold, as well as three or more TDP classes, so there is basically no good way of knowing how different laptops in the store compare performance-wise except by looking up benchmarks.
> i7 stays the same over 10+ years with only the obscure (to the general public) suffix changing from generation to generation.
My wife was confused about that earlier this week. She was like: 'you mean i7 is not newer than i5? And if i7 is better than i5, why can an i5 be better than an i7? (when it's an old i7)'
I get they have the generation. But:
1 - it's not easy to compare different models of different generations. How can a non-technical person compare a 7th gen i5 with a 5th gen i7?
2 - Many places that sell computers don't put the generation in the 'headline'. You have to dig into the specs to discover it, if you can find it at all.
Word of mouth is not just marketing. While I'm still waiting for the 32GB version :-) I may still jump in, having already held those hot MacBooks and MacBook Airs, not to mention an i9 MacBook Pro with turbo off. Even the Mac mini blows hot air. Only the decade-old Mac Pro is hotter (but other than the noise, it's not on my hand or my lap etc., so that's fine for a decade-old Mac).
It is not just “advertisement”. Btw, I thought marketing also meant segmentation, targeted selling, and so on; maybe they are doing that, though so far only on the low end. As a YouTuber said, it would be so much better if they did colors on the MacBook Air or a 12" MacBook.
Yeah, it's arguably quite weird that Intel don't market this way anymore (they used to, particularly with the Pentium brand).
EDIT: Though, thinking about it, I can see why they might have been reluctant. Of the big releases over the last 15 years or so, Core 2 was, while a massive improvement, also essentially an admission that P4, and their whole strategy around it, had failed, and Skylake had a lot of teething troubles. The only big microarch shift that was an unqualified plus was Haswell, and I don't understand why they weren't louder about that.
Interesting. Marketing about "computer guts" has never been an Apple thing, so it's interesting to consider the motivation here. It could be that the M1 is that revolutionary, but Moore's Law et al. make me think this is not about new chip technology. More likely it's about competing against Intel or whoever.
You get an M1 with the new iPad Pro as well! I hadn't thought of the situation the way the article presents it. When shown in that light, it made me pause to reflect. The M1 doesn't make sense when it's in every darn product. The only differentiator is screen size, RAM, and OS?
I will admit that I switched to a ThinkPad and Win10 about two years ago when I had to return my butterfly-keyboard MacBook for the 5th time. I am not looking back either. If anything, I am more focused on AMD Ryzen and Nvidia 30-series chips in MSI, Lenovo Legion and Asus offerings. There is nothing I can't do with one of those machines. Going Apple is a backward move for me as I like to program, design in CAD, play Steam VR, and run Blender sims. Can't do any of those well with Apple hardware.
Apple differentiates the majority of their products by generation rather than binning. If you buy a low-end iPad, you get an A12; the iPad Air steps you up to an A14; and the iPad Pro gets you an M1.
This is less evident on the Macintosh side of the business right now because they're just trying to get M1 silicon into as many product lines as their fab capacity will allow. They don't actually have an M2 (or even M1X) to sell high-end products with yet, which is why they're starting with the low-end products first. When they release upgraded chips, those will almost certainly be used to transition the high-end models first, with lower-end products getting them later.
> Apple differentiates the majority of their products by generation rather than binning.
This simplifies things for consumers, but how do you make chips without binning? Are they all just that reliable? Do they all have extra cores? Maybe Intel bins more because they can squeeze out a significantly better price for more cores and cache, but Apple's margins are already so high they don't care?
So, there are a few instances where Apple does bin their products:
1. "7-core GPU" M1 Macs, which have one of the eight GPU cores disabled for yield
2. The A12X, which also had one GPU core disabled (which was later shipped in an 8-core GPU configuration for the A12Z)
3. iPod Touch, which uses lower-clocked A10 chips
It's not like Apple is massively overbuilding their chips or has a zero defect rate. It's more that Intel is massively overbinning their chips for product segmentation purposes. Defect rates rarely, if ever, fit a nice product demand curve. You'll wind up producing too many good chips and not enough bad ones, and this will get worse as your yields improve. Meanwhile, the actual demand curve means that you'll sell far more cheap CPUs than expensive ones. So in order to meet demand you have to start turning off or limiting perfectly working hardware in order to make a worse product.
Apple doesn't have to do this because they're vertically integrated. The chip design part of the business doesn't have to worry about maximizing profit on individual designs - they just have to make the best chip they can within the cost budget of the business units they serve. So the company as a whole can afford to differentiate products by what CPU design you're getting, rather than what hardware has been turned off. Again, they aren't generating that many defects to begin with, and old chips are going to have better yields and cost less to make anyway. It makes more sense for Apple to leave a bit of money on the table at the middle of the product stack and charge more for other, more obviously understandable upgrades (e.g. more RAM or storage) at the high end instead.
I believe that Apple is effectively doing some binning with the M1: The intro MacBook Air has one less GPU core than the higher-config version or the MacBook Pro.
It would make sense if those were identically-made M1s where one GPU core didn't test well and thus had its fuses blown. Between the CPU and GPU, the GPU cores are almost certainly larger anyway; the GPU cores would therefore have higher probability of defects.
Binning requires more design work for the chip. I would guess the M1 was designed rapidly and probably they decided that hundreds of different bins for different types of defects wasn't worth the complexity if it meant delaying tape out for a few weeks. It also leads to extra product complexity (customers would be upset if some macbooks had hardware AES and others didn't, leading to some software being unusably slow seemingly at random).
How rapidly? And so how come it has such spectacular performance? Or the shortcomings of the x86 arch were so, so soooo obvious, but nobody had the resources to reaaallly give a go to a modern arch?
Or maybe, simply the requirements were a lot more exact/concrete and clear? (But the M1 performs well in general, no?)
Apple has always binned; they just don’t publicly announce it all the time. For example, iPod Touch has been historically underclocked compared to the equivalent chip in iPhone or iPad.
That struck me as an extremely odd metric to differentiate products by, given its low relevance to non-technical users who don't know what a GPU core even is (except gamers, but they are not buying iMacs). Additionally, most people are going to think adding one more core is hardly worth the price upgrade, and it's quite strange to see an odd number of cores.
Depends; remember that the 5nm processes are still fairly new overall, so we have no idea of yields. But if 1/5th of the processors are flawless, 1/5th have one faulty GPU core and the rest have more errors (in either the CPU, or two or more GPU cores), then having twice the number of chips available to sell (at a slightly lower price point) might make perfect sense.
It will have a far greater than 1/8th performance impact.
When data structures are power-of-two sizes, having 7 cores instead of 8 could halve performance, since the work gets split into 4 pieces and 3 cores sit idle.
Well GPU data structures aren’t always a power of two, right? There’s more than textures. For a fact, I know vertex count (vertex shaders?) and screen sizes (fragment shaders?) will rarely be exactly a power of two.
Isn't false sharing (as in all your addresses hashing onto the same cache lines) still an issue for power-of-two sizes? You'd have to mess with padding to figure out what's fastest for each chip regardless of core count.
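For what it's worth, the 4-busy/3-idle scenario above only happens if the work is split into power-of-two chunks. A generic C++ sketch (nothing M1-specific, names made up) of splitting evenly across however many cores exist, with each worker's accumulator padded to its own cache line so concurrent writes don't cause false sharing:

    #include <cstdio>
    #include <thread>
    #include <vector>

    // One accumulator per worker, padded so neighbours never share a cache line.
    struct alignas(64) PaddedSum { long long value = 0; };

    int main() {
        unsigned workers = std::thread::hardware_concurrency();  // 7, 8, ...
        if (workers == 0) workers = 4;                            // fallback
        const long long n = 1'000'000;
        std::vector<PaddedSum> sums(workers);
        std::vector<std::thread> pool;

        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&, w] {
                // Even split: worker w handles [begin, end), so no worker sits
                // idle even when the item count isn't a multiple of core count.
                long long begin = n * w / workers;
                long long end   = n * (w + 1) / workers;
                for (long long i = begin; i < end; ++i) sums[w].value += i;
            });
        }
        for (auto& t : pool) t.join();

        long long total = 0;
        for (const auto& s : sums) total += s.value;
        std::printf("%lld\n", total);  // n*(n-1)/2
    }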
> The only differentiator is screen size, RAM, and OS?
Apple is iPhone-izing (for lack of a better word) the rest of their product lines. If for the last ten years, the market hasn't really cared about the speed of the phone's processor within the same generation, but rather about physical differentiators (e.g. screen size, number of camera lenses, adding facial recognition), and the non-professional market is overwhelmingly characterized by light-usage applications, then why, pray tell, should laptops and desktops be so different?
> the non-professional market is overwhelmingly characterized by light-usage applications
I'm not sure social networks and most modern sites or apps qualify as light usage nowadays. Browsing Reddit or Facebook brings my Pro to its knees. The difference in CPU speed between phones is clearly visible to consumers (but maybe you don't notice it if you only use iPhones, because an SE from 2015 is still fast; try an Android phone from that period).
I love Apollo but I am very annoyed that they don't allow side scrolling between posts. I just find that much easier to do and it somehow bothers me when I don't have it.
Car makers combine standard parts into a number of engine configurations that aren't as different from each other as a consumer might imagine, then slot those engines into a whole host of different car models.
From a production stand-point, this makes sense to me. I imagine that it makes high quality at a lower price much easier to accomplish for Apple.
> The M1 doesn't make sense when it's in every darn product. The only differentiator is screen size, RAM, and OS?
Why not? It's basically back to where we were in 1980 when "everything" had a Z80 or 6502 (or both!) in it, and the major differences were in what else was in the system.
> The only differentiator is screen size, RAM, and OS?
I think that's the point. Until now, buying a computer has always been focused on the CPU and RAM stats. If you wanted faster/bigger you had to spend more. With Apple's new strategy you almost don't even care about CPU/RAM stats. They are focusing on providing value in other ways: larger screen, lighter weight, different colors, more ports, etc. I think this is the biggest shift in computers in quite a while and makes it much more akin to purchasing a phone or tablet than speccing out a computer.
And essentially, why not? If I want to work on a machine, I want it to be fast. Not 1.33x of the baseline benchmark CPU when equipped with X GB RAM. Just fast enough it doesn't annoy me.
And with today's PCIe-based SSDs, waiting for data to be written to "disk" is a non-issue, so the system feels much faster.
Apple’s goal is to eliminate technical specifications from marketing: tech specs are an excuse - make it good enough that few care what those specs are.
I can't think of a technological product where that mentality applies. Can you?
In fact, it seems that the opposite is true; as a product gets better, people care more about the specs. Whether you're buying a Wusthof knife or a luxury car, you want to know what makes the product good enough to justify its price and position in the market.
Vanishingly few people buy phones based on screen resolution, RAM, bandwidth, etc. Ditto computers used mostly for mundane email, web browsing, games (not hardcore), and such. Few buy cars based on horsepower, range, etc.
Insofar as people do consider specs, it's usually because the specs are injected into the conversation, customers being taught to care by salesmen trying to baffle them into choosing their product.
Most customers want it to just work. Apple is pursuing that.
TVs are sold on refresh rate, color accuracy, contrast, backlight localization, and panel resolution. This stuff is written on the box and promoted in marketing material. They are unavoidably differentiated on technical specs, even in the eyes of a layperson. Some of those specs depend on the panel (itself a semiconductor product) and others depend on processors/ICs.
I'm not suggesting that no parts can ever be a commodity (like capacitors in a laptop), but as I spend more, I increasingly tend to look for a technical advantage that justifies the marginal price.
I'll give you an example. If you want to buy a docking station for an Apple TB3 enabled MacBook, you have a couple of controller options at the higher end: Alpine Ridge and Titan Ridge. Better chips exist but they haven't found their way into truly well engineered consumer docks. I have a multi monitor setup with one superultrawide screen that can do 120Hz, so I opted for the Titan Ridge dock. It was buggy, so I ended up returning it and buying an Alpine Ridge dock that lacks the ability to push my big monitor at its best-looking resolution. And that's for a sub-$300 peripheral.
Like most of us here on HN, I'm one of the "few" that GP mentioned. But these are consumer products and they are mass marketed based on their technical specs.
Apple itself markets technical innovations in its 6k IPS monitors. The back is designed to dissipate heat, the monitor works with TB3 (ie, an Intel chip)...they even market the glass treatment in a deeply specific way.
Most TV customers can’t articulate the difference. Many may state a preference but only because such numbers are prevalent and associated. Were the numbers not advertised, most wouldn’t ask.
That was the game change with the original iPad. When you strip all the other specs away and put an original iPad next to a 2010 laptop, you realize just how awful the mainstream LCD panels were at the time.
It's an exciting change I think will benefit consumers. Less being upsold on questionable i7s and more nice displays please.
Why doesn't the M1 make sense in a variety of products? I don't follow the logic. It is a processor that can scale to meet the demands of mobile computers including laptops, tablets, and designer desktops (iMac). In each use case it fulfills its computational role regardless of I/O or even operating system.
Based on your own description, you are self-selecting as an enthusiast that prefers gaming-like PCs. Isn't that exactly the sweet spot that Apple doesn't support?
> The M1 doesn't make sense when its in every darn product. The only differentiator is screen size, RAM, and OS?
They've only replaced the lower tier of Macbooks and iMacs with the current M1 board, which suggests to me that they're working on a variant with more CPU and GPU cores that will go into the higher tiers of those machines.
I don't think they have the fab capacity to be able to do that.
Since M1 products are selling very well, they likely have reallocated fab capacity they had planned for the successor to the M1 back to making the M1.
A higher performance M1 would have a bigger die size. It would have a lower yield and get fewer dies per wafer. The same capacity would sell fewer products.
Apple probably earns more profits by simply selling more M1s, unless they can reserve substantially more fab capacity.
Some benchmark sites claim they saw performance reports. Whether they make those up to get more traffic, I cannot tell. If I worked at Apple, I would not tell :)
> I don't think they have the fab capacity to be able to do that
Do you have a source for there being any fab capacity shortage for Apple (or anyone, for that matter) at 5nm? Older nodes are a different story because that's still where most of TSMC's customers are doing their high-volume production.
I think plenty of people will skip it. Even ignoring the high cost, not everything scales as well with every new node. There are plenty of customers who will have no interest in this node right now, and some who will never have any interest in it. I've not seen anything that suggests capacity shortages at 5nm yet, especially for Apple, who've already booked all of 3nm for next-gen.
SRAM scaling from 7nm to 5nm is pitiful. While the CPU transistors see 50-70% increases, the SRAM is only shrinking 20-30%.
For chips with massive cache, that isn’t super cost effective (I suspect as a cost saving measure that we’ll see L2/L3 cache moving to a separate chip on a larger process while the rest shrinks down).
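A back-of-the-envelope version of that argument, using a made-up 50/50 logic-to-SRAM area split and shrink factors picked from the middle of the ranges above:

    #include <cstdio>

    int main() {
        double logicArea = 0.50, sramArea = 0.50;  // hypothetical 7nm die split
        double logicNew = logicArea / 1.6;         // ~60% higher logic density at 5nm
        double sramNew  = sramArea * 0.78;         // SRAM area shrinks only ~22%
        double total = logicNew + sramNew;
        std::printf("new die: %.0f%% of the old area; SRAM is now %.0f%% of it\n",
                    total * 100, sramNew / total * 100);
    }

Under those assumptions the die only shrinks to about 70% of its old area and the cache's share grows to over half, which is the economics behind spinning big caches out onto a cheaper, older-node die.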
>The M1 doesn't make sense when its in every darn product.
Would you feel better if they called it A14X?
It is basically the same thing Intel is doing: same die, different binning, different naming. Same with core count on AMD: same die, different binning on cores and clock speed.
Apple doesn't bother with any of that because, well, it is complicated for consumers. I call it TDP computing: you are limited by the TDP design of the product, not the chip.
I am waiting to see Apple absolutely max out their SoC approach for Mac Pro.
Apple is doing binning, at least to some extent. e.g. some Macs have only 7 GPU cores rather than 8. Presumably that's done to a greater extent for the iPad Pro
>I am waiting to see Apple absolutely max out their SoC approach for Mac Pro.
But they already maxed out their SoCs in the benchmarks because Geekbench doesn't care about TDP. The TDP makes a difference in real world performance but not in the benchmarks.
I agree that the M1 doesn't look good for high performance computing right now or anything with poor ARM support. But strategically, I think Apple is in a good spot. For heavy computation these days, I always remote into another machine. With increased bandwidth and more efficient remote desktop protocols, I even do all my graphics-intensive 3d work remotely now. By focusing on low-power processors, Apple is making the laptop/tablet/phone experience better, and I could see them handling the performance issues via remote compute. It could be a very effective strategy (if it is their strategy).
I think what Apple may be trying to do is reduce macOS sales and go full throttle on iOS sales. A separate macOS machine makes less sense if it has the same chip as the iPad Pro.
How's that any different than having a single Intel generation scale all the way from low powered laptop SoCs to 12 core i9s though?
After all, the M1s across devices aren't the same either - iMacs have a different configuration from iPads, and those have different core counts and clocks from the Airs and MBPs as well.
It seems like the difference is only in Apple vs. Intel marketing blurb.
No? That's the entire point of the article. The only difference is whether you get a 7 core or 8 core GPU.
> If you want to buy a MacBook Air or MacBook Pro, Apple will sell you an M1. Want a Mac Mini? You get an M1. Interested in the iMac or the new iPad Pro? You get an M1. It’s possible that the M1 CPUs inside the iMac will have different thermal or clock behavior than those inside the systems Apple has already launched, but the company’s decision to eschew clock speed disclosures suggests that these CPUs differ only modestly.
> with the M1, is that its custom CPU performance is now so high, at such low power consumption, that the choice of chip inside the system has become irrelevant within a given product generation
Exactly, so what is the point here? How exactly is that different from Intel generations?
Why does Apple put different numbers of cores and clocks into different products if the choice doesn't matter? It seems like there are performance differences if they choose to install differently configured SoCs in different devices and even split them by pricing on the iMac.
So, please, explain where this big difference is. (With more than a single sentence, in your own words, if possible.)
I've seen nothing to indicate different products have different clock speeds?
So far we're seeing two differentiators. One is the use of the 7-GPU bin to hit the bottom-of-the-range price point for each product family. The other is simply the different thermal characteristics of different products - the passively cooled Air & iPad Pro will thermally throttle earlier than the actively cooled MBP, iMac & mini.
The products aren't being differentiated in silicon, they're being differentiated in feature & format. My mother can tell me the difference between an iPad and a macbook without describing anything that she can't see with her own eyes.
The M1 in the macbook air throttles when it gets too hot due to the thermal characteristics of the case (no active cooling).
The M1 in the macbook pro is actively cooled, and does not throttle.
When additional cooling is applied after-market to the macbook air, it has the same performance characteristics as the macbook pro.
The M1 is the same in both products. It throttles when hotter. It's more likely to throttle if not actively cooled and under heavy workload. It's not clocked differently. There's no artificial constraints.
There are only two different M1 versions: the 7-GPU version and the 8-GPU version. All of the CPU cores are the same. All power envelopes are the same. That's much different than the vastly different form factors and power envelopes Intel has.
I have an M1 Macbook Air and this thing gets a bit hot for an iPad. I don't think the iPad Pro will be able to hold the same clock speed as long as the other machines over the long haul.
Make it as fabulous as you want, $1700 for an 8-core machine with 8 gigs of RAM is just one plain fabulous joke. This at a time when 16 gigs of RAM is the baseline if you plan to do anything more than Facebook and Instagram.
So my sister does professional photography, and went from a 16GB MacBook Pro (2016) to an 8GB Air, and reports no decrease in productivity - quite the opposite: in her experience all Adobe programs run much faster, and the machine is quieter and lighter to boot. So yeah, I'm not sure - maybe the RAM amount isn't as big a deal as people make it out to be. On the other hand I'm a C++ programmer and my workstation has 128GB of RAM and I wouldn't accept any less... so obviously it varies.
AAA video games do that for you :P Well, it isn't the programming part that uses the RAM (although yes, building our codebase takes about 40 minutes and uses gigabytes of RAM without a distributed build), but just starting up local server + client + editor easily uses 80-100GB of RAM since ALL of the assets are loaded in.
Did you have the chance to try your setup on an M1? If it worked for your sister, although you seem to have way higher requirements, is there anything to say it wouldn't work for you?
I'm asking because I read a lot of comments when it was released that it just doesn't need as much RAM because $REASONS. I wouldn't put my money on this, but I'm curious if this assumption holds water now that people have had time to try it out.
I doubt it's possible to try a AAA dev setup on OSX at all. And for whatever it's worth, 64GB workstations were "hand-me-downs" at my previous gig (AAA gamedev); I doubt there's much magic that can turn "64GB is not nearly enough" into "16GB is fine".
I also have doubts but that's what the marketing hype has been claiming for some time now, so I'm really curious about real-world experiences and where the hype breaks down. The debate is often "I need way more RAM!" vs "But this is a new paradigm and old constraints don't apply!".
AAA gamedev might be the wrong demographic though, since it's mostly done on Windows (I think?).
I'd suggest people just get the larger RAM unless they're tight on budget. I know Apple's trying to argue otherwise and people will agree with them, but I can't hear it as anything other than thinking molded to fit a prior decision. For what it's worth, and not scientific, but reported "percent used" statistics seem to grow slower for the 16 gig models than the 8 gig models (from the SMART utils).
I'm equal parts happy and terrified that MS announced x64 version of VS recently, because I know it will just mean VS can now scale infinitely. At least right now the core process has to stay within the 4GB limit :P
Large code bases in an IDE, program dumps, large applications (the software I write will gladly use 10-20gb in some use cases), VMs, large ML training sets, &c.
128gb is likely overkill, but I can see a use case depending on what you're doing.
I think everyone should get more ram than they think they'll ever need. 32 gigs is that number for me, but if I thought I'd get even close to using 64 gigs, might as well go for 128.
Same setup as him; I'm working on LLVM. It's very nice to be able to test the compiler by running and simulating on a Threadripper 3990X, which means I don't have to run everything past the build server.
Web developer, I had an 8GB M1 MacBook Air and if I ran vscode with my 50kish typescript codebase and dev server for api and frontend + native postgres and redis at the same time I’d be right at the limit of my machine slowing to unusable levels. Switched it for a 16GB one and I’ve never had any noticeable performance issues since.
It’s funny you say “innovate.” They certainly innovated with the M1, but with one key difference. They did so in public, step-by-step over the course of a decade.
Perhaps more than any other Apple innovation, we have the greatest visibility into the process with the M1.
I consider the ability to perform open development of future products a key differentiator for Apple.
Like Jobs said in his Stanford speech, “You can't connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.”
It seems like Apple uniquely combines open development of technology in known products with secrecy around new product development.
This allows people to be so surprised by the M1, when the late AX processors were obviously pointing toward massive capability.
I believe there are other examples of this happening--specifically with the Apple Watch.
On that product, the size limitations combined with increasing expectations of performance and functionality have allowed Apple to learn and improve production capability in many areas that will carry over to any forthcoming AR/VR products.
The M1 also means that Apple regains full stack control of its desktops and laptops. Their phones and tablets prove that when they develop both the main chips and operating systems themselves, they are able to eke out greater performance from lesser specs.
Flagship iPhones always have less ram than flagship androids, but match or exceed their performance.
That may help a bit, but unless they adopt the draconian policies that govern the iOS runtime, the impact is limited. Safari is an example of improved battery life when Apple owns the stack. It's a great feature, but not a game changer.
Most business users were fine with 4GB 5-6 years ago. Electron apps like Teams and Slack pushed it up to 8. The next tier are folks with IDEs, docker, etc that are usually 16-32GB.
You joke, but I think this is actually true. If companies gave their developers of client-facing software slower computers the resulting software would end up being faster.
And it's why some of my past upgrades were driven by the web browser over-taxing the machine on certain sites, while nearly everything else was perfectly performant with absolutely no complaints.
Yep, if you're running big IDEs (e.g. Rider/IntelliJ, Visual Studio), containers and VMs, 32GB is really a must. There always seem to be people in these threads claiming that 16GB or even 8GB is enough - I just don't understand how that could possibly be for most of the HN demographic.
Do you think most of the HN demographic is actually running big IDEs, containers and VMs at the same time? I'm personally a CS student and never had to run more than a few lightweight docker containers + maybe one running some kind of database + VS Code and that has been working fine on a laptop with 8GB and pop_os. Could imagine that a lot of other people on HN are also just generally interested in tech but not necessarily running things that require a lot of memory locally.
CS PhD student here, running a laptop with 16GB of RAM. I don't train ML models on my machine, but whenever I have to debug stuff locally, I realize precisely how starved for RAM my computer is. I start by closing down FF - just plain killing the browser. RAM down from 12GB to 7. Then I close the other IDE (usually working on two parallel repos): 7GB to 5. Squeeze out the last few megabytes by killing Spotify, Signal, and other forgotten terminal windows. Then I start to load my model into memory. 5 times out of 7, it's over 12-13GB, at which point my OS stops responding and I have to force reboot my system, cursing and arms flailing.
If you're on macOS, there's no such thing as a “lightweight Docker container”. The container itself might be small, but Docker itself is running a full Linux virtual machine. In no world is that “lightweight”.
I was going to say, I'm on a 16gb 2015 macbook pro (not sure what to upgrade to) and Docker for Mac is _brutal_ on my machine, I can't even run it and work on other things at the same time without frustration
I run like 50 chrome tabs, a half dozen Intellij panes, youtube, slack, and a bunch of heavy containers at the same time, and that's just my dev machine.
My desktop is an ML container train/test machine. I also have ssh into two similar machines, and a 5 machine, 20GPU k8s cluster. I pretty much continuously have dozens of things building /running at once.
Yeah. I suspect most people here are software engineers (or related) and IDEs, Docker, and VMs are all standard tools in the SE toolbox. If they aren't using Docker or VMs, then they are probably doing application development, which is also pretty damn RAM hungry.
I do most of my development in Chrome, bash, and sublime text and I'm still using 28GB of RAM.
Depending on the requirements of your job: just a single VS instance with Chrome, Postman and Slack open takes around 5GB. Teams adds another GB or so. The rest probably another 2GB (SSMS and the like).
On my particular team we also run a dockerfile that includes Elasticsearch, SQL Server, RabbitMQ and Consul - I had to upgrade my work laptop from 16GB to 24GB to make it livable.
Wouldn't you just have all the heavy stuff on a server? I don't understand the goal of running something like sql server and other server type apps on a desktop/laptop.
I don't understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM, in isolation from the rest of the system, is all that matters. Consider the fact that iOS devices ship with half the RAM of Android devices and feel as responsive, have better battery life, and have better performance.
The Apple stack is better optimized to take advantage of the hardware they have. Indeed, one of the reasons is that they have so few SKUs to worry about, which focuses the engineering team (for example, in the past, engineers would complain internally about architectural design missteps that couldn't be fixed because 32-bit support wasn't dropped yet and was pushed out yet another year).
Now, obviously in a laptop use case this is trickier, since the source code is the same as for the x86 version. It's possible that the ARM code generation was much more space efficient (using -Oz instead of the previously likely -O3). It's also possible that they have migrated over to iOS frameworks to an even greater extent than they were able to in the past, leveraging RAM optimizations that hadn't been ported to macOS. There could also be RAM usage optimizations built around knowing you will always have a blazing fast NVMe drive: you may not even need to keep data cached around and can just load straight from disk.
Sure, not all workloads might fit (and if running x86 emulation the RAM hit might be worse). For a lot of use cases though, even many dev ones, it's clearly enough. I wouldn't be surprised if Apple used telemetry to make an intelligent bet about the amount of RAM they'd need.
> I don’t understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM in isolation of the entire system is all that matters
I didn't claim it was all that matters, and I haven't seen anyone else do that either.
I do take the point of the rest of your comment though, and it may well be the case that Apple does some clever stuff. But realistically there is only so far that optimisations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.
> I wouldn’t be surprised if Apple used telemetry to make an intelligent bet around the amount of RAM they’d need.
Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.
> But realistically there is only so far that optimizations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.
Except the M1 is a novel UMA architecture where the GPU & CPU share RAM. There are all sorts of architectural improvements you get out of that, where you can avoid memory transfers wholesale: there's no "texture upload" phase, and reading back data from the GPU is just as fast as sending data to the GPU. Wouldn't surprise me if they leveraged that heavily to get improvements across the SW stack. The CPU cache architecture also plays a big role in the actual performance of your RAM, although admittedly I haven't seen any special sauce from the M1 there; just responding to your claim that "DDR4 is DDR4" (relatedly, DDR4 comes in different speed SKUs).
> Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.
No one is going to model things as "well, users aren't using that much yet". You're going to look at RAM usage growth over the past 12 years & blend that with known industry movements to get a prediction of where you'll need to target.
It's also important to remember that RAM isn't free (even ignoring the $). I don't know if it matters as much for laptop use cases, but for mobile phones you 100% care about having as little RAM as you can get away with, since it dominates your idle power. For laptop/iMac use cases I would imagine they're more concerned with heat dissipation, since this RAM is part of the CPU package. RAM size does matter for the iPad's battery life, & I bet the limited number of configs has to do with making sure they only have to build a limited set of M1 SKUs that they can shove into almost all devices to really crank down the per-unit costs of these "accessory" product lines (accessory in the sense that their volumes are a fraction of what even AirPods ships).
Anecdotal: I write client software for bioinformatics workflows, usually web apps or CLIs. Right now, with my mock DB, Emacs, browser, and tooling, I'm using ~5GB of RAM. At most I'll use ~8GB by the end of the day.
I also shut down at the end of the day and make judicious use of browser history and bookmarks. If I were compiling binaries regularly I could see the use in having more RAM, but as far as I'm concerned 8 is enough, and so far people find what I put out performant.
Yeah 32gb is my baseline now. I could probably work on a 16gb machine now but last time I was using an 8gb machine the memory was nearly constantly maxed out.
(Curious) why? VScode, Chrome, and a terminal running a local server usually will do fine with 16gb or less. Are you testing with or querying massive data sets locally or something?
I'm typing this response on an 8GB M1. It's great, but it's no magic. Its limitations do start to show in memory-intensive and heavily multi-threaded workloads.
Getting some down votes, which I attribute to reasonable skepticism, so hopefully this will allay your concerns.
One example: I was trying to download all the dependencies for the Kafka project via Gradle in IntelliJ while watching a video on YouTube and working on another project in Visual Studio Code. The video started to stutter, then stopped, and Visual Studio Code became unresponsive. I basically had to shut a bunch of stuff down and go to lunch.
I haven't seen a modern computer struggle with that kind of workload before.
At the end of the day, the Intel MacBooks of the last few years have had terrible, low-performance processors that get thermally constrained, and abysmal, inconsistent battery life. So if all you use is Macs, the M1 is going to feel amazing.
Funny. Except there are fully supported MacBooks from 2013 running the latest version of macOS, still on their first battery, still bringing people joy and productivity.
Or in my case, a 2012 MacBook Air. I hope they'll support Catalina for another three or four years, at which point it's more than a decade old and can finally mature into a laptop's final stage of life as a Linux machine.
I stand corrected. I'd attribute my false confidence to the fact that even the previous version (macOS 10.14 Mojave) is still supported. But now that I think about it, that'll likely change at Apple's WWDC in June.
Still, decently satisfied with ~10 years of software support for a laptop.
Mojave support ends this year, right on schedule after 3 years. I finally left Mojave behind myself, as I really wasn't using the older apps Catalina lost support for anyway.
Yup. My personal machine is a base Late 2013 15" MacBook Pro (8GB RAM). Original battery, original storage; it saw very heavy usage up until a couple of years ago. Still fast, decent battery.
In ~2018 I briefly used an old 4GB MacBook Pro for work. It was only untenable longer-term because I needed to run two electron apps or many-hundreds-of-MB-memory tabs at a time, sometimes.
But why not 16GB? It's a small price increase compared to the base model, and it would make the machine much more usable over any extended amount of time, especially given the SSD write 'bug' that was exposed.
I just went from a 32GB RAM Macbook Pro to an 8GB RAM M1 Macbook Air...the difference is insane, I don't know what the hell the MB Pro is doing with its RAM, but it just felt like RAM was never enough on Intel Macbooks. Here on the M1, I don't feel my system crawling to a halt like I did on the MB Pro, and I'm doing the same workloads.
At this point, I don't think that's entirely clear. Something to do with the way Apple has tightly integrated the SoC parts and the OS's memory management lets them get massively more performance out of much lower-specced machines, and at least from what I've seen, no one has yet managed to truly unravel all that makes this possible.
I'm interested in this because I have a maxed 2018 intel mac mini and loaded it to 64gb of RAM. I want to ditch my eGPU and go to Apple Silicon on the next release.
I wonder if the memory performance should be so surprising. Because haven't people been bemoaning the "low" amount of ram in iPhone / iPad?
iPhone had 4GB on XS, and 11. And only went to 6GB on the Pro models. Yet the performance and benchmarking on these devices has seemed to garner praise with each successive generation.
I've just made a similar switch I went from a Pro (2018 i9 Pro with 32GiB of RAM) to an M1 Air with 16GiB of RAM and its ridiculous.
I've tried to look up what this difference might be, but all I've found is a hand-wavey answer about it being an SOC. If someone can ELI5 then I'd be super appreciative.
My feeling is that as everything is on the chip, things are physically closer - fewer lanes on the motherboard - so latency is lessened. I would not have guessed it would make such a difference, though.
Really? I still use seven year old MacBook Pro with 8GB of RAM. It’s perfectly fine for developing in at least Python, Go or Nim, with a Docker container or two and MariaDB in the background.
Sure, when/if I upgrade I'd go for 16GB of memory, but one should be careful about projecting one's own needs onto others.
I have a 2015 MBP as well... it's HORRIBLE when I hook it up to a 4k monitor. Lag, freezing, etc. I can still get stuff done, but the experience is pretty bad.
Perfectly fine on its own screen though, the dual-core in mine just isn't up to the task of a 4k external monitor.
Your web browser probably doesn't have many tabs open, and those that are open, are probably not web apps (think Slack, Facebook, WhatsApp, various internal corporate apps, etc.).
I can't even remember a time when just my web browser used up less than 4-5 GB of RAM, on its own.
Add at least 500MB - 1GB for the OS itself, and we're talking about 2 - 3.5GB left for apps. I'd immediately be swapping with just 8GB of RAM, with an IDE and a DB running.
You're right, I can't mentally deal with more than a few open tabs, and I use none of those web apps at home. I might be an outlier in the other direction, but there's a lot of room between using maybe 4GB of RAM in total and using that for a browser alone.
How often? I set mine to discard after 60 minutes, except for Slack, WhatsApp and a couple of others; the cost is a reload when I do venture into discarded tabs - e.g. I scan HN and Reddit headlines in the morning and open each interesting comments page in a tab; much later, when I have downtime, I visit them - but I used to reload anyway to get the new comments. Now with ATD, they get reloaded automatically, so for that use it is even better than the real thing.
None, I use Vim or TextMate. But you have a point, Xcode isn't fast on my laptop. I do use it, but if I were to use Xcode professionally I'd get a newer machine with more memory.
I have been working on a TVOS app, and it works fine, but there is some waiting involved.
X cores + X gigs of RAM does not mean better performance for higher values of X. That is the fundamental innovation of Apple Silicon, which people still struggle to grasp. The M1 upended how we think about CPU performance. It's not even a CPU, it's a unified memory architecture with hardware-level optimizations for macOS. You can't even cleanly compare the performance benefits of having memory and CPU on the same chip, because the time spent on copying operations is far less.
The plain fabulous joke is that we've spent 30 years thinking that increasing cores and increasing RAM is the only way to increase performance while the objective M1 benchmarks blow everything out of the water. The proof is in the pudding.
But we still live in a physical universe where there are voluminous things some of us need to put into ram. Things like big projects, application servers, database servers, IDEs, and for some of those - multiple instances of them. On top of that, browsers with tabs open, productivity tools. Can't benchmark out of that.
Apple have done things to mitigate a lack of RAM (consistently using the fastest SSDs they can with a controller integrated into their custom chips and now into the M1 presumably, memory compression, using very fast RAM in the new M1, etc.) but yeah at some point you just need more room. 16GB has been enough for me for a while thankfully.
You still have to try it out and see. I'd welcome an article detailing a dev setup where the M1 isn't suitable because of performance reasons. So far we've seen mostly praise.
It's also not true. I don't know if I'm the only person not doing fluid simulations all day on my laptop, but I don't understand where this idea comes from
While true, that's not going to motivate anybody. Memory is not a very scarce resource, we're making more all the time, we can continue to make more, and it's reusable.
Moreover, memory that isn't being used is almost completely useless - the only thing it does is act as disk cache.
Better to use memory than CPU, as the latter actually consumes power (leading to climate change if not powered using renewables etc.) - although even better to use less of both.
A more effective argument would be to look at the number of users on low-end devices, and point out that, the more memory you use, the less these users are able to run your applications.
Not that I'm expecting companies like Slack to care - they design for the high-end user, and if your company forces Slack on you and you have an older device, there's nothing you can do, and they exploit that.
DRAM refresh takes power. I don't have the numbers, but I'm not sure that, in the average smartphone or tablet of today, that energy usage is negligible compared to the CPU's.
The reason is that DRAM must be refreshed 24 hours a day, while the CPU is sleeping a lot of the time, even when the device is in use.
For typical DIMMs it's about 2-3 watts per stick. Doesn't sound like much until you max out a SuperMicro (or other server-grade) motherboard with 24 DIMM sockets. :)
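Taking those figures at face value, the arithmetic for a fully populated board is simple but sobering:

    #include <cstdio>

    int main() {
        const double wattsPerDimmLow = 2.0, wattsPerDimmHigh = 3.0;  // rough per-stick figures quoted above
        const int dimmSockets = 24;
        std::printf("%d DIMMs: %.0f-%.0f W just keeping DRAM refreshed\n",
                    dimmSockets, dimmSockets * wattsPerDimmLow, dimmSockets * wattsPerDimmHigh);
    }

That's roughly 48-72 W before the CPUs do any work, which is exactly why idle RAM power matters so much more in phones and tablets.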
I have an M1 from work (which I now mainly use for private stuff...) and it does have 16gb memory. It did cost a bit less than $1700 (after taxes) I believe, although I am not sure. I know it has some upgraded gpu compared to the standard model in some form, but didn't know about the memory.
As for the device: it is neat, but not revolutionary to any degree. I do paint and model, and can run Blender/Krita just fine; it is even quite performant. This is through emulation, as I don't have native builds for ARM. Maybe those have become available in the meantime, but you don't notice the emulation at all.
But it won't be the end of x86 in my opinion. Why would it?
The M1 isn't what kills x86 (if it ever does completely). ARM kills x86.
Microsoft is working on the ARM transition. ARM has good control of mobile hardware. And Apple will be only selling ARM hardware (in the form of Apple Silicon) in another 12-18 months.
The main feature for me isn't the ARM part, it is that the device is passively cooled and still very performant. If that isn't possible with x86, ARM might have a chance, perhaps. But 99% of my time is still spent on that platform.
I just wish I could install Linux... If MS and Apple just provide their locked-down environments, it will never be more to me than a neat device, and I would still cross-compile instead of binding myself to a manufacturer.
I ran Linux for years on laptops/desktops/servers, doing OLTP software that handles millions of txns/day. I adopted Linux back when Solaris was the way to go for backends.
But moved to Mac (and OSX) about 8 years ago.
I don't get the "locked down" thing. On my current macs (a 2020 iMac and a 2015 MBP) I run Macports that lets me install pretty much every bit of userland software that I want. I also get the advantages of the MacOS gui environment and the availability of most "user" software.
Yes SIP and the new sandboxes lock down the MacOS part of the system, and things like VPNs (eg Wireguard) need to get a dev cert and distribution from Apple.
But the "lockdown" is very lightweight. There's nothing I can't do on this devices that I used to do on my Linux environments.
If I truly need a "native" Linux, then there are a number of VM and container environments also available.
But there is no technical reason for it being locked down. I don't want to subject myself to more of this, which would be the result if I became dependent on it. Why would I? There are only disadvantages if I don't want to sell software for the ecosystem.
I do some low-level systems development and I doubt I would ever switch to macOS for that. Higher-level software? Maybe, but as I said, why give Apple any leverage here? And these sandboxes don't provide security for me. I will also not get a dev cert from anyone; that is just something that will never happen.
With ARM on the cusp of finally dethroning x86, what really amazes me is that Intel kept it dominant for so long (3+ decades).
Partially it's amazing they remained on top, and partially that they never managed to cannibalize their own success (like Apple and later Microsoft have done).
I mean, that price tag is being swung in a kinda silly way.
The Mini with 16GB of RAM is like $899. At that price the CPU has 8 cores but 50% greater single-core performance than basically anything in the price range. In lightly threaded or thread-racey tasks the M1 will outperform any chip on the market for most people working on their machine. In my case, we build, run, and run tests on a very large TypeScript app in half the time of any single Intel chip we have ever tested, including a desktop i9. I'm not defending the pricing of their RAM or storage upgrades - those are nuts. But the pricing you are comparing here is for machines that include a LOT of gravy in the build sheet to make your point, such as a 4.5K display or TBs of PCIe SSD.
Did I mention that the case I described was on the very base MacBook Air with 8GB of RAM, cross-compared with an i9 machine with 64GB?
You are kinda making a blanket statement that is a little unfaithful to the intent of any of these machines. None of these machines are intended as 'pro' machines, even the 'Macbook Pro' is just the low end model and it is three times as powerful as the outgoing model. Sure you can spec one to the moon to make a price point, but that's the story of anything.
I wonder if Apple's strategy is to push macOS developers to optimize for certain SOC cadences, rather than having to traditionally target every system configuration possible. Therefore they opted to only have limited SKUs of the M1: only differentiated by RAM and GPU cores.
Analogy is gaming consoles - the hardware is fixed for X years so game developers know exactly what to target and make better looking & performing games over the cycle of that console. Compared to, say, Windows 10 that has to run on an almost infinite number of hardware configurations.
This is actually similar to their approach to iOS and iPhones - iOS versions span multiple iPhone cycles, but there are limits. For example, iOS 13 was supported on the iPhone 6S through the iPhone 11, or the A9 through the A13 SoCs. There's probably some correlation between good performance and the tight SoC-to-OS coupling.
We'll likely see similar, limited configurations for future Apple M* SOCs.
The 8 gigabyte machine is fine for 90% of people. I've recommended just getting that platform to a number of people, and none of them have had issues.
I've got an 8GB MBAir, and it never stutters. Meanwhile, on my 16GB Dell XPS (Ubuntu 20.04) I routinely live in fear of exceeding my Chrome tab quota, because I know it will bring the system to a crashing halt. Somewhere around 45 tabs is the point where it all comes down.
Meanwhile, I don't even think about how many safari tabs I have open (hundred+) - and 8-10 applications open at the same time.
Different operating systems have different models of swapping and degrading performance. Apple has nailed it.
Safari vs. Chrome (or Firefox) is one part of why I can get so much more mileage out of a Mac's memory than I can under Linux or Windows. It's wildly more respectful of system resources, across the board—memory, processor cycles, and battery.
The rest of their software's mostly like that, too, with the possible exception of Xcode. I often forget Preview with a half-dozen PDFs is open, and Pages with a document, and Numbers. I wouldn't forget a single tab of Google Docs under Chrome, left open for weeks, because it would make itself known in system responsiveness and battery use. Ditto MS Office. Mac Terminal's got notably lower input latency than most other terminal emulators, especially featureful ones.
Apple, seemingly almost uniquely among major software vendors, gives a shit about performance, and it shows. I really, really wish they had competitors, but in so many ways they're the only ones doing what they do, to the point that they can make periodic serious blunders and I'm left going, "yeah, but what else am I gonna buy, that won't have a 'normal' that's overall-worse than Apple's 'broken'?"
Went from 64GB to 8GB. Didn't notice a difference - in fact it is better. I kind of get your mindset, but they have done some fuckery somewhere to get it to work so well.
My M1 Air with 16GB feels way more responsive and behaves better under load than my 64GB hex-core beast of a desktop I built late last year.
It's like the old days when I could put BeOS on a Pentium with two-digit MB of memory and it'd feel as good and responsive in actual use as a newer desktop clocked 6-8x higher and with 8x the memory, running Windows or Linux.
Looking at core count and RAM size isn't how to measure performance; you test and measure. But the cores are just better on the M1, as has been shown through tons of benchmarking. Also, if the interconnects and busses are smarter (better cache-coherency implementation), then you'll do better in multithreading cases. And because it's an SoC, it has a much better implementation of the DDR interface (that's what they claim, at least); I wouldn't be surprised if the latency hit for accessing RAM was much better than on x86, where you have to cross a long bus. For the workloads that people run on iMacs - light Photoshop, some video editing, etc. - we already know the new MacBooks are amazing, and this machine will have the same hardware, but with a dope screen (and cool colors. I want yellow >.>).
I bought a new machine last year that came with 8GB and ordered a 16GB upgrade for it which didn't arrive for a few weeks. I literally just now remembered I've never got round to installing it.
I don't disagree that 8GiB for a $1700 machine is absurd; however, I feel compelled to point out that macOS is significantly more aggressive with memory compression than other operating systems, and these machines come with blazingly fast SSDs, allowing devices to get away with less available memory.
In-memory compression is the most meaningful part of what the parent comment said. And it helps tremendously on other systems as well; for example, zram is quite underhyped on Linux.
zram is a game changer. I believed I needed to upgrade from 16GB to 32GB; with zram and a decent CPU, no I don't. 97% full memory + 20GB zram swap and no noticeable problems.
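To make the compression point concrete, here's a rough illustration (this is not how zram or macOS's compressor is implemented; it just shows that typical idle memory, full of zeros and repetition, compresses several-fold). It assumes zlib is available; build with -lz:

    #include <cstdio>
    #include <vector>
    #include <zlib.h>

    int main() {
        // Fake "application memory": mostly zeroed pages with a few dirty bytes each.
        std::vector<unsigned char> pages(16 * 1024 * 1024, 0);
        for (size_t i = 0; i < pages.size(); i += 4096)
            pages[i] = static_cast<unsigned char>(i / 4096);

        uLongf outLen = compressBound(pages.size());
        std::vector<unsigned char> out(outLen);
        if (compress(out.data(), &outLen, pages.data(), pages.size()) != Z_OK)
            return 1;
        std::printf("16 MiB of pages -> %.2f MiB compressed (%.1fx)\n",
                    outLen / (1024.0 * 1024.0), pages.size() / static_cast<double>(outLen));
    }

Real working sets compress far less than this synthetic buffer, but even a 2x ratio on cold pages effectively stretches physical RAM at the cost of some CPU time.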
I did indeed; I never said that SSDs are a replacement for RAM. I just said that having access to 2.5GiB/s read/write operations allows you to offload many things onto the SSD. Yes, thrashing is an issue, but it is much less significant at those speeds than with disks.
I am not defending Apple in any way whatsoever; I even said so in the first sentence of the comment. I stated that they are able to do this because they compensate with compression and SSDs: applications in the background can be thrown into swap and the user will rarely notice.
Actually, they're somewhat slow compared to the competition.
A 500GB M1 SSD gives you about 3.1GB/s of write and 2.75GB/s of read performance [1]. In comparison - a $219 Samsung 980 Pro benchmarks at 4.2GB/s of write and 5.2GB/s of read performance [2]. Both on Disk Test from Black Magic.
32/64 queue depths generally do not apply to desktop computing since you are unlikely to read that many file streams simultaneously (or rather, your one program that you care about right now might be loading only a few files at once, which is what you'd perceive as the SSD speed: the fact that OS services might be accessing other stuff in the background does not help much there).
BlackMagic tests video workloads using sequential reads/writes. 2.2/2.7 GiB/s is pretty average for a modern NVMe SSD. Even consumer-level Samsung NVMe SSDs are faster.
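For context on what those sequential numbers actually measure, a bare-bones write test looks roughly like this (hypothetical file name; real tools also bypass the page cache, sweep block sizes, and test reads, so treat this strictly as a sketch):

    #include <chrono>
    #include <cstdio>
    #include <fstream>
    #include <vector>

    int main() {
        const size_t blockSize = 4 * 1024 * 1024;                          // 4 MiB writes
        const unsigned long long totalBytes = 2ULL * 1024 * 1024 * 1024;   // 2 GiB total
        std::vector<char> block(blockSize, 'x');

        auto start = std::chrono::steady_clock::now();
        std::ofstream out("testfile.bin", std::ios::binary);
        for (unsigned long long written = 0; written < totalBytes; written += blockSize)
            out.write(block.data(), block.size());
        out.flush();                                                       // note: not a true fsync
        double secs = std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();

        std::printf("%.2f GB/s sequential write (page cache included)\n", totalBytes / secs / 1e9);
    }

Because the OS write cache sits in the path here, the printed figure flatters the drive; the gap between a naive number like this and a tool such as Black Magic's is mostly about how carefully caching and queue depth are controlled.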
I'd believe that - because the average user likely doesn't do anything more than Facebook, Instagram, general browsing, word processing etc.
I'd also believe it, since the number of laptops available with only 8GB of RAM is ridiculous - I swear it's become more prevalent in the past few years, presumably because of RAM prices.
RAM use should always be 100% for reasonable definitions of "use". There's always more disk pages to be cached unless the machine has just booted. If they're not all being used, it means something temporarily flushed all the pages which is actually a bad thing.
Adding up the memory use of apps isn’t actually a good way to calculate RAM needs though. It undercounts (because file caching matters too) and overcounts (because many things are fine swapped out).
I am not sure what the end-game for releasing those under-powered machines is, tbh. I am a bit more extreme in that I am not buying any machine below 32GB given RAM is no longer upgradable, but even if my mom came and asked me about those iMacs, I'd tell her not to buy one.
It's a shame that Apple is so great at greenwashing and at the same time spits on every way they could help create truly sustainable products by just dropping a few percent of their bottom line.
I think that's a bit dramatic. I'm using a 16GB M1 Macbook Pro as my daily driver doing standard, boring professional work (lots of email, tabs open, PDF manipulation, Word, Excel, etc). It performs as well if not better than the 2018 Macbook Pro it replaced with 32 GB of RAM and an i7. And it cost less than that one.
The iMac will perform comparably (probably a bit better due to better thermals). My point is that these are not bad machines and I don't see why you'd steer someone away from them. However, the price is still quite high compared to other manufacturers and options. But that's always been the case.
> I am not sure what the end-game for releasing those under-powered machines is, tbh
The lowest and highest tiers exist only to shift consumers towards the middle tier, where Apple profits the most and where there is less value for the consumer. It is a marketing strategy invented by Apple, IIRC (it's been a long time since my marketing exam).
I know, but AFAIK those new iMacs ALL feature just 8GB of RAM? There isn't really a middle tier; that's what makes it so weird. You'd expect one machine with the top SSD and 16GB (and space grey, lol).
Probably going to be the next iteration of the 27" iMac, spanning from 8GB/256GB (LOL) to 16 GB/1TB or something.
You can upgrade them to 16GB. Having been developing heavy applications on an M1 Mac mini since they were released, they're definitely adequate for most users. Obviously some people need ~100 cores and ~1TB of RAM (myself included for other parts of my job), but that segment is always going to be better served by companies other than Apple.
I don't think it is. They usually offer a couple of configurations at different prices, but you can get a BTO machine with just the upgrades you want. The SSD+RAM are tied together in the new M1 iPad.
Ah, you're right, I'm the one who confused the two and remembered the weird setup of the iPad; the iMac's Tech Specs page does just state "8GB (configurable to 16GB)" for the entire range (whereas it's quite clear that the "low end" iMac with only 7 GPU cores can only be BTO'd to a 1TB SSD, versus 2TB for the 8-core models).
Reading this discussion thread makes one psychological quirk clear. People, even well-informed professionals, are rarely comparing Apple to the best available alternative on the market (arguably Ryzen U-line processors). They're comparing 2021 Apple products to their last experience with an alternative, which for many people stuck in Apple's ecosystem could very well be from 2010.
So what's Apple's end-game? Milking the (predominantly US) professional class in perpetuity. It's not like they're going to switch, especially if they perceive the alternative to be sluggish, heavy bricks whose battery barely lasts 2 hours. For sales, perception is more important than reality.
> Reading this discussion thread makes one psychological quirk clear.
Is that so clear? It sounds more like a huge assumption to me. It’s not like having an Apple laptop makes it impossible to see how the competition is doing. Doubly so for people with friends in tech.
The M1 processor is a direct result of the death of Moore's law. It's an amazing processor, but a sad sign of things to come.
The performance gains from Moore's law have typically come from shrinking transistors. That has ended; you can't juice more performance out of general-purpose CPUs. If general-purpose processors no longer advance quickly enough, the only way to get performance gains is to build custom chips for common specific tasks. That's what we're seeing now with the M1. The M1 buys us a few more years of exponential-appearing performance gains, but it's a one-trick pony. You can turn code into an ASIC once, but after that, your performance is at the mercy of the foundry and physics.
The death of Moore's law has many consequences, the rise of ASICs and custom co-processor chips is just one of them.
You had me with everything except this. Any time someone claims the current state of computing hardware is "enough" I'm reminded of that fake 640K quote. There is no indication we are running out of applications for more compute power.
I'm not saying we're running out of applications for more compute power.
We're specifically running out of reasons to want faster linear (per core) general-purpose performance (in fact I'd say this happened some time ago). Everything else we get from here on in terms of smaller process etc. is just a bonus. We don't fundamentally need it to keep evolving our hardware for our ever-growing computation needs.
And that's because, as our problems multiply and grow, parallel execution and heterogeneous cores tend to solve them much more efficiently per watt than asking for "more of the same, but faster".
There's this Ford quote "if I had asked what people want, they'd have said faster horses". Fake or not, it reflects our tendency to stare at the wrong variables and miss the forest for the trees. The industry is full of languages utilizing parallel/heterogenous execution and you don't need a PhD to use one anymore.
CPUs are effectively turning into "controllers" that prepare command queues for various other types of processing units. As we keep evolving our GPU/ML/etc. processing units, CPUs will have less to do, not more. In fact, I expect CPUs will get simpler and slower as our bottlenecks move to the specialized vector units.
Production-quality multiplatform software is much, much harder and less fun to make for GPUs due to inferior DX, rampant driver-stack bugs unique to each (GPU vendor, OS) combination, the sorry state of GPU programming languages, poor OS integration, endemic security problems (e.g. memory safety not even recognised as a problem yet in GPU languages), fragmentation of proprietary SW stacks and APIs, etc. Creation of performance-oriented software is often bottlenecked by software engineering complexity, effort and cost, and targeting GPUs multiplies those problems.
tl;dr: we are not running out of reasons to want faster CPUs. GPUs are a crap, Faustian-bargain substitute for them.
Honestly that's a bit too abstract to make sense of.
As one programmer to another, I'd rather ask: what's one example of a problem we have today that needs faster linear performance than our best chips provide (not in a nice-to-have way but in a must-have way)?
I'd rule out all casual computing like home PCs, smartphones, and so on, because honestly we've been there for years already.
Also due to decades of bias we have serialized code in our programs that doesn't have to be serial, just because that's "normal" and because it's deemed easier. Also we have a huge untapped potential of better performance by being more data-oriented. None of this requires faster hardware. It doesn't even require new hardware.
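As a toy example of the kind of loop that's serial only by habit - nothing in it depends on iteration order, so it splits cleanly across whatever cores are present:

    #include <algorithm>
    #include <cstdio>
    #include <future>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Sum one slice of the vector; each slice is independent of the others.
    double chunkSum(const std::vector<double>& v, size_t lo, size_t hi) {
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0.0);
    }

    int main() {
        std::vector<double> data(50000000, 1.0);
        unsigned workers = std::max(1u, std::thread::hardware_concurrency());

        std::vector<std::future<double>> parts;
        size_t step = data.size() / workers;
        for (unsigned w = 0; w < workers; ++w) {
            size_t lo = w * step;
            size_t hi = (w + 1 == workers) ? data.size() : lo + step;
            parts.push_back(std::async(std::launch::async, chunkSum, std::cref(data), lo, hi));
        }
        double total = 0.0;
        for (auto& f : parts) total += f.get();
        std::printf("sum = %.0f using %u workers\n", total, workers);
    }

The serial version of this isn't wrong, it's just a habit; the parallel one gets faster every time the core count goes up, without anyone waiting on a better single core.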
> I'd rule out all casual computing like home PCs, smartphones, and so on, because honestly we've been there for years already.
Casual computing can definitely be a lot better than where we are today[0][1].
The software business has moved to a place where it’s not really practical to program bare metal silicon in assembly to get screaming fast performance. We write software on several layers of abstraction, each of which consumes a ton of compute capacity.
We have resigned to live with 100ms latencies in our daily computing. This is us giving up on the idea of fast computers. It should not be confused with actually having a computer where all interactions are sub 10ms (less than 1 frame refresh period at 90fps).
Linear compute is the ideal solution. Parallelization is a useful tool when we run up against the limitations of linear compute, but it is not ideal. Parallelization is simply not an option for some tasks. Nine mothers cannot make a baby in a month. It also adds overhead, regardless of the context.
Take businesses for example. Businesses don't want to hire employees to get the job done. They want as few workers as possible because each one comes with overhead. There's a good reason why startups can accomplish many tasks at a fraction of what it would cost a megacorp. Hiring, management, training, HR, etc... they are all costs a business has to swallow in order to hire more employees (ie parallelize).
This is not to say parallelization is bad. Given our current technological limitations, adding more cores and embracing parallelization where possible is the most economical solution. That doesn't mean faster linear compute is just a "nice to have".
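The "nine mothers" point is essentially Amdahl's law; a quick sketch of how hard the serial fraction caps the win from extra cores:

    #include <cstdio>

    // Amdahl's law: speedup = 1 / (s + (1 - s)/n) for serial fraction s on n cores.
    double speedup(double serialFraction, int cores) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / cores);
    }

    int main() {
        for (double s : {0.05, 0.20, 0.50}) {
            std::printf("serial %2.0f%%:", s * 100);
            for (int cores : {2, 4, 8, 16, 64})
                std::printf("  %2dc = %4.1fx", cores, speedup(s, cores));
            std::printf("\n");
        }
    }

Even with only 20% of the work stuck serial, 64 cores deliver less than a 5x speedup, which is why faster linear compute never fully stops mattering.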
Virtual reality, being so latency-sensitive, is always going to be hungry for faster raw serial execution. It seems like something that ought to parallelize alright (one GPU per eye!) but my understanding is that there are many non-linearities in lighting, physics, rendering passes, and so on that create bottlenecks.
On his recent Lex Fridman podcast appearance, Jim Keller speaks to exactly this mindset. He says that they've been heralding the death of Moore's law since he started and that the "one trick ponies" just keep coming. He says he doesn't doubt that they will continue.
> they've been heralding the death of Moore's law since he started and that the "one trick ponies" just keep coming. He says he doesn't doubt that they will continue.
The situation is clearly far worse than what you suggest. Back in the 1990s and early 2000s, apparent computer performance was doubling roughly every two years. Your shiny new desktop was obsolete in 24 months.
Today, we're lucky to get a 15% gain in two years. The "one-trick ponies" help narrow the "apparent performance" gap, but by definition, are implemented out of desperation. They aren't enough to keep Moore's law alive (it's already dead), and their very existence is evidence of the death of Moore's law.
Moore's law is only about the number of transistors per chip doubling every 24 months, not about the performance. Seeing that the trend is still happening, Moore's law is not dead, as so many have claimed.
But what is it good for, if it does not improve performance? For example, an increasingly large fraction of the transistors on a chip sits unused at any given time due to cooling limits (the "dark silicon" problem).
And as long as there's something to gain from going smaller/denser/bigger, and as long as the cost-benefit is good, we'll have bigger chips with smaller and denser features.
Sure, cooling is a problem, but it's not like we're even seriously trying. It's still just air cooled. Maybe we'll integrate microfluidic heat-pump cooling into chips too.
And it seems there's a clear need for more and more computing. The "cloud" is growing at an enormous rate. Eventually it might make sense to build a well-integrated, datacenter-oriented system.
It obviously does improve performance, otherwise why would people be buying newer chips? :) It doesn't mean we'll see exponential performance increases though. In specialized scenarios, like video encoding and machine learning, we do see large jumps in performance.
- It means consumers won't have to keep buying new electronic crap every couple years. Maybe we can finally get hardware that's built to be modular and maintainable.
- It means performance gains will have to come from writing better software. Devs (and more importantly, the companies that pay devs) will be forced to care about efficiency again. Maybe we can kill Electron and the monstrosity of multi-MBs of garbage JS on every site.
The sooner we bury Moore's Law and the myth of "just throw more hardware at it" the better.
>Today, we're lucky to get a 15% gain in two years.
The 2012 MacBook Pro 15-inch I'm typing this on is about 700 on Geekbench single-core, while the 2019 16-inch is about 950. 35% "improvement" in seven years!
M1 13-inch is 1700 on single-core, which is why I hope to upgrade once the 16-inch Apple Silicon version comes out.
>The "one-trick ponies" help narrow the "apparent performance" gap, but by definition, are implemented out of desperation.
I don't think that's right. x86 hit an apparent performance barrier in the early 2000s, with the best available CPUs being Intel Pentium 4 and AMD Thunderbird, both horribly inefficient for the performance gains they eked out; those were very much one-trick ponies created from desperation. It took a skunkworks project by Intel Israel, which miraculously turned Pentium III into Core microarchitecture, to get out of the morass. Another meaningful leap occurred when going from Core Duo to Core i, but the PC industry has been stuck with Core i for almost a decade.
We've finally smashed past this with Apple Silicon, but it is certainly not a one-trick pony; Apple could sell it to the world tomorrow and have a line of customers going out the door, just like it could have sold the A-series mobile processors to rivals. AMD Ryzen isn't quite the breakthrough Apple Silicon is, but it is good enough for those who need x86.
Apple's M1 is a good processor, but the only reason it "smashed past" previous MacBook single-core results is that Apple was using older, lower-powered Intel processors.
It is not twice as fast as even mobile x86 stuff, as much as people seem to want to think otherwise.
Anecdata of one, but compiling our product at work on my three machines (a 2019 Intel MacBook Pro, a 2020 10-core Intel iMac, and an M1 Mac mini), the MacBook Pro is the slowest, but the iMac isn't that much faster than the Mini. It's something like:
Where the M1 really blows any other CPU away is single-threaded performance; multi-threaded performance is just normal. So it's not surprising that it's not faster than your 10-core iMac when compiling (which I assume is using 100% of every core).
In fact, given that the M1 is an 8-core CPU and your iMac has a 10-core CPU, the fact that they take 5 and 4 minutes respectively to compile seems to indicate that they're fairly similar in multi-threaded performance (and the iMac wins only because it has more cores).
Is this a bad thing? This seems like a great outcome for consumers, and will reduce e-waste. I look forward to a future where people see less need to upgrade year after year.
Even then Jim Keller is using a looser definition of Moore's law - i.e. he's saying there's a lot of scaling left rather than that the scaling will continue as it did in the past.
> The M1 processor is a direct result of the death of Moore's law.
It is a bit ironic, since the M1 is a 5 nm processor, currently the most advanced process node, and I think that plays no small part in its success. A very Moore's-law-esque solution.
Moore's Law as originally stated said transistor density doubled every 18-24 months. Using larger CPUs, for example, lets you have more transistors, but that has nothing to do with Moore's Law.
Clearly density has kept increasing, but the law refers to a rate of increase that we haven't been able to meet. The original 386, released in 1985, had 275,000 transistors. Using the slowest interpretation (a doubling every 24 months), we would need to be at 275,000 × 2^18 ≈ 72+ billion transistors today, or 275,000 × 2^17 ≈ 36+ billion in 2019, which is close, but the chip would also need to be the size of a 386, which they aren't.
AMD Epyc Rome is 1008 mm^2 vs a 386 at 104 mm^2. The M1 is 119 mm^2, but it's only 16 billion transistors. As such, it's safe to say Moore's Law is dead.
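For anyone who wants to check that arithmetic, here is the same back-of-the-envelope calculation, using only the transistor counts and die sizes quoted above:

    # Back-of-the-envelope Moore's law check, using the figures quoted above.
    BASE_TRANSISTORS = 275_000      # Intel 386, 1985
    BASE_AREA_MM2 = 104             # 386 die size, as quoted

    def projected_transistors(years: int, months_per_doubling: int) -> float:
        doublings = years * 12 / months_per_doubling
        return BASE_TRANSISTORS * 2 ** doublings

    # "Slowest interpretation": one doubling every 24 months, 1985 -> 2021.
    print(f"{projected_transistors(36, 24):.2e}")   # ~7.2e10, i.e. ~72 billion

    # Density comparison: M1 (16e9 transistors on 119 mm^2) vs the projection
    # squeezed into a 386-sized die.
    m1_density = 16e9 / 119
    needed_density = projected_transistors(36, 24) / BASE_AREA_MM2
    print(f"M1: {m1_density:.2e}/mm^2, needed: {needed_density:.2e}/mm^2")

The M1 lands around 1.3e8 transistors/mm^2 versus the roughly 6.9e8/mm^2 the strict doubling schedule would demand, which is the gap the parent comment is pointing at.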
Back in the old days you'd get that sort of improvement in 2-3 years, not 5. I used to expect at least a 4x improvement on my last machine every time I upgraded.
Yeah, I bought myself a new PC two years ago or so to replace a 5+ year old one, and the difference was... okay? If it was twice as fast (mostly for gaming) I'd already be impressed.
Whereas back when (thinking of early 90's) you'd upgrade every three years and be taking massive leaps forward. 10x increase in disk space (40 MB to 500 MB), or going from diskettes (~1.5 MB? I don't even remember) to CD's (650MB). We went from Wolfenstein to Half-Life in just six years (it felt longer).
Maybe buying a 5950X at the birth of DDR5, PCIeGen5 and TSMC's 5nm wasn't the wisest choice. Ehhhh seems like all that new stuff would still take lots of time to actually get ready, and the 5950X is the best CPU now.
No way the M1 supports "dual socket" configurations. Absolutely no way a configuration like that would "combine" the GPUs and display outputs. I'd bet money on Apple releasing a larger monolithic "M1X" or whatever for the large MBPs.
The death of Moore's law made us wonder: there is so much effort going into optimising hardware, but less emphasis on making software more efficient. Our view is that there is a lot to do with regard to software efficiency to mitigate the limitations in hardware progress... See the company we founded in my profile; this was one of the drivers to build it.
Let’s not debate whether we really are at the end of Moore’s law (not a foregone conclusion, given that the M1 is the first CPU at 5nm)
Why do you find it sad that we now have a holistically designed system, rather than the glueing together of ever more powerful parts that desktop PCs have gotten away with for a few decades?
Can't wait for an iPhone Pro with an M1 processor that I can plug into a thunderbolt/usbc dock, run monitors, a keyboard, ethernet, and have it running MacOS when in desktop mode.
EDIT: A little context; when I am in the office, I use vscode over ssh to connect to my desktop PC at home. My desktop takes care of my language server, syntax highlighting, compilation and vscode forwards my ports and spins up terminals. All I will ever need is a low powered computer that can run my browser and tooling fast enough.
> have not shown any indication that they will do so.
I find it very suspicious that the new iPad has a 16GB RAM option. There's ~no use for that in iOS. Wouldn't be totally amazed if some sort of dual boot solution shows up at WWDC.
Also: how would you cool it? The passively cooled M1 MacBook Air throttles after 15 minutes of high CPU usage. So, perhaps a phone would throttle after a couple of minutes?
Even when the Air throttles, the performance hit is barely noticeable for interactive tasks. You really only see it on batch tasks where you can measure a discrete start and end times. Not saying that there isn’t throttling but it’s not that impactful in practice.
This is the utopian future I dream of, but unfortunately there is zero economic incentive for a company like Apple to create a product like this. With how powerful our phones are, they could easily be the main workstations for millions, but they're forced to remain toys. I'm somewhat surprised Google doesn't try to push this more.
Basically, many modern Android phones already work perfectly well paired wirelessly with a modern TV screen. Enable desktop mode, pair a mouse and keyboard, and you have a workable environment: video chat, programming, text editing, office apps - it all exists.
That's actually how to watch youtube without ads, because non-Android TV has no freetube/newPipe apps.
Termux, Andronix and (Wireless) Dex (Samsung phones only) work pretty well to bring a workable Linux desktop onto an external screen, with the phone as a separate screen for android.
You can also connect a usb-c hub if you want hdmi in and out, network etc. It's even possible to drive 3D printers directly over USB.
As an additional bonus, for Galaxy Notes and some newer S devices you can use a stylus for art, signing PDFs, etc. They also support split-screen multitasking and virtual desktops, which is quite practical if you work from the terminal (emacs).
Certain phones can be very good replacements for a computer if you require the portability.
Have you tried installing[0] code-server[1] in Termux[2]?
code-server is basically the VSCode app split into a backend which runs on NodeJS and a frontend web app which runs in an Android browser and communicates with the backend running in Termux. It works pretty similarly to the Electron app, but extensions don't come from Microsoft's marketplace because of licensing and usage restrictions. Most popular extensions are available, though.
Sure, can be whatever browser. Seems like Chromium, Safari, Firefox all run perfectly fine on a MacBook Air so I have high hopes that an M1 powered "iPhone Pro" wouldn't have too much trouble running MacOS with a browser and vscode.
It's more that Apple wouldn't create a device that would remove the incentive for customers to buy its other products.
And yet, for work I'd like to see something 10x as powerful as a phone still. The differences in performance between the two is quickly becoming less and nonexistent though.
Isn't basically the same vision Canonical had for Ubuntu Touch/Phone?
You carry your (main) computer with you in your pocket. If you're on the go, you use its screen. If at home, you plug it into better screens and input peripherals.
My use case is pretty fine, I think. I use vscode's ssh development feature to remote into my desktop at home. My laptop just needs to run a browser and editor window which I think an M1 powered iphone would be fine at.
The difference is the advantage Apple has with their HW+SW vertical integration. It's as simple as that.
Intel sells CPUs, so it creates the ranges of CPUs to make money. They advertise clock speed and put the higher ones on a pedestal, that is how they can charge more money. The OEMs just used that playbook and developed their own marketing stories on top of Intel's marketing. Either no one tried to differentiate or they just didn't have the power to fight it.
Apple has a lot going for it in that scenario. They never had proliferation of models and always kept the number of options to a minimum. They also don't deal with volume, so they didn't have to do 20 variations of mac mini or the iMac. They kind of did their own thing even with the intel macs. Now with their own processor they were in a position to double down and make the whole product line even more efficient.
Like the article said, they couldn't have done this if the M1 were not clearly better than the competition.
> Intel sells CPUs, so it creates the ranges of CPUs to make money. They advertise clock speed and put the higher ones on a pedestal, that is how they can charge more money.
How is any of this specific to Intel?
Apple uses M1 at different clockrates in different devices. And that's not only due to battery and heat concerns, but also because those M1 are rated to run without errors at different clockrates.
Similarly the new iMacs come with two types of GPU: 7 core (cheap), and 8 core (more expensive). The 7-core ones had one core disabled because it was defective.
What Intel does reflects the simple reality of producing microchips. Some units turn out better than others, so you sell them at different prices. It's the same for AMD. It's the same for Apple.
> Apple has a lot going for it in that scenario. They never had proliferation of models and always kept the number of options to a minimum.
I just explained above there's no single M1. They vary by core count, and by clockrate. And probably more.
The only thing that's "one" about M1 is the brand. It's one brand. It's easy to have a clean brand, when you don't sell naked CPUs by themselves. "Oh everything we have has M1". It doesn't even matter if this is accurate or not, we realize that, right?
Are we going to hold up Intel for having hundreds of SKUs because they sell chips alone, and Apple sells them in computers? I hope not, that'd be silly.
Are we also going to hold up Intel for being in the PC CPU business for decades, while Apple has been at it for several months only? I hope not, that'd also be silly.
Finally, are we going to hold up Intel for targeting cheap office machines, and high-end data centers, and hardcore gamers, and scientific uses, and many other customers that Apple isn't even trying to have? I hope not, same reason. :-)
I think vast evidence over the past few decades has proven this is actually not the case, especially in tech.
> , when you don't sell naked CPUs by themselves.
This is Intel's choice of a hill to die on. They drove everyone else out of the market then failed to realize that market itself is collapsing with the rise of cloud computing and ISA agnosticism and haven't evolved with the times.
I thought you were making a claim that Apple would eventually trend like Intel, with hundreds of SKUs. I don't see Apple complicating its CPU line like that, because it has no need to: the M1 works in a variety of scenarios, which is what the OP was pointing out.
It's hard for me to see how this ends any other way. The creative class (us) will quickly move to largely all-ARM computers within 4 years.
It's not hard to see from there how software will become even more optimized for ARM variants than for x86, and how the scale of both mobile and consumer computing will slowly push x86 out of the datacenter as old software that relies on x86 is retired over the next decade.
People won't want to develop on x86 and deploy to ARM. ARM is more power efficient which is important in the data center too. We already scale by the core in the cloud, so why not just heap a few more cheap cores on if we need more to match x86 (which right now looks like we might not).
Are there examples of Arm designs OTHER than the M1 which are suitable for consumers? Yes, the M1 is a remarkable product and it will certainly make inroads against x86 on the desktop but it is from a single company. Will Apple M1 (2, 3 etc) replace all x86 devices in a decade? That’s hard to swallow. Now, if we see another Arm product released that ALSO kicks x86 butt, from another player, then maybe I’d believe a change is happening
Qualcomm tried so many times that it's actually a strong argument against ARM dominance: if Apple is the only one who can make ARM work, but there are two companies that can make x86 work, then the answer to which architecture is better is pretty obvious.
The smartphone ARM and Chromebook ARM processors are nowhere near the M1 or x86 CPUs in terms of performance. You need M1-class performance in these devices outside of Apple.
Is not the majority of gaming occurring on mobile ARM devices, not dedicated consoles or rigs?
“Mobile gaming has fast become the largest gaming market in the world with industry revenue expected to hit $76.7 Billion by the end of 2020 and 2.2 million mobile gamers worldwide. It’s become so popular that 72.3% of mobile users in the US are also mobile phone gamers. To put this into perspective when compared with the wider video game market, by 2022 the global game market is set to reach $196 Billion, and the mobile gaming market will account for $95.4 Billion of that alone.”
That's only because no one other than Apple has put up a serious competition in ARM CPU space.
Earlier consoles used a boatload of different CPU and GPU architectures (PowerPC, SuperH, IBM Cell, other proprietary stuff, x86); nowadays consoles have converged to essentially a locked-down, glorified x86 PC on one side (Xbox One, Xbox Series S/X, PS4, PS5) and ruggedized ARM-based tablets (Nintendo Switch) on the other.
On the mobile gaming space, ARM already has achieved utter dominance with Switch + iPhone + Android... all it needs for ARM to conquer the console market is for someone to tape out an extremely powerful chip and actually sell it - Apple won't.
Is the dominance of x86 due to any inherent properties of x86, or historical legacy?
I think the xbox-s/x and ps5's choice of CPU vendor was driven more by business factors and the integrated graphics, than the virtues of x86 per se. The Switch does pretty well despite the less powerful CPU in the tegra.
ARM Macs run x86 software great (my experience) and ARM Windows machines run x86 software adequately (I have read). Don’t know how well QEmu works on Linux - haven’t tried it but know it’s available.
Apple's Rosetta 2 that allows x86 code to run on M1 is a different beast than QEmu though. It translates x86 machine code to ARM code. That's why it is much faster than VM solutions.
QEmu also translates x86 machine code to ARM code, or vice versa, or many other combinations. It's optimised more for portability and for ease of adding translation support for new architectures than for raw performance, but it's a really nifty bit of engineering.
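For intuition only, here's a toy sketch of the translate-then-cache idea that both Rosetta 2 and QEMU's dynamic translation are built around. The "guest" and "host" ops and the mapping table are invented placeholders, not real x86 or ARM encodings:

    # Toy dynamic binary translator: translate a guest block once, cache
    # the result, and reuse it on every later execution of that block.
    # The op names below are invented for illustration.
    GUEST_TO_HOST = {
        "guest_add": ["host_add"],
        "guest_load": ["host_load"],
        # One guest op may expand to several host ops.
        "guest_push": ["host_sub_sp", "host_store"],
    }

    translation_cache = {}

    def translate_block(pc, guest_block):
        if pc not in translation_cache:        # translate only on first visit
            host_block = []
            for op in guest_block:
                host_block.extend(GUEST_TO_HOST[op])
            translation_cache[pc] = host_block
        return translation_cache[pc]           # later visits are cache hits

    print(translate_block(0x1000, ["guest_load", "guest_add", "guest_push"]))

The performance differences between Rosetta 2 and QEMU come from how aggressively each optimizes the translated code and the host it targets, not from the basic translate-and-cache structure.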
Lots of software has been ported to the ARM architecture in the last half year, so the clock has started ticking for great ARM-based Windows laptops. But it's clear that ARM's memory ordering has won.
Creative class is a lot more than programmers. Look at boutique system integrators like Puget Systems... do you see Apple/ARM Hardware? Heck no you don't! These guys sell hardware to a LOT of the companies that comprise the credits in movie production, game production, laboratories, university science departments, ML researchers, big energy, and soooooo much more.
These guys are not rewriting their software stack for thin chassis, thermal limited, and non-repairable hardware. Raytheon doesn't buy Apple/ARM hardware for their simulations, design and development. Does Boeing, Airbus, Ford or Caterpillar run on Apple/ARM hardware? Are these companies just chomping at the bit to ditch their legacy stacks? I don't think so.
I'm not sure anyone thinks programmers when they think creative class. That's like saying the professional athlete class is a lot more than just Pokemon Go players...
Isn't creative class just people who make things without consuming raw materials? I think it includes consultants, "knowledge workers", etc. in its original formulation, not just artists.
I wholeheartedly disagree. Apple is probably the "first, best" case of ARM, but there's absolutely nothing stopping anyone from making a similar investment in ARM hardware. Indeed, Amazon is already doing so with Graviton, and we're seeing similar improvements in both raw speed and in perf/watt like what we're seeing with M1 chips vs. Intel[0]. And that's one of the best parts of ARM -- the design itself is available for licensing, unlike x86, so anyone can pony up the cash and build up their own customized chips that maintain compatibility with base ARM code (maybe needing a recompile to take advantage of new hardware features).
But the M1 is already a huge hit in the music and design community, which is pro-performance oriented. If subsequent Mnx chips provide enough of a speed bump they're going to destroy the high end Xeon market.
You've named two industries where Apple has historically been really popular, mostly because of its software rather than for any particular performance reason. They'd all buy the latest Apple product whatever was inside, because that's the only one Apple is going to support going forward. The idea of real high-performance computing moving to ARM is entirely dependent on there being suitably good software support, which currently does not exist (at least in my industry there is nothing yet and the vendors move very slowly). This "destruction" might eventually come, but it won't be any time soon, and Intel will of course react in the meantime.
I agree with you in general, but disagree on the time scale. Particularly:
>The creative class (us) will quickly have largely all ARM computers within 4 years.
I think good options might exist in 4 years, but they will still come with trade-offs, and legacy x86-64 platforms will still have their place. This could be something as niche as the maturity of IOMMU implementations in x86 workstations, but stack up a bunch of niches and I find it hard to see "largely all" in four years. Fifty-fifty in four years, "largely all" in ten for "the creative class". Then there is a long tail of industries that may end up legacy-bound to x86 for decades to come.
Nothing is guaranteed. If, starting in 2022, we were to see another 3-year performance plateau, a la 2016-2019, then anyone who bought into a mid- or high-end x86 platform today may not feel compelled to upgrade in just four years. Even if the ARM options exist, the gap might not be compelling enough in four years' time. I'm not saying a plateau is likely (the competitive landscape suggests the opposite), but I don't have a crystal ball, so I won't discount it either.
Again I want to emphasize that I mostly agree, but I wouldn't bet any money on sweeping changes on such an aggressive timeline.
> People won't want to develop on x86 and deploy to ARM.
It's still too early to say if people want to develop on ARM to deploy on x86.
The 90's with all their "better than x86" chips couldn't beat the fact that people want to develop on the same architecture to which they deploy. Back then that meant displacing all these other chips from the server side because there was no option but x86 for us mortals to have as PCs.
This dearth of alternatives on the server side took out their respective offerings in the high-end/niche workstation market as well. The PowerPC was one of these casualties; let's not forget about that.
Things are even worse today. The x86 is dominant in both ends and has no real alternatives: ARM server chips are weak and the inertia of building on x86 and deploying on x86 is too strong, and ARM desktop chips are also weak, except for a single luxury brand (Apple, which incidentally cares very little for any developers besides the ones that develop for their own ecosystem).
For AWS cloud customers, I don't think this is true anymore. Graviton2 is quite capable with better performance in some instances for less money. I've already started moving some Java services to Graviton2 for the cost savings.
AWS is positioning the Graviton2 in such a way that everyone using AWS will end up on them if they can.
Only if I can buy a desktop/laptop with a powerful ARM processor, plug in a bootable USB, and install an operating system (ARM version) of my choice with all drivers etc. working fine.
I expect this will be the case with Linux on the M1 in a year or two, given the pace of marcan's work. Windows depends on Microsoft's willingness, of course.
That's the thing, reverse engineered drivers are not the solution and I don't see Apple providing drivers or any documentation of M1 SoC for other operating system. We need M1 equivalent from vendors that are NOT into "vertical integration" like Apple.
Ok, it's time for the emulation anxiety meme (based on EV range anxiety): will this system emulate this extremely performance-intensive x86 game at ultra settings, 4K graphics, and 144Hz? If not, then I will not buy it.
It's already difficult enough to get Windows games to run on Linux, getting them to run on ARM with good performance isn't going to happen, especially when every ARM vendor insists on using an integrated GPU.
If your CI system builds for x86 and ARM from the first build, it will be simple to ship to both. For old games it’d be harder, but one could maybe compile from one ASM to another.
A lot of people still rely on Windows. Maybe if Microsoft gets a better emulator going, but I imagine with the amount of legacy they support that's a lot more challenging than for Apple, who are much more trigger-happy when it comes to imposing changes on devs.
I think it'll definitely happen, especially with the web browser increasingly taking on the role of the operating system, but 4 years seems a little optimistic, even with your qualification of that statement.
The article makes a good point on positioning, but I'm not sure if it's due to lack of data points.
Sure, Apple seems to be using the M1 across every price segment of its products, but the M1 is also literally the first iteration of their shift to running macOS on ARM rather than x86. This mass push mainly serves to speed up the transition.
No doubt there'll be a higher performing SOC for Apple's Pro lineup such as Mac Pro, Macbook Pro. History has confirmed this since Apple developed the A*X chips specifically for the Pro lineup of iPads. Main question is, how many concurrent SOCs will Apple maintain? Just 2 as they've done for the iPhone & iPad Pro divide or potentially more?
I believe you are 100% correct that when the M1X chips are released they will clearly differentiate the pro market. At the same time it would be impressive of Apple if the X chips capture MacBook Pro, iMac Pro and Mac Pro markets all with a single chip as well. So (as you point out) the article is only half the picture as it’s missing the pro lineup. And yet the article’s main point is true that Apple is satisfying an absurd number of products with (likely) only two processors.
I remain interested in seeing how much of Apple's lead is the process size, and how much is engineering prowess.
That is, would a more generic new ARM Neoverse on 5nm perform at roughly the same clip? I suppose AWS's Graviton 3 would be the first place to see that, or something close to it.
Apple can afford to exclude some hardware that other manufacturers need to support legacy OSs and apps, leaving more space on their SoC for more CPUs and specialized hardware.
Generally no hardware today is designed for OpenGL. They all translate it into their own instruction set. The M1 is no different as it supports OpenGL as well.
The CPU microarchitecture is truly a quantum leap ahead of x86 processors. The fixed-width nature of arm instructions means there's way more front-end bandwidth for decode, which can then feed a much larger out-of-order engine. Having memory so close is also a huge win. TBH, I don't know what wizardry they have managed to get power consumption so low, but wow. 15W TDP and trouncing desktop processors pulling 12x that power!
That is the talk yes, but we don't know how much of the actual performance improvements is simply due to the lower process size. And process size is independent of if it is ARM or x86, but instead depends on manufacturers getting production slots in the chip foundries. Currently Apple is hogging up all the production capabilities of the smallest process that TSMC have.
In terms of decoding bandwidth I'm not sure how many instructions it can actually sustain, but it's not like it's 10x - the M1 is basically a very wide version of a tried and tested formula rather than a wholly new thing.
x86 decoders are massive. They are about the same size as the integer ALUs in current designs. I think it was an Anandtech interview a couple years ago where someone from AMD said that wider decoders were a no go because of the excessive power consumption relative to performance increase. I’m sure they’ve looked into this exact idea many times from many different angles.
ARM's fixed 32-bit instructions make the decoder trivial in comparison. To parallelize 10kB worth of instructions across 8 decoders, you read 32 bytes into the decoders, jump ahead 32 more bytes, and do it again (yes, it's slightly more complex than that, but not by much).
x86 instructions are 1-15 bytes. How do you split up to ensure minimal overlap and that one decoder isn’t bottlenecking the processor? How do you speed up parsing one byte at a time? uOp cache and some very interesting parsing strategies help (there’s a couple public papers of those topics from x86 designers). They can’t eliminate the waste or latency issues though.
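To make the contrast concrete, here is a minimal sketch of why splitting decode work is trivial with fixed 4-byte instructions and inherently serial with variable-length ones. The byte lengths are made up, and real decoders do far more than find start offsets:

    # Fixed-width decode: every instruction starts on a known 4-byte boundary,
    # so 8 decoders can each grab their slot independently in the same cycle.
    def fixed_width_starts(window_offset: int, decoders: int = 8) -> list:
        return [window_offset + 4 * i for i in range(decoders)]

    # Variable-length decode: the start of instruction N is only known after
    # instruction N-1 has been length-decoded, so the scan is inherently serial.
    def variable_length_starts(lengths: list) -> list:
        starts, offset = [], 0
        for length in lengths:          # x86 instructions are 1-15 bytes
            starts.append(offset)
            offset += length
        return starts

    print(fixed_width_starts(0))                     # [0, 4, 8, ..., 28]
    print(variable_length_starts([3, 1, 7, 2, 15]))  # [0, 3, 4, 11, 13]

The micro-op cache and the predecode tricks in real x86 cores exist precisely to hide that serial length-finding step.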
What is amazing to me is their efficiency despite the limitations. When you look at their massive 64-core chips and account for all the extra cache, IO, and interconnect necessary, it seems like scaling up the M1 to those levels would result in a chip that is less power efficient by 20% or so.
The EPYC 7401P still represents the pinnacle-for-its-time value offering to me. 24 cores, single socket, on a 14nm process, launched July 2017 for $1075. Just an amazing breakthrough processor. At the time there were supposed to be X300 and A300 chipsets coming, basically just a boot BIOS, to make ultra-low-cost motherboards possible. There have been improvements since then to architecture & IPC, but overall it feels like we've been headed in reverse since then in terms of chips that get put into medium/large-ish sized chassis.
It has been remarkable what a mockery Mac has made of mobile chips, and now of desktop chips. At a way more reasonable price point.
> but the company’s decision to eschew clock speed disclosures suggests that these CPUs differ only modestly.
I forget exactly when but the first Google IO that happened where Google started offering simply "Intel Core i5" or i7, without saying the model number (2017?), without revealing speeds, & it was a huge huge jumping the shark moment for me. A post competitive market, where speeds were good enough, where reputation & market presence domineered over metrics & comparable factors. I don't think chasing specific GHz & cache size numbers &c is super rewarding or important, but it felt like the first time we were being sold an unspecific system, where obvious inquiry into what we were buying was blocked.
This is somewhat the opposite of this article: that Apple has found a good enough CPU to sell everywhere. But I still think the real truth is that the providers, those building systems, have begun to refuse to compete. They refuse to detail what they are offering. AMD has been without competition and the new 24c chip is considerably more expensive, albeit for yes more IPC, but it still hurts me a little. Looking at Google no longer allowing us to have any idea what kind of Core i5 or whatever chip though, they beat this article to the punch almost half a decade ago. Consumers haven't been given respect, haven't been allowed to know what wattage, what caches, what Hz their chips use for a long time now. Google started that, Google pushed the post-knowable computing upon us. Apple is merely following up on this, merely delivering what Google started, at a far better price point, with far better underlying technology.
> but the company’s decision to eschew clock speed disclosures
There's one other major factor here: clock speed isn't all that useful for this new architecture full of custom-purpose cores, asymmetric cores, a new arrangement of connections between cores, etc. Actual benchmarks may be useful, but not so much a clock speed measurement (and which of those various asymmetric cores are you measuring against?).
I agree! There's a lot more to it than MHz! At the time, we also didn't have any other figures, like cache size, any power consumption/TDP figures, memory speed/bandwidth, any base or turbo clocks.
I felt like it was probably one of the first custom cores I'd seen, something specifically built for Chromebooks or the Pixel or whatever the product was - probably a part not listed on Intel's ARK. But it was super distressing nonetheless: I had been denied any understanding whatsoever of what kind of core was going to be inside. There's more to it than MHz, yes, but also not knowing the process (nm), not knowing the wattage, not knowing the caches... Google was asking me for something unprecedented: to spend ~$1000 on a system for which I had no understanding at all of the expected performance.
It felt like a dark dark dark dark day. After decades of in-depth analysis & review of every cache change, every TLB tweak, after endless in-depth reviews of cores, we'd entered a bold new era, where none of what you were really buying was regarded as consumer-pertinent.
There are some "bright" spots, but they are somewhat obfuscated. Lenovo's M75q Gen 2 with the AMD 4750GE was an amazing package, regularly on sale for a very reasonable price with amazing performance[1]. But alas, it's rare that the genuine performance is known and is what's actually being sold. We have become a post-consumer market. The invisible hand operates at a post-consumer level, selling us on other, more illusory factors than capability. Truth vanishes. And yes, as this article says, many people just don't need to play the game in the first place, but still, this becoming invisible, this vanishing of actual competition, is most dismaying.
What I find lacking in the article is an apt comparison with AMD's Ryzen chips.
Those are all the same chiplets, just binned differently.
High performance ones go into the 5800x and 5950x.
Lower performance ones into the 5600x and 5900x.
Which seems to be the same thing Apple does, with a slight naming difference: calling everything M1 instead of naming the individual CPUs.
At least on x86, you can install Windows / Linux / a Hackintosh. With the M1 you can only install HackinWindows / HackinLinux / macOS, if you understand the joke.
The good part about the M1 is that it forces AMD and Intel to make better CPUs. Competition is always good.
The not-so-good part is that Apple might start a trend of higher prices for CPUs.
Economies of scale. My bet is Apple will develop a cluster of M1s and call it M-power-2 to address the 1.5TB-RAM workstation market you mentioned. It will practically be an array of M1s (or next-gen chips) together. The way the M1 is used from the iPad to iMacs is genius in terms of cost reduction at scale, and for an end consumer who doesn't care who else uses their chip, I get a good $/CPU-power bargain.
Tim Apple being the supply-chain guy he is, I see him doubling down and scrambling engineers to use more M1s in an array to build a stronger core. Maybe an M1-based server rack for AWS?
People tend to lump them all together and just call them "ARM" but the 64-bit instruction set ("AArch64") first came out in 2011 and is hugely different than classical 32-bit ARM. And the chips Apple makes these days don't even implement the 32-bit instruction set anymore.
Honest question: could it be a matter of backwards compatibility?
Intel has been piling stuff on top of old architectures in order to stay backwards compatible at each step, while Apple had the opportunity to develop their architecture from scratch? I don’t know the answer, I’m just curious.
I was always taught that ARM (and the M1), being a RISC architecture, isn't as "capable" as x86 in some way, whatever that means.
I am no longer sure if that's still the case, since they seem to work just as well, if not better (energy efficiency). Of course, it's not exactly an apples-to-apples comparison since Apple upgraded so many other things, but I just didn't see any mention of the limitations of being RISC in these articles.
Could someone enlighten an average Joe who knows nothing about hardware in this respect?
You are right that CISC processors (like x86) have more capabilities, i.e. more instructions. You as the programmer get to take advantage of the "extra" very specific instructions, so overall you write fewer instructions.
Fewer instructions sounds great, but with CISC you do not know how long those instructions will take to execute. RISC has only a handful of instructions, which all take one clock cycle (with pipelining). This makes the hardware simple and easy to optimize. The instructions you lose can just be implemented in software. The space on chip can be used for more registers and cache, for a huge speed-up. Plus, nowadays shared libraries and compilers do a lot of work for us behind the scenes as well. Having tons of instructions on chip only benefits a narrow group of users today.
We've found that for hardware it's better to reduce clock cycles per instruction, rather than reduce total number of instructions.
> RISC has only a handful of instructions, that all take 1 clock cycle (with pipelining).
This isn't really the case with modern ARM. Fundamentally modern ARM and x86 CPUs are designed very similarly once you get past the instruction decoder. Both 'compile' instructions down to micro-ops that are then executed rather than executing the instruction set directly so the distinctions between the instruction sets themselves don't matter all that much past that point.
The main advantages for ARM come from the decode stage and from larger architectural differences such as relaxed memory ordering requirements.
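A rough illustration of that micro-op point; the op names, and the cracking of a memory-destination add into load/add/store, are textbook-style simplifications, not any vendor's actual micro-op format:

    # Toy decoder: a CISC-style "add [mem], reg" cracks into several micro-ops,
    # while a RISC-style register add maps to one. All names are invented.
    def decode_to_uops(instruction: str) -> list:
        if instruction == "add [rbx], rax":          # memory-destination add
            return ["uop_load  tmp, [rbx]",
                    "uop_add   tmp, rax",
                    "uop_store [rbx], tmp"]
        if instruction == "add x0, x0, x1":          # register-register add
            return ["uop_add x0, x0, x1"]
        raise ValueError(f"unknown instruction: {instruction}")

    for insn in ("add [rbx], rax", "add x0, x0, x1"):
        print(insn, "->", decode_to_uops(insn))

Past this cracking step, both kinds of core are scheduling and executing similar-looking micro-ops, which is why the decode front end and memory model end up being where the ISAs differ most.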
For the most part I think so. The main advantages x86 has are based on code size. Many common instructions are 1 or 2 bytes so the executable size on x86 can often be smaller (and more instructions can fit in the instruction cache). I'm sure there are tons of other small differences that weigh in but I'm not well versed enough to know of them.
A paper I read a few months ago compared instruction density between a few different ISAs. Thumb was 15% denser and aarch64 was around 15% less dense compared to x86. Unfortunately, mode switching in thumb impacts performance which is why they dropped it.
RISCV compressed instructions are interesting in that they offer the same compact code as thumb, but without the switching penalty (internally, they are expanded in place to normal 32-bit instructions before execution).
If they added some dedicated instructions in place of some fused ones, that density could probably increase even more (I say probably because two 16-bit instructions can equal one 32-bit dedicated instruction in a lot of those cases).
It’ll be interesting to see what happens when they start designing high performance chips in the near future.
You have been listening to propaganda. ARM has always been better. Consider the fact that it still exists, despite Intel's predatory nature. It exists because Intel could not make a processor for the low power market that was performant. They tried. But anything they built either took too much power, or used the right amount of power but was too slow. So ARM survived in this niche. But it couldn't grow out of this niche because Intel dominates ruthlessly.
But the low-power market shows that, for a given power consumption, ARM is faster. Does that not apply everywhere? Yes, it does. And so Apple, which controls its own destiny, developed an ARM chip for laptops and desktops. It's faster, cooler, and cheaper than Intel chips, because ARM has always been faster, cooler, and cheaper than Intel chips.
AWS, which also controls its own destiny has launched Graviton2. These are servers which are faster and cooler and cheaper than Intel servers, and the savings and performance are passed on to customers.
As long as Intel ruled by network effects - buy an Intel because everyone has one - build an Intel because it has the most software - their lack of value didn't matter.
There are now significant players who can ignore the network effects. The results are so stunning that many people simply refuse to believe the evidence.
Certainly seems that way. It looks like there going to be an M1 chip for 99% of folks, which works fine for all non-CPU-pegging work (Air, 13" MBP, 24"iMac, Mini), an M1.Large for stuff that pegs CPU (27"iMac, 16"MBP), and M1.XL for 0.01% Mac Pro folks who drop 5 figures USD on computers. But I'd expect the numbers to decrease logarithmically and the prices to be multiples. M1 machines from $700 to $1700, M1.L from $2000 to $4000, and M1.XL from $6000 onwards.
> Second, Intel and AMD both benefit from a decades-old narrative that places the CPU at the center of the consumer’s device experience and enjoyment and have designed and priced their products accordingly, even if that argument is somewhat less true today than it was in earlier eras.
I would have argued that memory was far more important than CPU in quickly judging the performance of a machine, once the 7th generation Intels made dual-core obsolete. But the M1 seems to buck this trend a bit as well, given that it only has an 8GB and 16GB variant and its new unified memory model makes traditional estimates of how much memory is needed less important. Some workloads such as an in-memory database won't change, but the memory usage for GUI rendering, graphics, etc. can take advantage of much faster accesses. And, with SSDs which are now considered a must-have for any serious machine, paging to disk is far less expensive than before in any case.
On another note, the M1 iPad Pro is the first time Apple has ever officially confirmed or marketed, let alone offered a choice in, the RAM for an iOS/iPadOS device.
M1 is the fastest Apple CPU YET.
I suspect in the fall they will release the M2 for the 15-inch MacBook Pro.
They have also delayed releasing the 15-inch MacBook Pro with Intel on purpose, in my opinion.
When they release the 15-inch MacBook Pro with the M2, they will compare it with an Intel version carrying a 3-year-old processor.
I don't trust the Apple benchmarks much.
They are choosing what to compare and what metrics.
Let's wait 3 years when the dust has settled and we'll be able to compare apples with apples.
Let's also see whether Apple can keep up its pace of improvement with an in-house CPU+GPU against ALL competing CPU & GPU manufacturers.
What if Nvidia or AMD or Intel comes out with a huge leap? Apple then won't be able to take advantage of it.
In my opinion the M1 is the new PowerPC.
In 10-20 years from now Apple will have slow in-house built hardware and we'll be getting back off the shelf hardware like when Steve Jobs moved from PowerPC to Intel.
16" will be the size, like in the current gen. External form factor about the same size as the old 15", but smaller bezels.
If they come up with a 16" M2 with 16GB+ of RAM they'll be out of stock for MONTHS, everyone I know will be upgrading from their pre-touchbar Macbooks.
Yes, I found myself in exactly that situation.
Got a mid-2015 15'' MacBook Pro which was falling apart.
I did not trust Apple with the whole M2 wait and the abandoning of Windows Boot Camp support.
I made myself a Hackintosh. Paid pretty much the same as an iMac, but with everything maxed out.
Still missing the GPU because of the crypto shortage, but that's a whole other story.
What’s also lacking from a marketing perspective is the “Intel Inside” campaign - which was incredibly successful for the Wintel monopoly in the 1990s and early 2000s.
Seeing the sticker or hearing this slogan used to imply a premium or cachet to the product/hardware to the average Joe or Joanne.
No longer, Intel’s brand recognition has really taken a hit in the past decade.
People underestimate AMD and mix it in together with the current Intel chips when talking about the M1. Ryzen is not far behind M1 in single thread performance and beats it in multi core. If Intel had not made all kinds of deals with laptop makers for exclusivity then the narrative would be totally different in my opinion.
Assuming the M1's performance extrapolates to desktop power/cooling, it's going to be a monster chip. If that's the case, I don't think Apple will stop at having the fastest watch/phone/tablet/laptop/PC. Why stop there and leave money (a ton of money) on the table by not going after the datacenter?
How do you intend to extrapolate that? A seemingly much larger thermal/cooling headroom usually does not gain you that much in absolute performance. Consider the mobile/desktop Zen3 Ryzen Geekbench (S)ingle/(M)ulti scores:
- 5800U, mobile (10-25W), 1400(S), 7000(M)
- 5800, desktop (65W), 1600(S), 9000(M)
- 5800X, desktop (105W), 1700(S), 11000(M)
and
- M1, mobile (10W), 1700(S), 7000(M)
Everyone's going gaga over the fact that the passively cooled M1 can trade blows with big desktop towers. But mobile Zen3 is not that far behind those towers either, so I think much more thermal headroom buys you less than people assume.
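Dividing the quoted (approximate) multi-core scores by their nominal TDPs makes the diminishing-returns point, with the caveat that TDP is only a proxy for actual package power:

    # Geekbench multi-core score per nominal TDP watt, using the rough
    # figures quoted above. TDP is only a proxy for real power draw.
    chips = {
        "5800U (mobile, 25W)": (7000, 25),
        "5800 (desktop, 65W)": (9000, 65),
        "5800X (desktop, 105W)": (11000, 105),
        "M1 (mobile, ~10W)": (7000, 10),
    }

    for name, (score, watts) in chips.items():
        print(f"{name}: {score / watts:.0f} points/W")

Going from the 25W mobile part to the 105W desktop part buys roughly 1.6x the multi-core score for 4x the power budget, which is exactly the "headroom buys less than people assume" argument.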
One of the big reasons the M1 is so fast is its RAM is on-die. Latency and throughput to memory are dramatically better and you don't have to drive those long traces, saving power as well. Servers need to scale to memory sizes in the terabyte range. Apple will need a new design that uses conventional memory before they can go after the big workstation market, let alone server.
It's not on die, arghhhh - why does this misinformation keep spreading? It's on package: two separate LPDDR4X chips. Latency is not dramatically better; other current LPDDR4X laptops run the same clock speeds off-package. Maybe Apple has more aggressive timings than them, but the difference would be tiny.
For certain applications, the migration will be challenging. It's also a slowly dying market. Amazon already has ARM instances. I'm sure they're already tearing the M1 apart to see what they can use. It's also not Apple's core competency; enterprise sales is a different beast.
Edit: I wasn't quite clear: it's dying because businesses are migrating to the cloud, and cloud providers are already working on/supporting ARM CPUs. Google might be willing to buy M1 chips - not servers or boards, because everything is probably custom by now, just the chips.
In what way is the datacenter market dying? Are Googles and Facebooks and Cloudfares and Microsofts and Amazons suddenly decommissioning huge racks of computers while their cloud businesses take off? What is that cloud made of?
AWS is on its second in-house ARM chip. You can launch an M6g/R6g/C6g right now. If you have workloads on the cloud, you should be benchmarking already, as they will likely be cheaper for any given workload.
Those all own their datacenters (and some can afford to design their own CPUs for them).
Renting racks isn't dying yet, but it is on a downward trajectory.
Interesting marketing approach for sure. What I want to know are the economies of scale for using a single component, both at the chip fab level, the main board level, and overall product level.
Still seems like Apple's typical under-powered per price point offerings, but does this close the price/performance gap a bit, or drive higher profits at the same price points, or do they really see it as just a good marketing play?
While it sounds nice to have one chip in different machines, what is the benefit to the user who buys one machine, typically for efficiency or power? I suppose in some edge cases it might be nice to have a laptop with CAD-station power, but also able to stream movies or edit documents for a full intercontinental flight on a single battery charge?
Apple's focus tends to be on consumer products, I don't see them swinging for the fences and presenting an AWS like cloud offering at least under their own name.
Personally I'd like them to offer alternative versions of their CPU to cloud providers, I imagine that the current core configurations are not ideal for the sorts of workloads the big cloud providers need from a single processor.
But this will require serious heavy lifting and would likely require official linux support for Apple's hardware (undoubtedly they have internal systems running bare metal linux on Apple silicon), however I don't see Apple wanting to do this in the open and in public.
What’s the advantage? I don’t think it’s likely to cost less as a service, especially directly from Apple. If you get the same performance per dollar you might as well keep using what you have.
Heat and energy savings only apply to the direct user (in this case Apple Hosting) and unlikely to be passed onto you (since it’s Apple).
We’ve already seen this with M1 computers which are simpler and less expensive but did not have a lower retail price.
They would use their scale and engineering muscle to differentiate. Free outgoing bandwidth to iCloud users. The most eco friendly cloud. “Lambda at edge” which can compute on charging iOS devices nearby, those users earn a little Apple Cash. “Launch in iCloud” thin client support for resource intensive applications, using a proprietary low latency VNC alternative (like Parsec). I want to see them get creative.
Completely agree. I can't see them reviving the Xserve brand with their own silicon, but I could see them introducing cloud hosting services. Ideally with differentiation in something like environmental credentials (reduced, renewable power, etc.) to try and gain market share whilst maintaining margin.
Besides a deliberate paucity of knobs to tweak, what exactly is the difference with AWS S3 as far as the target market (app developers) is concerned? This is a cloud hosting service in all but name.
It's so simple - Apple doesn't need to make money off of the M1 - they use the M1 to make money. For companies that sell CPUs instead of systems, it's the other way around.
>Apple’s gamble, with the M1, is that its custom CPU performance is now so high, at such low power consumption, that the choice of chip inside the system has become irrelevant within a given product generation.
This is clearly wrong as they are still selling Intel versions of the MacBook and MacMini. Apple makes a lot of money by offering a range of processor options. At the moment we are early in the transition but I have no doubt there will be M2, M3, etc options for most of the range.
But the point is that people do not care about the cpu in their mac. They care about screen size, price and other stuff.
Apple is still selling intel versions, because either they are cheaper (older generations), or because the newest version does not have the M1 yet (like the 16" mac book pro).
That's not the only reason - the M1 is limited to 16GB which is fine for many workloads, but not for all. Plus not everything works on the M1 yet e.g. running Linux in a VM.
> This is clearly wrong as they are still selling Intel versions of the MacBook and MacMini
I can think of other reasons some customers may prefer or need to buy the Intel version. AFAIK, you cannot run virtualized x86 code (docker, parallels, vmware) on the M1 macs (yet?) or maybe you need a driver for HW that is not yet available for M1/ARM. You can't conclude that they sell Intel versions for performance reasons.
> This is clearly wrong as they are still selling Intel versions of the MacBook and MacMini
They're only doing that until they can replace the entire line with "Apple Silicon", which they said will take less than 2 years from when they announced the first M1 Macs. It's simply a stop-gap measure.
Perhaps Apple has more processors in the pipeline (this almost certain, as we have seen with its mobile processors).
It may be very difficult to sell products with the same processor but different hardware forms, because the market is tuned to comparing processors.
I am not saying whether it's right or wrong, just that there is a whole mindshare of the market (marketing, advertising, news articles, blogs and videos) that starts any comparison of two different products with their core processors.
Would M1 scale up with 64GB or even 1.5TB of RAM? It's not possible to have that much memory integrated; what would be the performance difference in that case?
Samsung started shipping 16GB LPDDR5 modules last year. They are supposedly 1.5x the speed and 20% lower power.
Doubling the stacks from 2 to 4 and moving from 8GB to 16GB modules gives 64GB. The two extra channels plus the extra per-module performance give something like 3x the bandwidth at only a bit more power. That seems perfect for feeding more cores and a larger GPU.
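The "3x" figure is just the rough product of twice the channels and the quoted 1.5x per-module speed-up; a tiny sanity check, assuming only those two numbers:

    # Rough bandwidth scaling: twice the memory channels times the quoted
    # 1.5x per-module speed-up of LPDDR5 over LPDDR4X.
    baseline_channels = 2        # current M1-style package, as described above
    new_channels = 4
    speedup_per_module = 1.5     # LPDDR5 vs LPDDR4X, figure quoted above

    bandwidth_factor = (new_channels / baseline_channels) * speedup_per_module
    print(f"~{bandwidth_factor:.1f}x the bandwidth")   # ~3.0x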
I’d love to see a move toward 512-bit HBM3. It seems like the perfect compromise. It doesn’t need expensive silicon interposers and offers decent latency while still having a lot of bandwidth. HBM2e uses about half the power as GDDR6 for the same bandwidth (I don’t know about HBM3 power numbers)
> "Part of the reason Apple can get away with doing this is that — and let’s be honest — it’s been selling badly underpowered systems at certain price points."
Well, they've been doing this for a long time, and were able to pull it off only because we got hooked on their ecosystem and preferred it over the alternatives. What worries me, though, is the trend toward making these machines totally unupgradeable: not only is this bad for the environment, it will also decrease the long-term value of these expensive objects.
I'm very curious to see what happens to Intel. When the TSMC Arizona plant comes online in 2024 or sooner, there doesn't seem to be much left for Intel to do that AMD and others aren't already doing more effectively. Perhaps that's too much of an oversimplification, but it seems to be the sum of the various parts, given that, as others have said here and elsewhere (and as was my experience too), the performance of Intel chips hasn't changed much in about 10 years.
This completely disregards scale; the iPhone gives Apple the scale to manufacture these chips at an affordable price. Lenovo, HP, and Dell do not have that kind of scale. Apple sold almost 3 times as many iPhones as Dell's total device line.
It also disregards B2B sales; Apple has pretty much disappeared from schools, and most office-issued machines in big companies seem to be Lenovos, because of repairability.
> Apple’s gamble, with the M1, is that its custom CPU performance is now so high, at such low power consumption, that the choice of chip inside the system has become irrelevant within a given product generation. It challenges OEMs to consider how they might spec higher-end systems if some of the higher price tag didn’t have to pay for a faster CPU.
I really do wonder what would Apple create in order to be able to offer ARM workstations.
Maybe chiplet / modular design like AMD? Only this time one of the chiplets is going to be a full M1/M2/M3 ARM CPU + some RAM and everything else usually included, and the other chiplets would be extra RAM or dedicated GPUs?
I am looking forward to their first ARM workstation offering.
Why does Apple bother with the 13-inch MacBook Pro if it has exactly the same processing power as the MacBook Air? A Touch Bar and 2 extra hours of battery life aren't enough to differentiate the two. My guess is that Apple does intend to differentiate the M line but hasn't been able to do it yet, for what reason I don't know.
The really smart move is launching the low power consumer version first. This positions the eco-system for the native code where Apple wants it - on apps that run on the mass market devices. The higher power apps can come later and exploit 8 firestorm cores and up.
Personally, I think Apple has completed its shift from a computer manufacturer to a processor manufacturer that serves two internal clients: the software division and the consumer electronics division.
I was thinking there'd be an "M2" by now, but a suspicion is growing inside me that they might use a NUMA cluster configuration with a PCIe interconnect for the upper-tier models. Doesn't that make sense?
That would be a very AMD thing to do. Apple is likely to pump endless amounts of cash into TSMC to make huge monolithic chips so that there's no cross-core latency issues and that they can still say they have a single "chip" in marketing materials.
While the M1 is great now, I'm very hopeful in what's to come. If Apple can maintain even a bit of their current improvements year over year, in 2-3 years it just won't make sense to use anything else; see this graph to know what I mean:
Even if Intel can catch up somehow, it'll take years to get to this level, and by then it'd be too late. The only reason Apple might not win is that they bundle software and hardware too tightly. Sometimes, and for some markets, this is a big strength, but at other times it's a weakness IMHO.
> The only reason why Apple will not win is that they bundle software and hardware too strongly.
That is the major reason the M1 has such momentum. I was pretty skeptical that niche x86 FORTRAN R extensions would work on the M1. But here we are, Rosetta made it the smoothest ride possible. Now look at Microsoft and their Surface Pro X.
I for one welcome our new ARM overlords; may x86 rest in peace along with other relics of the past. Saying this as both a gamer and an Intel/PC enthusiast, but also an iPhone owner.
Learning a bit about this in my MBA classes. Cost + Margin = Selling Price. Apple have the capacity to be more of a cost leader now, since they are not subject to the same bargaining power forces of CPU suppliers. As a result, they have the potential to push their costs down (if not now, then in the future once production scales) and increase their margins. A good time to invest, perhaps?
That's cost-plus pricing, and it should be viewed as a relatively naive model that is no longer typically used for pricing. Sophisticated pricing decisions are based on price elasticities to find the profit-maximizing price. Apple is a notable example of a company that exhibits this type of pricing decision-making.
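To make that concrete, here's a minimal sketch (not from this thread; the unit cost, margin, and elasticity are made-up placeholder numbers) contrasting cost-plus pricing with the profit-maximizing price from the standard Lerner rule under a constant price elasticity of demand:

    # Hypothetical numbers, purely to illustrate the two pricing models.

    def cost_plus_price(unit_cost, margin):
        # Naive cost-plus: selling price = cost + fixed margin.
        return unit_cost + margin

    def elasticity_price(unit_cost, elasticity):
        # Lerner rule: (p - c) / p = -1 / e  =>  p = c * e / (e + 1).
        # Only well-defined for elastic demand (elasticity < -1).
        assert elasticity < -1
        return unit_cost * elasticity / (elasticity + 1)

    unit_cost = 400.0  # pretend build cost of a laptop
    print(cost_plus_price(unit_cost, margin=200.0))    # 600.0
    print(elasticity_price(unit_cost, elasticity=-2))  # 800.0, i.e. a 100% markup

The point is that the markup falls out of how price-sensitive buyers are, not out of the cost base.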
Is it possible to write C++/Rust on an iPad? If so I'll consider buying one, as my iPad 4 is struggling both with charging and with 'modern' websites and apps. Otherwise Apple's hardware is just too expensive for me for what it is.
"Part of the reason Apple can get away with doing this is that — and let’s be honest — it’s been selling badly underpowered systems at certain price points."
There is also market share. Being more affordable to more people means more sales. True, Apple has always positioned itself as premium. That will remain. Now it will be a higher value as well.
While everything wants to be faster, the Apple direction is everything I don't need.
I have a $1000 desktop: 64 GB DDR3, 512 GB SSD + 1 TB HDD, an AMD 2600 (6-core), and an AMD 380 8 GB. I could have gone overboard with the latest CPU and GPU, but I didn't think I needed it (and after 7 months of usage I still think it was the right decision)...
Either way, I don't think I was ever Apple's target audience. And something like an iPad is for consumption, not for producing anything.
PS: Yes, I use it for some gaming and for VR too, but mostly dev.
Here we go again. Comparing a 10 W mobile RISC CPU to a 100 W x86 CPU and somehow deducing that having a mobile CPU run desktop applications is a good idea.
I wish all the Apple fanboys would just hurry up and buy one so they would realize they are comparing apples to oranges and sit back down.
Nobody is buying an M1 to replace their gaming computer. And to insinuate that such a thing is possible or practical is disingenuous to the way PC hardware works.
Additionally I think the article is of amateur quality. It takes on an Apple perspective and assumes that there will be no response from x86 vendors basically because "how could you possibly respond to something as astounding as the M1?"
Don't worry. Apple hasn't taken over the desktop market yet, they don't have the raw horsepower to do it anyway, and they don't want to. We are comparing embedded CPUs with a proprietary northbridge architecture to industry-standardized, socketed CPUs with an entire industry supporting them. Nobody in the market for a desktop PC is going to get a mobile Apple device just because the M1 is more efficient. Nobody goes out to buy a pickup truck and accidentally gets talked into a Prius. Just because you conflate efficiency with performance doesn't mean the M1 is capable of replacing a Ryzen 9.
A better analogy for the M1 is EVs. The M1 has near best-in-class single-thread performance, similar to how Teslas have crazy acceleration. Sure, we've yet to see how efficiently the M1 will scale to multithreaded or GPU workloads, but all signs point to a very scalable architecture given the jump from A12 to M1. Simple extrapolation indicates that an M2 or M1X will match or beat most desktops while still consuming very little power.
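For what it's worth, a minimal sketch of that kind of extrapolation might look like the following; the benchmark scores and the annual gain are made-up placeholders, not measured numbers:

    # All numbers are hypothetical, purely to show the compounding math.
    m1_score = 1700        # pretend single-thread score for the M1
    desktop_score = 1650   # pretend score for a high-end desktop part
    annual_gain = 0.20     # assumed year-over-year single-thread improvement

    score, generations = m1_score, 0
    while score < desktop_score * 1.5:   # wait for a 50% lead, not mere parity
        score *= 1 + annual_gain
        generations += 1
    print(f"~{generations} generations to a 50% lead at {annual_gain:.0%}/yr")

Of course, the interesting question is whether the multi-core and GPU scaling hold up, which a single-thread compounding exercise says nothing about.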
> We are comparing embedded CPU's with proprietary north bridge architecture to industry standardized and socketed CPU's with an entire industry supporting it
Eh? x86es have had an integrated northbridge for about a decade.
You are right that, as of right now, the M1 can't really replace a gaming PC, but I don't see Apple currently targeting that. The real question is whether that is an issue with the M1 chip or just an issue of the Mac not having much of the gaming market share.
That being said, personally I believe the gaming PC market is largely a niche within the overall PC market.
At its core I don't see any reason the M1 chip can't play games (in fact we are seeing some good numbers come out of games that have had a proper migration); it is purely a software issue.
It may not be able to compete with top-of-the-line custom-built desktops, but the reality is that other gaming laptops that cost far more can't either.
edit: If you have something to add to the conversation, feel free to enlighten us. It does get tiring discussing anything Apple around here because of the sheer bias.
The page you're citing contradicts that statement -- they will not be 100% off x86, but instead will offer 100% of their products with the option of buying an Apple ARM chip:
> This transition will also establish a common architecture across all Apple products, making it far easier for developers to write and optimize their apps for the entire ecosystem.
> Apple plans to ship the first Mac with Apple silicon by the end of the year and complete the transition in about two years. Apple will continue to support and release new versions of macOS for Intel-based Macs for years to come, and has exciting new Intel-based Macs in development.
I would expect Intel-based Mac Pros to continue for as long as there's a market for them. Probably at least 2-4 years after the "transition" finishes. (The last spec bump in 1-2 years, and sales end 3-4 years from now...)
It's also worth pointing out that Apple supports its products for roughly 7 years from when they're sold, which means Apple will likely continue supporting Intel-based macOS for at least another decade, possibly as much as 13-15 years. That said, I wouldn't buy an Intel Mac after 2024 and expect the same experience even if Apple supports it until 2031 or later: in just a few years, I'd expect Apple to have created even more macOS features uniquely suited to the ARM chips they're now making.
You're using the transition that everyone wanted to paint a rosy picture. Everyone wanted Intel chips and Wintel compatibility. PPC was too slow, too hot.
Fact is, until they can get nVidia GTX compatibility or quality, there will be a significant market of 3D and gaming that does not want to switch.
And I say this as someone who very much likes my new M1 Macbook and wants to convince the rest of the family and friends to never again buy an x86-based laptop computer.
If Apple can add 3D and gaming chips equivalent to the RTX 3000 series, with software support, all within 2 years? Amazing, and that speeds up their switchover -- but Adobe, for one, has still not updated After Effects and likely won't for at least a year, even though they announced they would. I'll remind folks that Adobe said they would consider adding Metal support to After Effects almost five years ago and still hasn't. Other apps are in a similar state.
I don't doubt that Apple would like the transition to be quick as well as seamless, and I think most portable products will transition very quickly. But unless Apple wants to leave the Pro market they just entered by making things too unstable for Pros, they will tread carefully on eliminating sales and support for Intel. Especially because even now they are selling 16" MacBook Pros that run Intel, and Intel Mac Pros, and 27" iMacs (formerly Pros) and all three of these platforms should ideally be supported for five years, minimum, ideally six given we could count the last year as the "seventh" year of support.
Yes, but in the PPC era Apple shipped amazing-quality products, several times superior to W98SE, W2000 and even XP.
Now, until the M1, Intel Macs were pretty much mediocre, even against a (fully configured, of course) KDE [3-5] BSD/Linux setup. Even with their iThingie ecosystem integration.
Hmm. Sure, M1 isn't suited for heavy workstation loads. They didn't target that sector yet. What's your point?
This is not the last processor they will do. Have you been watching the massive year over year performance gains Apple has been making for several years? There will be an M2 and an M3, and ....
Given the performance AND energy beating they are putting on the market sector that they targeted with M1, why is there any reason to believe they won't kill the high-end workstation sector when they target that?
I'm expecting Apple to go the tile or chiplet route for their workstation processors. TSMC has been working on some advanced packaging methods that are ideal for this use case.
Apple could make one 16 core die and offer workstations with anywhere between one and four tiles for 16 to 64 cores.
As for the high end laptops and iMacs, I'm expecting Apple to launch a monolithic 8+4 chip.
That would be just three unique dies to cover Apple's entire Mac lineup.
I will grant that the Mac Pro will be one of the last to transition, but I don't think it will stay on x86 forever. It may receive a spec bump at most.
> I think Apple will keep selling x86 Macs forever.
Apple immediately discontinued all the Mac minis, almost all of the MacBook Air and Pro lineups, and the iMacs running Intel. The last form factor to transition is the Mac Pro.
> For heavy-duty workstation workloads, M1 just doesn't cut it.
Perhaps you are right on this one for now. The M1 doesn't cut it by Apple's standards for now, but when the time comes for the Mac Pro to transition, it won't use an M1 chip; it will probably use an M2 / M3 chip.
Apple Silicon is the start of it all; the M1 is only the beginning and certainly not the last. So it's worth waiting to see how the M1X, M2 or M3 Macs improve over the M1.
I heard similar arguments in regards to initial performance with the transition from 68K to PPC and then again with PPC to Intel. The transition will probably be far quicker than you anticipate.
Why should Apple care if a small portion of its product lineup still relies on Intel/AMD? Why shouldn't they put pressure on these chipmakers to defend their position? They have both the vertical integration and the scale to push entire industries to innovate. If this serves to “wake up” Intel, we all benefit (including Intel). Look into XScale and its relationship to the original iPhone circa 2006-2007; that didn't wake up Intel, and arguably they lost out on billions and weakened their future outlook.
Because of the cost of continuing to maintain software for that small fraction of the user base. Not only the core OS but the toolchain, and all their first party apps. And the difficulty of incentivizing and enabling third party developers to continue to support x86.
The M1 is built specifically to sip power and run cool. You don't think Apple could come up with an ARM processor that can keep up with or exceed desktop x86 when those constraints are removed?
I'm not an expert, but isn't power efficiency the selling point of ARM? As in it runs efficiently, but there are diminishing returns to running it faster?
The power efficiency is partly a consequence of simpler decode circuitry. Modern x86 chips devote a fairly large amount of logic to translating the exposed CISC instruction set into internal micro-ops that more resemble a RISC machine anyway. A RISC chip doesn't need that.
...And you think there'll never be an M2? Or M1X, or whatever branding Apple decides to put on their 32/64/128/???-core version that supports terabytes of RAM?
I think the open question about the Mac Pro is the expandability. The trash can Mac Pro didn't do well and they went back with something with expansion slots. Is it worth it to Apple to build out and support everything necessary on the M1 platform for one low-margin line of computers? We'll find out in the next year.
Even the 27" iMac has a 128 GB RAM SKU, so they're going to need to build a chip supporting external RAM regardless. External PCIe support already exists thanks to thunderbolt. So the architecture will be there, it'll just be a question of how expensive it is to scale up.
Why? Apple just demonstrated that they could do something that had previously been considered unrealistic. They've also been demonstrating massive annual performance gains for many years. For people who have been watching closely, they should have some credibility by now.
It seems odd to think that their first attempt at this, which largely consists of small consumer-level devices with little to no active cooling, is the best they're going to do.
Not to mention that they've already been dominating mobile SOC performance for half a decade as well.
There's no indication that they couldn't take on the pro market. They've clearly got the talent in place. It's likely that the market size is a bigger factor than capability.