The Apple M1 compiles Linux 30% faster than my Intel i9 (jeffgeerling.com)
313 points by geerlingguy on June 1, 2021 | 315 comments



I'm a little bit confused why there are so many comparisons between the M1 and older Apple hardware. It seems like a weekly occurrence on HN. I think it's great that the M1 performs well. However, posts like these, and in general the comments I read, make me feel a bit uncomfortable. So, take this with a grain of salt, but I feel like there is a strong desire to confirm the idea that the M1 is a beast. This has been reinforced by quite a lot of misleading advertising.

So, personally, I like to work on a desktop computer. I don't care as much about power efficiency as I do about performance. So, I was curious, and ran the same test on regular top-end desktop hardware, and the same test took 2 minutes and 53 seconds. In other words, 4.3x faster, or "78% faster" if using the same metric as the article. Compared to the M1 time it would be 3.1x and 68% respectively.

Just to be clear: this isn't a comment that "with my beefy computer I can outperform the M1". My desktop environment is not something I can commute with. My point is nothing more than to add some interesting data points that I find are hard to come by. I seem to always see the M1 compared to older Apple hardware. Maybe that's all you wish to know. If macOS is a must-have, then I suppose it doesn't really matter what else exists. And this is fine, I have no objection to it.


I think what people are excited about is not the actual benchmark but the potential of the path we are now clearly on.

When you see “M1 beats XXXXX at XXXXX task” I don’t think anyone really cares that much about that task, but it usually makes you think, “crap, imagine what their proper MacBook Pro / Mac Pro chip is going to be able to do. Or what it will do in 3 years...?”

Also, everyone just loves watching Intel suffer.


Also, 5 or 10 watts versus a 125-watt power draw. No matter how you slice it, the BBC Acorn has grown up to be a monster.


Really one of my favorite examples of why public sector investment is a Good Thing.


I still love that movie with the hobbit guy about the BBC Acorn. That was great.


Micro Men! Great little TV movie, and really interesting story of the rivalry between UK golden-age home computing giants Sinclair and Acorn (Acorn being the company behind the BBC micro, and also originally the "A" in "ARM").


Oh thank you, I'd never heard those names before, so I had a hard time following this part of the thread.

(did some more searching, adding “micro”)

Ok, so it IS a British Broadcasting Corporation spawn.

Ha. Excellent. I had no idea.

Thanks for spelling it out, I was about to reject the idea as funny but unlikely.


Me neither! Here's a link to the movie: https://archive.org/details/MicroMen720p2009


Wait wait wait… Gloating for no good reason about public investment is one of my favorite activities.

I have a hard time following the connection with M1 processors?


The BBC contracted Acorn to produce an educational computer to go hand-in-hand with a TV program. It was a smashing success and made them so much more money than anything else they'd been doing. They birthed ARM.


It was the only money they ever made, but in the end they had to default on paying back 3 million pounds of royalties to the BBC thanks to typical 80s mismanagement (oh hi Commodore).


Ah thanks, yeah, I was researching that. I had absolutely no idea. And I've been using those nifty fuckers for a while now. Ha.


Not unlike real acorns.


Ohhhhhhh I think I'm getting it. Then they grow up into a big tree and that's where Apples come from?


No, acorns turn into oak trees. To get apple trees you need apple seeds.

You get those from core dumps.


"RJ-45 ends are not "network seeds" and should not be scattered under floor tiles in an effort to cultivate a server farm."

- https://old.reddit.com/r/sysadmin/comments/2gt7x5/just_sysad...


Well, apples don't come from oaks. Unless you count oak apples: https://en.wikipedia.org/wiki/Oak_apple

Though not sure how that would fit into the metaphor.


I believe a lot of people don't hear the fans because they're wearing headphones. For the rest of us that like a quiet room, the arm CPUs sound like a godsend.

I say "sound" because I don't feel much like an early adopter; I don't have one yet.


> I believe a lot of people don't hear the fans because they're wearing headphones. For the rest of us that like a quiet room

You’re completely focusing on fan noise here, but of course it also enables tiny devices with great battery life and - this isn’t completely irrelevant when you run your computer for a significant part of the day - lowered electricity cost.

Oh, and I believe these processors have quite a bit of headroom before they reach the physical thermal limits others have already run into, so expect some more raw performance on better-cooled systems.


> You’re completely focusing on fan noise here, but of course it also enables tiny devices with great battery life

You're optimistic there. Where's my iPhone with a 7-day battery? The last time I had a phone that lasted 7 days it was a Nokia 6210 with a b&w screen.

> and - this isn’t completely irrelevant when you run your computer for a significant part of the day - lowered electricity cost.

I care about that too, but prolly not worth mentioning on a US-centric site.


> I care about that too, but prolly not worth mentioning on a US-centric site.

Always funny to see a European head to a "US-Centric" site just to constantly take jabs at the US, which your posting history confirms.

But HN is more California (and Bay Area)-centric, if you're being technical, which is generally considered energy- and eco-conscious:

https://www.bobvila.com/slideshow/america-s-10-most-energy-e...

For future reference, and to realign your stereotypes.


> Always funny to see a European head to a "US-Centric" site just to constantly take jabs at the US, which your posting history confirms.

Oh well, good job Trump. But we're getting political so let's shut up.


> For the rest of us that like a quiet room, the arm CPUs sound like a godsend.

Got an M1 Pro and normal daily use just doesn't trigger the fans. I can do it by, e.g., charging whilst playing Minecraft or running something that uses all 8 cores at full speed for more than two minutes-ish (`wireguard-vanity-address`, Blender render, etc.) but it's an oddity to hear them these days.


Despite all of the 14nm++++++ ridiculousness, 2020 was Intel's best year ever[1].

I think Intel has a huge challenge ahead but one thing that's helped them is that Skylake was competitive enough to get them through the process node mistakes (at least to this point). Maybe what we've learned is that fewer chip buyers care about performance and power tradeoffs than imagined (or at least, than I thought).

[1] - https://www.statista.com/statistics/263559/intels-net-revenu...


My comment every time someone mentions Intel best year yet:

Commodore reported 7-year record revenue with good profit 4 years before going bankrupt. https://dfarq.homeip.net/commodore-financial-history-1978-19...


As did Blackberry! And countless other "innovator's dilemma" examples.

Intel's a little different though. It's not like consumer sentiment has changed like in the case of the C64->PC or BB->iPhone. AMD and ARM are strong competitors but Intel is in this position because they made a major technical misstep. Maybe more like Microsoft and the Longhorn/Vista debacle. I don't think we can conclusively say Intel or even x86 can't be competitive when they get their process figured out.


Commodore didn't go out of business because of the C64; if anything the C64 kept them afloat far longer than they should have lasted. They actually still sold ~150K C64s in 1994! Commodore went bankrupt because they kept selling the 1981-released C64 and 1985-released Amigas all the way to 1994 with almost no upgrades. They did plenty of updates, like new C64 cases or the Amiga 600, but the A600 had the same CPU, same MHz, and same double-density floppy as the 1985 A1000.

Intel has been selling the same 4-core CPUs for almost 10 years now, only last year finally releasing something able to compete with Ryzen on $/core.


the historic revenue charts for the two companies are completely different though


> Also, everyone just loves watching Intel suffer.

This is such a weird fucking statement.

Apple literally think they own your hardware after you buy it, and Intel are the evil ones?

I don't really do fanboyism; I currently own Mac, Windows, Linux, Intel, AMD, Android, iPhone, Switch and PlayStation.

I just use what I use but I don't get so attached; Apple has been lacking for years. M1 is ok but honestly it feels like they've only just caught up to everyone else.


>This is such a weird fucking statement.

Not if you've been in the industry for long enough. And it has absolutely nothing whatsoever to do with Apple; the "everyone just loves watching Intel suffer" (or more charitably, "watching Intel get a bit of comeuppance and a much needed kick in the arse") bit stands on its own no matter who is dishing out the suffering, because Intel has been a big PITA for everybody since the late 90s/early 00s. Intel is pretty much single-handedly responsible for the lack of widespread ECC memory, for example. They've nickel-and-dimed everyone and feature-gated important functionality for pricing power and basically just exploited the ever living shit out of their dominant CPU position. And the last time that position was seriously threatened, they bought themselves what they needed with illegal tactics denying AMD a chance to get the ball rolling, dooming us to another 10-12 years before they could go for it again. And Intel has been this way for ages and ages. People still remember everything that went down around Itanium and how it choked off promising alternatives.

So yeah. Intel have indeed been the evil ones, and in a way that's been a lot harder to avoid than Apple, which, you might stop and recall, has in fact always had only a tiny minority of the PC market. I mean, if you want to talk about Apple specifically:

>Apple literally think they own your hardware after you buy it, and Intel are the evil ones?

That's kind of amusing given the access Intel has given themselves at a deep level on every CPU.

>M1 is ok but honestly it feels like they've only just caught up to everyone else

This is just you being silly. "Only just caught up"? They were using Intel before so they were by definition the same as everyone else. What they've done in mobile CPU design has been genuinely remarkable. And that in turn is a genuinely good thing in general whether you use them or not, same as with AMD. We're already seeing Intel sluggishly but seriously start to respond and shift.


Intel built a fairly open platform whereas Apple is and always has been the China of tech.

> ...given the access Intel has given themselves at a deep level on every CPU.

And they use that to stop end users from doing what?

Meanwhile Apple will have you paying them yearly to sign apps as they continue to build a dystopian new world order where only they control everything you can do with your own hardware. I honestly hope that their marketshare keeps growing in all categories though. The more successful they are the better chance people will rebel against them. You can't run an app business in the United States without dealing with Apple. Whatever Intel has done is hardly comparable.


> ...as they continue to build a dystopian new world order where only they control everything you can do with your own hardware

Well, the new M1 computers have a built-in ability to run third-party OSes. It is not an oversight but rather something Apple intended to do and spent effort implementing in a way that fits their security model. That they did not provide any hardware documentation to go with it is another thing.


Maybe reconsider the metaphor


It's perfectly accurate.


Intel has played dirty for years, both with AMD and with its own customers. It's not about Apple.


That's a legitimate reason to blame Intel, but I also want to praise Intel for its massive contributions to OSS. I wish AMD would also increase its OSS contributions as its share and profits increase.


Thank you. Yeah, I never really had much to say about Intel. They sell processors; sometimes, rarely, I buy a processor. I do look up what makes sense to buy, or not, at that specific time.

And most of the time I've forgotten I did it within a matter of weeks.

I’m not sure what’s in my current machines. They work really well.

On the other side, for some reason all my work computers have been Macs for a solid 10 years now. Basically ever since the Intel shift.

And boy oh boy, have I had moments of annoyance toward Apple.


I think it’s hard to exclude power efficiency since that’s a constraint that matters for a lot of people and is particularly relevant on laptops.

The other noticeable thing for me I think is driver related: instant wake, instant resolution change.

It’s noticeably nicer to use.

Is it faster than a desktop? No, but comparing it to a desktop feels like the weirder comparison to me.


Even in terms of power efficiency AMD is in the same ballpark with their 4000- and 5000-series U mobile parts, particularly when comparing multicore benchmarks on their 8-core/16-thread CPUs.

M1 is really most impressive when compared to Intel parts and especially when compared to the underspec’ed Intel parts that Apple used to ship.

Yes, the M1 is good but I’m honestly just tired of hearing about it. The story seems to be about the decline of Intel’s fab supremacy more than anything.


It's kind of a joke at this point. Talking about the runtime of something on a dual 7742 and someone pipes up with "Have you tried M1, foocloud has them available, it's so much faster".


Single-threaded performance is on par with AMD 5000-series parts. Consider that Apple is using a 5nm process while AMD is using a 7nm process.

I'm not sure how it compares in terms of power efficiency, though I'd expect the ARM chips to be more efficient, both because of the transistor size and the ARM CPU not needing to support instructions from the 60s.

I'm very excited to see a new competitor in the CPU game and I'm a big fan of ARM; hopefully we will see further disruption in the PC space too.

Imagine a 128 core desktop CPU. It will be interesting to see how software has to evolve in order to maximise utilization of the quantity of threads.


> [...] and the ARM CPU not needing to support instructions from the 60s.

Apparently, those weird instructions only take up a teeny-tiny amount of space on the CPU and are basically implemented in microcode.

To be backwards compatible those instructions just need to be supported, but they don't need to be fast. So they are implemented such that they don't get in the way much.


Is it really a competitor given they don't sell their chips? For Intel maybe, given they lost the Apple market, but it doesn't look like it changed anything for the other players.

There have been a lot of comments about how ARM is going to revolutionise everything, but that doesn't seem to be the case for the chips you can actually buy (e.g. Qualcomm vs AMD). Honestly, I don't see the M1 as a statement about how good ARM is compared to x64, but about how good Apple is compared to the rest of the field.


> It will be interesting to see how software has to evolve in order to maximise utilization of the quantity of threads.

Not only this, but the PHP/Python/Javascript kiddos will need to learn:

1. Languages that get closer to the metal.
2. How to utilize those languages to make their applications aware of data locality, temporality, and branching.

The bottleneck of the future is memory and pipeline stalls. Spraying objects all over the heap is one of the dumbest things you can do with modern processors, but developers do it all the time. This needs to get fixed.
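To make the data-locality point concrete, here's a minimal C sketch (my own toy example, not anything from the article's benchmarks; sizes and results will vary by machine). It sums the same values twice, once from a contiguous array and once by chasing pointers through heap nodes linked in shuffled order, and the pointer-chasing pass is typically several times slower purely because of cache misses:

    /* Toy comparison: contiguous array vs pointer-chasing through the heap. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 10000000

    struct node { long value; struct node *next; };

    int main(void) {
        long *arr = malloc(N * sizeof *arr);              /* contiguous layout */
        struct node **nodes = malloc(N * sizeof *nodes);  /* one allocation per value */
        for (long i = 0; i < N; i++) {
            arr[i] = i;
            nodes[i] = malloc(sizeof **nodes);
            nodes[i]->value = i;
        }
        /* Crude shuffle so the linked list hops around the heap instead of
         * following allocation order. */
        srand(42);
        for (long i = N - 1; i > 0; i--) {
            long j = rand() % (i + 1);
            struct node *tmp = nodes[i]; nodes[i] = nodes[j]; nodes[j] = tmp;
        }
        for (long i = 0; i < N - 1; i++) nodes[i]->next = nodes[i + 1];
        nodes[N - 1]->next = NULL;

        clock_t t0 = clock();
        long sum1 = 0;
        for (long i = 0; i < N; i++) sum1 += arr[i];      /* prefetch-friendly */
        clock_t t1 = clock();
        long sum2 = 0;
        for (struct node *n = nodes[0]; n; n = n->next)   /* cache miss per hop */
            sum2 += n->value;
        clock_t t2 = clock();

        printf("array:   sum=%ld  %.3fs\n", sum1, (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("pointer: sum=%ld  %.3fs\n", sum2, (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;  /* allocations intentionally leaked; the OS reclaims them */
    }

The two loops do identical arithmetic; the only difference is memory layout, which is exactly the kind of thing that matters once memory and pipeline stalls are the bottleneck.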


I think I might have been misunderstood here. I sort of feared I would be, as I have some experience in this regard. I don't really have an opinion on what is nicer, and I also don't mind people liking their stuff; if anything it makes me happy to know.

I also agree that it is weird to compare it to a desktop. In fact, my larger point is that I feel these benchmarks, and subsequent comment discussions, tend to imply this comparison. And, I don't see the harm in then showing what those numbers would be. If power consumption isn't important, I sure would like to know.


So for development in various ecosystems, would you say the M-based Macs are superior to the Intel-based ones in terms of efficiency and performance?


I believe it's often on HN because there hasn't been a major "jump" in computing/processing power like this in over a decade. I do remember when every year's PC was significantly faster, and I yearn for those years, as probably many here do, and see this as a very exciting moment.

I only use a laptop, like many others, and I believe it's very fair to compare the M1 vs last-generation Intel. Why? Because both are Apple computers (few differences in screen brightness, colors, etc), same OS (few differences in optimization, stability, support, etc) and similar constraints from being a laptop (thermal, power, etc). Intel's year-over-year improvements are minimal, so "last year" Intel vs "this year" M1 is a fair comparison. It won't be so when we get to the M3+, but even the M2 would be fair-ish game, and you gain more from comparing an Intel Apple computer vs an Apple M2 than from comparing it with a totally different Windows machine.


Doesn't that imply you're limiting yourself to only Apple hardware? That seems like the logical conclusion if the M1 is this "jump" in computational power, yet is three times slower in the very same test I ran.


I’m going to guess your desktop isn’t running on a 15 watt SoC. If it were, then we’d have something to talk about


Is it a good point though? I mean, I've already said that in my situation, I do not care about power consumption. I'm not saying this isn't a good feature of the M1; it is just irrelevant when you're plugged in and you can dissipate the heat. Both of these are the case when I work, so I would consider it to have no downsides, with a 3x speedup. Likewise, my situation is not relevant for those who have such power constraints that they need to work on 10-hour-long train rides. I'm not sure what is so difficult about accepting these constraints, since most replies so far to my point of "if power consumption isn't relevant to you" have been "well, the power consumption is much better".

The point here was the nostalgic "I miss the days when computer performance increased noticeably", which apparently hasn't happened for a while, until the M1 came along. If it is performance you're after, the only way that sentiment could hold true is if you were limiting yourself to the rather poor performance of older Apple hardware. Just think about it... if a 1.33x speedup gives this feeling, imagine what a 4x speedup would feel like.

So, it's not that I think the M1 is bad. It's that these posts keep popping up, week after week, on Hacker News, going to some lengths to show how the M1 is 30% faster at some heavy workload than older, much worse Intel processors. It just seems strange.


That's a good point. Yet for how long can the M1 sustain maximum performance, even with fans? And at what temperatures?

Regardless, it's good to see some innovation--no matter how much hype is involved.


I'm no expert, but with a 15W power draw the answer to the "how long" question is likely to be "perpetually" with active cooling. Now, Apple tends to let its processors run very hot before throttling (I had an old i5 MacBook which easily hit 98C), but again, a 15W power draw means there isn't all that much energy to push into whatever heat sink or cooling solution they want to use. (In the real world it's rather more complicated, with different load intensity on different parts of the SoC, but still)


No, I'm just saying that if you want to compare apples to apples (only the processor), well Apple just did that and changed mainly the processor, so if we compare two Apple machines with different processors we can see the jump in performance much better.

I'd be very happy to see comparisons of other ultrabooks with the M1, but AFAIK everything that "beats" the M1 is a desktop workstation, not an ultrabook.


Your logic doesn't follow. You start out by saying "no", but what you immediately follow up with is the answer "yes" to my question.


True all-day battery life while holding its own against Intel desktop-class perf is super exciting. (I'm aware that some Intel chips go faster, but see e.g. [1] -- not near the price point.)

You could get that battery life before, but not without tradeoffs you really felt.

I can barely feel the perf difference between my 2015 mbp and my 2019; I'm dorkily excited to get a new laptop and actually feel a perf bump. It kind of feels like the 90s again.

And separately excited for what happens when Apple makes an m2 or whatever with more thermal budget to compete in the desktop space.

[1] https://techjourneyman.com/blog/apple-m1-vs-intel-core-i9/


I remember when the Xbox and PS2 were out. I never got into console gaming, so I was very much on the sidelines. From that perspective it was amazing how passionate people got about a consumer product that realistically should form no part of anyone’s identity. Since then I’ve seen the same pattern over and over again. I don’t really get it. Kudos to modern marketing, I guess.


I dunno about the marketing, but I for one was excited about the first consumer devices capable of realtime, immersive 3D worlds. I played a lot of 16-player Halo in college classrooms on big-screen projectors. It was exciting.


I guess maybe the point I was trying to make wasn’t explicit enough. Getting excited about either one, understandable. Getting into serious and sustained arguments about which one was better like they were political parties or religions, not as understandable.


More understandable for anyone who was around for the virtually identical Sega vs Nintendo wars of the 80s and 90s.


I think it’s partly down to money.

If you’re on a tight budget and can only afford one, you don’t want to feel as though you’ve made the wrong choice and are missing out.

If you can afford to buy both then it’s not as big of a deal.


What Apple has proven with M1 and Rosetta is you can actually quite elegantly support legacy x86 programs, while leaving little on the table, and make use of all of the advantages of ARM/RISC based architectures.

I'm interested to see whether AMD/Intel will respond with their own beefed-up desktop-profile RISC/ARM processors.


I wouldn't call x86-64 all that legacy (it's like, recently legacy) and that's all Rosetta supports. But performance is not that bad even if you stack something like WINE on top to handle i386 programs.


There’s videos on YouTube of people playing Windows games (like GTAV) through parallels via Rosetta 2.

Performance isn’t amazing, but it blows my mind that it works at all!


Is that really Rosetta and not qemu? I didn’t think it supported VMs.


It may be an 'older model' but it's only from 2019; it's only one year behind the current 16" MBP. So I think comparing a 2-year-old top-of-the-line, maxed-out MBP to a current lowest-end, entry-level MBP, and seeing the low-end model blow away the high-end maxed-out beast, is a reasonable thing to be impressed by.


There are two important things here:

1. If the smallest laptop chips are this efficient, what do you think a full-fat desktop chip will look like?
2. I think desktop/workstation is the only category where energy usage isn’t that important; it’s probably less than $100 per year with our usage patterns (don’t quote me on that; a rough check is sketched below). But for server and laptop it’s very important (for totally different reasons).

(Realistically it’s probably a max-performance vs efficiency trade-off, but that may be in part because we’ve not seen any large Apple chips, so we don’t know how it scales.)
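A rough back-of-the-envelope check of that sub-$100 figure; the wattage, daily hours, and electricity price are assumptions of mine, not measurements:

    /* Rough annual electricity cost for a desktop CPU under load.
     * All three inputs are assumed numbers, not figures from the thread. */
    #include <stdio.h>

    int main(void) {
        double watts = 125.0;        /* assumed package power under load */
        double hours_per_day = 8.0;  /* assumed usage */
        double usd_per_kwh = 0.15;   /* assumed electricity price */

        double kwh_per_year = watts / 1000.0 * hours_per_day * 365.0;
        printf("%.0f kWh/year -> $%.0f/year\n",
               kwh_per_year, kwh_per_year * usd_per_kwh);
        return 0;  /* ~365 kWh, roughly $55/year with these numbers */
    }

With those made-up inputs the desktop lands around $55 a year, consistent with the "less than $100" guess; a laptop part sipping 15 W would be roughly an order of magnitude less.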


Well, we know that AMD's chips - which use the next largest TSMC node compared to Apple's - have a lot of diminishing returns from increasing the available power envelope above M1-like ultraportable levels, at least in the kinds of core counts you typically find in consumer desktops. So it's possible that even if Apple did ship a full desktop chip in a system with proper cooling, it wouldn't be that much faster. (It's also likely that they just won't bother, Apple being Apple.)


I don't think you realize how far ahead the M1 actually is... it's about processing power per watt. The desktop you used actually 1) has a fan, 2) is probably consuming far more than 15 watts, and 3) would burn my lap if it were anywhere near it.

The implications are huge. Think about the competitive advantage a data center that operated with no cooling requirements and much lower power requirements would have if it got the same (or better!) performance as an equivalent Intel-based one.

It's not just faster, it's faster than equivalent parts AND way more efficient.


The disconnect here is exactly what I'm referring to in my post. I'm at a loss for words, especially considering I've already addressed your main point. Also, what do you mean by "not just faster"? What are "equivalent parts"?


When they make the desktop pro version it's going to slap. This is a 15-watt part! You may not care about efficiency, so then wait and see what the desktop part looks like.


It's generally not that simple - you can't scale a 15W part to 150W and get 10x the performance; there are other factors at play.


With the M1 chip you pretty much can; the heat is low enough that you can just add more transistors to the die and give it more juice and it will fly. It seems the days of Intel are numbered; not even i9 processors can compete against it now.


Whatever you tested most probably wasn't compiled for the M1, or was just a rare case. Until the M1 I hadn't bought a single Apple product. But the M1 is clearly superior. You feel the difference clearly just by browsing the web. There isn't even a need to test anything; anyone who has used a decent Intel/AMD laptop and an M1 laptop within a short period of time will notice the difference.


Same for me. I was interested in macOS, but 2k bucks is what was stopping me all that time :)

In 2019 I bought a ThinkPad T490 (for nearly $1k); after my Dell N5110 it was just wow. Speed, touchpad. And then two weeks ago I decided to give the MacBook Air a try and bought the minimal 8/256 config.

And this machine is slightly faster (IMO something like 2x at indexing my PhpStorm project) than my T490 with an i5-8265U and 24/256, BUT it's silent and cold, and the battery life: after a full 10h working day I have 30-40% left. And I was shocked that 8 gigs is enough for me; on the ThinkPad I used at least 10-12 gigs for my work.

I just bought the minimum to try macOS and planned to either switch to the 16 gig version or go back to the ThinkPad, but now I can use it for at least the next couple of years.

No regrets from switching


I understand where you're coming from, but there also seems to be a weird section of the audience that, for whatever reason, doesn't like to compare apples to apples.

I wouldn't expect large numbers of benchmarks of Intel's new mobile chip against AMD's fastest desktop chip, so I don't know why people find it unusual in this particular circumstance.


I’m usually already committed to macOS and want to know if the upgrade is “worth it”. But, for this, I’m more interested in perceived performance than real performance and, generally, the non-M1 post-2015 or so MacBooks feel slower than my older MacBooks ever did.


These benchmarks don’t threaten other ecosystems and show a strong value proposition for others who own the older, hot and noisy MacBooks. Especially in its form factor. Especially with its battery life. What’s wrong with that?


The best metric would be the power-to-cost ratio. If you're compiling faster than the M1, how much more or less does your build cost in comparison?


Note: 30% faster than a thermally challenged i9 on a MacBook Pro, not a desktop one. Given the comments on similar threads, I feel this needs to be mentioned.


To go another layer deeper in analysis though, it is still >20% faster on the thermally throttled M1 MacBook Air. That's a laptop without a fan, and it's still faster than an i9 with a fan.


This right here. I was so skeptical of getting an Air. And yes, during heavy compile sessions with make -j8, it can hang after a half hour or so. But (a) you can run make -j7 instead, and (b) it’s impressive how long it lasts without hitting that point.

I’ve been thinking of doing the cooling mod too, where you pop open the back and add a thermal pad to the CPU. It increases conductivity with the back of the case, letting you roast your legs while you work, aka cool the CPU. :)


Do any of the laptop cooler systems with fans help the M1 Air thermal issues? I used one on an older 2011 MBP, and it definitely helped that laptop. It might have just been a placebo of getting the laptop off of a flat surface to allow air to circulate around it, but the fans could only help in that.


Just for your information, the thermal pad means a hotter chassis (+10 degrees Celsius in a YouTube test) and a hotter battery. Just be aware :) Good luck


> And yes, during heavy compile sessions with make -j8, it can hang after a half hour or so

You bought a computer that crashes reliably when used "heavily"? I mean, if you heard that same statement from an individual PC user in past decades, you'd roll your eyes about the ridiculous overclockers and go back to work on your slightly slower but actually reliable development machine.

But this isn't a too-l33t-for-her-own-good overclocker. It's Apple Computer! Jobs is clearly still directing the reality distortion effect from the grave.


My 32-core Threadripper hangs reliably if I run make -j64 and it starts to page out. Buckling under load isn't a phenomenon reserved for new hardware.


I assume he means it steps down in clock due to thermal pressure… not that it literally hangs (stops working).


Well, depends on your definition of "literally hang." It does indeed literally hang (i.e. system becomes unusable, activity monitor impossible to open) for minutes on end if you're not careful; a fact I was somewhat dismayed to discover.

But it's rare enough that I just don't care much. The performance in other areas is too good to gripe about "if I run make -j8 for an hour, it'll hang for a few minutes."


Geez. It ran 11% slower when it had to throttle a bit. That is not crashing. The only reality distortion occurring is a misreading of the article and overstatement of the effects of the reduced cooling in the Air.


Statement upthread, and confirmed in another comment, is about a system that really does "hang" when presented with heavy load. I'm responding to the commenter I quoted, not the linked article.


My misunderstanding. Thanks for the clarification.


It's running out of memory and that's when the memory pressure becomes notable enough to affect the terminal app. The same would happen on any other system - in fact it'll become unusable more quickly on Intel.


That seems unlikely. In fact kernel builds don't really produce much memory pressure, no more than 1-200 MB per parallel make process in general. (The final link takes more, but that doesn't really parallelize anyway.)


I don't know about kernel builds, but memory pressure is the usual limiting factor for macOS/iOS builds when you step up parallelization like this.

Another possibility is system bugs that can be exposed when it thermal limits because it will stop running background QoS tasks, and things waiting on them will hang. It's pretty hard to stress it to that level.

I have done both these things by building compilers on M1. The memory pressure wasn't always from the host compiler but rather tools like dsymutil that may not be perfectly optimized themselves.


Limiting factor for being able to use the machine that is, not for compile time.


Apple seem to have actively chosen to skimp on memory capacity, so this seems very much like something that's on them.


I would say it's unreasonable to expect 'make -j8' to perform well on a low-end laptop (the M1 is a low-end CPU), and that the system should be picking the -j number for you.
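A minimal sketch of what "picking the -j number for you" could look like; this is just an illustration of the idea (derive the job count from the online CPUs and leave one core free), not anything make or macOS actually does:

    /* Hypothetical wrapper: choose make's -j from the online CPU count. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* online CPUs (Linux/macOS) */
        if (cores < 2) cores = 2;
        long jobs = cores - 1;                       /* leave headroom for the UI */

        char cmd[64];
        snprintf(cmd, sizeof cmd, "make -j%ld", jobs);
        printf("running: %s\n", cmd);
        return system(cmd);
    }

On an M1 (8 cores) this would run make -j7, which happens to match the workaround mentioned upthread; a smarter version could also weigh efficiency cores differently or watch thermal state.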


there should be a term like “reality distortion field” that describes whatever it is that gets people to come out of the woodwork and nit pick acceptable performance compromises in Apple products


To be fair, a fan takes away from potential heatsink space and eliminates contact from heatsink to shell. A fan-based cooling system can actually be worse than a fanless setup... especially if they're the MacBooks with constrained intakes (virtually all of them).

https://forums.macrumors.com/threads/macbook-pro-17-my-cooli...

https://hackaday.com/2017/12/14/cncd-macbook-breathes-easy/

https://www.ifixit.com/News/6882/why-i-drilled-holes-in-my-m...


I will never understand why Intel stuck with the i3, i5, i7, and later i9 branding across so many generations.

I’ve lost track of how many times I’ve heard people wonder why their 10-year-old computer is slow. “But I have an i7!”


Exactly. They should have named them Air, Pro and iMac. That way you clearly can see which is fastest and which year they came out.


It seems pretty common for people who own Apple products to say "2013 MacBook Air", and that is easy to find on the device or in the software. Apple makes it pretty clear newer is better. Intel has a bunch of lakes and numbers, and an easier way to tell would be nice all around. Though Apple has fewer SKUs to differentiate, making it much easier for them.


Intel would be a lot more motivated to show you which products were newer if their newer products were actually better.


It's pretty common for people who own Intel CPUs to say 5th gen or 9th gen, or to easily find on the device or in the software the name of the CPU, like i5-4670K or i9-9900K.


I think GP is talking about a branding problem though. Non-tech-savvy people may not realise that the i7 in that new computer at the store is different from the i7 written on the front of their PC.

For years CPUs had numbers that went up: 286, 386, 486, Pentium.

After that, it was MHz and GHz that people used to rate a CPU.

But none of those things are as relevant as they used to be.

So what number goes up? i3, i5, i7, i9. That's what Intel is telling us. That's the number they want us to see.


So which one in the following is fastest: i7-960, i5-2400K, or i3-1115G4? And where does Pentium Gold 7505 fit in?


Except for the Pentium, I can tell the age from the number. The i3 is probably the fastest, but since stuff uses less power these days, there's a small chance the old i5 is faster. I'd look up benchmarks if I were about to make a purchase. Is the Pentium equivalent in age to a 7th-gen Core i, or is their numbering different?


Now consider that you are an average person, not so tech-savvy, don't really follow CPU industry news.

How would you differentiate?


These aren't realistic choices unless you are buying a used PC. But even less savvy consumers understand generations; if you present a fast 1st-gen model versus a not-the-fastest 11th-gen model, I think the majority will go with the newer model.


So which of these is faster: MacBook Air or MacBook Air?


Those are all fast (to the extent it matters to their marketing).


Even on macOS, when you go to "About This Mac" -> "System Report", you only see the iBranding and not the generation (I see "6-Core Intel Core i7" on mine, for example).


from about this mac:

> MacBook Pro (Retina, 15-inch, Mid 2015)

> Processor 2.8 GHz Quad-Core Intel Core i7

pretty clear there


But the Mac model year doesn’t always correspond with the Intel processor year/generation.


Thank you for saying this.

Honestly - it's not even the i3, i5, i7, i9 thing. It's the fact that two i5s, etc., can be ludicrously different in terms of performance from one another because of the sub-generations within the named generations.

Yes - it's ridiculous that I could buy an i7 ten years ago, buy an i7 today, and yet - of course - they are absolutely nothing close to each other in terms of performance.

IIRC the Pentium line did not make this mistake. (Though the Celeron line could be very confusing, if I recall correctly.)


To play devil’s advocate, I can buy a Corvette today that is nothing like the one from ten years ago too.

In fact, lots of things are like this.


Almost everyone knows that the model year is part of the, idk, "minimal tuple" for identifying vehicles, though, and you can count on it always appearing in e.g. advertisements.

In CPU land, the architecture codename or process node might be part of such a "minimal tuple" but these are frequently omitted by OEMs and retailers in their advertising materials.


2009 C6 Corvette ZR1: 0-60 in 3.3 seconds
2019 C7 Corvette ZR1: 0-60 in 3.2 seconds


I have always hated 0-60 because it depends greatly on transmission shift points, driver ability, rear end ratio, and most of all traction, which OEM street tires have little of.

Let’s go to a full quarter mile, which shows a bigger difference, but again not a huge one.

2009: 11.2 seconds/130.5 mph

2019: 10.6-seconds/134 mph

A better comparison would be Ford Mustang GTs from those same years. Massive improvement.


It's a crappy metric. What about:

Time to stop from 60 mph?

Effective turning radius at 60 mph?

Interior noise in decibels at 60 mph?

Maximum shock absorption at 60 mph?


Nürburgring times.

Or 0-100-0.


>> Nürburgring times.

7:06 vs 7:24 for the old car. Although in 'segmented' times it had gone something like 6:54.


The point is that people think the numeral in the brand is something like a version number in which larger numerals are better. I.e., an i7 is always better than an i5 when in fact a new i5 might exceed the performance of a dated i7 for some particular metric.


Or frankly one new i5 might be more powerful than a new i7 if one's a desktop or H series and the other is a Gx or U series.


Corvettes like all cars are identified by their model year. Hence there is no confusion that a 2021 Corvette is "better" than a 2011 Corvette.


Are the '21 models better than an '11? I don't think anyone would say they'd rather have a '21 than a '69.


I do love collecting my old classics, but every time I'm the one driving my wife's Civic, I think, huh, everything about modern cars is better in every single way.


Including pervasive tracking of driving habits and who you associate with, sold to the highest bidder. :-(


Depends on whether I have a garage and a team to keep it running well, and whether I'm taking it for a nice drive or just showing it off in a museum... The long, swooping '69 is iconic, but there are lots of things to prefer about a modern model.


In terms of speed, definitely - the '69 Corvette's zero-to-60 mph time was about 7 seconds, the '11 Corvette took about 4 seconds, and the '21 Corvette takes under three seconds. Sustained performance, maximum speed, etc. has also improved.


They would if they were looking for top speed.


This is valid since Intel's naming scheme is influenced by BMW.


I can't wait for Intel to release an i7 variant as the i8 Gran Coupé.


How much does the top speed differ?


https://media.gm.com/media/us/en/gm/news.detail.html/content...

About 10 years ago. 7:19.63 (ZR1 / i9)

https://www.automobilemag.com/news/2020-chevrolet-corvette-s...

About a year ago 7:29.90 (base model / i3)

Better measure than top speed anyway


More important than top speed, 0-60 went from 4.2 to 2.9 seconds in that time. So the analogy stands


Then there's the Acer way of naming products like "XR382CQK bmijqphuzx" so that each one is unique. I like Intel's clearly defined 9 > 7 > 5 > 3. However, I do wish that Intel made the model generation part of their marketing material so that retailers and OEMs would be forced to include that information in their product marketing too. Intel i7.2019, for example.


> Then there's the Acer way of naming products like "XR382CQK bmijqphuzx"

This take is deceitful. Acer typically uses a very simple scheme to define their products, which goes something like "Acer <product line> <model>".

The scheme "XR382CQK bmijqphuzx" is more akin to a product ID, which goes way beyond make and model and specifies exactly which hardware combination is shipped in a particular product.

Complaining that Acer names its products like "XR382CQK bmijqphuzx" makes as much sense as complaining that Apple names its laptops like MVFM2xx/A, which Apple uses for the same effect.


The model number is placed at the beginning of the title. Or do you argue that we should call it "37.5” UltraWide QHD (3840 x 1600) AMD FreeSync Monitor (USB 3.1 Type-C Port, Display Port, HDMI Port)"?

https://www.amazon.com/Acer-XR382CQK-bmijqphuzx-UltraWide-Fr...


WTF, XR382CQK bmijqphuzx is really the model of an Acer monitor?


This is not at all how monitor marketing works. Every company gives them insane nonsense names. There is no other identifier. For XR382CQK bmijqphuzx there is no product line or model other than "monitor". That's all you get.


> IIRC the Pentium line did not make this mistake. (Though the Celeron line could be very confusing, if I recall correctly.)

Current Pentium and Celeron chips can either be mobile variants of the Core line or... Atoms.

I have a Celeron N4100 in my cheap netbook. It's listed as "Gemini Lake" in Intel ARK, which is a rebranding of its original lineage... Goldmont. (Goldmont Plus)

https://en.wikipedia.org/wiki/List_of_Intel_Celeron_micropro...

If you were to look at other Gemini Lake cores... You'll see that "Pentium Silver" also falls into that group.

https://ark.intel.com/content/www/us/en/ark/products/codenam...

TLDR: Intel has made it nearly impossible for anyone to know if they're getting Core-based or Atom-based mobile chips unless significant research is done beforehand.

This is not to say avoid Atoms at all costs. The N4100 I have is reasonably on par with the Core 2 Quad Q6600, which was quite the beast of a chip back in its day.

The current Atoms are nothing like the in-order-execution-only monstrosities they originally launched as... but I still find Intel's branding a bit more than confusing, if not incredibly deceitful.


Instead they could have just called them i2017, i2018... going with the year of manufacturing. That way it is possible to make some sense of performance, with the understanding that iN is always better than i(N-1).


Year of manufacturing says nothing; you can have two different gens manufactured in the same year, one for a lower price tier and the other for a higher one. Just like Apple still produces older iPhones; same thing.

Instead, you have designations like "Intel Core i7 8565U Whiskey Lake" or "Intel Core i7 10510U Comet Lake". The first one is 8th generation (=8xxx), the second one is 10th generation (10xxx, but the 14nm one, not the 10nm "Ice Lake"), and most OEMs do put these into their marketing materials and they are on their respective web shops (these two specifically were copied from two different ThinkPad X1 Carbon models).


That gives you the opposite problem, where someone gets a brand new dual core and is confused by it being slower.


Best is to give them a number that approximately maps to performance.

The "pro" version might be an i8, while the budget version is an i3. In a few years' time, the pro version will be up to i12 while the budget version is now i8.

You have model numbers for when someone needs to look up some super specific detail.


Right. The issue I have with this is that we have i7 models that are from 5 and 10 years ago with vastly different performance due to their generation. If it was more iterative it would make more sense.


So that OEMs can keep selling their 5 year old chip designs at the same margins while their manufacturing costs plummet.


But it's a 9.21 Jigawatt, 11 GHz, i21 processor!?

And its cycle- and power-efficiency figures are?

Only the frequency wars ended; the marketing and feature wars continue the prohibition on computer-architecture enlightenment unabated.


To be fair, it's probably not the CPU that's making their computer feel slow. Most likely, the lack of SSD and small amount of RAM.


No joke. I still daily use a 2008 MacBook Pro (with a Core 2 Duo CPU), with an SSD and 8GB of RAM, and that machine is perfectly fine for what I need it for - browsing the web, replying to emails, listening to Spotify, watching YouTube/Netflix. And people have bought a 2019 iMac with a normal 1TB 5400rpm HDD (!!!!!) and complain that it's slow. Yeah, of course it is, but it has nothing to do with the CPU in there.


I gave a responsible, sane but poor elderly homeless guy I was acquainted with a MacBook Pro (13-inch, Mid 2012), chargers, and a canoeing dry bag. The MBP was originally a base 4 GiB and 500 GB HDD that, at some point, I upgraded to 16 GiB and a 480 GB SSD (OWC, when it was $1.5k) + the 500 GB HDD in place of the optical drive.

OSes, platform toolchains, and apps need to prioritize UX (UI response time and time-to-first-window) to lessen the perception and the reality of waiting on apps doing unimportant activities rather than appearing and beginning interaction.


Yep. It helps that Intel has dropped the ball in the last 5 years or so. And tech-savvy users are used to watching CPU/RAM usage, which helps a lot too.

My wife had a “my computer is slow” issue the other day. Fans blowing like crazy, extremely hot, battery draining fast. Turns out, lsof was stuck on some corrupt preferences file. Killed the process and file and all was fine. Regular people lack the tools to diagnose the problem and just deal with it, restart or buy a new one. We really should make diagnosing easier.


Replace Geek Squad with a very small (AI/ML) script.

No, seriously. Once the root cause is found, humanize the description of the problem and ask the user what they want to do about it.


Have you seen the software the Apple Genius Bar uses to troubleshoot problems? It’s all scripted out with lots of automation.


Actually, no, I never use them.

I bet they would to reduce labor costs.

Desktop Reliability Engineering ;-)


https://youtu.be/H7PJ1oeEyGg

https://web.archive.org/web/20110703091817/http://buyafuckin...

The memory hierarchy of a modern CPU with spinning rust might as well be the Parker Solar Probe waiting on a sea anemone.

I am at a loss as to how someone these days can buy a laptop with spinning rust and 4 GiB and then complain it's "slow." Maybe they should've bought a real laptop for real work that's repairable and upgradable?


>repairable and upgradable

Unfortunately, the best ones aren't anymore.


Those aren't the best ones then.


The marketed "best" (most expensive / newest) isn't the best for most purposes.

I bought a T480 with dual batteries that does run for 10 hours. It has a 2 TiB SSD and 32 GiB. Works fine for me. Water-resistant, repairable, awesome keyboard, and MIL-SPEC drop rated too. Optional magnesium display top assembly cover.


Combined with generation it says plenty. It's unfortunate when the public doesn't understand. I'm not sure what should be done.

It is worth noting that if you think GPU model numbers are fine, CPUs are actually very similar. There's the 2600K, 3770K, etc. (Sandy Bridge and Ivy Bridge CPUs, respectively). The first number goes up each generation and the rest is how powerful it is within the generation. Similar to Nvidia having a GTX 980 and later a GTX 1080.


I've seen the generation mentioned first in almost all places. For example https://www.dell.com/en-uk/shop/laptops/xps-15-laptop/spd/xp... lists "9th generation Intel core i5". How come people pay attention to the other number more?


They pay more attention to the other number because that's how the English language works. "9th generation" is a descriptive prefix to the actual CPU model name.

I don't get why not a single brand other than Apple gets product names remotely right. Every time a relative asks me for a laptop recommendation and shows me a list of models I have absolutely no clue what to tell them. All the model numbers look like they came from a password generator and the only discernible difference without hours of looking at spec sheets is the price.


PlayStation. The name is brilliant and they’ve incremented the index each release since the first about 25 years ago.

(I know it’s a different product category. It’s relevant because it’s name and number is perfect)


The most cryptic product number for a consumer product I've ever known is Acer's monitors, like XF270HBbmiiprx.


I get your point and it's only 5 years but my i5-7300HQ and i7-3520M were pretty comparable due to the number of cores, despite what pure benchmark numbers would say (and I'd actually prefer that i7)


Except this is an article about the Apple MacBook Pro 16", which came out approx. 1 year ago (edit: 1 and a half years ago).


It’s a problem of Intel's own making - marketing vastly different capabilities under the same brand in order to segment the market.


So the MacBook Air that came out in 2008 is as fast as the one that came out in 2020?


or Apple doing a bad job with the previous generation


The 2019 MacBook ironically had better heat dissipation than the previous generations, but it's still pretty bad.

We can blame Apple for using chips that are too intense for their laptops, and we can blame Intel for making garbage chips that can't really perform in real world cases while spending a decade not bothering to innovate. Apple at least is moving away from Intel as a result of all of this, and I'm really impressed with how well the M1 transition has been going.


Ehh. I take the view that Apple has been intentionally sandbagging their laptops for a while to facilitate an ARM transition.

Not to say that M1 isn't amazing, but I think Apple has been preparing for this for a while and needed to make sure it would succeed even if their ARM CPUs weren't quite as groundbreaking as they turned out to be.


For _five years_?

Or longer, really; while everyone, of course, loves the 2015 MBP, they're mostly thinking of the integrated GPU one; the discrete GPU version was pretty thermally challenged. Arguably Apple's real problem with the post-2015 ones was that Intel stopped making chips with fast integrated GPUs, so Apple put discrete GPUs in all SKUs.


While I don't think it's a likely theory, 5 years from "we should do it", to design, to testing, to preparing fabrication at scale, to design of the system around it, etc doesn't sound unreasonable. I would be really surprised if Apple decided on the ARM migration after 2015.


Are there any benchmarks you can point to that have a similarly spec'd laptop (ideally similar size & weight too) that would show that Apple is sandbagging?


Possibility 1: Apple was making do with what Intel gave them, because their profit margins didn't care and they were busy navel-gazing into their post-Jobs soul

Possibility 2: Apple had a master plan to intentionally torpedo performance in order to make their future first-party chips appear more competitive


Apple could have had good thermals with what Intel gave them; they just didn't, because they seem to value "as thin and light as possible" as opposed to "pretty thin and light, and with sane thermals". Even then they seem to make straight-up poor decisions around cooling a lot of the time, even when doing the right thing wouldn't affect size/weight. Do they just not have engineers on staff who are good at this, or something?

It's absolutely possible to do high end Intel in a compact laptop with relatively good thermals - look at thin Windows gaming laptops for plenty of examples of this (and these have way beefier GPUs than macbooks too).


It's worth noting the engineers actually seemed to get management to allow the 2019 model to be _thicker_ with more thermal mass than the earlier models. But in a sense, Intel's chips have never been amazing for laptops.

The power draw drains batteries, and the heat is... very annoying at best. I have used a couple ThinkPads and a Dell XPS over the past 5 years too, and all of them had the same issues where they'd constantly be pushing out very hot air, and the battery life was never more than 4-6 hours.


What Intel supplied was the bigger problem, but Apple was definitely not trying to make the chips perform well. They were hitting thermal limits constantly, and, more directly toward "sandbagging", the recent MacBook Airs have a CPU heat sink that isn't connected to anything and gets no direct airflow. They could easily have fit a small array of fins that the fan blows over, but chose not to.


I've seen the video that makes that claim but it's just plain wrong; if you look at an Intel Air, the whole main board, complete with heat sinks, lives in a duct formed by the rear section of the chassis.


Some air goes over it but it's not enough with the tiny surface area. A proper block of fins would have done so much more while still being small and light.


If it’s that easy, do you have an example of another vendor doing it? It seems like Intel would love to highlight, say, Lenovo or HP to say Apple was cooling it wrong.


Almost every laptop in the world connects a block of fins to the CPU with a heat pipe.

I went to ifixit and clicked the very first laptop teardown I saw, it has one. https://guide-images.cdn.ifixit.com/igi/gnBU5dVYrIhInIXV.med...

Or, let's see, razer blade stealth, that's a 3 pound laptop. https://guide-images.cdn.ifixit.com/igi/nHPaankElOWQndsv.ful...

What's a light 15 inch laptop, XPS 15 7590? https://i0.wp.com/laptopmedia.com/wp-content/uploads/2020/01...

Acer swift 5 SF515, "lightest 15 inch laptop on the market" https://i1.wp.com/laptopmedia.com/wp-content/uploads/2019/01...

LG Gram https://guide-images.cdn.ifixit.com/igi/fFlFJB3PKThJs3TA.hug...


Apple is actually just really terrible at thermal design, simple as that. It's surprising, because you'd kinda expect them to at least be slightly competent (like they are in other areas), but they're just not.

They've gotten around it with the M1 by building an entirely new chip that's power-efficient enough that it practically doesn't need good thermal design to perform near its peak. It's an impressive technical accomplishment, but it's also hilarious that Apple had to go this far to cool a chip properly.


Yes, we knew that design was common - that's why that PC fan made the video picking it as a point of criticism - but that doesn't tell us whether it's as big a deal as is being claimed. That would require some benchmarks running for long enough to see substantially greater thermal throttling, higher CPU core temperatures, etc.


> I take the view that Apple has been intentionally sandbagging their laptops for a while to facilitate an ARM transition

That's just not a defensible position to take.

Intel's inability to execute a node transition has led to a situation where for years their only way to increase performance has come at the cost of major increases to power and heat.

>Whilst in the past 5 years Intel has managed to increase their best single-thread performance by about 28%, Apple has managed to improve their designs by 198%, or 2.98x (let’s call it 3x) the performance of the Apple A9 of late 2015.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


Compared to other amazing Intel laptops of similar form factor? All Intel laptops are insanely loud and generate tons of heat for any reasonable performance level. They are just generation(s) behind in process, plus they start from an architecture designed for servers and desktops and cut it down; Apple went the other way, so it's reasonable they will do better on thermals and power consumption.


I bought a 2019 Core i9, top of the line, and plugging a non-16:9-resolution screen into it out of clamshell mode causes the GPU to consume 18W, and basically causes the system to unsustainably heat up until it starts throttling itself.

https://discussions.apple.com/thread/250943808

It's clear they have hit a limit of what they could really do with Intel's top of the line processors from a thermal perspective, with the form factor they want to deliver.

Now I have sort of an expensive paper-weight sitting next to my M1.


> GPU to consume 18W

That was a bug on Apple’s part, not a feature. Case in point: my Kaby Lake Windows laptop's package power draw is less than 2W when plugged into a 1440p display.


Or both


I blame Intel. For years, i9 wasn’t a “beefed up i7” at all; it was a high-end line of CPUs with 10+ cores costing $1000+, meant for professional workstations that didn’t need Xeon’s ECC RAM support. Then suddenly i9 started to also mean “a beefed up i7”.

Maybe they did this in the hope that the i9 HEDT “brand halo” would help them sell more of these consumer CPUs. But if that’s the case, they don’t get to complain when someone posts that an M1 beat an i9 and people accidentally interpret that as “the M1 competes against Intel’s HEDT line”.


It's also a 9th-gen i9, a two-and-a-half-year-old chip.


You know what's interesting: China and Russia have been struggling for years to get something on the level of Intel Westmere. And here comes Apple out of the blue with a proprietary arch and a hardware emulator; Cinebench shows it to be around a Xeon X5650 (Westmere). Easy.

M1X and M2X in the making, too?!


It's not out of the blue and not even surprising.

Look at this from this POV:

- Apple started on custom ARM many years ago.

- Apple isn't really smaller than AMD, and AMD also rewrote and restructured their architecture some years ago.

- Apple hired many highly skilled people with relevant experience (e.g. people who previously worked at Intel).

- Apple uses state-of-the-art TSMC production methods; Intel doesn't, and the "slow" custom chips from China and Russia don't either, since those countries want chips under their control produced with methods under their control. (TSMC's production methods are based on tech not controlled by Taiwan.)

- Apple had a well-controlled, "clean" use case, where they added more support bit by bit. This includes being able to drop hardware 32-bit support and skip any extensions they don't need for their own products, which can make things a lot easier. On the other hand, x86 has a lot of old "stuff" still needing support, and the use cases are much less clear-cut (wrt. things like how many PCIe lanes need to be supported, how much RAM, etc.). This, btw., is not limited to their CPUs but also (especially!) applies to their GPUs and GPU drivers.

So while Apple did a great job, it isn't really that surprising.


> ... (TSMC production methods are based on tech not controlled by Taiwan).

Do you mean "TSMC production methods are based on tech controlled by Taiwan", or "TSMC production methods are based on tech not controlled by China/Russia"?

Update: after reading the thread I think I understand that you meant TSMC production relies on tech/equipment from ASML, which is not controlled by Taiwan.


> On the other hand x86 has a lot of old "stuff" still needing support and use cases are much less clear cut (wrt. thinks like how many PCI lanes need to be supported, how much RAM, etc.).

Good point overall - the performance of Rosetta 2 helped facilitate much of this and - at least from what I've read - does seem to be surprising to folks in the space. So that also helped them out.


> And here comes Apple out of the blue with a proprietary arch and hardware emulator...

Apple has been designing processors and GPUs at least since the iPad 2's tri-core GPU. They're neither coming out of the blue, nor newbies in this game.


Not only does Apple have decades of experience, both as themselves and as PA Semi, they can also probably outspend whatever efforts these countries could muster politically (Russia yes, China probably not, but you get the idea), especially when weighed against their ease of acquiring information.


Also Intrinsity.


>China and Russia have been struggling for years

Sometimes it's hard to figure out all of the things left out of the plans that were stolen. Some engineer saw something not working, looked at the plans, and then noticed where the plans were wrong. The change gets implemented, but the plans don't get updated. Anyone receiving the plans will not have those changes. Be careful of those plans that fall off the back of trucks.


Definitely not out of the blue. This has been a long and steady march.


The team behind the M1 has been doing high-performance chips since they did a PowerPC chip in _2005_, and has been doing custom low-power Apple ARM CPUs for almost 14 years.


Apple have at least 13 years experience.


Not to dismiss the hard work the engineers at Apple put in, but China and Russia haven't poached as many engineers over the years as Apple has.


They also don't use TSMC's sub-10nm nodes.

The whole point behind the effort of China and Russia is to have a CPU under their control, starting with the design but also including the manufacturing.

And TSMC uses technology for their nodes which is not under their control at all.

The amount of expertise China already has and will accumulate in the next decades should not be underestimated.


> And TSMC uses technology for their nodes which is not under their control at all.

Note it's also not under TSMC's control - a lot of fab technology comes from other suppliers like ASML in Europe.


That's correct, China just poaches the tech when it lands on their soil without paying a thing. At least Apple pays those they've poached a salary.


Being thermally challenged is part of the design, huh...


This was a famously bad CPU/cooling design, actually. LOTS of people complained about it at the time. You can place blame on either party according to your personal affiliation, but similar Coffee Lake chips from other vendors with more robust (and, yes, louder) cooling designs were running rings around this particular MacBook.


After having to use an i9 MBP for 1.5 years, my conspiracy theory is that Apple created those purely to emphasize how much better their own processors are.


Also, many MB(|P)s get thermally-challenged and frequency-challenged because their sensors go tits-up.


Why? It’s all the more apples-to-apples as a comparison because the form factor remains the same and thermal limitations are similar between two system.

Why would you want to compare a desktop class i9 with a 10 watt M1 chip?


Because there are a lot of issues with i9s in those form factors that lead to less perf than even an i7 from the same generation.

There was a Linus Tech Tips video the other day about how even current-gen Intel laptops can see better perf from an i7 than an i9. It looks like the i9s simply don't make sense in this thermal form factor and are essentially just overpriced chips for people who want to pay the most to have the biggest number.


The title implies the M1 is always better than every Intel chip, given that the i9 is the best Intel (consumer) chip.

It's been known for years that Apple has been limiting the Intel chips by providing insufficient cooling. I don't overly care about how fast an M1 chip in a MacBook is compared to an Intel chip in a MacBook. I want to know how fast an M1 is compared to a desktop i9 (given Mac minis have M1 chips now), or compared to a properly cooled latest-gen laptop i9.

All this experiment shows is that insufficiently cooled processors perform worse than sufficiently cooled ones. It's a classic example of cherrypicking data. Admittedly, my solution would be different to the article's author. Instead of using a badly cooled laptop to compile stuff, I'd setup a build server running linux.


Oh come on, comparing it to the "best" Intel Mac laptop makes sense when you only care about that form factor/specific model, which is the case for a big part of many industries (software included).


If it's only a matter of cooling, why does the Intel chip also draw 4x the power (not highlighted in the article, but still...)?

If I wanted absolute best compile times I'd go build a threadripper workstation, but I need mobility and can't always rely on an Internet connection.


Well, that's why the M1 is much better than the i9: not strictly because it's a faster chip, but because it's a more efficient chip. Efficiency = less cooling required.

I think part of it is because I don't care about mac computers. I'm interested in how the M1 - a consumer ARM chip - stacks up against the best x86_64 chip, not how it performs in one particular laptop.


Not to a desktop CPU no, but I'd rather see comparisons to laptops that don't have terrible thermal design. And to AMD as well as Intel - they're doing significantly better than Intel in the high power efficiency space afaik.

It just seems like a cherry picked fight - there's more to the laptop market watt/performance wise than 2019 macbooks.


Not sure why you are being downvoted.


> I know that cross-compiling Linux on an Intel X86 CPU isn't necessarily going to be as fast as compiling on an ARM64-native M1 to begin with

Is that true? If so, why? (I don't cross compile much, so it isn't something I've paid attention to).

The architecture the compiler is running on doesn't change what the compiler is doing. It's not like the fact that it's running on ARM64 gives it some special powers to suddenly compile ARM64 instructions better. It's the same compiler code doing the same things and giving the same exact output.
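For reference, the two workloads being compared are roughly the following (a sketch assuming the GNU aarch64 cross toolchain; the article's exact config and make targets may differ):

    # Native arm64 build, e.g. on the M1 inside an arm64 Linux container/VM:
    time make -j8 Image modules dtbs

    # Cross build for arm64 from an x86_64 host -- the same compiler source,
    # just built to run on a different host architecture:
    time make -j8 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image modules dtbs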


No, it's not true. Just a common misconception because people believe it's some sort of emulation.


Some cross-compilation may need some emulation to fold constant expressions. For example if you want to write code using 80 bit floats for x86 and cross-compile on a platform that doesn’t have them, they must be emulated in software. The cost of this feels small but one way to make it more expensive would be also emulating regular double precision floating point arithmetic when cross compiling. Obviously some programs have more constant folding to do during compilation than others.


My understanding is that LLVM already does software emulation of floating point for const evaluation, in order to eliminate any variation due to the host architecture.

https://llvm.org/doxygen/structllvm_1_1APFloatBase.html


Is constant folding going to be a bottleneck? In this particular instance, in the kernel, floating point is going to be fairly rare anyway, and integer constant folding is going to be more or less identical on 64-bit x86 and ARM.


In theory, yeah. In practice, a native compiler may have slightly different target configuration than cross. For example, a cross compiler may default to soft float but native compiler would use hard float if the system it's built on supports it. Basically, ./configure --cross=arm doesn't always produce the same compiler that you get running ./configure on an arm system. As a measurable difference, probably pretty far into the weeds, but benchmarks can be oddly sensitive to such differences.
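One way to spot that kind of configuration drift (a sketch; the cross toolchain name is just an example, and what gets printed depends entirely on how each compiler was built):

    # Compare how each compiler was configured:
    gcc -v 2>&1 | grep 'Configured with'
    aarch64-linux-gnu-gcc -v 2>&1 | grep 'Configured with'

    # Compare effective target defaults, e.g. float ABI / FPU settings:
    aarch64-linux-gnu-gcc -Q --help=target | grep -iE 'float|fpu'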


there's no reason for a cross-compiler to be slower than a native compiler.

if your compiler binary is compiled for architecture A and emits code for an architecture B, it's going to perform the same as a compiler compiled for an architecture A and emitting code for the same architecture A.


Well there's one. If people tend to compile natively much more often than cross-compile, then it would make sense to spend time optimizing what benefits users.


Yes, but you would probably make those optimizations in C code and not assembly. The amd64 compiler is basically the same C code whether it's been bootstrapped on armv8 or amd64.


Well to get a little nuanced, it depends on if the backend for B is doing roughly the same stuff as for A (e.g. same optimizations?). I have no idea if that's generally true or not.


There are some small nits, where representation of constants etc can be different and require more work for a cross-compiler.


Endianness differences would require more work, across all pointer math output, etc., though maybe still not significant.


For anyone curious I timed the same build on a Ryzen 5800X (thanks Docker for making it easy).

   -j8
   real 5m53.255s
   user 45m33.966s
   sys 4m19.954s

   -j16
   real 4m31.947s
   user 63m36.096s
   sys 6m8.550s
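For anyone who wants to repeat this on their own hardware, the general shape of a containerized, timed cross-build is something like the following (a sketch, not the article's exact setup; the image, package list, and make targets are assumptions):

    docker run --rm -it -v "$PWD/linux:/linux" -w /linux debian:bullseye bash -c '
      apt-get update &&
      apt-get install -y build-essential gcc-aarch64-linux-gnu bison flex bc libssl-dev &&
      make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig &&
      time make -j16 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image modules dtbs
    '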


I tried my 10 year old desktop i7-3770, with -j8, 16 gig of RAM and 10 year old SSD:

    real    18m12.868s
    user    131m55.267s
    sys     11m29.005s


i7-6820HQ Skylake 2015

    -j8 
    real  16m22.244s
    user  116m54.355s
    sys  9m23.599s

After 6 years, using faster RAM and a much faster SSD, I honestly expected more than ~1.8 times faster compilation times from the M1.

EDIT: all CPU issue mitigations were active, HT was disabled


For a while now I have been wondering what the times on a Ryzen 5800X would be. Thank you for sharing.


Interesting, it almost doesn't scale from 8 to 16. I mean yeah, it's 30% faster but user+sys is actually 40% higher.


Makes sense for an 8-core CPU.


Regarding the point in the article mentioning the fans starting to spin at the drop of a hat: the 16" MacBook Pro i9, albeit a fabulous device in almost every aspect, has a bug: connecting an external monitor at QHD will send the discrete graphics card into >20W of power draw, whereas usually it's about 5W. At 20W for the graphics card alone, it's not difficult to see that the fans will be spinning perpetually.


It gets worse—if you are charging the battery, you can immediately see the left or right side Thunderbolt ports get a lot hotter, fast. Probably because piping 96W through those ports heats things up a bit.

The thermal performance on the 2019 16" MacBook Pro is not wonderful.


How is that acceptable for a "pro" machine? I would expect that crap in a $100 Chromebook, not in a machine that starts at $1000+ just for the base model.


The 16" Macbook pro starts at $2,399!

With an i7


This problem is so infuriating. There was a thread the other day about it. It's clearly a bug, but it seems to be one that nobody wants to take responsibility for.


Why fix it now when it helps them vertically integrate?


The 1440p monitor issue seems like an especially bad bug. I run a 2160p monitor at a scaled HiDPI resolution (3008x1692) and never run into that. The discrete GPU idles at around 9W.

(Though also using a thunderbolt -> DisplayPort cable might be helping? Connecting over HDMI could exacerbate things).


I'm thunderbolt -> displayport on 1440p and I have eternal fans.

It's pretty baffling that it's not been solved really.


I know you're not going to like it, but you can make it go away by switching to a slightly lower resolution. So the external monitor will be doing the scaling. It's slightly fuzzy, but the quietness is golden.


Yip, same here, 4K is not affected, but QHD is.


Huh. I have Dell Latitude 5501 and it's almost always in hairdryer mode when connected to the dock (on which there's 1920x1200 HP Z24i and 2560x1440 Dell U2515H). Your description seems suspiciously similar.

Different graphics, though - MX150.


I've a Dell Precision 5530 for work; it's absolutely roasting, continuously, even under no load. It's so bad, I'm switching to a MacBook Pro 16. Seems like out of the frying pan and into the fire for me!


I wonder if anyone who prefers to dock their laptop has thought about sticking it in a mini freezer under their desk


I pointed an air conditioner at mine for a while and it definitely helped!

Though when I really want to avoid the fans I just disable the processor’s turbo boost. In my case that means the frequency never goes above 2.4GHz. For sustained workloads it doesn’t matter much since after boosting it’ll just throttle itself back to 2.4GHz or lower anyway.

Util for disabling turbo boost: http://tbswitcher.rugarciap.com/
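(For what it's worth, the same trick on a Linux box is just a sysfs write; this sketch assumes the intel_pstate driver, and the path differs with acpi-cpufreq.)

    # Disable turbo boost (intel_pstate); write 0 to re-enable:
    echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo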


When I used to game on my 2012 macbook pro, I would rest the laptop on top of ice packs and change them every so often as they warmed. The case on aluminum macs acts as a heatsink so this was surprisingly effective. I was able to get my winter FPS during the heat of the summer this way.


Condensation would likely be an issue which would need to be addressed.


ziplock bag and a silica gel packet?


I don't think silica gel has the absorption capacity for this amount of water and the bag would be difficult to seal if you have cables for docking.

We clearly need to invent an over engineered system for condensation free freezing of an entire docked laptop, for science.


Funny thing is, the newest i7 from Intel (10nm) might also compile it noticeably faster than an i9 MacBook.

There are very few, if any, laptops out there which handle an i9 well. And Apple is kinda well known for not doing a very good job when it comes to moving heat out of their laptops. This is often blamed on Intel due to their CPUs producing too much heat, but there are other laptops which handle these CPUs just fine...

Anyway, that doesn't change the fact that Apple's M1 chips are really good.


Given the state of Intel macOS in the recent past, it is not entirely impossible that Apple put hardware-specific optimization work into M1 macOS but ignored the Intel side for whatever reason.

I mean, he's compiling the Linux kernel inside of Docker, which runs on both Intel and M1 macOS. There's also a hypervisor implementation involved, hardware and software.

If he booted ARM Linux on the M1 and compared the kernel compile there with one on Linux on an Intel MBP, it would be more apt. But then again, Linux isn't optimized for the Mac, especially for IO workloads, due to AHCI/NVMe driver issues; see Phoronix.

So yeah, we will need an 8c Ryzen 5xxx mobile in a NUC form factor (PN50) vs Mac mini M1 benchmark when Linux is somewhat decent on that hardware.


Having substantially more L1 and L2 cache per core but no L3 has to be a massive part of why the M1 performance is so good. I wonder if Intel/AMD have plans to increase the L1/L2 size on their next generations.


AMD went from 64 KB of L1 (instruction) cache in Zen 1 down to 32 KB in Zen 2/3. Bigger isn't always better. It only matters if the architecture can actually use the cache effectively.

M1 has a massive reorder buffer, so it needs and can use more L1 cache. It's pretty much that simple.


It's more complicated on x86 because of the 4 KB page size. The L1 design gets heavily complicated if it is larger than the number of cache ways times the page size, since the virtual-to-physical TLB lookup happens in parallel with the cache index lookup. 8 ways * 4 KB = 32 KB. Apple's ARM cores run with a 16 KB page size: 8 ways * 16 KB = 128 KB.
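If you want to check that arithmetic against a real machine, Linux exposes the relevant numbers (a sketch; index0 is normally the L1 data cache, but the layout can vary by CPU):

    getconf PAGESIZE
    cat /sys/devices/system/cpu/cpu0/cache/index0/ways_of_associativity
    cat /sys/devices/system/cpu/cpu0/cache/index0/size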


Intel/AMD have decades of experience trading off L1, L2, and L3 so it's unlikely that there's a magic design they've overlooked.



Can you help me understand why removing L3 cache would speed things up? Genuinely curious!

Increasing L1 and L2 make intuitive sense.


Removing L3 frees up transistors to be spent on L1/L2. On a modern processor the vast majority of transistors are spent on caches.

Why this might help: ultimately, the latency for getting something from L1 or L2 is a lot lower than the latency from L3 or main memory.

That said, this could hurt multithreaded performance. L1/L2 serve a single core. L3 is shared by all the cores in a package. So if you have a bunch of threads working on the same set of data, having no L3 would mean doing more main-memory fetches.


I think the idea is that removing L3 has allowed for an increase of both L1/L2.


I think he meant "despite not having L3"


Apple will invent L3 for workstation-level CPU.


Wild theory: for workstation-class systems, the 8/16 GB of on-package memory becomes "L3", and main memory can be expanded with standard DIMMs.


If you could get a Mac Pro with 32 to 48 Firestorm + 4 Icestorm cores, tiered memory caching, and expandability to 2TB+ of DDR4/DDR5 DIMMs, that would be an impressive machine for the small amount of wattage it would draw from the wall.


How does a virtualized ARM build, of Ubuntu for example, run in Parallels vs. the same workload on an x86 virtual machine in the same range?

If my day to day development workflow lives in linux virtual machines 90% of the time, is it worth it to get an M1 for virtualization performance? I realize I'm hijacking but I haven't found any good resources for this kind of information...


This is very dependent on setup. If your IO is mostly done to SR-IOV devices, your perf will be very close to native anyway. The difference would be about the IOMMU (I have no idea if there's a significant difference between the two here). If devices are being emulated, the perf probably has more to do with the implementation of the devices than the platform itself.


Compiling stuff is not a correct benchmark, since the end result is different: a binary for ARM vs a binary for x86.

Cross-compiling is not good either, because one platform has the disadvantage of compiling non-native code.


Is there an advantage to compiling native vs non-native code? Certainly during execution I would expect that, but I’m not clear why that would be true for compilation.

Agreed that a better benchmark would be compiling for the same target architecture on both.


Non-native can be a bit harder for constant folding (you have to emulate the target’s behavior for floating point, for example), but I think that mostly is a thing of the past because most architectures use the same types.

What can make a difference is the architecture. Examples:

- Register assignment is easier on orthogonal architectures.

- A compiler doesn’t need to spend time looking for auto-vectorization opportunities if the target architecture doesn’t have vector instructions.

Probably more importantly, there can be a difference in how much effort the compiler makes for finding good code. Typically, newer compilers start out with worse code generation that is faster to generate (make it work first, then make it produce good code)

I wouldn’t know whether any of these are an issue in this case.
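A quick way to convince yourself the host architecture shouldn't matter for the output (a sketch; the target triple and file name are just examples): run the same compiler version with an explicit target on both machines and diff the generated assembly.

    # On the x86_64 host and on the arm64 host, same clang version and flags:
    clang --target=aarch64-linux-gnu -O2 -S foo.c -o foo.s
    # Then compare the two foo.s files; they should come out identical.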


Maybe you could cross compile on both systems and see if it actually does make a difference. I'm doubting it but don't have much to base that on.


True but this is addressed in the article. If what you need is code that runs on a RPi, this is a meaningful comparison.


> If using HDMI to my LG 4K display at 60 Hz, the display just blanks out entirely for 2-4 seconds every 5 minutes or so. No clue why.

I have the same problem with a 2018 (or 19? whichever's the latest) x86 Mac Mini connected over HDMI. Somehow I think Apple hasn't tested the Minis a lot with multiple monitors... or maybe with HDCP? Could be some HDCP renegotiation that you can't disable.


Regarding the displays: I use a dual-screen setup on the mini (HDMI and DisplayPort) and it is perfect. So it is either a hardware issue in the cables, the mini, or the monitor. I currently use Monoprice 32-inch HDR monitors.


I've tried two different (known-good) HDMI cables and only have the one DisplayPort cable (which works fine on the i9 MBP and even on my 13" MacBook Air)... it seems to be something funky with the mini only.

At least with the DisplayPort cable, the dropouts don't happen, it's just annoying to have to manually turn off my monitor every time I walk away lest it go into the on/off/on/off cycle while the Mac is asleep.

I did order a CableMatters USB-C to DisplayPort cable today to see if maybe going direct from the USB4->monitor will work better than TB3->CalDigit->DisplayPort->monitor.


Man, I have huge DisplayPort issues when using my Dell 7750 with an external monitor. It can take a couple of reboots before it'll send a signal to it. The OS can see the monitor, but it just won't use it. It's incredibly annoying.


Interesting. I use a USB-C to DisplayPort cable.

I've never run into a bad HDMI cable honestly. But I have run into bad DisplayPort cables a few times. I am unsure why that is.


Their DP problem sounds vaguely familiar; I'm almost certain I had the same thing years ago with a 2014 MBP. Can't remember what the fix was, though...


I would suggest swapping the monitor and the cables separately. I do think there is an issue with one of them.


Generally speaking, what is better if you value performance and energy-efficient hardware: an M1- or an Intel-based MacBook Pro? Price-wise, Intel-based machines seem to be more expensive.


Easily the M1, but wait a few more days and you might have an even better 15-inch M1 MBP to consider.


Can an M1 Mac be used to cross-compile binaries for other platforms and systems (Windows, Linux), and would there be a performance gain over a beefy Linux/Windows laptop?


Ok, my home 3900X compiles a 10-project VS2019 solution in about an eighth of the time of my half-decade-old retail work computer. So?


Maybe speed it up even more with distcc?

https://distcc.github.io/
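For gcc/clang builds the usual pattern is roughly this (a sketch; host names and job counts are placeholders, and distccd must be running on each helper host with matching compiler versions):

    export DISTCC_HOSTS="localhost/8 buildbox1/16"
    make -j24 CC="distcc gcc"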


Good! The M2 will be even faster. Can't wait to skip the M1 then.


With my luck, Apple's going to release some new devices that will blow the M1 Macs I just bought last week out of the water... that is the way of things, with tech!

I'm still trying to unlearn my fear and trepidation surrounding the use of my laptop while not plugged into the wall. I was always nervous taking the laptop anywhere without the power adapter in hand, because 2-3 hours of video editing or compilation work would kill the battery.

The Air can go a full day!


I think it’s pretty likely M2 (or M1X, or whatever they brand it) MacBook Pros will be announced next week at WWDC, given the recent rumors generally coalescing. They may not be released right away but most rumors have suggested a summer release. Not that you should regret your purchase, but for (future) reference it’s a really good idea to check the rumors when considering a non-urgent Apple purchase.


I really can't wait for my fleeting happiness of seeing their next processor!

The rumors really describe the perfect machine for me and many people's use cases.


This is surprising. Are we really running out of people who would try to run a datacenter and an Electron app on an Apple laptop and then tell us here how these machines are not for professional users?


I will absolutely try to do that with an M2 processor and 64 GB of RAM per device.


macOS though? I don’t feel very productive using their OS. I would rather have a slightly slower laptop and feel more productive. But I don’t compile anything locally or anything. It’s all in the cloud and stuff.


Sounds like you're not the target market then. Apple generally tries to sell computers to people who feel productive using their OS.


The M1 is great indeed, but one thing holds true for Apple: never buy a first-gen device. 3rd gen onwards is usually where you see them become viable for long-term support.

While the M1 is great there are clearly issues to be ironed out even if it’s just the limited bandwidth available for peripherals.

I’m also betting on major GPU upgrades over the next 2 generations.


I mean, kind of, but it seems that the main issue here with the M1 is that it's only 30% faster than an i9. If I were buying a new Mac today, I would only consider an M1 system. It seems to be better at literally everything I want to do with it than the Intel equivalent.

While the M2 will undoubtedly be better yet, I see no downside to jumping aboard the M1 today for most people who aren't running specialized software.


M2/M3 is where they’ll likely finalize the majority of their architectural features from a CPU perspective; just look at what happened to the first-gen Apple devices that used Apple silicon, like the original iPhone or the Apple Watch Series 1.

The M2/3 is when you’ll see an SoC that is finally designed for laptops and desktop computers, and where you would likely see some additional ISA improvements on the CPU, and on the GPU side too, like hardware ray tracing support, which will surely come.


Plus, I don't think Apple has really released a "Pro" M1 laptop yet. The current M1 MacBook Pro has max 13 inch screen, max 16 GB RAM, max 2 TB storage, only 2 Thunderbolt/USB ports, only a single external display supported, no external GPU supported.

If I had to guess I'd say they meant to call this just MacBook but tacked on the Pro since they discontinued the non-Pro line entirely.


There can be no real pro M1 yet. I don’t have that many issues with the 16GB limit, though some people might, but the other limitations are really due to the SoC itself: it doesn’t have sufficient bandwidth to support external GPUs, multiple displays, and a lot of high-bandwidth peripherals.

My own personal theory is that the M1 was not originally designed for laptops; I think it was originally intended as an iPad Pro/Pro+ SoC to compete with the higher-end Surface devices. This is likely why external GPU support and peripheral bandwidth weren’t prioritized; what it has is more than enough for a tablet.

I’m not sure Apple really expected to get that much performance out of it from the get-go; when their early samples did, they made the decision to launch a full line, including laptops, despite the rest of the SoC not being designed for that.


The 13” MBP line has been bifurcated for a couple of years, the M1 replaced the low-end of that line.

The split started when Apple first tried to replace the MacBook Air with the MacBook Pro sans-touchbar (aka the MacBook escape), and the low end Pro hung around even after they reversed course and brought out the new retina Air.


You can return it (within 30 days) and get the new generation. I am gonna upgrade my 16" MBP Intel i9 as soon as I can buy the Apple Silicon in 16".


Never thought of that... but I'll cross my fingers then and see what Apple releases.

These Macs may still be perfect for my needs though. 10G on the mini means I skip the giant external adapter, and the Air doesn't have the dumb Touch Bar.


If they announce the rumored M2 Macs next week, you might be within the 15 days to return the M1's and order (with plenty of waiting) the M2's.


I really hope Apple can control themselves with these CPUs. The M1 has the perfect thermal envelope for the MacBook Pro: no thermal throttling, ever. I greatly fear a future where Apple starts going down Intel's path, where you buy a CPU that looks sick on paper, but once you actually try to do anything with it, it throttles itself into the ground.


Historically, this is one of the reasons Apple went with Intel CPUs to begin with. The PowerPC G5 was a nice processor but never ended up with a thermal envelope acceptable for a laptop. So from 2003 to 2006, you could buy a Mac G5 desktop, but if you wanted a laptop, it was a G4. 2006 was the beginning of the transition to Intel, who made better processors that Apple could put in laptops.

It's not the only reason Apple switched to x86, but it is perhaps the most commonly cited factor.


I complain about how hot the i9 gets... but then I remember the period where Apple was transitioning from G4/G5 to Intel Core 2 Duo chips... in both cases they were _searing_ hot in laptops, and Apple's always fought the battle to keep their laptops thin while sometimes sacrificing anything touching the skin of the laptop (unlike most PC vendors, who are happier adding 10+mm of height to fit in more heat sinks and fans!).


Heck, even before that with the 867MHz 12" Powerbook G4. Pretty sure that thing is why I don't have children.


I don't see a world where the iPad Pro has a fan in it, so I think they'll have to commit to good thermals for the foreseeable future.


I don't think the iPad will ever contain the higher-specced M1 version: M1X or M2 or whatever it will be called.


My i7-10875H pretty much stopped thermal throttling when running CineBench R23 when I changed the thermal paste to Kryonaut Extreme.


How about running Cinebench R23 and a GPU workload continuously for an hour? I'm willing to bet it will throttle. That little chip you have there is not only a CPU; it's also a GPU. Utilizing half of its functions and then saying it doesn't throttle is not that impressive. Still, there are many Intel laptops that throttle even at half power.

What laptop do you have? If it's a gaming or workstation laptop, those are generally much better cooled than thin-and-lights like MacBook Pros.


Imagine how much faster it could be if the processor weren't proprietary.


Do you have any justification for that claim?

It seems to me that Apple has invested a lot in this line of processors over many years and has been in the position to use their own processors in their Macs when Intel faltered. That's an enormous win for them. Where's the part where they would give anything away to their competitors in order to get faster processors?


I think they’re saying the build process could be improved if the M1 architecture were open. The build is not optimized for compilation on the M1.



