I'm a little bit confused about why there are so many comparisons between the M1 and older Apple hardware. It seems like a weekly occurrence on HN. I think it's great that the M1 performs well. However, posts like these, and in general the comments I read, make me feel a bit uncomfortable. So, take this with a grain of salt, but I feel like there is a strong desire to confirm the idea that the M1 is a beast. This has been reinforced by quite a lot of misleading advertising.
So, personally, I like to work on a desktop computer. I don't care as much about power efficiency as I do about performance. So, I was curious, and ran the same test on regular top-end desktop hardware, and the same test took 2 minutes and 53 seconds. In other words 4.3x faster, or "78% faster" if using the same metric (i.e., the reduction in total wall-clock time). Compared to the M1 time it would be 3.1x and 68% respectively.
Just to be clear: this isn't a comment that "with my beefy computer I can outperform the M1". My desktop environment is not something I can commute with. My point is nothing more than to add some interesting data points that I find are hard to come by. I always seem to see the M1 compared to older Apple hardware. Maybe that's all you wish to know. If macOS is a must-have, then I suppose it doesn't really matter what else exists. And this is fine; I have no objection to it.
I think what people are excited about is not the actual benchmark but the potential of the path we now are clearly on.
When you see “M1 beats XXXXX at XXXXX task” I don’t think anyone really cares that much about that task, but it usually makes you think, “crap, imagine what their proper MacBook Pro / Mac Pro chip is going to be able to do. Or what it will do in 3 years...”
Micro Men! Great little TV movie, and really interesting story of the rivalry between UK golden-age home computing giants Sinclair and Acorn (Acorn being the company behind the BBC micro, and also originally the "A" in "ARM").
The BBC contracted Acorn to produce an educational computer to go hand-in-hand with a TV programme. It was a smashing success and made them so much more money than anything else they'd been doing. They birthed ARM.
It was the only money they ever made, but in the end they had to default on paying back £3 million in royalties to the BBC, thanks to typical '80s mismanagement (oh hi, Commodore).
I believe a lot of people don't hear the fans because they're wearing headphones. For the rest of us that like a quiet room, the arm CPUs sound like a godsend.
I say "sound" because I don't feel like an early adopter much; don't have one yet.
> I believe a lot of people don't hear the fans because they're wearing headphones. For the rest of us that like a quiet room
You’re completely focusing on fan noise here, but of course it also enables tiny devices with great battery life and - this isn’t completely irrelevant when you run your computer for a significant part of the day - lowered electricity cost.
Oh, and I believe these processors have quite a bit of headroom before they reach the physical thermal limits others have already run into, so expect some more raw performance from better-cooled systems.
> For the rest of us that like a quiet room, the arm CPUs sound like a godsend.
Got an M1 Pro and normal daily use just doesn't trigger the fans. I can do it by, e.g., charging whilst playing Minecraft or running something that uses all 8 cores at full speed for more than two minutes-ish (`wireguard-vanity-address`, Blender render, etc.) but it's an oddity to hear them these days.
Despite all of the 14nm++++++ ridiculousness, 2020 was Intel's best year ever[1].
I think Intel has a huge challenge ahead but one thing that's helped them is that Skylake was competitive enough to get them through the process node mistakes (at least to this point). Maybe what we've learned is that fewer chip buyers care about performance and power tradeoffs than imagined (or at least, than I thought).
As did Blackberry! And countless other "innovators dilemma" examples.
Intel's a little different though. It's not like consumer sentiment has changed like in the case of the C64->PC or BB->iPhone. AMD and ARM are strong competitors but Intel is in this position because they made a major technical misstep. Maybe more like Microsoft and the Longhorn/Vista debacle. I don't think we can conclusively say Intel or even x86 can't be competitive when they get their process figured out.
Commodore didn't go out of business because of the C64; if anything, the C64 kept them afloat far longer than they should have lasted. They actually still sold ~150K C64s in 1994! Commodore went bankrupt because they kept selling the 1981-released C64 and 1985-released Amigas all the way to 1994 with almost no real upgrades. The updates they did do were superficial, like new C64 cases, or the Amiga 600. The A600 had the same CPU, same MHz, and same double-density floppy as the 1985 A1000.
Intel has been selling the same 4-core CPUs for almost 10 years now, only last year finally releasing something able to compete with Ryzen on $/core count.
> Also, everyone just loves watching Intel suffer.
This is such a weird fucking statement.
Apple literally think they own your hardware after you buy it, and Intel are the evil ones?
I don't really do fanboyism; I currently own Mac, Windows, Linux, Intel, AMD, Android, iPhone, Switch and PlayStation.
I just use what I use but I don't get so attached, apple has been lacking for years. M1 is ok but honestly it feels like they've only just caught up to everyone else.
Not if you've been in the industry for long enough. And it has absolutely nothing whatsoever to do with Apple; the "everyone just loves watching Intel suffer" (or more charitably, "watching Intel get a bit of comeuppance and much needed kick in the arse") bit stands on its own no matter who is dishing out the suffering, because Intel has been a big PITA for everybody since the late 90s/early 00s. Intel is pretty much single-handedly responsible for the lack of widespread ECC memory, for example. They've nickel-and-dimed everyone and feature-gated important functionality for pricing power and basically just exploited the ever-living shit out of their dominant CPU position. And the last time they were getting threatened in terms of that position, they bought themselves what they needed with illegal tactics denying AMD a chance to get the ball rolling, dooming us to another 10-12 years before they could go for it again. And Intel has been this way for ages and ages. People still remember everything that went down around Itanium and how it choked off promising alternatives.
So yeah. Intel have indeed been the evil ones, and in a way that's been a lot harder to avoid than Apple which you might stop and recall has in fact always had only a tiny minority of the PC market. I mean, if you want to talk about Apple specifically:
>Apple literally think they own your hardware after you buy it, and Intel are the evil ones?
That's kind of amusing given the access Intel has given themselves at a deep level on every CPU.
>M1 is ok but honestly it feels like they've only just caught up to everyone else
This is just you being silly. "Only just caught up"? They were using Intel before so they were by definition the same as everyone else. What they've done in mobile CPU design has been genuinely remarkable. And that in turn is a genuinely good thing in general whether you use them or not, same as with AMD. We're already seeing Intel sluggishly but seriously start to respond and shift.
Intel built a fairly open platform whereas Apple is and always has been the China of tech.
> ...given the access Intel has given themselves at a deep level on every CPU.
And they use that to stop end users from doing what?
Meanwhile Apple will have you paying them yearly to sign apps as they continue to build a dystopian new world order where only they control everything you can do with your own hardware. I honestly hope that their marketshare keeps growing in all categories though. The more successful they are the better chance people will rebel against them. You can't run an app business in the United States without dealing with Apple. Whatever Intel has done is hardly comparable.
> ...as they continue to build a dystopian new world order where only they control everything you can do with your own hardware
Well, the new M1 computers have a built-in ability to run third-party OSes. It is not an oversight but rather something Apple intended to do and spent effort implementing in a way that fits their security model. That they did not provide any hardware documentation to go with it is another thing.
That's a legitimate reason to blame Intel, but I also want to praise Intel for their massive contributions to OSS. I wish AMD would also increase their OSS contributions as their share and profits grow.
Thank you. Yeah, I never really had much to say about Intel. They sell processors; sometimes, rarely, I buy one.
I do look up what makes sense to buy, or not, at that specific time.
And most of the time I've forgotten I did it within a matter of weeks.
I'm not sure what's in my current machines. They work really well.
On the other side, for some reason all my work computers have been Macs for a solid 10 years now. Basically ever since the Intel shift.
And boy oh boy, have I had moments of annoyance toward Apple.
Even in terms of power efficiency AMD is in the same ballpark with their Ryzen 4000 and 5000 U-series mobile parts, particularly when comparing multicore benchmarks on their 8-core/16-thread CPUs.
M1 is really most impressive when compared to Intel parts and especially when compared to the underspec’ed Intel parts that Apple used to ship.
Yes, the M1 is good but I’m honestly just tired of hearing about it. The story seems to be about the decline of Intel’s fab supremacy more than anything.
It's kind of a joke at this point. Talking about the runtime of something on a dual 7742 and someone pipes up with "Have you tried M1, foocloud has them available, it's so much faster".
Single threaded performance is on par with AMD 5xxx series parts. Consider that Apple is using the 5nm process while AMD is using the 7nm process.
I'm not sure how it compares in terms of power efficiency though I'd expect the ARM chips to be more efficient both because of the transistor size and the ARM CPU not needing to support instructions from the 60s.
I'm very excited to see a new competitor in the CPU game and I'm a big fan of ARM, hopefully we will see further disruption but in the PC space.
Imagine a 128 core desktop CPU. It will be interesting to see how software has to evolve in order to maximise utilization of the quantity of threads.
> [...] and the ARM CPU not needing to support instructions from the 60s.
Apparently, those weird instructions only take up a teeny-tiny amount of space on the CPU and are basically implemented in micro-code.
To be backwards compatible those instructions just need to be supported, but they don't need to be fast. So they are implemented such that they don't get in the way much.
Is it really a competitor given they don't sell their chips? For Intel maybe, given they lost the Apple market, but it doesn't look like it changed anything for the other players.
There have been a lot of comments about how ARM is going to revolutionise everything, but that doesn't seem to be the case for the chips you can actually buy (e.g. Qualcomm vs AMD). Honestly, I don't see the M1 as a statement about how good ARM is compared to x64, but about how good Apple is compared to the rest of the field.
> It will be interesting to see how software has to evolve in order to maximise utilization of the quantity of threads.
Not only this, but the PHP/Python/Javascript kiddos will need to learn:
1. Languages that get closer to the metal.
2. How to utilize those languages to make their applications aware of data locality, temporality, and branching.
The bottleneck of the future is memory and pipeline stalls. Spraying objects all over the heap is one of the dumbest things you can do with modern processors, but developers do it all the time. This needs to get fixed.
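To make the locality point concrete, here's a minimal C sketch (hypothetical, not tied to any benchmark in this thread): both functions compute the same sum, but the second is the "objects sprayed all over the heap" pattern and tends to be bound by memory latency rather than by the ALU.

```c
#include <stddef.h>

/* Cache-friendly: the values are contiguous, so every cache line
   pulled from memory is fully used and the hardware prefetcher can
   stay ahead of the loop. */
long sum_array(const int *vals, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += vals[i];
    return sum;
}

/* Cache-hostile: each node was allocated separately, so the walk is a
   chain of dependent loads to scattered addresses and the core spends
   most of its time stalled waiting on memory. */
struct node { int val; struct node *next; };

long sum_list(const struct node *head) {
    long sum = 0;
    for (const struct node *n = head; n != NULL; n = n->next)
        sum += n->val;
    return sum;
}
```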
I think I might have been misunderstood here. I sort of feared I would be, as I have some experience in this regard. I don't really have an opinion on what is nicer, and I also don't mind people liking their stuff; if anything, it makes me happy to know.
I also agree that it is weird to compare it to a desktop. In fact, my larger point is that I feel these benchmarks, and subsequent comment discussions, tend to imply this comparison. And, I don't see the harm in then showing what those numbers would be. If power consumption isn't important, I sure would like to know.
I believe it's often on HN because there hasn't been a major "jump" in computing/processing power like this in over a decade. I do remember when every year's PC was significantly faster, and I yearn for those years, as probably many here do, and see this as a very exciting moment.
I only use a laptop, like many others, and I believe it's very fair to compare the M1 vs last-generation Intel. Why? Because both are Apple computers (few differences in screen brightness, colors, etc.), same OS (few differences in optimization, stability, support, etc.) and similar constraints from being a laptop (thermal, power, etc.). Intel's year-over-year improvements are minimal, so "last year" Intel vs "this year" M1 is a fair comparison. It won't be so when we get to the M3+, but even the M2 would be fair-ish game, and you gain more from comparing an Apple Intel computer vs an Apple M2 than from comparing it with a totally different Windows machine.
Doesn't that imply you're limiting yourself to Apple hardware only? That seems like the logical conclusion if the M1 is this "jump" in computational power, yet is three times slower in the very same test I ran.
Is it a good point though? I mean, I've already said that in my situation I do not care about power consumption. I'm not saying this isn't a good feature of the M1; it is just irrelevant when you're plugged in and you can dissipate the heat. Both of these are the case when I work, so I would consider it to have no downsides, with a 3x speedup. Likewise, my point is not relevant for those who have such power constraints that they need to work on 10-hour-long train rides. I'm not sure what is so difficult about accepting these constraints, since most replies so far to my point of "if power consumption isn't relevant to you" have been "well, the power consumption is much better".
The point here was the nostalgic "I miss the days when computer performance increased noticeably", which apparently hasn't happened for a while, until the M1 came along. If it is performance you're after, the only way that sentiment could hold true is if you were limiting yourself to the rather poor performance of older Apple hardware. Just think about it... if a 1.33x speedup gives this feeling, imagine what a 4x speedup would feel like.
So, it's not that I think the M1 is bad. It's that these posts keep popping up, week after week, on Hacker News, going to some lengths to show how the M1 is 30% faster at some heavy workload than older, much worse Intel processors. It just seems strange.
I'm no expert, but with a 15W power draw the answer to the "how long" question is likely to be "perpetually" with active cooling. Now, Apple tends to let its processors run very hot before throttling (I had an old i5 MacBook which easily hit 98C), but again, a 15W power draw means there isn't all that much energy to push into whatever heat sink or cooling solution they want to use. (In the real world it's rather more complicated, with different load intensity on different parts of the SoC, but still)
No, I'm just saying that if you want to compare apples to apples (only the processor), well Apple just did that and changed mainly the processor, so if we compare two Apple machines with different processors we can see the jump in performance much better.
I'd be very happy to see comparisons of other ultrabooks with the M1, but AFAIK everything that "beats" the M1 is a desktop workstation, not an ultrabook.
True all-day battery life while holding its own against Intel desktop-class perf is super exciting. (I'm aware that some Intel chips go faster, but see e.g. [1] -- not near the price point.)
You could get that battery life before, but not without tradeoffs you really felt.
I can barely feel the perf difference between my 2015 mbp and my 2019; I'm dorkily excited to get a new laptop and actually feel a perf bump. It kind of feels like the 90s again.
And separately excited for what happens when Apple makes an m2 or whatever with more thermal budget to compete in the desktop space.
I remember when the XBOX and PS2 were out. I never got into console gaming, so I was very much on the sidelines. From that perspective it was amazing how passionate people got about a consumer product that realistically should form no part of anyone’s identity. Since then I’ve seen the same pattern over and over again. I don’t really get it. Kudos to modern marketing I guess.
I dunno about the marketing, but I for one was excited about the first consumer devices capable of realtime, immersive 3D worlds. I played a lot of 16 player halo in college classrooms on big screen projectors. It was exciting.
I guess maybe the point I was trying to make wasn’t explicit enough. Getting excited about either one, understandable. Getting into serious and sustained arguments about which one was better like they were political parties or religions, not as understandable.
What Apple has proven with the M1 and Rosetta is that you can actually quite elegantly support legacy x86 programs, while leaving little on the table, and still make use of all of the advantages of ARM/RISC-based architectures.
I'm interested if AMD/Intel will respond with their own beefed up desktop profile RISC/ARM processors.
I wouldn't call x86-64 all that legacy (it's like, recently legacy) and that's all Rosetta supports. But performance is not that bad even if you stack something like WINE on top to handle i386 programs.
It may be an 'older model' but it's only from 2019. It's only one year behind the current 16" MBP. So I think comparing a 2 year old top of the line, maxed out MBP to a current lowest end entry level MBP, and see the low end model blow away the high end maxed out beast, is a reasonable thing to be impressed with.
1. If the smallest laptop chips are this efficient what do you think a full fat desktop chip will look like?
2. I think desktop/workstation is the only category where energy usage isn't that important; it's probably less than $100 per year with our usage patterns (don't quote me on that). But for server and laptop it's very important (for totally different reasons).
(Realistically it's probably a max-performance vs efficiency trade-off, but that may be in part because we've not seen any large Apple chips, so we don't know how it scales.)
Well, we know that AMD's chips - which use the next largest TSMC node compared to Apple's - have a lot of diminishing returns from increasing the available power envelope above M1-like ultraportable levels, at least in the kinds of core counts you typically find in consumer desktops. So it's possible that even if Apple did ship a full desktop chip in a system with proper cooling, it wouldn't be that much faster. (It's also likely that they just won't bother, Apple being Apple.)
I don't think you realize how far ahead the M1 actually is... it's about processing power per watt. The desktop you used 1) has a fan, 2) is probably consuming far more than 15 watts, and 3) would burn my lap if it was anywhere near it.
The implications are huge. Think about the competitive advantage a data center that operated with no cooling requirements and much lower power requirements would be if it got the same (or better!) performance than an equivalent intel based one.
It's not just faster; it's faster than equivalent parts AND way more efficient.
The disconnect here is exactly what I'm referring to in my post. I'm at a loss for words, especially considering I've already addressed your main point. Also, what do you mean by "not just faster"? What are "equivalent parts"?
When they make the desktop pro version it's going to slap. This is a 15-watt part! You may not care about efficiency, so just wait and see what the desktop part looks like.
With the M1 chip you pretty much can: the heat is low enough that you can just add more transistors to the die, give it more juice, and it will fly.
It seems the days of Intel are numbered; not even i9 processors can compete against it now.
Whatever you tested most probably wasn't compiled for the M1, or it's just a rare case.
Until the M1 I hadn't bought a single Apple product. But the M1 is clearly superior. You feel the difference clearly just by browsing the web. There isn't even a need to test anything; anyone who has used a decent Intel/AMD laptop and an M1 laptop within a short period of time will notice the difference.
Same for me. I was interested in macOS, but 2k bucks is what was stopping me all that time :)
In 2019 I bought a ThinkPad T490 (for nearly $1k); after my Dell N5110 it was just wow. Speed, touchpad. And then two weeks ago I decided to give the MacBook Air a try and bought the minimal 8/256 config.
And this machine is faster than my T490 with an i5-8265U and 24/256 (IMO something like 2x when indexing my PhpStorm project), BUT it's silent and cold, and the battery life... After a full 10-hour working day I have 30-40% left. And I was shocked that 8 GB is enough for me; on the ThinkPad I used at least 10-12 GB for my work.
I just bought the minimum config to try macOS and planned to either switch to the 16 GB version or go back to the ThinkPad, but now I can use it for at least the next couple of years.
Understand where you're coming from but there also seems to be a weird section of the audience that, for whatever reason, doesn't like to compare apples to apples.
I wouldn't expect large numbers of benchmarks of Intel's new mobile chip against AMD's fastest desktop chip, so I don't know why people find it unusual in this particular circumstance.
I’m usually already committed to macOS and want to know if the upgrade is “worth it”. But, for this, I’m more interested in perceived performance than real performance and, generally, the non-M1 post-2015 or so MacBooks feel slower than my older MacBooks ever did.
These benchmarks don’t threaten other ecosystems and show a strong value proposition for others who own the older, hot and noisy MacBooks. Especially in its form factor. Especially with its battery life. What’s wrong with that?
Note: 30% faster than a thermally challenged i9 on a MacBook Pro, not a desktop one. Given the comments on similar threads, I feel this needs to be mentioned.
To go another layer deeper in analysis though, it is still >20% faster on the thermally throttled M1 MacBook Air. That's a laptop without a fan, and it's still faster than an i9 with a fan.
This right here. I was so skeptical of getting an Air. And yes, during heavy compile sessions with make -j8, it can hang after a half hour or so. But (a) you can make -j7 instead, and (b) it’s impressive how long it lasts without hitting that point.
I’ve been thinking of doing the cooling mod too, where you pop open the back and add a thermal pad to the cpu. It increases conductivity with the back of the case, letting you roast your legs while you work, aka cool the cpu. :)
Do any of the laptop cooler systems with fans help the M1 Air thermal issues? I used one on an older 2011 MBP, and it definitely helped that laptop. It might have just been a placebo of getting the laptop off of a flat surface to allow air to circulate around it, but the fans could only help in that.
> And yes, during heavy compile sessions with make -j8, it can hang after a half hour or so
You bought a computer that crashes reliably when used "heavily"? I mean, if you heard that same statement from an individual PC user in past decades, you'd roll your eyes about the ridiculous overclockers and go back to work on your slightly slower but actually reliable development machine.
But this isn't a too-l33t-for-her-own-good overclocker. It's Apple Computer! Jobs is clearly still directing the reality distortion effect from the grave.
Well, depends on your definition of "literally hang." It does indeed literally hang (i.e. system becomes unusable, activity monitor impossible to open) for minutes on end if you're not careful; a fact I was somewhat dismayed to discover.
But it's rare enough that I just don't care much. The performance in other areas is too good to gripe about "if I run make -j8 for an hour, it'll hang for a few minutes."
Geez. It ran 11% slower when it had to throttle a bit. That is not crashing. The only reality distortion occurring is a misreading of the article and overstatement of the effects of the reduced cooling in the Air.
Statement upthread, and confirmed in another comment, is about a system that really does "hang" when presented with heavy load. I'm responding to the commenter I quoted, not the linked article.
It's running out of memory and that's when the memory pressure becomes notable enough to affect the terminal app. The same would happen on any other system - in fact it'll become unusable more quickly on Intel.
That seems unlikely. In fact kernel builds don't really produce much memory pressure, no more than 100-200 MB per parallel make process in general. (The final link takes more, but that doesn't really parallelize anyway.)
I don't know about kernel builds, but memory pressure is the usual limiting factor for macOS/iOS builds when you step up parallelization like this.
Another possibility is system bugs that can be exposed when it thermal limits because it will stop running background QoS tasks, and things waiting on them will hang. It's pretty hard to stress it to that level.
I have done both these things by building compilers on M1. The memory pressure wasn't always from the host compiler but rather tools like dsymutil that may not be perfectly optimized themselves.
I would say it's unreasonable to expect 'make -j8' to perform well on a low-end laptop (M1 is a low end CPU) and that the system should be picking the -j number for you.
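For what it's worth, the -j value can be derived from the machine rather than hard-coded; here's a rough sketch in C of what a build wrapper might do (sysconf is POSIX and works on both macOS and Linux; the "leave one core free" choice just mirrors the -j7-instead-of--j8 workaround mentioned above).

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Number of cores currently online; fall back to 1 on failure. */
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    if (cores < 1)
        cores = 1;

    /* Leave one core free so the machine stays responsive during a
       long build, i.e. the -j7 idea, but derived rather than guessed. */
    long jobs = cores > 1 ? cores - 1 : 1;
    printf("make -j%ld\n", jobs);
    return 0;
}
```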
there should be a term like “reality distortion field” that describes whatever it is that gets people to come out of the woodwork and nit pick acceptable performance compromises in Apple products
To be fair, a fan takes away from potential heatsink space and eliminates contact from heatsink to shell. A fan based cooling system can actually be worse than a fanless setup... especially if they're the Macbooks with constrained intakes. (virtually all of them)
It seems pretty common for people who own Apple products to say "2013 MacBook Air", and that is easy to find on the device or in the software. Apple makes it pretty clear newer is better. Intel has a bunch of lakes and numbers, and an easier way to tell would be nice all around. Though Apple has fewer SKUs to differentiate, making it much easier for them.
It's pretty common for people who own Intel CPUs to say 5th gen or 9th gen, or to easily find the name of the CPU, like i5-4670K or i9-9900K, on the device or in the software.
I think GP is talking about a branding problem though. Non tech savvy people may not realise that the i7 in that new computer at the store is different from the i7 written on the front of their PC.
For years CPUs had numbers that went up: 286, 386, 486, Pentium.
After that, it was Mhz and Ghz that people used to rate a CPU.
But none of those things are as relevant as they used to be.
So what number goes up? i3, i5, i7, i9. That's what Intel is telling us. That's the number they want us to see.
Except for the Pentium, I can tell the age from the number. The i3 is probably the fastest, but since stuff uses less power these days, there's a small chance the old i5 is faster. I'd look up benchmarks if about to make a purchase.
Is the Pentium equivalent in age to 7th gen core i, or is their numbering different?
These aren't realistic choices unless you are buying a used PC. But even less savvy consumers understand generations; if you present a fast 1st-gen model versus a not-the-fastest 11th-gen model, I think the majority will go with the newer model.
Even on Mac OS, when you go to "About This Mac" -> "System Report", you only see the iBranding and not the generation (I see "6-Core Intel Core i7" on mine for example).
Honestly - it's not even the i3, i5, i7, i9 thing. It's the fact that two i5s, etc; can be ludicrously different in terms of performance from one another because of the sub-generations within the named generations.
Yes - it's ridiculous that I could buy an i7 ten years ago, buy an i7 today, and yet - of course - they are absolutely nothing close to each other in terms of performance.
IIRC the Pentium line did not make this mistake. (Though the Celeron line could be very confusing, if I recall correctly.)
Almost everyone knows that the model year is part of the, idk, "minimal tuple" for identifying vehicles, though, and you can count on it always appearing in e.g. advertisements.
In CPU land, the architecture codename or process node might be part of such a "minimal tuple" but these are frequently omitted by OEMs and retailers in their advertising materials.
I have always hated 0-60 because it depends greatly on transmission shift points, driver ability, rear end ratio, and most of all traction, which OEM street tires have little of.
Let’s go to a full quarter mile, which shows a bigger difference, but again not a huge one.
2009: 11.2 seconds/130.5 mph
2019: 10.6-seconds/134 mph
This would be a better comparison of Ford Mustang GTs of the same years. Massive improvement.
The point is that people think the numeral in the brand is something like a version number in which larger numerals are better. I.e., an i7 is always better than an i5 when in fact a new i5 might exceed the performance of a dated i7 for some particular metric.
I do love collecting my old classics, but every time I'm the one driving my wife's Civic, I think, huh, everything about modern cars is better in every single way.
Depends on if I have a garage and a team to keep it running good, if I'm taking it for a nice drive or just showing it off in a museum.... The long, swooping '69 is iconic, but there are lots of things to prefer about a modern model.
In terms of speed, definitely - the '69 Corvette's zero-to-60 mph time was about 7 seconds, the '11 Corvette took about 4 seconds, and the '21 Corvette takes under three seconds. Sustained performance, maximum speed, etc. has also improved.
Then there's the Acer way of naming products like "XR382CQK bmijqphuzx" so that each one is unique. I like Intel's clearly defined 9 > 7 > 5 > 3. However, I do wish that Intel made the model generation part of their marketing material so that retailers and OEMs would be forced to include that information in their product marketing too. Intel i7.2019, for example.
> Then there's the Acer way of naming products like "XR382CQK bmijqphuzx"
This take is deceitful. Acer typically uses a very simple scheme to define their products, which goes something like "Acer <product line> <model>".
The scheme "XR382CQK bmijqphuzx" is more a kin to a product ID, which goes way beyond make and model and specifies exactly which hardware combination is shipped in a particular product.
Complaining that Acer names its products like "XR382CQK bmijqphuzx" makes as much sense as complaining that Apple names its laptops like MVFM2xx/A, which Apple uses for the same effect.
The model number is placed at the beginning of the title. Or do you argue that we should call it """ 37.5” UltraWide QHD (3840 x 1600) AMD FreeSync Monitor (USB 3.1 Type-C Port, Display Port, HDMI Port) """?
This is not at all how monitor marketing works. Every company gives them insane nonsense names. There is no other identifier. For "XR382CQK bmijqphuzx" there is no product line or model other than "monitor". That's all you get.
> IIRC the Pentium line did not make this mistake. (Though the Celeron line could be very confusing, if I recall correctly.)
Current Pentium and Celeron chips can either be mobile variants of the Core line or... Atoms.
I have a Celeron N4100 in my cheap netbook. It's listed as "Gemini Lake" in Intel ARK, which is a rebranding of its original lineage... Goldmont. (Goldmont Plus)
TLDR: Intel has made it nearly impossible for anyone to know if they're getting Core based or Atom based mobile chips unless significant research is done beforehand.
This is not to say avoid Atoms at all costs. The N4100 I have is reasonably on par with the Core 2 Q6600 quad, which was quite the beast of a chip back in its day.
The current Atoms are nothing like the in-order-execution-only monstrosities they originally launched as... but I still find Intel's branding more than a bit confusing, if not incredibly deceitful.
Instead they could have just called them i2017, i2018... going with the year of manufacture. That way it would be useful for making some sense of performance, with the understanding that iN is always better than i(N-1).
Year of manufacturing says nothing; you can have two different gens manufactured in the same year, one for lower price tier and the other for the higher one. Just like Apple still produces older iPhones, same thing.
Instead, you have designations like "Intel Core i7 8565U Whiskey Lake" or "Intel Core i7 10510U Comet Lake". The first one is 8th generation (=8xxx), the second one is 10th generation (10xxx, but the 14nm one, not the 10nm "Ice Lake"), and most OEMs do put these into their marketing materials and they are on their respective web shops (these two specifically were copied from two different Thinkpads X1 Carbon models).
Best is to give them a number that approximately maps to performance.
The "pro" version might be an i8, while the budget version is i3. In a few years time, the pro version will be up to i12 while the budget version is now i8.
You have model numbers for when someone needs to look up some super specific detail.
Right. The issue I have with this is that we have i7 models that are from 5 and 10 years ago with vastly different performance due to their generation. If it was more iterative it would make more sense.
No joke. I still daily use a 2008 MacBook Pro (with a Core 2 Duo CPU), with an SSD and 8GB of RAM, and that machine is perfectly fine for what I need it for - browsing the web, replying to emails, listening to Spotify, watching YouTube/Netflix. And people have bought a 2019 iMac with a normal 1TB 5400rpm HDD (!!!!!) and complain that it's slow. Yeah, of course it is, but it has nothing to do with the CPU in there.
I gave a responsible, sane but poor elderly homeless guy I was acquainted with a MacBook Pro (13-inch, Mid 2012), chargers, and canoeing dry bag. The MBP was originally a base 4 GiB and 500 HDD that, at some point, I upgraded to 16 and 480 SSD (OWC when it was $1.5k) + 500 HDD - optical.
OSes, platform toolchains, and apps need to prioritize UX (UI response time and time-to-first-window) to lessen the perception and the reality of waiting on apps doing unimportant activities rather than appearing and beginning interaction.
Yeap. It helps that Intel has dropped the ball in the last 5 years or so. And tech-savvy users are used to watching CPU/RAM usage, which helps a lot too.
My wife had a “my computer is slow” issue the other day. Fans blowing like crazy, extremely hot, battery draining fast. Turns out, lsof was stuck on some corrupt preferences file. Killed the process and file and all was fine. Regular people lack the tools to diagnose the problem and just deal with it, restart or buy a new one. We really should make diagnosing easier.
The memory hierarchy of a modern CPU with spinning rust might as well be the Parker Solar Probe waiting on a sea anemone.
I'm at a loss as to how someone these days can buy a laptop with spinning rust and 4 GiB and then complain it's "slow." Maybe they should've bought a real laptop for real work that's repairable and upgradable?
The marketed "best" (most expensive / newest) isn't the best for most purposes.
I bought a T480 with dual batteries that does run for 10 hours. It has a 2 TiB SSD and 32 GiB. Works fine for me. Water-resistant, repairable, awesome keyboard, and MIL-SPEC drop rated too. Optional magnesium display top assembly cover.
Combined with generation it says plenty. It's unfortunate when the public doesn't understand. I'm not sure what should be done.
It is worth noting that if you think GPU model numbers are fine, CPUs are actually very similar. There's the 2600k, 3770k, etc. (sandy bridge and ivy bridge CPUs respectively) The first number goes up each generation and the rest is how powerful it is within the generation. Similar to Nvidia having a GTX 980 and later a GTX 1080.
They pay more attention to the other number because that's how the English language works. "9th generation" is a descriptive prefix to the actual CPU model name.
I don't get why not a single brand other than Apple gets product names remotely right. Every time a relative asks me for a laptop recommendation and shows me a list of models I have absolutely no clue what to tell them. All the model numbers look like they came from a password generator and the only discernible difference without hours of looking at spec sheets is the price.
I get your point and it's only 5 years but my i5-7300HQ and i7-3520M were pretty comparable due to the number of cores, despite what pure benchmark numbers would say (and I'd actually prefer that i7)
The 2019 macbook ironically had better heat dissipation than the previous generations, but it's still pretty bad.
We can blame Apple for using chips that are too intense for their laptops, and we can blame Intel for making garbage chips that can't really perform in real world cases while spending a decade not bothering to innovate. Apple at least is moving away from Intel as a result of all of this, and I'm really impressed with how well the M1 transition has been going.
Ehh. I take the view that Apple has been intentionally sandbagging their laptops for a while to facilitate an ARM transition.
Not to say that M1 isn't amazing, but I think Apple has been preparing for this for a while and needed to make sure it would succeed even if their ARM CPUs weren't quite as groundbreaking as they turned out to be.
Or longer, really; while everyone, of course, loves the 2015 MBP, they're mostly thinking of the integrated-GPU one; the discrete-GPU version was pretty thermally challenged. Arguably Apple's real problem with the post-2015 ones was that Intel stopped making chips with fast integrated GPUs, so Apple put discrete GPUs in all SKUs.
While I don't think it's a likely theory, 5 years from "we should do it", to design, to testing, to preparing fabrication at scale, to design of the system around it, etc. doesn't sound unreasonable. I would be really surprised if Apple decided on the ARM migration after 2015.
Are there any benchmarks you can point to that have a similarly spec'd laptop (ideally similar size & weight too) that would show that Apple is sandbagging?
Possibility 1: Apple was making do with what Intel gave them, because their profit margins didn't care and they were busy navel-gazing into their post-Jobs soul
Possibility 2: Apple had a master plan to intentionally torpedo performance in order to make their future first-party chips appear more competitive
Apple could have had good thermals with what Intel gave them, they just didn't because they seem to value "as thin and light as possible" as opposed to "pretty thin and light, and with sane thermals". Even then they seem to make straight up poor decisions around cooling a lot even when doing the right thing wouldn't affect size/weight. Do they just not have engineers on staff who are good at this or something?
It's absolutely possible to do high end Intel in a compact laptop with relatively good thermals - look at thin Windows gaming laptops for plenty of examples of this (and these have way beefier GPUs than macbooks too).
It's worth noting the engineers actually seemed to get management to allow the 2019 model to be _thicker_ with more thermal mass than the earlier models. But in a sense, Intel's chips have never been amazing for laptops.
The power draw drains batteries, and the heat is... very annoying at best. I have used a couple ThinkPads and a Dell XPS over the past 5 years too, and all of them had the same issues where they'd constantly be pushing out very hot air, and the battery life was never more than 4-6 hours.
What Intel supplied was the bigger problem, but Apple was definitely not trying to make the chips perform well. They were hitting thermal limits constantly, and, more directly toward "sandbagging", the recent MacBook Airs have a CPU heat sink that isn't connected to anything and has no direct airflow. They could easily have fit a small array of fins that the fan blows over, but chose not to.
I've seen the video that makes that claim but it's just plain wrong, if you look at an Intel Air the whole main board complete with heat sinks lives in a duct formed by the rear section of the chassis.
Some air goes over it but it's not enough with the tiny surface area. A proper block of fins would have done so much more while still being small and light.
If it’s that easy, do you have an example of another vendor doing it? It seems like Intel would love to highlight, say, Lenovo or HP to say Apple was cooling it wrong.
Apple is actually just really terrible at thermal design, simple as that. It's surprising, because you'd kinda expect them to at least be slightly competent (like they are in other areas), but they're just not.
They've gotten around it with the M1 by building an entirely new chip that's power-efficient enough that it practically doesn't need good thermal design to perform near its peak. It's an impressive technical accomplishment, but it's also hilarious that Apple had to go this far to cool a chip properly.
Yes, we knew that design was common - that’s why that PC fan made the video picking it as a point of criticism – but that doesn’t tell us whether it’s as big a deal as being claimed. That would require some benchmarks running for long enough to see substantially greater thermal throttling, higher CPU core temperatures, etc.
> I take the view that Apple has been intentionally sandbagging their laptops for a while to facilitate an ARM transition
That's just not a defensible position to take.
Intel's inability to execute a node transition has led to a situation where for years their only way to increase performance has come at the cost of major increases to power and heat.
>Whilst in the past 5 years Intel has managed to increase their best single-thread performance by about 28%, Apple has managed to improve their designs by 198%, or 2.98x (let’s call it 3x) the performance of the Apple A9 of late 2015.
Compared to other amazing Intel laptops of a similar form factor? All Intel laptops are insanely loud and generate tons of heat for any reasonable performance level. They are just a generation (or more) behind in process; plus, Intel starts from an architecture designed for servers and desktops and cuts it down, while Apple went the other way, so it's reasonable they will do better on thermals and power consumption.
I bought a 2019 Core-i9, top of the line, and plugging a non 16:9 resolution screen into it out of clamshell mode causes the GPU to consume 18W, and basically cause the system to unsustainably heat-up until it starts throttling itself.
It's clear they have hit a limit of what they could really do with Intel's top of the line processors from a thermal perspective, with the form factor they want to deliver.
Now I have sort of an expensive paper-weight sitting next to my M1.
I blame Intel. For years i9 wasn’t just a “beefed up i7”, it was a high end line of CPUs with 10+ cores costing $1000+, meant for professional workstations that didn’t need Xeon’s ECC RAM support. Then suddenly i9 started to also mean “a beefed up i7”.
Maybe they did this in the hope that the i9 HEDT “brand halo” would help them sell more of these consumer CPUs. But if that’s the case, they don’t get to complain when someone posts that an M1 beat an i9 and people accidentally interpret that as “the M1 competes against Intel’s HEDT line”.
You know what's interesting: China and Russia have been struggling for years to get something on the level of Intel's Westmere. And here comes Apple out of the blue with a proprietary arch and hardware emulator, with Cinebench showing it to be around a Xeon X5650 (Westmere). Easy.
- Apple isn't really smaller than AMD, and AMD also rewrote and restructured their architecture some years ago.
- Apple hired many greatly skilled people with experience (e.g. people who previously worked at Intel).
- Apple uses state-of-the-art TSMC production methods; Intel doesn't, and the "slow" custom chips from China and Russia don't use them either, as they want chips controlled by them, produced with methods controlled by them. (TSMC production methods are based on tech not controlled by Taiwan.)
- Apple had a well-controlled, "clean" use case, where they bit by bit added more support. This includes that they could drop hardware 32-bit support and don't have any extensions they don't need for their Apple products, which can make things a lot easier. On the other hand, x86 has a lot of old "stuff" still needing support, and the use cases are much less clear cut (wrt. things like how many PCIe lanes need to be supported, how much RAM, etc.). This btw. is not limited to their CPUs but also (especially!) their GPUs and GPU drivers.
So while Apple did a great job, it isn't really that surprising.
> ... (TSMC production methods are based on tech not controlled by Taiwan).
Do you mean "TSMC production methods are based on tech controlled by Taiwan", or "TSMC production methods are based on tech not controlled by China/Russia"?
Update: after reading the thread I think I understand that you meant TSMC production relies on tech/equipment from ASML, which is not controlled by Taiwan.
> On the other hand, x86 has a lot of old "stuff" still needing support, and the use cases are much less clear cut (wrt. things like how many PCIe lanes need to be supported, how much RAM, etc.).
Good point overall - the performance of Rosetta 2 helped facilitate much of this and - at least from what I've read - does seem to be surprising to folks in the space. So that also helped them out.
Not only does Apple have decades of experience, both as themselves and as PA Semi; they can also probably outspend the efforts these countries could make politically (Russia yes, China probably not, but you get the idea), especially when weighted against their ease of acquiring information.
Sometimes it's hard to figure out all of the things left out of the plans that were stolen. Some engineer saw something not working, looked at the plans, and then noticed where the plans were wrong. The change got implemented, but the plans didn't get updated. Anyone receiving the plans will not have those changes. Be careful of those plans that fall off the back of trucks.
The team behind the M1 has been doing high performance chips since they did a PowerPC chip in _2005_ and has been doing custom low-power apple arm CPUs for almost 14 years.
The whole point behind the effort of China and Russia is to have a CPU under their control, starting with the design but also including the manufacturing.
And TSMC uses technology for their nodes which is not under their control at all.
The amount of expertise China already has and will accumulate in the next decades should not be underestimated.
This was a famously bad CPU/cooling design, actually. LOTS of people complained about it at the time. You can place blame on either party according to your personal affiliation, but similar Coffee Lake chips from other vendors with more robust (and, yes, louder) cooling designs were running rings around this particular MacBook.
Why? It’s all the more apples-to-apples as a comparison because the form factor remains the same and the thermal limitations are similar between the two systems.
Why would you want to compare a desktop class i9 with a 10 watt M1 chip?
Because there are a lot of issues with i9s in those form factors that leads to less perf than even an i7 from the same generation.
There was a Linus Tech Tips video the other day about how even current-gen Intel laptops can see better perf on an i7 than an i9. It looks like the i9s simply don't make sense in this thermal form factor and are essentially just overpriced chips for people who want to pay the most to have the biggest number.
The title implies the M1 is always better than every Intel chip, given that the i9 is the best Intel (consumer) chip.
It's been known for years that Apple has been limiting the Intel chips by providing insufficient cooling. I don't overly care how fast an M1 chip in a MacBook is compared to an Intel chip in a MacBook. I want to know how fast an M1 is compared to a desktop i9 (given Mac minis have M1 chips now), or compared to a properly cooled latest-gen i9 laptop.
All this experiment shows is that insufficiently cooled processors perform worse than sufficiently cooled ones. It's a classic example of cherry-picking data. Admittedly, my solution would be different from the article's author's. Instead of using a badly cooled laptop to compile stuff, I'd set up a build server running Linux.
Oh come on, comparing it to the "best" intel mac laptop makes sense when you only care about that form factor/specific model, which is the case for a big part of many industries (Software included)
Well, that's why the M1 is much better than the i9: not strictly because it's a faster chip, but because it's a more efficient chip. Efficiency = less cooling required.
I think part of it is because I don't care about mac computers. I'm interested in how the M1 - a consumer ARM chip - stacks up against the best x86_64 chip, not how it performs in one particular laptop.
Not to a desktop CPU no, but I'd rather see comparisons to laptops that don't have terrible thermal design. And to AMD as well as Intel - they're doing significantly better than Intel in the high power efficiency space afaik.
It just seems like a cherry picked fight - there's more to the laptop market watt/performance wise than 2019 macbooks.
> I know that cross-compiling Linux on an Intel X86 CPU isn't necessarily going to be as fast as compiling on an ARM64-native M1 to begin with
Is that true? If so, why? (I don't cross compile much, so it isn't something I've paid attention to).
The architecture the compiler is running on doesn't change what the compiler is doing. It's not like the fact that it's running on ARM64 gives it some special powers to suddenly compile ARM64 instructions better. It's the same compiler code doing the same things and giving the same exact output.
Some cross-compilation may need some emulation to fold constant expressions. For example if you want to write code using 80 bit floats for x86 and cross-compile on a platform that doesn’t have them, they must be emulated in software. The cost of this feels small but one way to make it more expensive would be also emulating regular double precision floating point arithmetic when cross compiling. Obviously some programs have more constant folding to do during compilation than others.
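A tiny illustration of the kind of thing that has to be folded (hypothetical values, just to show the shape): when cross-compiling this for 32-bit x86, `long double` means the 80-bit x87 format, so a host toolchain without that type has to evaluate the expressions in software to get exactly the result the target would.

```c
/* All of this is evaluated at compile time.  A cross compiler must
   fold it using the *target's* long double format (80-bit x87 when
   targeting 32-bit x86), emulating it in software if the host lacks
   that type, so the constants in the binary match a native build. */
static const long double SCALE = 1.0L / 3.0L;
static const long double TABLE[4] = {
    SCALE, SCALE * 2.0L, SCALE * 4.0L, SCALE * 8.0L
};
```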
My understanding is that LLVM already does software emulation of floating point for const evaluation, in order to eliminate any variation due to the host architecture.
Is constant folding going to be a bottleneck? In this particular instance, in the kernel, floating point is going to be fairly rare anyway, and integer constant folding is going to be more or less identical on 64-bit x86 and ARM.
In theory, yeah. In practice, a native compiler may have slightly different target configuration than cross. For example, a cross compiler may default to soft float but native compiler would use hard float if the system it's built on supports it. Basically, ./configure --cross=arm doesn't always produce the same compiler that you get running ./configure on an arm system. As a measurable difference, probably pretty far into the weeds, but benchmarks can be oddly sensitive to such differences.
There's no reason for a cross-compiler to be slower than a native compiler.
If your compiler binary is compiled for architecture A and emits code for an architecture B, it's going to perform the same as a compiler compiled for architecture A and emitting code for the same architecture A.
Well there's one. If people tend to compile natively much more often than cross-compile, then it would make sense to spend time optimizing what benefits users.
Yes, but you would probably make those optimizations in C code and not assembly. The amd64 compiler is basically the same C code whether it's been bootstrapped on armv8 or amd64.
Well to get a little nuanced, it depends on if the backend for B is doing roughly the same stuff as for A (e.g. same optimizations?). I have no idea if that's generally true or not.
Regarding the point in the article mentioning the fans starting to spin at the drop of a hat: The macbook pro i9 16", albeit a fabulous device in almost every aspect, has a bug: Connecting an external monitor at QHD will send the discrete graphics card into >20W of power draw, whereas usually it's about 5W. At 20W for the graphics card alone, it's not difficult to see that the fans will be spinning perpetually.
It gets worse—if you are charging the battery, you can immediately see the left or right side Thunderbolt ports get a lot hotter, fast. Probably because piping 96W through those ports heats things up a bit.
The thermal performance on the 2019 16" MacBook Pro is not wonderful.
How is that acceptable for a "pro" machine? I would expect that crap in a $100 Chromebook, not in a machine that starts at $1000+ just for the base model.
This problem is so infuriating. There was a thread the other day about it. It's clearly a bug, but it seems to be one that nobody wants to take responsibility for.
The 1440p monitor issue seems like an especially bad bug. I run a 2160p monitor at a scaled HiDPI resolution (3008x1692) and never run into that. The discrete GPU idles at around 9W.
(Though also using a thunderbolt -> DisplayPort cable might be helping? Connecting over HDMI could exacerbate things).
I know you're not going to like it, but you can make it go away by switching to a slightly lower resolution. So the external monitor will be doing the scaling. It's slightly fuzzy, but the quietness is golden.
Huh. I have Dell Latitude 5501 and it's almost always in hairdryer mode when connected to the dock (on which there's 1920x1200 HP Z24i and 2560x1440 Dell U2515H). Your description seems suspiciously similar.
I've a Dell Precision 5530 for work; absolutely roasting, continuously, even under no load. It's so bad I'm switching to a MacBook Pro 16. Seems like out of the frying pan and into the fire for me!
I pointed an air conditioner at mine for awhile and it definitely helped!
Though when I really want to avoid the fans I just disable the processor’s turbo boost. In my case that means the frequency never goes above 2.4GHz. For sustained workloads it doesn’t matter much since after boosting it’ll just throttle itself back to 2.4GHz or lower anyway.
When I used to game on my 2012 macbook pro, I would rest the laptop on top of ice packs and change them every so often as they warmed. The case on aluminum macs acts as a heatsink so this was surprisingly effective. I was able to get my winter FPS during the heat of the summer this way.
Funny thing is, the newest i7 from Intel (10nm) also might compile it noticeably faster than an i9 MacBook.
There are very few, if any, laptops out there which handle an i9 well. And Apple is kinda well known for not doing a very good job when it comes to moving heat out of their laptops. This is often blamed on Intel due to their CPUs producing too much heat, but there are other laptops which do handle these CPUs just fine...
Anyway, that doesn't change the fact that Apple's M1 chips are really good.
Given the state of Intel macOS in the recent past, it is not entirely impossible that Apple put hardware-specific optimization work into M1 macOS but ignored the Intel build for whatever reasons.
I mean, he's compiling the Linux kernel inside of Docker, which runs on macOS on both Intel and M1. There's also a hypervisor implementation involved - hardware and software.
If he booted ARM Linux on the M1 and compared the kernel compile there with one on Linux on an Intel MBP, it would be more apt. But then again, Linux isn't optimized for the Mac, especially for IO workloads, due to AHCI/NVMe driver issues - see Phoronix.
So yeah, we will need an 8-core Ryzen 5xxx mobile in a NUC form factor (PN50) vs Mac mini M1 benchmark when Linux is somewhat decent on that hardware.
Having substantially more L1 and L2 cache per core but no L3 has to be a massive part of why the M1 performance is so good. I wonder if Intel/AMD have plans to increase the L1/L2 size on their next generations.
AMD went from a 64 KB L1 instruction cache in Zen 1 down to 32 KB in Zen 2/3. Bigger isn't always better. It only matters if the architecture can actually use the cache effectively.
M1 has a massive reorder buffer, so it needs and can use more L1 cache. It's pretty much that simple.
It's more complicated on x86 because of the 4 KB page size. A virtually-indexed L1 gets awkward if it is larger than the number of cache ways times the page size, since the virtual->physical TLB lookup happens in parallel with the cache index. 8 ways * 4 KB = 32 KB. Apple's ARM cores run with a 16 KB page size: 8 ways * 16 KB = 128 KB.
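To spell out the arithmetic behind that constraint (a toy calculation using only the numbers above):

```
# If the L1 index bits must fit inside the page offset (so the cache lookup
# can start in parallel with the TLB), the largest simple VIPT L1 is
# associativity * page size.
def max_vipt_l1_kb(ways: int, page_kb: int) -> int:
    return ways * page_kb

print(max_vipt_l1_kb(8, 4))   # x86 with 4 KB pages   -> 32 (KB)
print(max_vipt_l1_kb(8, 16))  # Apple with 16 KB pages -> 128 (KB)
```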
Removing L3 frees up transistors to be spent on L1/L2. On a modern processor the vast majority of transistors are spent on caches.
Why this might help, ultimately: the latency for getting something out of L1 or L2 is a lot lower than the latency of L3 or main memory.
That said, this could hurt multithreaded performance. L1/L2 are private to a single core; L3 is shared by all the cores in a package. So if you have a bunch of threads working on the same set of data, having no L3 would mean doing more main memory fetches.
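As a rough illustration of that trade-off, here is a back-of-the-envelope expected-latency comparison with made-up hit rates and latencies (none of these numbers come from the thread or from real measurements):

```
# Expected access latency = sum over cache levels of
# P(request is satisfied at that level) * latency of that level.
def expected_latency(levels):
    # levels: list of (hit_rate_at_this_level, latency_in_cycles);
    # the last level must have hit_rate 1.0 (it catches everything).
    total, reach = 0.0, 1.0
    for hit_rate, latency in levels:
        total += reach * hit_rate * latency
        reach *= 1.0 - hit_rate
    return total

# Hypothetical chip with an L3 catching most L2 misses at ~40 cycles.
with_l3 = expected_latency([(0.95, 4), (0.80, 14), (0.70, 40), (1.0, 120)])
# Hypothetical chip with bigger L1/L2 but no L3: L2 misses go straight to DRAM.
without_l3 = expected_latency([(0.97, 4), (0.90, 16), (1.0, 120)])
print(f"with L3: {with_l3:.1f} cycles, without L3: {without_l3:.1f} cycles")
```

Depending on how much the larger private caches actually raise the hit rates, either design can come out ahead, which is exactly the point being made above.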
If you could get a Mac Pro with 32 to 48 Firestorm + 4 Icestorm cores, tiered memory caching, and expansion to 2 TB+ of DDR4/DDR5 DIMMs, that would be an impressive machine for the small amount of wattage it would draw from the wall.
How does a virtualized ARM build, of Ubuntu for example, run in Parallels vs. the same workload on an x86 virtual machine in the same range?
If my day to day development workflow lives in linux virtual machines 90% of the time, is it worth it to get an M1 for virtualization performance? I realize I'm hijacking but I haven't found any good resources for this kind of information...
This is very dependent on setup. If your IO is mostly done to SR-IOV devices, your perf will be very close to native anyway. The difference would be about the IOMMU (I have no idea if there's a significant difference between the two here). If devices are being emulated, the perf probably has more to do with the implementation of the devices than the platform itself.
Is there an advantage to compiling native vs non-native code? Certainly during execution I would expect that, but I’m not clear why that would be true for compilation.
Agreed that a better benchmark would be compiling for the same target architecture on both.
Non-native can be a bit harder for constant folding (you have to emulate the target’s behavior for floating point, for example), but I think that mostly is a thing of the past because most architectures use the same types.
What can make a difference is the architecture. Examples:
- Register assignment is easier on orthogonal architectures.
- A compiler doesn’t need to spend time looking for auto-vectorization opportunities if the target architecture doesn’t have vector instructions.
Probably more importantly, there can be a difference in how much effort the compiler puts into finding good code. Typically, newer compilers start out with worse code generation that is faster to generate (make it work first, then make it produce good code).
I wouldn’t know whether any of these are an issue in this case.
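As one concrete (if somewhat historical) illustration of the constant-folding point: a cross-compiler can't simply fold floating-point constants with the host's native arithmetic, it has to round the way the target would. A tiny sketch, assuming a target whose `float` is IEEE binary32 while the host folds in double precision:

```
import struct

def fold_as_target_float32(x: float) -> float:
    # Emulate the target's 32-bit float by round-tripping through its encoding.
    return struct.unpack("<f", struct.pack("<f", x))[0]

host_folded = 0.1 + 0.2  # folded with the host's double precision
target_folded = fold_as_target_float32(
    fold_as_target_float32(0.1) + fold_as_target_float32(0.2)
)
print(host_folded, target_folded)  # the two results differ in the low bits
```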
> If using HDMI to my LG 4K display at 60 Hz, the display just blanks out entirely for 2-4 seconds every 5 minutes or so. No clue why.
I have the same problem with a 2018 (or 19? whichever's the latest) x86 Mac Mini connected over HDMI. Somehow I think Apple hasn't tested the Minis a lot with multiple monitors... or maybe with HDCP? Could be some HDCP renegotiation that you can't disable.
Regarding the displays: I use a dual-screen setup on the mini (HDMI and DisplayPort) and it is perfect. So it is either a hardware issue in the cables, the mini, or the monitor. I currently use Monoprice 32-inch HDR monitors.
I've tried two different (known-good) HDMI cables and only have the one DisplayPort cable (which works fine on the i9 MBP and even on my 13" MacBook Air)... it seems to be something funky with the mini only.
At least with the DisplayPort cable, the dropouts don't happen, it's just annoying to have to manually turn off my monitor every time I walk away lest it go into the on/off/on/off cycle while the Mac is asleep.
I did order a CableMatters USB-C to DisplayPort cable today to see if maybe going direct from the USB4->monitor will work better than TB3->CalDigit->DisplayPort->monitor.
Man, I have huge DisplayPort issues when using my Dell 7750 with an external monitor. It can take a couple of reboots before it'll send a signal to it. The OS can see the monitor, but it just won't use it. It's incredibly annoying.
Their DP problem sounds vaguely familiar; I'm almost certain I had the same thing years ago with a 2014 MBP. Can't remember what the fix was, though...
Generally speaking, what is better if you value performance and energy-efficient hardware: an M1- or an Intel-based MacBook Pro? Price-wise, the Intel-based machines seem to be more expensive.
Can an M1 Mac be used to cross-compile binaries for other platforms and systems (Windows, Linux), and would there be a performance gain over a beefy Linux/Windows laptop?
With my luck, Apple's going to release some new devices that will blow the M1 Macs I just bought last week out of the water... that is the way of things, with tech!
I'm still trying to unlearn my fear and trepidation surrounding the use of my laptop while not plugged into the wall. I was always nervous taking the laptop anywhere without the power adapter in hand, because 2-3 hours of video editing or compilation work would kill the battery.
I think it’s pretty likely M2 (or M1X, or whatever they brand it) MacBook Pros will be announced next week at WWDC, given the recent rumors generally coalescing. They may not be released right away but most rumors have suggested a summer release. Not that you should regret your purchase, but for (future) reference it’s a really good idea to check the rumors when considering a non-urgent Apple purchase.
This is surprising. Are we really running out of people who would try to run a datacenter and an Electron app on an Apple laptop and then tell us here how these machines are not for professional users?
macOS though? I don’t feel very productive using their OS.
I would rather have a slightly slower laptop and feel more productive. But I don’t compile anything locally or anything. It’s all in the cloud and stuff.
The M1 is great indeed, but one thing holds true for Apple: never buy a first-gen device. 3rd gen onwards is usually where you see them becoming viable for long-term support.
While the M1 is great, there are clearly issues to be ironed out, even if it's just the limited bandwidth available for peripherals.
I’m also betting on major GPU upgrades over the next 2 generations.
I mean, kind of, but it seems that the main issue here with the M1 is that it's only 30% faster than an i9. If I were buying a new Mac today, I would only consider an M1 system. It seems to be better at literally everything I want to do with it than the Intel equivalent.
While M2 will undoubtedly be better yet, I see no downside to jumping aboard M1 today for most people who aren't running specialized software.
M2/M3 is where they'll likely finalize the majority of their architectural features from a CPU perspective. Just look at what happened to the first-gen Apple devices that used Apple silicon, like the original iPhone or the Apple Watch Series 1.
The M2/M3 is when you'll see an SoC that is finally designed for laptops and desktop computers, and where you'd likely see some additional ISA improvements on the CPU, and on the GPU side too, like hardware ray tracing support, which will surely come.
Plus, I don't think Apple has really released a "Pro" M1 laptop yet. The current M1 MacBook Pro has at most a 13-inch screen, 16 GB RAM, and 2 TB storage, only two Thunderbolt/USB ports, support for only a single external display, and no external GPU support.
If I had to guess I'd say they meant to call this just MacBook but tacked on the Pro since they discontinued the non-Pro line entirely.
There can be no real pro M1 yet. I don't have that many issues with the 16 GB limit, though some people might, but the other limitations are really down to the SoC itself: it doesn't have sufficient bandwidth to support external GPUs, multiple displays, and a lot of high-bandwidth peripherals.
My own personal theory is that the M1 was not originally designed for laptops; I think it was originally intended as an iPad Pro/Pro+ SoC to compete with the higher-end Surface devices. That is likely why external GPU support and bandwidth for peripherals weren't prioritized; what it has is more than enough for a tablet.
I'm not sure Apple really expected to get that much performance out of it from the get-go; when their early samples did, they decided to launch a full line including laptops, despite the rest of the SoC not being designed for that.
The 13" MBP line has been bifurcated for a couple of years; the M1 replaced the low end of that line.
The split started when Apple first tried to replace the MacBook Air with the MacBook Pro sans Touch Bar (aka the "MacBook Escape"), and the low-end Pro hung around even after they reversed course and brought out the new Retina Air.
Never thought of that... but I'll cross my fingers then and see what Apple releases.
These Macs may still be perfect for my needs though. 10G on the mini means I skip the giant external adapter, and the Air doesn't have the dumb Touch Bar.
I really hope Apple can control themselves with these CPUs. The M1 has the perfect thermal envelope for the MacBook Pro. No thermal throttling ever. I greatly fear a future where Apple starts going down Intel's path, where you buy a CPU that looks sick on paper, but once you actually try to do anything with it, it throttles itself into the ground.
Historically, this is one of the reasons Apple went with Intel CPUs to begin with. The PowerPC G5 was a nice processor but never ended up with a thermal envelope acceptable for a laptop. So from 2003 to 2006, you could buy a Mac G5 desktop, but if you wanted a laptop, it was a G4. 2006 was the beginning of the transition to Intel, who made better processors that Apple could put in laptops.
It's not the only reason Apple switched to x86, but it's perhaps the most commonly cited factor.
I complain about how hot the i9 gets... but then I remember the period where Apple was transitioning from G4/G5 to Intel Core 2 Duo chips... in both cases they were _searing_ hot in laptops, and Apple's always fought the battle to keep their laptops thin while sometimes sacrificing anything touching the skin of the laptop (unlike most PC vendors, who are happier adding 10+mm of height to fit in more heat sinks and fans!).
How about running Cinebench R23 and a GPU workload continuously for an hour? I'm willing to bet it will throttle. That little chip you have there is not only a CPU; it's also a GPU. Utilizing half of its functions and then saying it doesn't throttle is not that impressive. Still, there are many Intel laptops that throttle even at half power.
What laptop do you have? If it's a gaming or workstation laptop, those are generally much better cooled than thin-and-lights like MacBook Pros.
Seems to me that Apple has invested a lot in this line of processors over many years and has been in the position to use its own processors in its Macs when Intel faltered. That's an enormous win for them. Where's the part where they give away anything to their competitors in order to get faster processors?