I'm sure you understand the performance differences between a 10W part with integrated graphics designed for a fanless laptop and a desktop part with active cooling and discrete graphics.
This article from Anandtech on the M1 is helpful in understanding why the M1 is so impressive.
I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.
You say something like that about a low power fanless design and every tech nerd's first reaction is "bullshit". And now they want to call you on your bullshit.
"And in MacBook Air, M1 is faster than the chips in 98 percent of PC laptops sold in the past year.1"
There is a subtle difference between "98 percent of laptops sold" and your rephrasing as "2% of laptops you can buy today".
If you doubt the meaning, check out the footnote which refers to "publicly available sales data". You only need sales data if sales volume is a factor in the calculation.
I also don't doubt that "every tech nerd's reaction is 'bullshit'", but only because supremely confident proclamations of universal truth that are soon proven wrong are pretty much the defining trait of that community (cf. various proclamations that solar power is useless, that CSI-style super-resolution image enhancement is "impossible" because "the information was lost", "512k should be enough...", "less space than a Nomad...", and everything Paul Graham has said, ever).
> I also don't doubt that "every tech nerd's reaction is 'bullshit'", but only because supremely confident proclamations of universal truth that are soon proven wrong are pretty much the defining trait of that community (cf. various proclamations that solar power is useless, that CSI-style super-resolution image enhancement is "impossible" because "the information was lost", "512k should be enough...", "less space than a Nomad...", and everything Paul Graham has said, ever).
Notwithstanding the fact that you do have a point here, polemically phrased as it may somewhat ironically be, I just want to point out that the Paul Graham reference is probably not the best example of the "tech nerd community" trait you're describing. At least this particular community doesn't quite believe that everything Paul Graham says is true; a couple of examples:
I could share a lot more HN discussions, and some of his essays where he pretty much describes the trait you're taking issue with here -- but I'm already dangerously close to inadvertently becoming an example of a tech nerd who believes "everything Paul Graham has said, ever" is absolutely true ;) I don't, and I know for a fact that he doesn't think so either (there's an essay about that too).
Grandparent doesn't understand information theory. True superresolution is impossible. ML hallucination is a guess, not actual information recovery. Recovering the information from nowhere breaks the First Law of Thermodynamics. If grandparent can do it, he/she will be immediately awarded the Shannon Award, the Turing Award, and the Nobel Prize for Physics.
True superresolution is impossible, but a heck of a lot of resolution is hidden in video, without resorting to guesses and hallucination.
Tiny camera shakes on a static scene give away more information about that scene; it's effectively multisampling of the static scene. (If I had to hazard a guess, any regular video could "easily" be upscaled 50% without resorting to interpolation or hallucination.)
Our wetware image processing does the same - look at a movie shot at the regular 24fps where people walk around. Their faces look normal. But pause any given frame, and it's likely a blur. (But our wetware image processing likely does hallucination too, so it's maybe not a fair comparison.)
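To make the multisampling idea concrete, here's a minimal sketch, not anyone's actual pipeline: a handful of frames of a static scene, each shifted by a known sub-pixel amount, are dropped onto a 2x finer grid and averaged. The shift values, the grid factor, and the toy scene are all illustrative assumptions; a real method would have to estimate the shifts and deal with noise and motion.

```python
import numpy as np

def stack_frames(frames, offsets, scale=2):
    """Accumulate sub-pixel-shifted low-res frames onto a finer grid and average."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Each low-res pixel lands at a (possibly different) spot on the fine grid,
        # depending on how the camera was shaking for that frame.
        ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
        acc[ys, xs] += frame
        cnt[ys, xs] += 1
    cnt[cnt == 0] = 1          # leave fine-grid cells nobody sampled at zero
    return acc / cnt

# Toy usage: four "captures" of the same 8x8 scene with half-pixel shakes.
rng = np.random.default_rng(0)
scene = rng.random((8, 8))
offsets = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
frames = [scene for _ in offsets]            # stand-ins for the shifted captures
print(stack_frames(frames, offsets).shape)   # (16, 16): more samples, finer grid
```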
It's not temporal interpolation; it's using data from other frames to fill in the current frame. That's not interpolation at all: it's using one accurate data source to augment another.
Super resolution can and does work in some circumstances.
By introducing a new source of information (the memory of the NN) it can reconstruct things it has seen before, and generalise this to new data.
In some cases this means hallucinations, true. But in other times (eg text where the NN has seen the font) it is reconstructing what that font is from memory.
But the thing is, in that case the information contained in the images was actually much less than what we are led to believe.
So if we are reconstructing letters from a known font, we are essentially extracting about 8 bits of information from the image. I'm pretty certain that if you distort the image to an SNR equivalent of below 8 bits, you will not be able to extract the information.
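For what it's worth, the arithmetic behind the "bits per letter" figure is easy to check; the 8-bit number roughly corresponds to a full byte-sized character set, while 26 equally likely letters carry a bit under 5 bits each:

```python
import math

bits_per_letter = math.log2(26)      # ~4.70 bits if it can only be one of 26 letters
bits_per_byte_char = math.log2(256)  # 8 bits if it could be any byte value
print(f"{bits_per_letter:.2f} bits for one of 26 letters")
print(f"{bits_per_byte_char:.0f} bits for one of 256 byte values")
```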
Lossy image compression creates artifacts, which are in a way a form of falsely reconstructed information - information that wasn't there in the original image. Lossless compression algorithms work by reducing redundancy, but don't create information where there wasn't any (which makes them very different from super-resolution algorithms).
Not if it’s written text and you are selecting between 26 different letters. It’s a probabilistic reconstruction, but that’s very different to a hallucination.
You're both right, but they're more right because the subtle difference you mention is the problem they're highlighting: Apple went out of their way to be unclear and create subtle differences in interpretation that would be favorable to Apple, as a company should.
After the odd graphs/numbers from the event, I was worried it was going to be an awful ~2 year period of jumping to conclusions based on:
- Geekbench scores
- "It's slow because Rosetta"
- HW comparisons against ancient hardware because "[the more powerful PC equivalent] uses _too_ much power", implying that "[M1] against 4 year old HW is just the right amount of power", erasing the tradeoff between performance and power consumption
The people claiming this blows Intel/AMD out of the water need to have stronger evidence than comparing against parts launched years ago for budget consumers, then waving away any other alternative based on power consumption.[1]
Trading off performance for power consumption is an inherent property of chip design; refusing to consider other chips because they have a different set of tradeoffs means you're talking about power consumption alone, not the chip design.
[1] n.b. this is irrational because the 4 year old part is likely to use more power than the current part. So, why is it more valid to compare against the 4 year old part? Following this logic, we need to find a low power GPU, not throw out the current part and bring in a 4 year old budget part.
I hate to draw a sweeping conclusion like this without any facts and without elaborating, but I'm late for dinner :( : it's an absolute _nightmare_ of an article, leaving me little hope we'll avoid a year or two of bickering on this.
IMHO it's much more likely UI people will remember it kinda sucks to have 20-hour battery life with a bunch of touch apps than that we'll clear up the gish gallop of Apple's charts and the publications rushing to provide an intellectual basis for them without having _any_ access to the chip under discussion. So they substitute iPhone/iPad chips that can completely trade off thermal concerns for long enough to run a subset of benchmarks, making it look like the performance/power-consumption tradeoff doesn't exist, though it was formerly "basic" to me.
My quote was from Apple's MacBook Air page[1], including the footnote.
Your quote is just not very specific. On its face, it could mean every PC Laptop ever produced. I'm somewhat certain that every Apple computing device ever has beaten that standard. Even Airpods and the Macbook chargers might be getting close these days.
2019 laptop sales are 9.3% of laptop sales for the years 2010-2019 (https://www.statista.com/statistics/272595/global-shipments-...). The phrase "faster than 98% of PC laptops" in itself is very general, and it's fair to assume that it means e.g. all laptops currently in use, or ever made - in part because this is a sales pitch rather than a technical specification item. If we add 1990-2009 laptops to the statistic above, the share of purportedly modern laptops will only shrink, and substantially at that.
Then you have to add on top of that the consideration of which laptops people are actually buying and using. I can assure you that neither enterprise clients nor anyone outside of a select, relatively spoiled group of people concentrated in just a few areas is regularly buying top-of-the-line workstation or gaming laptops.
The parent is very much right, amongst techies there's an unhealthy tendency for self-righteous adjudication of the narrative or context.
>I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.
Exactly. It's a meaningless number.
They also conspicuously avoid posting any GHz information of any kind. My assumption is that it's a fine laptop, but a bullshit performance claim.
The clock speed of ARM chips with big.LITTLE cores is not very meaningful. The LITTLE cores can run at lower frequencies than the big cores. The Apple A series (and, I'll say with some confidence, the M series) supports heterogeneous operation, so both sets of cores can be active at once. The big cores can also be powered off if there are no high-priority/high-power tasks.
The cores and frequency can scale up and down quickly so there's not a meaningful clock speed measure. The best number you'll get is the maximum frequency and that's still not useful for comparing ARM chips to x86.
Even Intel's chips don't have frequency measures that are all that useful. TurboBoost kicks in for some workloads, but only within a certain thermal envelope. Even the max conventional clock for a chip varies depending on thermals.
I don't think it's worthwhile faulting Apple for not harping on the M series clock speeds since they're a vector instead of a scalar value.
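As a toy illustration of "a vector instead of a scalar": each core cluster has its own frequency range that the OS scales continuously, so there is no single honest GHz number to print on the box. The figures below are placeholders for illustration, not Apple's published values.

```python
# Placeholder numbers for illustration only (not Apple's actual specs).
clusters = {
    "performance cores": (0.6, 3.2),   # (min GHz, max GHz)
    "efficiency cores":  (0.6, 2.0),
}
for name, (lo, hi) in clusters.items():
    print(f"{name}: {lo}-{hi} GHz, scaled up and down by the OS as load changes")
```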
Nobody is “faulting” Apple. They tend to avoid saying things for weird PR reasons.
In this case, they make impressive sounding, but almost completely meaningless assertions about performance. If anything, they are underselling an impressive achievement.
Intel does provide performance ranges in thermally constrained packages that are meaningful.
The frequency matters because it would give a far better insight into expected boost behavior & power consumption.
For example, when you look up the 15W i5-1035G1 and see 1GHz base, 3.6GHz boost, you can figure out that no, those Geekbench results are not representative of the chip running at 15W. It's pulling way, way more because it's in the turbo window, and the gap between base & turbo is huuuuuge.
So right now when Apple claims the M1 is 10w, we really have no context for that. Is it actually capped at 10w? Or is it 10w sustained like Intel's TDP measurements? How much faster is the M1 over the A14? How different is the M1's performance in the fanless Air vs. the fan MBP 13"?
Frequency would give insights into most all of that. It's not useless.
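As a rough sketch of why the base/boost gap matters: dynamic power scales roughly with frequency times voltage squared, so a part rated 15W at its 1GHz base clock is drawing several times that while it sits at 3.6GHz in the turbo window. The voltages below are assumptions purely for illustration, not measured i5-1035G1 values.

```python
base_f, boost_f = 1.0, 3.6        # GHz, from the spec sheet quoted above
base_v, boost_v = 0.70, 1.05      # volts, assumed purely for the sketch

freq_ratio = boost_f / base_f                         # 3.6x higher clock
power_ratio = freq_ratio * (boost_v / base_v) ** 2    # ~8x higher dynamic power
print(f"clock ratio {freq_ratio:.1f}x, rough dynamic-power ratio {power_ratio:.1f}x")
```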
The GeekBench tests have revealed that the clock frequency of the M1 cores is 3.2 GHz, both in MB Air and in MB Pro.
Therefore the M1 manages a slightly higher single-thread performance (between 3% and 8%) than Intel Tiger Lake and AMD Zen 3 at only about 2/3 of their clock frequency.
The M1's single-thread speed advantage is not large enough to be noticeable in practice, but what counts is that reaching that performance with high IPC at a low frequency, instead of low IPC at a high frequency, means matching the competitors at a much lower power consumption.
This is the real achievement of Apple and not the ridiculous claims about M1 being 3 times faster than obsolete and slow older products.
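A quick sanity check of the IPC implication using only the numbers above: equal-or-slightly-better single-thread performance at roughly 2/3 of the competitors' clock implies around 1.5x the per-clock throughput. The 5% figure is just the assumed midpoint of the 3-8% range quoted above.

```python
m1_clock = 3.2        # GHz, from the Geekbench results mentioned above
rival_clock = 4.8     # GHz, i.e. the M1 runs at ~2/3 of this
speedup = 1.05        # assumed midpoint of the 3-8% single-thread advantage

implied_ipc_ratio = speedup * (rival_clock / m1_clock)
print(f"implied per-clock advantage: ~{implied_ipc_ratio:.2f}x")   # ~1.57x
```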
Geekbench doesn't monitor the frequency over time. We don't know what the M1 actually ran at during the test, nor what it can boost to. Or if it has a boost behavior at all even.
It is good to be skeptical of that "faster" and that 2% measurement, because there are lots of opportunities to make them ambiguous. But clock frequencies are hardly useful for comparisons between AMD and Intel. They'd be even more useless across architectures.
Benchmarks are as good as it gets. Aggregate benchmarks used for marketing slides are less than ideal, but the problem with marketing is that it has to apply to everyone. Better workload focused benchmarks might come out later... In their defense most people won't look at these, because most people don't really stress their CPUs anyway.
> They also conspicuously avoid posting any GHz information of any kind
Is that information actually of any real use when dealing with a machine with asymmetric cores plus various other bits on the chip dedicated to specific tasks [1]?
What does GHz have to do with performance across chips? I'm reminded of the old days when AMD chips ran at a lower GHz and outperformed Intel chips running much faster. Intel had marketed that GHz === faster, so AMD had to work to get around existing biases.
Even Intel had to fight against it when they transitioned from the P4 designs to Core. They began releasing much lower clocked chips in the same product matrix and it took a while for people to finally accept that frequency is a poor metric for comparing across even the same company’s designs. And I think Apple also significantly contributed to this problem in the mid-late PPC days with “megahertz myth” marketing coupled with increasingly obvious misleading performance claims.
GHz is also a fancy meaningless number that was spawned by Intel's marketing machine. The Ryzen 5000 chips cannot hit the 5GHz mark that Intel ones can, and yet the Ryzen 5000 chips thrashed the Intel ones in every benchmark done by everyone on youtube.
And that's when both chips are on the same x86 platform. Differences in architecture can never be, and should never be, compared with GHz as any kind of basis.
The only place where GHz is useful for comparison is between products running the same chip.
Why is the clock speed relevant? Yes, it's lower than the highest-end x86 chips, but the performance per clock is drastically higher. They don't want clueless consumers thinking clock speed alone determines performance.
I don't know if you've actually done the numbers, but most laptops on the market have low to mediocre specs. It would surprise me if more than 2% are pro/enthusiast.
Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.
As a Joe Schmoe it's hard to get good figures, but it appears the total laptop market is about $161.952B[1] with the "gaming" laptop segment selling about $10.96B[2]. Since gaming laptops are more expensive this undercounts cheap laptops, but there are other classes of laptop that are going to outperform this mac, like business workstations.
There might be one way to massage the numbers to pull out that statistic somehow, but it is at best misleading.
>Apple didn't specify if they're counting by model or total sales,
If it were by model, they would be spinning it. But they said sold in the past year. I don't know how anyone else would interpret it, but in finance and analytics that very clearly implies units sold.
Your [1] counts laptops together with tablets; the total laptop market is about $100B, although this year we might see a sharp increase because of the pandemic.
Let's say there is a $10B gaming laptop market. So roughly 10% of the market value goes to gaming laptops. The total laptop market includes Chromebooks, so if you do ASP averaging I would expect at least a 3x (if not 4x or higher) difference in average selling price between gaming laptops and the rest of the market. So your hypothesis that "all gaming laptops" are faster than the M1 gives you roughly 3.3% of the market by units. Not too far off 2%.
And all of that is before we put any performance number into comparison.
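Spelling out that back-of-the-envelope arithmetic (the dollar figures and the 3x ASP ratio are the assumptions stated above, not measured data):

```python
total_market = 100e9      # ~$100B total laptop market (assumed above)
gaming_market = 10e9      # ~$10B gaming laptop segment (assumed above)
asp_ratio = 3.0           # gaming laptops assumed ~3x the average selling price

value_share = gaming_market / total_market   # share of dollars spent
unit_share = value_share / asp_ratio         # rough share of units sold
print(f"value share: {value_share:.0%}, unit share: ~{unit_share:.1%}")
```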
On the fact that discrete gaming laptops have higher power requirements and better cooling solutions, in turn allowing much faster CPUs to run in them.
That's the most meaningful constraint for mobile CPUs today, after all.
If you're not going to compare Apples to Apples, i.e. if power, cooling and size is a constraint you're not going to care about at all, you might as well count desktop PCs as well.
Apple's measurement comes pretty close to comparing "laptops that most people would actually buy". Not sure why it's meaningful that a laptop maker can put out a model that's as thick as a college book, has the very top bins of all parts, sounds like a jet engine when running at max speed, and is purchased by 1% of the most avid gamers.
Oh, and if someone puts out a second model that adds an RGB backlit keyboard but is otherwise equivalent, that should somehow count against Apple's achievements, because for some reason counting by number of models is meaningful regardless of how many copies of that model sold o_O
So no data except a view that more power and more cooling automatically leads to better performance independent of process, architecture and any other factors?
> Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.
This is what I don't get: why would you ever assume they meant counting by model? That's a nearly meaningless measurement. How do you even distinguish between models in that measurement? Where do you set the thresholds? The supercharged gaming laptops are absolutely going to dominate that statistic no matter what, because there's a huge number of models, lots of which differ mostly by cosmetic changes. The margins are likely higher, so they don't need to sell as many of a given model to make it worthwhile to put one out. Does a laptop maker even have to sell a single unit of a model for it to count? How many do they have to sell for it to count? Does it make sense to count models where every part is picked from the best-performing bins, so that you're guaranteed the model couldn't account for more than a fraction of sales?
Counting by number of laptops actually sold is the only meaningful measurement, at least you have a decent chance of finding an objective way to measure that.
And I thought it was 100% obvious from Apple's marketing material what they meant, so I really don't get why anyone is confused about this.
> Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.
Not a single one will beat it in single core performance.
I suspect they're not technically saying that, since they're saying "PC laptops." But as Coldtea notes, it's pretty clear the M1-based laptops embarrass all current Intel-based Mac laptops. I'm just not going to fault Apple too much for failing to explicitly say "so this $999 MacBook Air just smokes our $2800 MacBook Pro."
I just had the thought that the figures could be skewed by education departments all over the country making bulk orders for cheap laptops for students doing remote learning.
It beats Apple’s own top of the line intel laptop at a fraction of the cost. According to the tests which have surfaced. That ought to count for something.
Apple has not made pure fantasy claims in the past so why should they now? The trend has been clear. Their ARM chips have made really rapid performance gains.
We don’t even have a significant quantity of people with hardware in hand yet so I’d like to reserve judgement.
At best we have some trite stats about single core performance but I’m interested to see whether or not this maps to reality on some much harder workloads that run a long time. Cinebench is an important one for me...
Every true tech nerd knows this is fucking impressive ever since iPhones started being benchmarked against Intel and every true tech nerd can tell that this is only going to get better. :)
Yep, people who actually care about tech and hardware are applauding this. Anti-Apple and x86 fanboys are the ones doing everything they can to discount it.
>I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.
No, it's faster than all those laptops, not just "fanless" ones. It's actually faster than several fan-using i9 models. And it has already been compared to those.
The grandparent talks about comparisons for GPUs...
> I'm sure you understand the performance differences between a 10W part with integrated graphics designed for a fanless laptop and a desktop part with active cooling and discrete graphics.
The problem here is that the obvious competition is Zen 3, but AMD has released the desktop part and not the laptop part while Apple has released the laptop part and not the desktop part. (Technically the Mini is a desktop, but you know what I mean.)
However, the extra TDP has minimal effect on single thread performance because a single thread won't use the whole desktop TDP. Compare the laptop vs. desktop Zen 2 APUs:
Around 5% apart on single thread due to slightly higher boost clock, sometimes not even that. Much larger differences in the multi-threaded tests because that's where the higher TDP gets you a much higher base clock.
So comparing the single thread performance to desktop parts isn't that unreasonable. The real problem is that we don't have any real-world benchmarks yet, just synthetic dreck like geekbench.
I'm not so sure the fellow does understand that difference.
Their take reminds me of the Far Side cartoon, where the dog is mowing the lawn, a little irregularly, and a guy is yelling at him, "You call that mowing the lawn?"[1]
This is true, but I think a lot of folk are assuming that a future M2 or M3 will be able to scale up to higher wattage and match state-of-the-art enthusiast-class chips. That assumption is very much yet to be proven.
> This is true, but I think a lot of folk are assuming that a future M2 or M3 will be able to scale up to higher wattage and match state-of-the-art enthusiast-class chips.
Apple wouldn't go down this path if they weren't confident that their designs would scale and keep them in the performance lead for a long time.
Look, the Mac grossed $9 billion last quarter, more than the iPad ($6.7 billion) and more than Apple Watch ($7.8 billion). They've no doubt invested a lot of time and money into this; there's no way, now that they've jettisoned Intel, they haven't gamed this entire thing out. There's too much riding on this.
Yes, Apple's entry level laptops smoke much more expensive Intel-based laptops. But wait until the replacements for the 16-inch MacBook Pro and the iMac and iMac Pros are released.
By then, the geek world would have gone through all phases of grief—we seem deep into denial right now, with some anger creeping in.
> Apple wouldn't go down this path if they weren't confident that their designs would scale and keep them in the performance lead for a long time.
There are two reasons to think this might not be the case. The first is that they could justify continuing to do this to their shareholders based solely on the cost savings from not paying margins to Intel, even if the performance is only the same and not better. Their customers might not appreciate having the transition dumped on them in that case, but Apple has a specific relationship with their customers.
And the second is that these things have a long lead time. They made the call to do this at a time when Intel was at once stagnant and the holder of the performance crown. Intel is still stagnant but now they have to contend with AMD. And with whatever Intel's response to AMD is going to be now that they've finally got an existential fire lit under them again.
So it was reasonable for them to expect to beat Intel's 14nm++++++ with TSMC's 5nm, but what happens now that AMD is no longer dead and is using TSMC too?
> The first is that they could justify continuing to do this to their shareholders based solely on the cost savings from not paying margins to Intel, even if the performance is only the same and not better.
You know Apple's market capitalization is a little over $2 trillion, right? And Apple's gross margins have been in the 30-35% range for many years. This isn't a shareholder issue. They are by far the most profitable computer/gadget manufacturer around.
> So it was reasonable for them to expect to beat Intel's 14nm++++++ with TSMC's 5nm, but what happens now that AMD is no longer dead and is using TSMC too?
No matter what AMD does in the short term, they're not going to beat the performance per watt of the M1, let alone the graphics, the Neural Engine and the rest of components of Apple's SoC. It's not just 14nm vs. 5nm; it's also ARM’s architecture vs. x86-64.
Apple has scaled A series production for more than a decade and nobody has caught them yet in iPhone/iPad performance. There were 64-bit iPhones for at least a year before Qualcomm and other ARM licensees could catch up.
There's no evidence or reason to believe it'll be any different with the M series in laptops and desktops.
> You know Apple's market capitalization is a little over $2 trillion, right?
That's the issue. Shareholders always want to see growth, but when you're that big, how do you do that? There isn't much uncaptured customer base left while they're charging premium prices, but offering lower-priced macOS/iOS devices would cannibalize the margins on existing sales. Solution: Increase the margins on existing sales without changing prices so that profitability increases at the same sales volume.
> No matter what AMD does in the short term, they're not going to beat the performance per watt of the M1
Zen 3 mobile APUs aren't out yet, but multiply the performance of the Zen 2-based Ryzen 7 4800U by the 20% gain from Zen 3 and the multi-threaded performance (i.e. the thing power efficiency is relevant to) is already there, and that's with Zen 3 on 7nm while Apple is using 5nm.
> it's also ARM’s architecture vs. x86-64.
The architecture is basically irrelevant. ARM architecture devices were traditionally designed to prioritize low power consumption over performance whereas x86-64 devices the opposite, but that isn't a characteristic of the ISA, it's just the design considerations of the target market.
And that distinction is disappearing now that everything is moving toward high core counts where the name of the game is performance per watt, because that's how you get more performance into the same power envelope. Epyc 7702 has a 200W TDP but that's what allows it to have 64 cores; it's only ~3W/core.
> Apple has scaled A series production for more than a decade and nobody has caught them yet in iPhone/iPad performance.
Ryzen 7 4800U has eight large cores vs 4 large plus 4 small in the M1 and even with your (hypothetical) 20% uplift multicore is just about matching M1. Single core is nowhere near as good.
'Architecture is basically irrelevant' - it may not be the biggest factor, but it's not irrelevant: x64 still has to support all those legacy modes and has a more complex front end.
You're working very hard to try to deny that Apple has passed AMD and Intel in this bit of the market. We'll have to see what happens at higher TDPs but they clearly have the architecture and process access to do very well.
> Ryzen 7 4800U has eight large cores vs 4 large plus 4 small in the M1 and even with your (hypothetical) 20% uplift multicore is just about matching M1.
We're talking about performance per watt. The little cores aren't a disadvantage there -- that's what they're designed for. They use less power than the big cores, allowing the big cores to consume more than half of the power budget and run at higher clocks, but the little cores still exist and do work at high power efficiency. It would actually be a credit to AMD to reach similar efficiency with entirely big cores and on an older process.
> Single core is nowhere near as good.
Geekbench shows Zen 3 as >25% faster than Zen 2 for single thread. Basically everything else shows it as ~20% faster. Geekbench is ridiculous.
> 'Architecture is basically irrelevant' not the biggest factor but not irrelevant - x64 still has to support all those legacy modes and has more complex front end.
This is the same argument people were making twenty years ago about why RISC architectures would overtake x86. They didn't. The transistors dedicated to those aspects of instruction decoding are a smaller percentage of the die today than they were in those days.
> No idea what Qualcomm has to do with this.
The claim was made that Apple has kept ahead of Qualcomm. But Intel and AMD have kept ahead of Qualcomm too, so that isn't saying much.
> You're working very hard to try to deny that Apple has passed AMD and Intel in this bit of the market.
People are working very hard to try to assert that Apple has passed AMD and Intel in this bit of the market. We still don't have any decent benchmarks to know one way or the other.
Half the reason I'm expecting this to be over-hyped is that we keep getting synthetic Geekbench results and not real results from real benchmarks of applications people actually use, which you would think Apple would be touting left and right if they were favorable.
We'll find out soon enough how things stand, but just to point out that your first comment on small vs large cores really doesn't work - the benchmarks being quoted are absolute performance, not performance-per-watt, benchmarks. Small cores are more power efficient, but they do less in a given period of time and hence benchmark lower.
AMD should easily beat Apple in graphics, all they have to do is switch to the latest Navi/RDNA2 microarchitecture. They are collaborating with Samsung on bringing Radeon into mobile devices, surely that will translate into efficiency improvements for laptops too.
> ARM’s architecture vs. x86-64
x86 will always need more power spent on instruction decode, sure, but it's not a huge amount.
Perhaps you haven't read the Anandtech article? [1]
> Intel has stagnated itself out of the market, and has lost a major customer today. AMD has shown lots of progress lately, however it'll be incredibly hard to catch up to Apple's power efficiency. If Apple's performance trajectory continues at this pace, the x86 performance crown might never be regained.
I suspect the opposite. It is possible that Apple has decided that 640 KB is enough for everyone — that the performance level of a tablet is what most common people need from a computer. Which is not entirely untrue, as a decent PC from 10 years ago is still suitable for most common tasks today if you are not into gaming, or virtual machines building other virtual machines. Most people don't really use all their cores and gigabytes. Also, consumers got used to smartphone limitations, so a computer can now be presented as an overall "bigger and better" mobile device with a better keyboard, bigger drive, infinite battery, etc.
If they wanted raw performance in general code, they would stay with what they already had. The switch means that their goal was different.
I guess we'll see hordes of fans defending that decision quite soon.
> I suspect the opposite. It is possible that Apple has decided that 640 KB is enough for everyone — that the performance level of a tablet is what most common people need from a computer.
Perhaps you haven't been paying attention but this is Apple's third processor transition: 68K to PowerPC to Intel to ARM. Each time was to push the envelope of performance and to have a roadmap that wouldn't limit them in the future.
When the first PowerPC-based Macs shipped, they were so fast compared to what was available at the time, they couldn't be exported to North Korea, China or Iran; they were classified as a type of weapon [1].
The fact the PowerMac G4 was too fast to export at the time was even part of Apple's advertising in the 90s [2].
It's always been part of Apple's DNA to stay on the leading edge, especially with performance.
Apple's strategy has never been to settle for good enough. If that were the case, they wouldn't have spent the last 10+ years designing their own processors and ASICs. Dell, HP, Acer, etc. just put commodity parts in a case and ship lowest-common-denominator hardware. It shouldn't be a surprise that the M1 MacBook Air blows these guys out of the water.
Anyone paying attention saw this coming a mile away.
I have a quad-core 3.4 GHz Intel iMac and it's pretty clear the MacBook Pro with the M1 is going to be noticeably faster for some, if not all, of the common things I do as a web developer.
We know the M2 and the M3 are in development; I suspect 2021 will really be the year of shock and awe when the desktops start shipping.
There seems to be no evidence that Intel will be able to keep up with Apple. The early Geekbench results show the M1 laptops beating even the high-end Intel Mac ones. And that's with their most thermally constrained chip.
Apple will be releasing something like a M1X next, which will probably have way more cores and some other differences. But this M1 is incredibly impressive for this class of device. Intel has nothing near it to compete in this space.
The bigger question is how well does Apple keep up with AMD and Nvidia for GPUs and will they allow discrete GPUs.
Indeed, but given they are on TSMC 5nm and the apparent strength of the architecture and their team I think most will be inclined to give them the benefit of the doubt for the moment.
Actually the biggest worry might be the economics - given their low volumes at the highest end (Mac Pro etc.), how do they have the volumes to justify investing in building these CPUs?
I suspect the plan is to redefine computing with applications that integrate GPU (aka massively parallel vector math), plain old Intel-style integer and floating point, and some form of ML acceleration.
So multiple superfast cores are less important for - say - audio/video if much of the processing is being handled by the GPU, or even by the ML system.
This is a difference in kind not a difference in speed, because plain old x86/64 etc isn't optimal for this.
It's a little like the next level of the multimedia PC of the mid-90s. Instead of playing video and sound the goal is to create new kinds of smart immersive experiences.
Nvidia and AMD are kinda sorta playing around the edges of the same space, but I think Apple is going to try to own it. And it's a conscious long-term goal, while the competition is still thinking of specific hardware steppings and isn't quite putting the pieces together.
Good point. Apple dominates a unique workload mix brought on by the convergence of mobile and portable computing. They can benchmark this workload mix through very different system designs.
Probably nothing to stop them running linux on M series chips. I'd be a bit surprised actually - suspect we'll see something like a 32 Core CPU which will go into the higher end machines (maybe 2 in the Mac Pros).
The point of a computer as a workstation is it goes vroom. Computer that does not go vroom will not be effective for use cases where computer has to go vroom. It doesn't matter if battery life is longer or case is thinner. That won't decrease compile times or improve render performance.
> The point of a computer as a workstation is it goes vroom.
The M1 is not currently in any workstation class computer.
It is in a budget desktop computer, a throw-it-in-your-bag travel computer, and a low-end laptop.
When an M series chip can't perform in a workstation class computer, then your argument will be valid. But you're trying to compare a VW bug with a Porsche because they look similar.
The "low-end laptop" starts at $1300, is labeled a Macbook Pro, and their marketing material states:
"The 8-core CPU, when paired with the MacBook Pro’s active cooling system, is up to 2.8x faster than the previous generation, delivering game-changing performance when compiling code, transcoding video, editing high-resolution photos, and more"
> It is in a budget desktop computer, a throw-it-in-your-bag travel computer, and a low-end laptop.
I took "budget desktop computer" to be the Mac Mini, "throw-it-in-your-bag travel computer" to be the Macbook Pro, and "a low-end laptop" to be the Macbook Air.
But I agree - the 13" is billed as a workstation and used as such by a huge portion of the tech industry, to say nothing of any others.
None of those are traditional Mac workstation workloads. No mention of rendering audio/video projects, for example. These are not the workloads Apple mentions when it wants to emphasize industry-leading power. (I mean, really, color grading?)
This MBP13 is a replacement for the previous MBP13; but the previous MBP13 was not a workstation either. It was a slightly-less-thermally-constrained thin-and-light. It existed almost entirely to be “the Air, but with cooling bolted on until it achieves the performance Intel originally promised us we could achieve in the Air’s thermal envelope.”
Note that, now that Apple are mostly free of that thermal constraint, the MBA and MBP13 are near-identical. Very likely the MBP13 is going away, and this release was just to satisfy corporate-leasing upgrade paths.
"workstation class" is a made up marketing word. Previous generation macbooks were all used for various workloads and absolutely used as portable workstations. Youre moving the goalposts.
Ah but according to the Official Category Consortium you’ve just eliminated several products[1] which would presumably be included if the “mobile workstation” moniker was designated based on workload capabilities.
[1]: including the 16” MBP, but certainly not limited to it
Laptop used to be a form factor (it fits on your lap), while very light, very small laptops were in the notebook and subnotebook (or ultraportable) category.
I usually think of "subnotebook" as implying the keyboard is undersized; a thin and light machine that is still wide enough for a standard keyboard layout is something else.
I think we should bring back the term "luggable" for those mobile workstations and gaming notebooks that are hot, heavy, and have 2 hours or less of battery life.
A docked laptop is. With the benefit that if you want to work on the road, you can take it without having to think about replicating your setup and copying data over.
Then what is a macbook for? Expensive web browsing? I've been told for a long time that macbooks are for work. Programmers all over use them, surely. Suddenly now none of that applies? To get proper performance you have to buy the mac desktop for triple the price?
This article from Anandtech on the M1 is helpful in understanding why the M1 is so impressive.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...