Macintel: The End Is Nigh (mondaynote.com)
166 points by donmcc on Aug 3, 2014 | 152 comments



What would happen to the cost, battery life, and size of an A10-powered MacBook Air?

It would be worse in almost every aspect.

The cost wouldn't change much, and Apple wouldn't profit much more from the switch. The cited CPU cost is the suggested retail price. Apple's volume lets them negotiate huge discounts.

Battery life would go up a bit, but battery life is already pretty good on Apple laptops. My 2013 11" Air gets 8 hours easily, but can go as long as 12 or as short as 2 depending on screen brightness and CPU usage. Unless you're keeping the CPU busy, the screen is the biggest consumer of power. Cutting CPU power consumption in half would only increase battery life by 20-ish%.
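Back-of-the-envelope version of that last claim; the screen/CPU watt split below is my guess, not a measurement:

  /* Halving CPU power only shrinks the CPU's slice of the total draw.
     All watt figures here are illustrative assumptions. */
  #include <stdio.h>

  int main(void) {
      double battery_wh = 38.0;  /* 11" Air battery, watt-hours       */
      double rest_w     = 4.0;   /* assumed screen + rest-of-system   */
      double cpu_w      = 2.0;   /* assumed average CPU package draw  */

      double before = battery_wh / (rest_w + cpu_w);
      double after  = battery_wh / (rest_w + cpu_w / 2.0);
      printf("before: %.1f h, after: %.1f h (+%.0f%%)\n",
             before, after, (after / before - 1.0) * 100.0);
      return 0;
  }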

The PowerPC to Intel transition worked for several reasons. Most importantly, Intel CPUs were much faster than PowerPC. In everyday usage, the fastest ARM chips are 10x slower than a quad-core Haswell. Moving to x86 also added capabilities, allowing Macs to dual-boot Windows and efficiently virtualize other x86-based OSes. Switching to ARM would backtrack on both fronts. An ARM-based Apple laptop would have no Boot Camp, no Windows virtualization, and no efficient emulation of legacy applications.

CPUs are only a small part of why tablets have longer battery life than laptops. Tablets have no keyboard, trackpad, or hinge, so they can basically be a giant battery with a screen attached. The iPad has a bigger battery than the 11" MacBook Air, despite the Air weighing 50% more and taking up 30% more volume. (Edit: This has recently changed. The 11" Air has a 38 watt-hour battery. The iPad 3 and 4 had a 42 watt-hour battery, but the iPad Air has a 32 watt-hour battery. Still, it's even smaller than the earlier iPads, massing less than half the 11" Air.)

In short, it doesn't seem worthwhile to do all this work and sacrifice so much performance for some incremental increases in battery life and profit.


You say that ARM chips are 10x slower than Haswells, but the difference isn't actually that large anymore: for Apple's A7 it's around 3x versus a normal desktop chip like the 4770 and only 1.7x versus the 4250U you'd find in a MacBook Air[1]. Diminishing returns is a constant factor in engineering and I'd expect it to be very hard for Apple to make up the remaining distance between themselves and Intel, but we shouldn't exaggerate how large that difference is. And it is possible that some hypothetical A8 could be made competitive with Intel in the 15W range due to not having to make the compromises required to hit 4 GHz.

I do agree that the ~10W you'd save by using an A7 versus a 4250U in a MacBook Air would be a false economy.

EDIT: To explain that bit about targeted power ranges a bit more: to hit high frequencies at a given process node (like 45 nm or whatever) you need to break up your logic into a fairly high number of simple stages separated by clock-driven latches. A core that doesn't worry about hitting high frequencies can divide up its logic into fewer stages, simplifying design and requiring fewer transistors for latching.

[1]http://www.computingcompendium.com/p/arm-vs-intel-benchmarks...
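A toy model of the stage-count tradeoff from that EDIT, with made-up delay numbers just to show the shape of it:

  /* Max clock is set by the slowest stage: (total logic delay / stages)
     plus per-stage latch overhead. All delays here are invented.       */
  #include <stdio.h>

  int main(void) {
      double logic_ns = 10.0;  /* total combinational delay of the core */
      double latch_ns = 0.1;   /* per-stage latch/flop overhead         */
      int stages[] = {5, 10, 20};

      for (int i = 0; i < 3; i++) {
          double cycle_ns = logic_ns / stages[i] + latch_ns;
          printf("%2d stages -> %.2f GHz\n", stages[i], 1.0 / cycle_ns);
      }
      return 0;
  }

More stages buys frequency, but every extra stage is another row of latches: more transistors, and more work thrown away on a branch miss.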


I think GeekBench is a poor benchmark, since it's measuring peak CPU performance in a narrow domain, but the table you cited shows that the 4770 is 6.5x faster than the A7 (16774 vs 2564). Even at the same frequency and core count, Haswell is 1.8x faster in the benchmark.

Whatever the exact number, it doesn't invalidate the point I was making: Switching to ARM will ruin performance for x86 programs, which need to be emulated. ARM has to be significantly faster than x86, or else the emulation overhead will make users hate the transition.

This is a huge challenge for the Mac Pro and rMBP, which are all about performance. Generally, a given microarchitecture can only scale TDP by a factor of 10 by changing frequency and number of cores. Satisfying the pro line would require a new microarchitecture. Considering an x86 to ARM transition would take 1-2 years, that's asking a lot of Apple's chip designers. If Apple was going to go through all the trouble of building a new ARM core for high-performance, high-TDP applications, they might as well build their own x86 CPU. It would save a transition and the necessary emulation overhead. They could roll it out for the MacBook Airs first, then address the Pro line at their leisure (or not and just use Intel CPUs).


I was looking at the single-threaded numbers because those are usually more relevant to what you would be doing on most MacBooks.


Citation needed.


https://en.wikipedia.org/wiki/Amdahl%27s_law perhaps. Single-threaded speed is most relevant because it's difficult to make software take advantage of multiple cores.
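For reference, the formula: with a parallel fraction p and n cores, speedup = 1 / ((1 - p) + p/n). A quick table (p = 0.75 is just an example value):

  /* Amdahl's law: even with unlimited cores, speedup is capped
     at 1 / (1 - p), so the serial part dominates.              */
  #include <stdio.h>

  int main(void) {
      double p = 0.75;                 /* assumed parallel fraction */
      int cores[] = {1, 2, 4, 8, 1000};

      for (int i = 0; i < 5; i++) {
          double s = 1.0 / ((1.0 - p) + p / cores[i]);
          printf("%4d cores -> %.2fx\n", cores[i], s);
      }
      return 0;
  }

With 75% parallel code you never get past 4x no matter how many cores you throw at it, which is why single-threaded speed still matters so much.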


Which is why I think some of where AMD is looking to go is very interesting.


> for Apple's A7 it's around 3x versus a normal desktop chip like the 4770 and only 1.7x versus the 4250U you'd find in a MacBook Air

That would still be noticeably slower for native software, and unbearable for anything run under emulation. It would also be very painful for use cases such as people wanting occasional Windows applications or Linux VMs.


Right, using an A7 in a Macbook would be totally crazy. The idea is that Apple might create an A8 that's about as fast as the current Macbook Air and use that. Getting as much performance increase going from A7 -> A8 as they did going A6 -> A7 strikes me as implausible, and it would probably be a bad idea to switch architectures anyways. But it's not totally crazy.


The difference is actually smaller, since we're comparing to a 1.3 GHz Air, not a 3.3 GHz (or whatever) desktop chip.


The 4250U I mentioned is the one in the 1.3 GHz Air.


You're making the incorrect assumption that Apple would in fact emulate. As they control the whole stack, they can (relatively) quickly switch architectures. They'd lose the market that does in fact install Windows on Mac hardware, though that is likely a negligible niche. So for the vast majority of their customers there would not be a performance penalty. Quite the opposite.

Apple chips are in terms of tech already competitive with anything Intel has to offer [1], though of course currently optimised for mobile. Introduce a dedicated chip for macOS without the heat and power restrictions and Apple won't be chained to Intel any more. It will save the extra $50-500 they pay Intel for every chip. On top of that, the battery will be smaller/cheaper; it will require fewer external chips as the SoC will incorporate this functionality (also cheaper, more control); the retina graphics will be driven by a chip that is adequately powered and optimised (Intel's graphics chips are extremely slow): all of a sudden the whole package is extremely compelling.

All you lose is windows compatibility. A small downside to multiple big upsides.

[1] http://www.anandtech.com/show/7910/apples-cyclone-microarchi...


No. You also lose compatibility with every game and most major applications like Adobe, since all of these use lots of highly optimized, handwritten assembler code for critical sections, as well as all code paths using x86 vector instructions or other hw acceleration.

Thanks but no thanks.


I don't think they would snap their fingers and every Mac they sell would abruptly switch to ARM. If they chose to go with ARM they would almost certainly move more gradually and come out with only one or two ARM machines at first. Maybe an iPad "Pro" or something like that, which leverages all the software already written for the iPad and iPhone, has a detachable keyboard, and can also run Mac software if it's recompiled... something like that.


You're making a big assumption. Sure, some apps have handwritten assembly, but that's pretty rare these days. Apple's Accelerate framework (which has been around since the PowerPC days) allows apps that need high performance in certain areas to do that without resorting to assembly:

>This document describes the Accelerate Framework, which contains C APIs for vector and matrix math, digital signal processing, large number handling, and image processing.

https://developer.apple.com/library/mac/documentation/Accele...
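For example, a trivial vDSP call (this should build on a Mac with "clang file.c -framework Accelerate"); the same source gets Apple's tuned SIMD implementation whether the hardware is x86 or ARM:

  /* Minimal Accelerate example: element-wise vector add via vDSP.
     Nothing here is tied to a particular instruction set; Apple
     supplies the optimized implementation behind the call.       */
  #include <stdio.h>
  #include <Accelerate/Accelerate.h>

  int main(void) {
      float a[4] = {1, 2, 3, 4};
      float b[4] = {10, 20, 30, 40};
      float c[4];

      vDSP_vadd(a, 1, b, 1, c, 1, 4);   /* c[i] = a[i] + b[i] */

      for (int i = 0; i < 4; i++)
          printf("%.0f ", c[i]);
      printf("\n");
      return 0;
  }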


>You also lose compatibility with every game and most major applications like Adobe

Just like they lost compatibility when they switched from PPC to Intel?

I think that's one thing that a lot of commenters in this thread are ignoring: Apple has done this before. They switched from PPC to Intel when IBM wasn't committing to making the power efficiency gains that they were looking for, and they'll switch from Intel to their own homegrown ARM chips for essentially the same reason.


That switch took years and years and YEARS, and it was done for clear benefit. Desktops were one thing but in laptops Apple was way behind in performance. The new machines were so much faster that the emulated software didn't feel slower in normal use and you gained access to Windows software (which was horrendously slow when emulated on a PPC).

I'm not sure they could do such a transition again on the desktop. Other commenters' ideas of making an 'iPad Pro' and coming at it from that direction seem like a much better approach if you think any of this should be done.

Don't forget that Apple is up against Intel. They're called Fabzilla for a reason. Apple has done an amazing job with their phones and tablets, but when it comes to raw performance on the desktop I don't think they'd be able to keep up once Intel decided to start playing hardball.


Intel is failing to deliver their existing roadmap on time. Is this because they just haven't 'decided' to play hardball?


They're having issues right now, but they'll pull through. We've been through this before. Look at how far Intel's chips have come from just the Core 2 processors of a few years ago.

Either way, this is nothing compared to the mess of delays the G4 and G5 went through, especially on mobile. Compared to what Apple has seen before, this is a tiny detour.

And of course since Intel is the market leader, Apple is (at worst) in the same boat as everyone else. With the G4/5 they were falling further and further behind each year.


Well, it's certainly plausible that this is the case... They have no meaningful competition at the top end of performance, so clearly they don't need to dump as much money into R&D as they would if they did. If they've found that a basic level of incremental improvements maximises profit, then they'll do that. And at the moment they're held up by fabrication issues, sure, but die shrinkage is an important part of their 'tick tock' development model, so it makes sense they'd be waiting for this. It's hardly an indication that they're struggling in the grand scheme of things, or that they will fail to deliver the same steady performance improvements moving forward.


They'd also lose compatibility with any 3rd party hardware (think Thunderbolt); any video editing device drivers which were made for the x86 platform would need to be recompiled for the ARM platform.

A switch like this would kill Thunderbolt and many USB drivers.

Besides, we already saw what happens when Microsoft tried to switch to ARM through WinRT. That was an absolute disaster. Why would Apple try to do the same?


Video editing device drivers might be a problem but I'm pretty sure Apple provides the Thunderbolt drivers so a recompile won't be a problem for that if they make the (very unlikely imo) move to ARM. Correct me if I'm wrong on the Thunderbolt drivers point.


Those vendors will follow, as they have previously when Apple transitioned from PowerPC to x86. Games are a good point, but they're not a big thing on Mac OS (as yet) anyway. If it saves hundreds of dollars per machine, it's a no-brainer for the vast majority of users.


Why should they?

Intel chips blew the doors off PPC by the time Apple got around to the PPC -> Intel transition. That (and good emulation software) allowed a relatively pain-free transition for most consumers: PPC apps didn't feel any slower when run on a new Intel-based Mac. That made it fairly easy to accumulate a critical mass of users with Intel systems, yet it still took ages for native apps to become available from the likes of Adobe and others.

Fast forward to today.

ARM chips are, at best, a little slower than the Intel chips they would replace. On top of that, the x86 ISA has a reputation for being hard to emulate well. So put yourself in the shoes of the consumer: are you going to upgrade to the new ARMBook Air, if it runs all your old apps at 1/2 or 1/4 the speed (and I'm being super optimistic here) they ran on your old MacBook Air? No! You're not! You're going to stick it out until the second generation of ARMBook comes out.

That means no critical mass of people who own ARM-based Macs and are willing and able to pay for native ARM software. That means vendors will. not. follow. This doesn't even take into account that Apple have burned their bridge with Adobe (a major vendor of 3rd party software for Mac).

Last point: I think you're wrong about games. There are tons of games for Mac on Steam. Would vendors go to the trouble of making a Mac port if there was no money in it?


It took a VERY long time for the large ISVs to switch, and while games aren't big on OS X many people like the fact they can boot into Windows to play them.

If Apple was able to get the performance of their chips up to the level of Intel chips, would they be able to save hundreds per machine? Intel has some very large economies of scale.

If Apple wanted to save hundreds per machine they could do that right now. Intel's marketing campaigns that Apple doesn't take part in (like the 'Intel Inside' stickers) come with large rebates. Apple could use that and not have to go through the huge pain of another architecture switch.


What if Apple introduced workstation-class ARM another way; made a really powerful iPad/iPhone which could sit in a dock with keyboard/mouse/monitor, and run both iOS and ARM OSX?

x86 computers could continue to exist for high-end users, but typical users might be content with a hybrid tablet/computer. As ARM increased in power, x86 might disappear entirely.

Your comments on Windows compatibility/virtualization are spot on. Cloud streaming, Citrix/XenApp, can help in some situations here. In a few years virtually all major apps will probably be cloud/browser based (Microsoft Office, Adobe Suite, streaming games/apps, etc). Windows on Mac might not be as important then. It's possible Microsoft might release ARM Windows too (besides RT). If ARM gets popular in the datacenter you may see Windows Server 2015 ARM Edition. They've done this in the past with Itanium.


I think that it would be more likely that they'd just drop OS X in that scenario, and just have a beefed-up version of iOS with better keyboard and touchpad support.

The number of users who would actually benefit from OS X vs iOS, but would not be ticked off at an inability to dual boot Windows or run any legacy applications, is very small indeed.

iOS for ARM and OS X for x86 seems likely to remain in the future, but I do think that Apple could do a netbook or dockable tablet running ARM/iOS if they really thought there was demand. I'm not sure there is, but if the product was good enough they have shown in the past an ability to manufacture demand where it didn't exist previously...


What you're describing (a mobile device that docks into a workstation environment and powers the KVM) sounds like the dream mobile product designers have been having for well over a decade. I remember Canonical recently coming up with a concept of a phone that would do this (see: Ubuntu Edge[0]). That being said, the roadmap implied by iOS 8 and Yosemite strongly suggests that they're just going to bridge these gaps over the Internet. Google is also pretty clearly going in this direction, and given the ubiquity of Internet access (especially compared to available KVM terminals), I think this is the direction we're all going to go in.

0: http://en.wikipedia.org/wiki/Ubuntu_Edge


It's been done several, dare I say many, times. Badly, of course. I saw one at Fry's where the base even had a more powerful CPU and when you docked it, the tablet was just the monitor... but you could access the tablet functionality while docked through something that looked like your typical television's "Picture in Picture" functionality. There was also a phone, a while back, that would dock into a video/keyboard in a laptop form factor. (Many phones these days, I am given to understand, have HDMI out, so this is largely a mechanical engineering problem. Well, that and making the software usable on both screens.)

So yeah, if it is, in fact, a useful form factor (and I have my doubts) it's in the perfect place in the technology curve for Apple; the tech is all there, but nobody has implemented anything usable.


The discounts aren't that "huge" when compared to ARM chip prices. They are only "huge" in comparison to other Intel customers. But the difference still remains something close to an order of magnitude.

It's been shown before that Intel chips make up roughly 40 percent of the BOM/retail price (however you want to account it) of a PC. I doubt it's less than say 30 percent for an Apple Mac.

The retail price for a high-end chip is like $30, max. The retail price for a Core i5 can be more than $300. So yes, I think the pricing difference can be significant. Or rather the profit difference for Apple.

Apple could basically sell an ARM-based Macbook Air for $700, and make the same amount of profit per unit - BUT, sell a lot more of them, since they're $700 per unit. Apple could also sell it for $800, and make a lot more in profit per unit, too.


Microsoft tried to sell WinRT based ARM tablets at $300, and still couldn't sell any. In comparison, the $1000+ Surface Pro (once you get the keyboard attachment and everything, it breaks $1000 easy) has been getting pretty decent reviews and has fallen into a solid niche.

Why would an ARM-based Mac do any better than WinRT?


"The retail price for a high-end chip is like $30, max."

Citation?


Since when does Apple care about its bottom line?


The transition also worked because they had done it before. NEXTSTEP was already running on several CPU architectures (both big- and little-endian) including x86 before Apple even entered the picture. Undoubtedly Mac OS X is already running on ARM in Apple's labs.

I don't think boot camp/Windows virtualization is relevant. I don't know anyone who uses these features on their Macs. I'm sure there are people for whom this is a make-or-break capability, but it may be that Apple thinks that at this point, they are expendable.

Processor speed means a lot less than it used to. Even processors from half a decade ago are more than fast enough for what most people do with a computer.


I would also venture a guess that Apple may know that a percentage, if not a majority, of potential switchers are only interested in trying a Mac because the x86 processor gives them a bridge from Windows. If you take that away then Mac sales may take a hit, possibly a significant one.

Anecdotally, I know that for me this would cause a move from a top-of-the-line MacBook Pro Retina to an Air or even a mini. Time and the market have changed to where having a Mac is a must for me as a professional developer, but I won't spend 3K on a single machine whose primary differentiator is the ability to compile iOS applications.


> It would be worse in almost every aspect.

I don't see any explanation in your comment of how any of those aspects would be worse.

You say "Apple wouldn't profit much more from the switch" because of the volume discounts they get from Intel, but you don't cite exactly how much of a discount they get, and you still concede that Apple's own ARM chips might have a lower cost.

You say "battery life would go up a bit," and seem to imply that it could be up to 20%, which in my opinion is massive.

And you don't address size.

How then are any of those aspects worse? Even under your own seemingly pessimistic estimates, cost and battery life would improve.


Worse from the customer perspective.

All their old apps now run probably 2 to 10 times slower, until new native binaries arrive. Compare that with the PPC->Intel transition, when old binary apps stayed about the same or maybe got a little faster.

Also, emulating Intel binary apps with the CPU pegged for twice as long nullifies any energy savings, so the battery life in practice gets worse. Even if it is 20% for native binary apps, consumers are going to see "new computer, worse battery life, Og smash!" during the transition.

You're not going to see a 20% gain in battery life for native binaries, though. The only way you'd see that is if the ARM CPU used no energy at all. Extraordinary claims, evidence, and all that.

Is there really any reason to think that an Intel and ARM chip of comparable performance will have significantly different package sizes? Intel has always been the leader in process technology, so it would surprise me if the ARM die were any smaller.


You're forgetting about there being no fan.

No fan noise, no fan volume, no openings for dust to get inside the system. I personally can't wait until laptop fans go away.


ARM doesn't mean "no fan". I can't wait for laptop fans to go away, but currently the only fanless laptops you can buy are Atom-based, and all of them outperform their ARM equivalents.

When you start getting into "laptop level" power (i.e. comparable to a Pentium), even with ARM, you start seeing fans. For example, the first generation Nvidia Shield.


"The aging x86 architecture is beset by layers of architectural silt accreted from a succession of additions to the instruction set... Because of this excess baggage, an x86 chip needs more transistors than its ARM-based equivalent"

I wish people would stop saying this. On the inside, x86 CPUs are basically RISC. There is a translation layer from the publicly facing instruction layer to the internal representation. The transistor and power budget for this translation layer is absolutely trivial.

Intel has an enormous amount of expertise in the actual manufacturing and design layers, as evidenced by the rate of improvement in their Atom CPUs (within striking range of ARM, actually), integrated GPUs, and quasi-GPU compute cards. They are not in danger of "losing" in the long term in a performance or performance/watt race. The risk is they get disrupted due to all CPUs turning into a commodity in roughly the same way that RAM is a commodity.


Yeah, as soon as the guy started saying this I discounted the whole article. Yes, maybe Apple will switch to A7 chips, but this author sure doesn't know enough about processors to have any kind of a privileged viewpoint.

If one doesn't even know what uops are, and doesn't have a mental estimate of what percentage of i7 silicon is devoted to the instruction decoder, one doesn't get to write articles comparing Intel to ARM chips.

Edit: Going back to the article I note the author is Jean-Louis Gassee, so this is just bizarre. He is kind of just talking out his ass and I would hope he'd know better than that, because whereas making stuff up is a survival skill in exec-land, being blatantly and demonstrably wrong about said make-ups is not.


> I note the author is Jean-Louis Gassee

Yeah, not sure why anyone is interested in what Gassee has to say. He used to be "head of Apple's advanced product development", but was forced out for failing to deliver.[1]

Also, he should get the small details right. He claims in the article

   The x86 nickname used to designate Wintel chips
   originates from the 8086 processor introduced
   in 1978 – itself a backward-compatible extension
   of the 8088…)
This is exactly backwards. The 8086 was first, ahead of the 8088. Not sure why Gassee even felt he needed to throw that detail into the article (unless he's getting paid by the word), but if he's going to say it, he should at least check Wikipedia first.[2]

[1] http://en.wikipedia.org/wiki/Jean-Louis_Gass%C3%A9e [2] http://en.wikipedia.org/wiki/Intel_8086


Gassee is probably better known to HN folks as the founder of Be and head of BeOS, which he did after Apple and which was very well regarded in its day. In fact it was almost the next Mac OS, but he overpriced the deal.

In addition he has been an excellent writer about technology businesses for years now.


It actually is true, though it's easy to exaggerate how important it is. Decoding an x86 instruction into the RISCish internal format isn't hard, but decoding 4 at once can be quite a challenge. There's no way to look at a single byte of the incoming instruction stream and figure out if it's the start of an instruction or not, you need to do a fair amount of decoding on each byte of the chunk that you fetch from I$ before you can tell where the instruction boundaries are. For A64 ARM you know that every fourth byte is going to be a new instruction start. I wouldn't put this penalty at more than maaaaybe 20% at the very most, but it is a thing.

EDIT: And there are things like instruction gates that seriously are cruft that gums up how the memory system works, and some would argue that the x86 memory ordering constraints fall under this as well. On the other hand you should listen to Linus rant about how REP MOVS is the best thing since sliced bread that every other architecture is missing out on.
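Back on the decode point, a sketch of why finding instruction boundaries is the serial part. This is not a real decoder; the length function is a toy stand-in:

  /* On A64 the Nth instruction starts at byte 4*N. On x86 you can't know
     where instruction k starts until you've found the lengths of
     instructions 0..k-1.                                                 */
  #include <stdio.h>
  #include <stddef.h>

  static size_t a64_nth_offset(size_t n) {
      return n * 4;                   /* fixed 4-byte instructions */
  }

  /* Toy stand-in: pretend the length lives in the first byte's low
     3 bits. Real x86 needs prefix/opcode/modrm/SIB parsing here.   */
  static size_t x86_length(const unsigned char *p) {
      return (*p & 7) + 1;
  }

  static size_t x86_nth_offset(const unsigned char *code, size_t n) {
      size_t off = 0;
      for (size_t i = 0; i < n; i++)  /* inherently serial walk */
          off += x86_length(code + off);
      return off;
  }

  int main(void) {
      unsigned char code[32] = {2, 0, 0, 4, 0, 0, 0, 0, 1, 0};
      printf("A64 insn 3 at %zu, toy-x86 insn 3 at %zu\n",
             a64_nth_offset(3), x86_nth_offset(code, 3));
      return 0;
  }

A wide x86 decoder has to speculate on boundaries in parallel and throw away the wrong guesses, which is where the (modest) extra cost comes from.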


OTOH you can cache the decoded microops (IIRC modern Intel chips do this, while modern AMD and the original Pentium cached instruction boundaries), and in thumb mode ARM is also variable length.


We could also recycle lots of Intel's own verbiage from its own Itanium introduction :-) That didn't work out so well.

There are some interesting questions related to whatever convergence of tablets and laptops ends up happening. But it's not clear to me that especially favors ARM. And, it's also a hybrid sort of device that seems so obvious yet Apple finally got tablets to take off by taking a different path. So I'm skeptical about time frames at a minimum.


[deleted]


What does the Mill have to do with either the article or the original comment? The Mill (while cool) is completely different and incomparable. What makes more sense is comparing an x64 to an ARM chip.


The article, and the quote from the root of this thread, may well be wrong in the details but correct in the long run, as evidenced by the ideas explored in the Mill.

The comment I was responding to seemed to be generally discounting the possibility that there was any room to improve upon the internals of Intel CPUs, which was the motivation for linking back to the Mill.


Intel chips even support 16-bit processing. How is that not baggage?


An 8086 had <30k transistors. Haswell is well over a billion. They could add a hundred dedicated 8086 cores to Haswell and you wouldn't notice the difference.
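The arithmetic, using roughly 29k transistors for the 8086 and ~1.4 billion for a quad-core Haswell (rounded figures):

  /* 100 full 8086 cores as a fraction of a Haswell transistor budget. */
  #include <stdio.h>

  int main(void) {
      double i8086   = 29e3;   /* transistors in an original 8086  */
      double haswell = 1.4e9;  /* quad-core Haswell die, roughly   */

      printf("100 x 8086 = %.1fM transistors = %.2f%% of Haswell\n",
             100.0 * i8086 / 1e6, 100.0 * (100.0 * i8086) / haswell);
      return 0;
  }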


It is baggage, but it costs very little.


From the article:

  > Because of this excess baggage, an x86 chip needs more
  > transistors than its ARM-based equivalent, and thus it
  > consumes more power and must dissipate more heat.
This is true but it ignores the primary reality of "desktop class" processor design today: RAM is the bottleneck in a really major way and most of a desktop class CPU's transistors are dedicated to overcoming this.

In the ancient days, CPUs ran synchronously (or close to it) with main memory. Hasn't been that way for decades. CPU performance has ramped up so much more quickly than that of main memory that it's ridiculous.

And this is where most of your transistors are spent these days - finding ways to allow the CPU to do some useful work while it sits around waiting for main memory. Look at a modern i5 CPU die:

https://www.google.com/search?q=intel+core+i5

Things to note:

- Tons of L1/L2/L3 cache so we can keep things in fast memory. The transistors dedicated to cache dwarf those allocated to the actual processing cores, let alone the parts of those processing cores dedicated to those crufty ol' x86 instructions.

- Lots of transistors dedicated to branch prediction and speculative execution so we can execute instructions before we've even waited around for the data those instructions depend upon to arrive from slow-ass main memory.

Sure, mobile ARM chips are tiny and efficient! They run at 1-2GHZ while paired with fast RAM that's not that much slower than their CPUs. They don't need to devote gobs and gobs of transistors to speculative execution and branch prediction and cache.

But all that changes if you want to scale an ARM chip up to perform like a "desktop-class" Intel chip. You want to add cores and execution units? If you want to keep them fed with data and instructions you're going to need all that extra transistor-heavy baggage and guess what -- now you're just barely more efficient than Intel, and you can't match Intel's superior process technology that's been at least a transistor shrink or two ahead of the competition since the dawn of the semiconductor industry.

Eventually, yes, the ARM chip makers will solve this. RAM will get faster and processes will be enshrinkified. Just understand that transistor size and pokey RAM are the bottlenecks, not that nasty old x86 instruction set.
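If you want to feel how much the memory system dominates, here's a crude sketch comparing streaming loads (which the prefetchers and speculation machinery can hide) against dependent loads (which they can't). Timings vary wildly by machine; the sizes and shuffle are arbitrary choices:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define N (1 << 24)              /* 16M ints, far bigger than any L3 */

  int main(void) {
      int *next = malloc((size_t)N * sizeof *next);
      int *data = malloc((size_t)N * sizeof *data);
      if (!next || !data) return 1;

      /* Sattolo shuffle: one big cycle, so the chase is all cache misses.
         (rand() is a crude source of randomness, but fine for a sketch.) */
      for (int i = 0; i < N; i++) next[i] = i;
      for (int i = N - 1; i > 0; i--) {
          int j = rand() % i;
          int t = next[i]; next[i] = next[j]; next[j] = t;
      }
      for (int i = 0; i < N; i++) data[i] = i;

      clock_t t0 = clock();
      long long sum = 0;
      for (int i = 0; i < N; i++) sum += data[i];   /* streaming, prefetchable */
      clock_t t1 = clock();

      int p = 0;
      for (int i = 0; i < N; i++) p = next[p];      /* each load waits on the last */
      clock_t t2 = clock();

      printf("streaming sum: %.2fs (sum=%lld)\n",
             (double)(t1 - t0) / CLOCKS_PER_SEC, sum);
      printf("pointer chase: %.2fs (end=%d)\n",
             (double)(t2 - t1) / CLOCKS_PER_SEC, p);
      free(next); free(data);
      return 0;
  }

The gap between those two loops is exactly what all that cache, prediction, and speculation silicon is there to paper over.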


Agreed. To expand:

The "excess baggage" argument being made in the article dates back decades, to the earliest RISC vs. CISC days, and within the Mac ecosystem that argument ended when Apple ditched PowerPC and went to x86.

Back in the 1990s you could make a pretty good argument for RISC architectures vs CISC ones like x86, because the circuitry for all those "extra" instructions took up a lot of die space. But new processes have meant that the percentage of die space that must be devoted to the x86 instruction set gets smaller and smaller with each generation. In other words, if x86 was going to lose out to RISC architectures, it would have happened in the 1990s. Their advantage has only eroded since then.


Problem here is that you are mixing memory latency and memory bandwidth together. We have memory that can easily sustain 16 simultaneous cores in bandwidth (and honestly, memory bandwidth potential is mostly untapped - you only see higher bandwidth benefits for integrated GPUs because they have many more execution units demanding more data).

Meanwhile, the latency has been getting worse. Increasing refresh rates abate it slightly, but then there's all the indirection needed to make high-bandwidth RAM, plus the commoditization of RAM toward high capacity rather than "fast" (transistor-only memory like cache shows what is possible, for orders of magnitude more complexity and cost).

Adding more cores doesn't impact that latency at all, it just demands more bandwidth. If anything, the diminishing returns of what Intel has done - dedicating a lot of per-core die to prediction just to throw away computations because the per core power is too high - make less sense than just putting a lot more dumb cores on the die.

But then you get GPUs. Shitty latency, huge bandwidth, huge flops, terrible context switching, etc.

It is worth mentioning that both sides of the equation are doing the same thing, though. RAM makers are dedicating a majority of the silicon on RAM modules to controllers to accelerate lookup, rather than actual capacitive storage.

For the average user, you don't need that hugely complex Haswell logic. Tablet-class performance is perfectly competent for the web, office suites, and even programming sans compiling. If we wrote better software that utilized all the available cores sooner, we would have gone down the route of 16-32 core main CPUs instead of extreme precomputation. That has a lot more potential performance, but it requires the software to use it.

ARM is kind of uniquely poised to do that as well. Most of its ecosystem is fresh, it went through an extremely fast multicore expansion, and its architecture lends itself to more cores instead of tackling the "dedicate everything to offsetting slow memory" problem. If software architects started writing their programs to be as core-variable as possible, ARM might be the first realistic platform to break consumer 16-core computing, because the Windows world is frozen in time.


1. Memory isn't just slow because they went for capacity not performance (except vacuously), it's slow because of the laws of physics. c.f. L3 memory is made of the same stuff as registers but takes about 30 times longer to access.

2. No, adding lots of dumb cores makes no sense.

3. GPUs are useful because many tasks are embarrassingly parallel. Many more are not.

4. 'If we wrote better software': adding many more cores increases the difficulty of reasoning about software hugely. Many tasks are not easily performed in parallel, or the speedup is not impressive enough. Most operating systems (my guess is that OS X is included) will choke if you give them too many threads - performance drops hugely, or many threads are left totally idle. This is due to lock contention etc.

5. Of course no-one 'needs' that Haswell logic - but it's sure nice having my computer do stuff quickly. My top-of-the-line phone struggles to play through its animations properly, and loading websites frequently takes a while. Good-enough is not really a good place to be. Furthermore, greater performance motivates more demanding applications.

6. We dedicate everything to offsetting slow memory because it's the only way to get good performance from the majority of tasks. Sure, if your task can be handled by a GPU, by all means run it on a GPU. For those that cannot, we have a CPU. There's a reason why the iPhone and iPad only have two cores - it's not worth their while adding more, but it does add lots of cost and complexity.


  > Memory isn't just slow because they went for capacity
  > not performance (except vacuously), it's slow because
  > of the laws of physics. 
Yes. The farther away RAM is from the CPU core, the more stuff needs to happen before it can get into those precious, precious registers. Even if data from main memory didn't have to travel over a bus/switch/etc between the DIMM and the CPU, it's not physically possible (in any practical sense) to have main memory running at anything close to the speed of the CPU once we're talking about multi-GHz CPUs. DIMMs and the CPU are running on separate clocks, you have the sheer distance and the speed of signals through the metal to consider, etc.
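Napkin numbers on the distance part alone, taking signal speed on a board as roughly half of c (which is only an approximation):

  #include <stdio.h>

  int main(void) {
      double c        = 3.0e8;       /* m/s                           */
      double signal   = 0.5 * c;     /* rough propagation on a board  */
      double clock_hz = 3.0e9;       /* 3 GHz core                    */
      double cycle_s  = 1.0 / clock_hz;

      printf("one cycle = %.2f ns, signal travels ~%.1f cm per cycle\n",
             cycle_s * 1e9, signal * cycle_s * 100.0);
      printf("10 cm round trip to a DIMM ~ %.1f cycles, before the DRAM "
             "array even responds\n", 0.2 / (signal * cycle_s));
      return 0;
  }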

  > There's a reason why the iPhone and iPad only have two
  > cores - it's not worth their while adding more but does
  > add lots of cost and complexity.
Yes! There's a reason why the A7 in my iPhone 5S blows away the quad-core ARM chip in my 2012 Nexus 7. That reason is because "adding more dumb cores" is not the answer to anything, aside from marketing goals.


  > Problem here is that you are mixing memory latency and memory bandwidth together.
Yes, I intentionally did. You are of course correct that latency and bandwidth are two different things. I stopped one level of abstraction above that, so to speak. The concept I was trying to get across was the reality that most of the transistors on a x86/64 die are spent compensating for memory performance either directly or indirectly and the price we pay for the x86 "cruft" these days is still there but is pretty small.

  > If software architects started writing their programs to be as core-variable as possible,
And cars will be a lot more reliable when car designers simply design them to be engine-variable! When you invent a convenient way for software to use all those cores, be sure to remember us when you collect your Nobel Prize. Seriously though, writing code to take advantage of multiple cores has been one of the hardest things in computer science since forever.

The reality is that a great many computing problems simply don't lend themselves to parallelization. Some things are embarrassingly parallel (like a lot of graphics work) but a lot of algorithms simply aren't able to be implemented in a very parallel way since each "step" depends heavily on things you need to calculate in the previous "step." (Example: simulations, games, etc)

Things will improve a little bit, as our languages support easier parallel/concurrent code and our compilers get better at things like auto-parallelization, but this won't magically make stubbornly sequential algorithms into things that scale to two cores, much less "a lot more dumb cores."

  > just putting a lot more dumb cores on the die 
I wish it was as simple as putting a bunch of dumb cores on the die. Thing is, they can't be "dumb." You still have to spend serious transistors on things like cache coherency and so forth.

The "lots of dumb cores" thing has been tried before. Like this: http://en.wikipedia.org/wiki/Connection_Machine and Intel's Larrabee and things like that. Seriously, don't you think that hardware designers have thought of this before? They have. There's a reason why Intel doesn't just throw 100 Pentium cores onto a single i7-sized die and dominate the entire world.


Cribbing a die shot:

http://www.tweakpc.de/hardware/tests/cpu/intel_core_i5_760_b...

Notice the size of the L3. Now look at a core- each core is probably 50% L2 by area, and the L1 + branch predictor/prefetcher probably occupies another 25%.


>Look at a modern i5 CPU die - https://www.google.com/search?q=intel+core+i5

That link didn't seem quite right - maybe you meant something like this?:

http://www.techpowerup.com/reviews/Intel/Core_i5_2500K_GPU/


You're actually wrong with regard to the branch prediction. The A7 is (according to AnandTech [1]) closest to the "big processor" designs that Intel makes, with aggressive branch prediction (massive issue width, huge branch prediction buffer, huge caches).

[1] http://www.anandtech.com/show/7910/apples-cyclone-microarchi...


Agreed. I usually like JLG's commentary, but he's no expert on processor technology, and this time his analysis falls short.


I think it's funny that the main reason people cite for switching from Intel to ARM is power consumption. A big reason to stay on Intel is that a lot of people have real needs to run Windows software (whether in Boot Camp or a VM). Switchers feel a lot better jumping to the Mac knowing they could fall back to Windows if they wanted to.

Regarding battery life: The latest MacBooks get 9-12 hours of battery life. I haven't experienced battery anxiety on a Mac in a long time. In contrast, my iPhone is dead by the end of the day and watching it creep below 50% makes me start thinking about the nearest Lightning adapter (yes, I understand the Mac has a much larger battery and cell radios are power hungry).

Put another way, max power consumption on an iPad Air is ~11W[1]. The max power draw on a Haswell MacBook Air is 15-25W (a ~50% improvement in battery life from 2012 to 2013; the 2012 model had a 21-34W max draw)[2][3]. Given that Macs have more space available for batteries due to larger screens and the need for a keyboard and trackpad, I don't see the power consumption argument holding water.

[1]: http://www.anandtech.com/show/7460/apple-ipad-air-review/3

[2]: http://www.anandtech.com/show/6063/macbook-air-13inch-mid-20...

[3]: http://www.anandtech.com/show/7180/apple-macbook-air-11-2013...
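Rough hours-at-max-draw math on those figures, using the battery capacities quoted earlier in the thread (38 Wh for the 11" Air, ~32 Wh for the iPad Air); real usage sits nowhere near max draw, which is why both machines actually last all day:

  /* Worst-case runtime = battery watt-hours / max draw in watts.
     Battery sizes and draws are the figures cited in this thread. */
  #include <stdio.h>

  int main(void) {
      double ipad_wh = 32.0, ipad_w = 11.0;
      double air_wh  = 38.0, air_w_lo = 15.0, air_w_hi = 25.0;

      printf("iPad Air, flat out:        %.1f h\n", ipad_wh / ipad_w);
      printf("11\" MacBook Air, flat out: %.1f-%.1f h\n",
             air_wh / air_w_hi, air_wh / air_w_lo);
      return 0;
  }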


I can't help thinking losing Windows now would be really significant for them. More specifically, losing Office - if Windows on ARM keeps flaking, can they rely on a first-class Office build on ARM for the foreseeable future? Their own office suite is probably awesome, but it's not Office, and it seems that could hurt a lot.


And going the opposite direction, I'm not sure whether the loss of the hackintosh people would be a net gain or loss overall.


I recently switched back to Mac for the first time since 2006, and one of the factors in my decision was that I can run other x86-compatible OSes. I haven't felt the need to, but it is nice to have the option.


Agreed entirely, save that I develop my game on Windows via Boot Camp (and, when lazy, VMware Fusion to mount the partition). An ARM Mac means I go to Linux and Windows full-time and, probably, decide not to give a damn about a Mac port.

Fortunately, because I really like OS X, I think it's vanishingly unlikely that Apple is this stupid.


Compatibility with x86 is a lot like why people don't buy purely electric cars. Being able to run another OS is much like being able to take a very long road trip.


That's why I like BMW's solution for their i3. It's a purely electric car, but if range anxiety is an issue, you can option a tiny petrol engine from a scooter that acts purely as a generator to charge the battery.


The real weakness of Intel in power consumption relative to ARM hasn't been so much in the chips as the chip sets, software and the balance-of-system.

To create a low power system, ALL components need to be low power, and it is easier to start from a system built to be low power and scale up capabilities rather than go the other way around.

For instance, I have a Windows-based laptop which is a great machine, but if I have a web browser open, any web browser, the fan spins, it gets hot, and battery life is less than 1.5 hours.

Is it the fault of Windows, the browser vendors, the web platform, adware, crapware, who knows what? It doesn't matter, but controlling power consumption on a legacy platform is a game of whac-a-mole that doesn't end.

Because Windows users expect to plug in devices that draw power from USB, a Windows tablet has to have a huge power transistor to regulate voltage and a power supply system scaled up so it can supply enough power through the USB port to charge an Android tablet; at this point you might add the fan, and then you are doomed.


The x86 instruction baggage is a red-herring. It takes maybe 3% of the transistor budget to support it.

I saw an interesting interview once with an ARM exec who leveled and said they don't really have a big power advantage over Intel, despite what people commonly think. I didn't manage to find the article with Google though.


To me it seems mostly a problem of unwalled garden. When Apple owns the platform from the silicon to the development environment, it's easier for them to manage power. Wintel on the other hand has so much variety all the way down the software stack and even into hardware, it is more difficult to control... x86+Linux (Lintel?) suffers the same problem.

The unwalled garden can still be made low power. Android has set an example of tackling that problem with the battery tracker, which profiles which application is chewing up battery.

For power over USB, what do you mean? My Android cell phone can power keyboards, flash drives, and the like with an OTG cable. It certainly has no fan.


If you think Android isn't a walled garden (unless you mean non-Google Android?) then the devil has convinced you he doesn't exist.


Wikipedia says: "A closed platform, walled garden or closed ecosystem[1][2] is a software system where the carrier or service provider has control over applications, content, and media, and restricts convenient access to non-approved applications or content."

Google doesn't restrict access to applications or content on my Nexus 5, so how is it a walled garden?


It is at least much less of one than iOS


Which may appeal to you as a developer, but from a user perspective, that isn't a good thing.

Consider this analogy: Would you buy an "open-source" deadbolt for your house? Great feature for aspiring locksmiths! But very likely not such a great feature for 99.9% of home owners.


Apple does look like it's making chips to handle heavy workloads, not just to compete with the latest Krait or whatever. But I'm not sure ARM Macs are the direction they'll take that.

It'd make some business sense for them to instead position iOS so it can take over more and more traditional Mac duties. The IBM push could be an example of that. Investing in iOS gives Apple the tight control and the 30% cut they're used to from that space, and it avoids the Windows-RT-ish heartbreak of "why is this ARM OS like my Intel OS but without all my apps?" (If there were Mac-on-ARM, I'd expect it to be Mac App Store only.)

Anyway, the A7 is already a beast (http://www.anandtech.com/show/7910/apples-cyclone-microarchi... does various measurements, http://cryptomaths.com/2014/04/29/benchmarking-symmetric-cry... is an interesting case study), and there are still future process nodes and microarchitecture changes that will let them make better chips. I don't know if an ARM MacBook Air is specifically where this goes, but they're certainly making ARM capable of more serious stuff.


> why is this ARM OS like my Intel OS but without all my apps?

The problem should be much smaller on OS X than on Windows. Apple has a culture of breaking things regularly, instead of worshipping backwards compatibility.

The result is that most Mac apps are kept up to date by the developer. Recompiling for an ARM-based OS X shouldn't be an issue.


> The result is that most Mac apps are kept up to date by the developer.

That's maybe true for indie developers. It's not generally true for things like Photoshop, Microsoft Office, MATLAB, Mathematica, Skype, and dozens of other major titles that would likely be complete dealbreakers for a pretty significant portion of their userbase.


Apple kept Carbon (the OS9 to OS X transitional bindings) around for so long because of Photoshop and Office.

From personal experience, I can tell you that this does not happen with big software. Apple released the first Intel Macs in early 2006. The next year Intuit released Quicken 2007 which was PPC only.

The first version of Quicken to be x86 for Mac was Quicken Essentials 2010. First of all that's four years after the transition. Second, Quicken Essentials was crippled. Here's a quote from Wikipedia:

> Some of the features of Quicken are not present in Quicken Essentials for Mac, such as the ability to track investment buys and sells or to pay bills online from the application.

So unless you wanted something ultra-basic you were screwed. Intuit's answer was that they'd give you a free copy of Quicken '08 for Windows. All you needed to do was buy Parallels (etc.) and Windows.

This was a big problem because OS X finally dropped PPC emulation support, called Rosetta, in 2011 with the release of Lion. So after 5 years Intuit, a VERY big company, was unwilling to update their software or provide a real equivalent for their Mac users.

If you look at the announcements for each version of Adobe Creative Suite for Mac you'll find references to starting to use features of the OS that Apple introduced years ago.


I'm saddened that nobody ever brings up video games when talking about these architectural shifts. Practically all PC games in the past few decades have been written for x86, and most of them will never be patched for ARM compatibility. These games are as important to many of us as movies or music, and yet I fear they're destined to disappear from our cultural memory if this shift ever happens. Virtualization just won't cut it; most modern games barely even run with Wine, to say nothing of performance. And given the slowing of Moore's law, we can no longer count on emulation to give us seamless reproductions a few years down the line. Does nobody care? Why doesn't anyone say anything? I love my Mac, but if I have to choose between ARM and my Steam library, I'll choose Steam and begrudgingly go back to whatever Windows version still supports it.

On a related note, I think it's important to differentiate between utility software that's assumed to be temporary, and one-off pieces of software that are intended to live forever. I wish there was an easier way to write software in such a way that it can easily be guaranteed to run in the future, no matter the architecture. (Open source is not a guarantee. Ever try compiling the source to a AAA game?)


> I wish there was an easier way to write software in such a way that it can easily be guaranteed to run in the future, no matter the architecture.

Opening the source would help quite a lot with that.


Is the problem really with architecture? How many games (not game engines) have large globs of assembly? Or is the problem more to do with use of OS-specific APIs?


If this happens I hope entry-level Macs get below $500. I bought a MacBook Air for use as a Windows machine with the plan to learn iOS development at some point. It was worth the cost since I can use it as my day-to-day machine running Windows. I wouldn't be able to justify spending $1000 on a Mac laptop that could not run Windows. I could justify $400 for an entry-level Mac mini running ARM, but not a lot more than that.


Based on Apple's pricing history, I don't think ARM based Macs would be cheaper, at least not by much.


I find the "Apple is too expensive" argument to hold little water when you really look at it. I won't argue that up front you will pay more for the lowest-end Apple laptop/desktop over a Windows laptop/desktop but in my experience Apple products hold their value much better than any Dell/HP/etc product. Even if you hate OS X then you still are better off buying an Apple laptop and running Windows than you are buying a Windows laptop. The hardware is more reliable, the resale value is greater, and they look 100x better than anything else out there.

Comparing a Dell XPS to a MacBook Pro Retina with very similar specs (CPU/RAM/HD), where the Mac has a few better components, leaves a price difference of $100 in Dell's favor, but I can promise you that the MBPr will resell for much more than the XPS in 1-2 years' time. I had friends in college who always would joke that I paid a small fortune for my MBP ($1500); however, these were the same people who would buy a $500 laptop every year or so because their bargain-bin laptops just didn't last long before they started having hardware issues or massive slowdowns. My MBP ran quite smoothly for 2.5 years before I needed a faster CPU (I'm a developer), and I sold my machine for $900, which resulted in an operating cost of $240/yr for the period I owned it.


It is exactly for this reason that my next laptop will be an MBP running Linux (I have zero interest in Mac OS X).

I bought a mid-level Vostro a while back and it is an absolute piece of shit.

The touchpad detects my palm from across the room but is inaccurate when I actually touch it, the screen is mediocre and lets dust in constantly, and the keyboard is mushy with no positive click.

The spec looked good and Vostros used to be OK; the Vostro 1700 I had prior was a fine machine.

It's so bad I've found myself using my ancient Thinkpad (Celeron M) when I have to do a lot of typing.


Check out the current Thinkpads; they might also have something that suits you.


Thanks I will :).


I find the "holds value" argument to hold little water. While Apple hardware does hold its value very well, most people don't resell their hardware every year or two. Yes, some people do, and they're quite organised and know what they're doing when it comes to moving all their stuff between machines and their setup, but most people don't.

As for massive slowdowns, I'm having a problem at the moment in that everyone I know with an old 4GB RAM Mac is having a massively slow system once they upgrade to Mavericks. On a clean boot, these machines are already using 3.5-4GB, which makes no sense.


My parents just upgraded and sold their old (4- and 5-year-old) MBPs for a yearly "cost" similar to what I saw ($240/yr; it was a little closer to $300 for them). I will agree that selling and re-buying every 1-2 years can increase your savings, but the fact that a 4- or 5-year-old laptop is worth anything is testament enough IMHO. You don't see that with Windows laptops.

Please show me a Windows laptop that is still worth even half its value after 2 years.


> Please show me a Windows laptop that is still worth even half its value after 2 years.

About two months ago I was looking for a Thinkpad W700ds, and while admittedly an unusual laptop, I found two items on eBay going for more than they originally cost. As for a 5-year-old laptop costing $1500 after resale, the laptop I currently use is a Thinkpad X200, a 6-year-old laptop that I bought new for $1300 and see now on eBay for an opening bid of $200. $1300/6 = $217/year (less if you count the $200 eBay sale), which is cheaper than any of the options you're proffering. Where is this magical Apple-only value you're talking about with laptops, then? I find that Apple fans are very skilled at convincing themselves that everything else sucks, which is fine (each to their own) until they start proselytising. Yes, Apple machines do hold their value well, but from your own numbers breakdown, my Thinkpad more than holds its own if you want to talk turkey.

In any case, the point I was making wasn't about the resale value itself, it was that most folks don't actually resell their computers. Most people run their computers into the ground, then buy another one.


Let's take the case of a ThinkPad T510 vs a 2010 MBP: the ThinkPad is going for 250-350+ (a low of 100 for parts and a high of 600 for a refurb system), the MBP 450-600+ (250 for a low, and 1000 or so for a refurb system) - the Apple clearly has the better resale value. That said, I think anyone buying a piece of consumer electronics for resale value is a fool. I recently bought a rMBP, simply because power for performance was better, bar none, than anything else I could find in the same class (read: ThinkPad) - plus I wanted to give OS X a try. I wanted a Unix workstation that could run MS Office (I need Excel - and LO/OO Calc is not a suitable replacement), so that basically left MacOS - and I'm supremely impressed.


Thinkpads retain value quite well. It's probably more accurate to say that you pay for what you get. Quality non-Apple hardware is pricey as well.


FWIW, it should be able to run ARM Windows, right?


These arguments make no sense. Aside from anything else, the computing power of the Mac I'm currently using—quad 2.4GHz Ivy Bridge—is so far in excess of anything that is available in the ARM architecture that it's difficult to see this being the case at any point.


There are people building serious 64-bit ARM chips for server-type applications. No idea what actual performance will be yet.


I'm probably in the vast minority, but I tend to use apple hardware to run windows... so I'm hoping this doesn't come to pass. I use OSX occasionally, but the thing that got me to switch to apple in the first place was bootcamp.


It's quite possible that ARM Macs will run Windows. Windows RT already runs on ARM, and Microsoft seems to be moving in the direction of embracing alternative platforms (see how they push Azure for iOS). They'd sell more Windows licenses and gain more users, and at worst they'd lose some Surface sales.

Whether Apple would allow it is a different question.


Windows RT is a dead man walking.

I wouldn't predict a long maintenance life for it. The Surface 2 and the Nokia 2520 are about all the hardware left running it?


I'd wager most people with a need for running windows on a mac do it to run some "legacy" win32/win64 binary application (and not just iexplore.exe or msword.exe), so an "ARM Windows Bootcamp" mode probably wouldn't solve anyone's problem.


Losing native Windows virtualization seems like a big deal. That was a huge selling point with the switch to Intel. It's become indispensable for many Mac users and was the reason many Windows users were able to switch.

Unless Microsoft also drops Intel, I don't see this happening.


The "transitions" comments leave out a big piece of functionality; certainly big for me, and I imagine big for others. That's VMs for Windows and Linux. I have all kinds of Linux VMs I run on my Mac, and I imagine others that need "that one Windows app" run Windows a lot in VMWare and VirtualBox. Going to ARM would torch a big part of Mac functionality for me.


From what I've understood from benchmarks and reviews, the latest Intel Atom processors have a better power/performance ratio than ARM processors. I'm not sure if the performance/price ratio is better though; does anyone know?


If Apple were to switch from Intel, I'd probably have to (reluctantly) go back to a Windows laptop. I love my MacBook, but most of my money is still earned working with clients who use Windows environments. There's still some software that never made the jump to Mac either, for which I still have to use Parallels Desktop. The best bit of having a Mac is that I can run OS X, Windows & Linux all on the same box.

Of course, if Apple is also making their own x86 compatible chips, that's a different story. I don't need an Intel chip specifically, I just need something that runs Windows / x86 perfectly....


As much as I want this to happen, I don't see it coming in the near future. Why would they release the new Mac Pro with Intel Xeons if they had ever planned to switch away? And would the Mac Pro stay in x86 land if Apple decided to switch the rest to ARM?

Another obstacle is Thunderbolt. That DisplayPort + external PCI Express cable standard is totally controlled by Intel. AMD's version is based on USB 3, an ugly hack that Apple is unlikely to use.

The performance gap between Haswell and the A7 is huge. Watt for watt, in the notebook/desktop power range Intel wins hands down, although the gap is shrinking with each Ax SoC.

Then there is the part about Intel Atom losing on performance, which is wrong. Intel's Atom SoCs perform really well; they haven't scored many wins simply because of their ecosystem and prices.

The mobile SoC market operates on comparatively thin margins. Even if Intel offered Atom at the same price, why would any OEM want to be bound to Intel and x86 again? So Intel decided to offer those SoCs at a loss, and what happened? In the Western market it is dominated by Qualcomm, which offers a better solution and a cheaper total cost with its integrated modem. In the Eastern market, i.e. China, it is hit by the "8 core" marketing from MediaTek; everyone thinks more cores = better.

I don't see how Intel is going to win this mobile battle. Apple will pretty much drive Intel to where they want it, which is fabbing SoCs for them.


I blogged about this in April. http://vishaldoshi.me/2014/04/25/apple-intc/

It looks like Chromebooks are a very popular format, and I think an ARM-based 'AirBook' could compete in that space.

Napkin Math

MacBook Air current generation (mid-2013) retail price: $999 ($1099 for 13″ model with same CPU).

Intel Core i5-4250U, Tray: $315, http://ark.intel.com/products/75028/ (sure Apple will be getting large discounts on this, but it can't be that large, since $INTC has ~60% gross margin overall)

Apple A7, Tray: $20 (estimated)

Intel Atom E3827, Tray: $41

Tray = 1000 pcs;

The Core i5 has a 15 W TDP; 1.3 GHz clock (turbo to 2.6 GHz); 2 cores, 4 threads. SunSpider: 250 ms.

The Apple A7 has a 2 W TDP; 1.3 GHz clock; 2 cores, 2 threads. SunSpider: 397 ms.

http://cpuboss.com/cpus/Intel-Core-i5-4250U-vs-Apple-A7

It's not looking all that different! Especially when you take into account that the A8 will be twice as fast (think Tegra K1), i.e. a SunSpider score of maybe 200 ms?
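To put rough numbers on that (SunSpider is a single JavaScript benchmark and the A7 price is an estimate, so treat this as napkin math squared): the i5 finishes a run in 250 ms at 15 W, the A7 in 397 ms at 2 W, so per watt the A7 comes out roughly (15/2) × (250/397) ≈ 4.7× ahead, and per dollar ($315 vs. ~$20) roughly (315/20) × (250/397) ≈ 10× ahead, while the i5 is still about 1.6× faster on a single run.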


Had a hard time loading it, so here's the cached version: http://webcache.googleusercontent.com/search?q=cache:SDIaXc5...


The Windows kernel guys were evaluating ARM systems for us, and the gist was "Run away, don't walk; the memory systems on those things are terrible."

I like the ARM architecture a lot. It's simple and easy to write software for, all the way from embedded controller stuff to "real" operating systems. But they're not all that great at doing massive computation. We were going to use one in the video path of a popular gaming system for a while, and it turned out to be inadequate by at least an order of magnitude (probably a factor of 100, but comparing CPU cycles to GPU cycles is pretty unfair).

Intel is executing really well, and it'll probably take an alien invasion to dethrone them.


When was this? ARM historically has had abysmal memory systems, but there have been significant improvements over the past few years (I don't know that they've caught up, but they're in a different league from where they were about 5 years ago).


Within the last three years, but projecting availability out about a year.


Apple needs x86 to run existing apps. Who would accept a big slowdown for a slightly cheaper laptop? Intel's x86 chips are internally a RISC-like design with a decode layer that grafts x86 onto the outside. Could Apple design an x86 chip with an ARM RISC core hidden inside? It sounds pretty complicated, and I doubt they have a big enough hardware engineering team. If there were an ARM core inside, could it be exposed for recompiled software to take advantage of? There would be big issues managing state/contention between two different ISAs.


This is a horribly low-quality article. The central proposition is entirely analyst speculation, with no actual information to hook it onto that doesn't date back years. The author appears, from how he gets details subtly wrong, not to actually know anything about the history of CPUs (and not to have bothered e.g. checking Wikipedia).

There is nothing here that is backed-up news whatsoever.


According to Geekbench, the current CPU in the iPad Air (the A7) scores about the same as a Core 2 Duo from 2006 (the E6600). Nothing to sneeze at, and definitely getting close to the low-power Haswell CPUs in systems like the MacBook Air. But I reckon it'll be a few generations of A-series CPUs before they reach the point where they can challenge Intel for the crown.


For someone who's not very knowledgeable about architecture shifts, what software changes would be required to make this happen? I'd assume that OS X itself might be able to make the shift easily, given that Swift and ObjC code seems to run on both iOS/ARM devices and OS X/x86 machines. What else would have to change? Or is this purely a hardware choice?


The fact that iOS shares a lot of the same code as OS X will make this a much easier transition. Apple can take a lot of the low-level stuff from iOS, where they've had years to figure out how to get Darwin working smoothly on ARM.

Beyond that, all apps on the OS X App Store will have to be recompiled and resubmitted. None of the binaries installed on current OS X machines will still work, unless Apple includes an x86 emulator, the way it shipped a PowerPC emulator (Rosetta) during the Intel transition.

This will also open up the possibility of running iOS apps natively on OS X, but I doubt Apple will pursue that at all (for UX reasons).
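To make "recompile" concrete: for most portable C/Objective-C code a rebuild really is all it takes, and the porting work shows up wherever the source assumes a specific ISA. A minimal C sketch (nothing Apple-specific; these are commonly predefined compiler macros, and the message strings are just illustrative):

    #include <stdio.h>

    int main(void) {
        /* Portable code compiles unchanged for either target; ISA-specific
           code has to branch on predefined macros like these. */
    #if defined(__x86_64__)
        puts("x86_64 build: inline SSE/AVX intrinsics would need NEON ports");
    #elif defined(__arm64__) || defined(__aarch64__)
        puts("64-bit ARM build: NEON paths, weaker memory ordering to respect");
    #else
        puts("some other architecture");
    #endif
        return 0;
    }

Fat ("universal") binaries did the same job during the PPC-to-Intel switch: the same source built once per architecture and glued into one executable, so one download covered both.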


The transition will be easy for applications that rely only on Apple's APIs; the fun will be for programs that use proprietary third-party components, all of which will need to be rebuilt, as well as for all binary plugins to applications.

I can imagine this might be a huge job for some of the more heavyweight production programs for the Mac, as well as for AAA games.

It would basically spell doom for Parallels and other virtualization software which give Mac switchers an escape hatch back to any Windows programs they might use.


Another negative in the same vein is that it would rule out fast virtualization of Windows and many x86 Linux/BSD distributions in something like VMware; you'd be down to slow emulation.


Most Linux and BSD distributions are available for ARM already.


  > Beyond that, all apps on the OS X app store will have to be recompiled and resubmitted.
This is one of Apple's big benefits from having the App Store (and possibly a strategic reason for establishing it): they can simply tell developers to rebuild for ARM or they're out.


Do you have any idea if they could pull off x86 emulation on a theoretical but plausible ARM chip? It seems like a really hard thing to emulate an i7, or am I missing something?


You're not missing anything.

I once managed to create a cross-build system the "wrong way" round. (OBS made this error both possible and easy.) I tried to build some pretty heavy x86 software (QtWebKit) and ended up creating a config that used a cluster of PandaBoards running qemu in x86 emulation mode.

To my chagrin, I didn't realise the error until I was wondering, after ~5h, why my build was still not finished.
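For intuition on why cross-ISA emulation is so slow: a pure interpreter pays a fetch/decode/dispatch cost on the host for every single guest instruction, which is why qemu (and Apple's old Rosetta) layer dynamic binary translation on top. A toy C sketch of just the interpreter loop shape, with a made-up three-field "instruction" format that has nothing to do with real x86 encoding:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t regs[4]; uint64_t pc; } GuestCPU;
    enum { OP_ADD, OP_MOV, OP_HALT };
    typedef struct { uint8_t op, dst, src; } Insn;   /* toy encoding */

    static void run(GuestCPU *cpu, const Insn *code) {
        for (;;) {
            Insn i = code[cpu->pc++];               /* fetch */
            switch (i.op) {                         /* decode + dispatch */
            case OP_ADD: cpu->regs[i.dst] += cpu->regs[i.src]; break;
            case OP_MOV: cpu->regs[i.dst]  = cpu->regs[i.src]; break;
            case OP_HALT: return;
            }
            /* Real x86 adds variable-length decode, condition flags and a
               stronger memory model to reproduce on ARM, so the per-guest-
               instruction overhead only gets worse. */
        }
    }

    int main(void) {
        GuestCPU cpu = { .regs = {0, 40, 2}, .pc = 0 };
        Insn prog[] = { {OP_ADD, 1, 2}, {OP_MOV, 0, 1}, {OP_HALT, 0, 0} };
        run(&cpu, prog);
        printf("r0 = %llu\n", (unsigned long long)cpu.regs[0]);   /* 42 */
        return 0;
    }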


All software would need to be rebuilt, OR, like before, they may ship some emulation layer, but then older software would be quite a bit slower.


If you wish to make an apple pie from scratch, you must first invent the universe

How far does Apple want to go? The rabbit hole is pretty much infinite. Do they want to make their own plastics and paints and metals? Do they want to mine the materials for all of the above? Do they want to make the mining machines for their mines? Etc, etc, etc.


An ARM-based Air might make a nice Facebook machine, but what about the Pros/desktops and the high-performance market? I really doubt there are any ARM chips anywhere near the performance of an i7. What are they going to do: segment their PC lineup, or just abandon the high-performance side? I doubt they're willing to do either.


I wonder if this will mean the end of Flash Player for Mac? Will Adobe bother releasing an ARM version?


> I wonder if this will mean the end of Flash Player for Mac?

You say that like it's a bad thing.

I wish Flash on Mac would die, die, die. YouTube is able to show me non-Flash video on a Mac, and yet they often won't. The exact same video that won't play on a Mac will play just fine on an iPad. Evil bastards! With Flash dead, Google wouldn't be able to demand I use it.


Activate the Develop menu in Safari's preferences (under Advanced). This gives you a Develop menu in the menu bar. When a site refuses to play video on your Mac because it needs Flash, you tell Safari to pretend it's an iPad via the User Agent setting. The site invariably serves you the video you wanted, without Flash.


I read this advice previously on HN, and it worked for a while. Then Google changed something, and it stopped working. I don't know, maybe I was screwing up. Or maybe it's just easier for Google to roll ads ahead of the video when it's in Flash. Or maybe Google wants me to download Chrome, which has a built-in Flash player. Not gonna do that until it supports NoScript. YMMV, obviously.


No, it still works just fine. I was using it just yesterday...


I think all existing x86 apps would be emulated.


Emulating x86 on ARM would be trickier than emulating PPC on x86. But even if emulation did work, that would mean you would have to use an x86 browser to use Flash. You wouldn't be able to use the x86 plugin with a native browser.

In practice, that would make a huge difference in terms of Flash's market share.


There are already ARM builds of Flash for Android, but whether Adobe will bother building the Mac version for ARM is a different question.


Not any more. Flash player for Android hasn't been updated or available for more than a year.


While I think the article's prediction is incorrect (see my other comment for why), it did give me an idea: Why doesn't Apple design their own x86 CPU?

Their work on the Ax series has probably taught them quite a bit that could be ported over to x86-land. Also, Apple could leave out x86 cruft they don't use: legacy addressing modes, PAE, etc. And of course, they could design the CPU specifically for their products instead of searching for the closest match sold by existing vendors (or cajoling Intel to tweak their designs).

Apple already has strong relationships with fab companies. They have the talent and teams to design such a CPU. One wonders if they're already working on such a thing. Even if it never shipped, it could be used to negotiate lower prices from Intel.


If they left out stuff they didn't use, they would still have to keep in stuff that others use. Apple might not use feature A, but some application might use feature A, or at the least, VMware might need feature A in order to run Windows 7 or Ubuntu.

If Apple wanted to go full-iOS and lock out everything that isn't Apple-approved, then this would be a good idea. If they wanted to keep Bootcamp, virtualization, or an unregulated third-party application development environment, designing chips without the standard features of Intel chips wouldn't be a great idea.


You can't without a license from Intel for various patents, etc. Well, it has been tried (see http://jolt.law.harvard.edu/digest/patent/intel-and-the-x86-... for a brief history), but it is pretty hard; most of the companies that used to make x86 chips (Transmeta, NEC, ...) have dropped out.


> Why doesn't Apple design their own x86 CPU?

There's the small matter of getting an x86 license. Which would mean buying Intel or AMD, since it's vanishingly unlikely that Intel will be handing them out any time soon.


> Which would mean buying Intel or AMD

Apple could afford to buy Intel, but antitrust considerations would probably prevent that from happening.

Apple could buy AMD, but the Intel/AMD x86 cross license probably terminates in a "change of control" situation. Which might mean a restart of the Intel/AMD lawsuits of years gone by.


What seems more likely to me is a hybrid ARM / x86 machine that switches dynamically between processors, kind of like how some macs can switch dynamically between their integrated and discrete GPUs.


Very interesting, wouldn't surprise me if we saw an AMD/ARM acquisition by Apple by this time next year.


I actually see Apple branding the A7 as 'desktop class' to leverage a better CPU price from Intel.


Apple has been known in the last decade for picking the best hardware for the job. I don't see how ARM chips can compete with Intel chips in terms of performance or performance per watt, unless you want a 4W MacBook (hint: you probably don't). ARM competes very well in performance per dollar, but so does AMD, and I don't see Macs with AMD CPUs in them.

I can see a laptop-sized ARM-based product from Apple, but it won't replace any of the Macs; it'll be something completely new. Let's call it a MacPad.


> Googling “Mac running on ARM” gets you close to 10M results.

Why should we listen to someone who doesn't know what quotes do in search queries?


Seriously? Not really.


Half right, but only by accident.


The transition from PowerPC to x86 was surprisingly painless. There was some pain, but most apps ran fine in the translation layer, and eventually native apps took over.

It'll probably be even easier the next time, since Apple has done it once already and knows how to provide the proper dev tools.


They've done it twice, not once. They moved from 68k to PPC in the early 90s, again with full emulation. In that case it was even more extreme, as much of the OS ran emulated in early PPC releases (yet it was still quicker than running on actual 68k hardware!).


You've highlighted the big difference this time. Switching from Intel to ARM would be a step down in performance, or at least not a step up. There's no headroom for a legacy emulation layer.


However, it does fit the playbook of moving ever closer to computing appliances and away from general-purpose computers.

In the beginning there was the motherboard and a CPU. Then, before the homogeneous PC era, we had dedicated chips that took care of certain operations: the SID chip in the C-64, the blitter chip in the Amiga (can't remember its name, I'm sorry), even the x87 math coprocessor in the 386/486 age!

With the advent of the PC and the megahertz wars, dedicated peripheral chips became less common, except in SoC environments. Where the x86 world went for raw processing power, the embedded world had to find ways to fit specialised chips on the board.

My experience is mostly centred around crypto accelerators, but I know from very painful experience that all Maemo devices had on-board DSP units to handle some sound decoding and pretty much all video processing. So the pendulum swings: the CPU for everything, then peripheral devices for specific high-intensity jobs. Some of the most commonly used ones get integrated into CPUs, making entire classes of chips irrelevant ... until the next CPU-intensive thing comes up and the main processor is again too slow.

Apple is banking on their ability to both predict and dictate the direction of near-future computing needs. I expect the A7/A10 boards to come up with all kinds of integrated support chips to handle the heavier loads.

As long as their predictions are correct, all is well. Any bets on what's the next CPU burner that will require a dedicated ASIC to preserve even the semblance of battery longevity?


I think it's technically irrelevant: how many people from the 68k/PPC era are still there? And the OS/tooling/techniques back then probably have nothing to do with what's in place nowadays.

Psychologically it still matters though; as you said, they did it twice, and the first was surely the harder task.


They almost certainly have some 68k/PPC guys left, and while the tooling has changed, they at least have a historical recipe (e.g., go ahead and emulate some things, transition gradually, etc.) that was proven to work.


This is about the most poorly written article I've ever read! Well, second only to some of the trash on TechCrunch. How about some benchmarks of compiling code on Intel vs. ARM, instead of saying that since iPads (ARM-based) cost more than MacBook Airs (Intel), Intel will therefore fade from Apple's laptop/desktop line? I think ARM laptops are going to become mainstream, but not for most of the arguments given in the article. I believe that fate is still far away, as x86 still offers more total raw compute power than ARM, even though ARM chips are more energy efficient.

Also, the self-noted digressions in the article aren't even funny; it feels like someone with an English degree and a subscription to "I Can Spell x86" magazine wrote this article.



