Intel made a huge mistake 10 years ago (vox.com)
163 points by molecule on April 20, 2016 | 189 comments



TL;DR: Intel turned down the opportunity to make the iPhone's chip and gain a foothold in the mobile market.

A more direct source, from Intel's own CEO (at the time): http://www.theinquirer.net/inquirer/news/2268985/outgoing-in...

    Otellini said Intel passed on the opportunity to supply Apple because the economics did not make sense at the time given the forecast product cost and expected volume. He told The Atlantic,     "The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it.
    "It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."
But the thing I don't understand is why Intel gave up on XScale, their ARM-compatible effort (they held one of the few expensive ARM licenses that allowed them to expand the core architecture). How's Atom doing nowadays? Last I heard, Intel had partnered with Dell to make the Atom-powered Venue android tablets. Can't say they're grabbing headlines with them...


They don't need Atom; as far as mobile devices go, they are bringing Core down to ARM's power envelope. You can get Core M CPUs today with a 3W power envelope and a considerable performance lead over most ARM SoCs.

Intel has made a bet that it would take ARM just as much time to reach x86 levels of performance as it would take Intel to bring x86 power consumption down to SoC levels. Now Intel can play on both sides with their low-power x86 parts: Xeon-D with up to 16 cores and Core M in a 2c/4t configuration.


>Now Intel can play on both sides with their low-power x86 parts: Xeon-D with up to 16 cores and Core M in a 2c/4t configuration.

There's only one little problem: the market has settled on ARM and doesn't care anymore.

So how are they gonna play those "both sides" again?


Not really, the market has settled on CPU-independent bytecode. I've got an x86 tablet, and you don't notice it at all, other than the Intel logo on the back. As long as there is an ART port it could be MIPS and still have pretty much the same compatibility.


While this is mostly true, it is just untrue enough to cause trouble for a non-negligible number of apps.

The Intel Houdini software used to make Android on X86 possible is a marvel of engineering but it is not perfect. I worked for a company that made an Android tablet around an Intel chip instead of ARM. We were hit with a great number of user reports of certain apps crashing, overheating, or simply refusing to run in the first place. We verified in QA that these apps would work just fine on ARM tablets with the same Android version and similar specs.

To be fair to Intel, they were very good to work with and really did (and do) want to make it work. I have no doubt they will. Eventually.


... Compiler.


AFAIK if an app uses the NDK it has to ship compiled code for every architecture it supports. If the developer didn't build an x86 version, you won't be able to run it.
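To make that concrete, here's a minimal sketch of an NDK-style native function (the package and class names are made up for illustration). The same source gets compiled once per ABI the developer chooses to ship, ending up as lib/armeabi-v7a/libnative.so, lib/x86/libnative.so, and so on inside the APK; if x86 isn't on that list, there is simply no x86 build of the library at all, and an x86 device has to rely on binary translation (Houdini) or the app fails to load it.

    #include <jni.h>

    // Hypothetical NDK function; compiled separately for each ABI listed in
    // the app's build configuration. A missing ABI means a missing .so file.
    extern "C" JNIEXPORT jint JNICALL
    Java_com_example_app_NativeLib_addTwo(JNIEnv* /*env*/, jobject /*thiz*/,
                                          jint a, jint b) {
        return a + b;
    }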


Yes, all major mobile OSes have followed mainframes and settled on CPU-independent bytecode, for the most part, as long as C or C++ code isn't used.

Currently iOS is the only one that also supports shipping C and C++ as bytecode (LLVM bitcode).

Android and WP only support it for Java, RenderScript, and .NET languages.


For mobile, yes. But on servers it will take ARM a long time to replace x86. So it's actually a win for customers when these two are rivals, since we get cheaper or lower-TDP servers and cheaper mobile devices.


Server side, there is an initiative to get rid of Intel as well[0][1] with a PowerPC based chip. It'll be interesting to see if it ends up working, but I think everyone wants to get away from Intel's high prices.

[0] https://cloudplatform.googleblog.com/2016/04/Google-and-Rack...

[1] https://en.wikipedia.org/wiki/OpenPOWER_Foundation


First off, those "3W Core CPUs" are a scam and a dud. They will overheat and throttle.

Second, they cost WAYYYY more than a mobile chip. They cost more than an iPhone's ENTIRE BOM, or about 10x more than what Apple pays for its chips.


> First off, those "3W Core CPUs" are a scam and a dud. They will overheat and throttle.

Throttling has been a problem on high-end Androids for years now.

> Second, they cost WAYYYY more than a mobile chip.

That's the real issue – the margins in the mobile market are ridiculously slim, and there is no x86 monopoly Intel can abuse to lock in vendors and customers. So cheap ARMs it is.


A single-core Atom (the 2012 Z2480 in the Razr i) at 2GHz has a significant perceived performance advantage even compared to processors released later. It's way snappier than the quad-core ARM CPU in the 2013 Moto G. I don't know how much battery life is sacrificed for this though. But I'd be very eager to try a Core M smartphone, since Atom has always been the low end of Intel's CPU lineup. That said, I've lost track of Intel's marketing; maybe Core M is just a rebranding of the same or a similar arch.


There's a very large difference between a Cortex-A7 in a Moto G and a Cortex-A57 or Cortex-A72. The latter are about 3x faster.

Atom has barely increased its single-threaded performance from like 300 pts in Passmark to about 540 right now (in Pentiums, in PCs, which cost $160 a pop).

Intel is not competitive on price. That's why Atom didn't pan out in mobile. They tried to do it by heavily subsidizing their high-end mobile chips, which typically cost close to $50, to compete against a $15 mid-range Qualcomm chip. That's why some people were "impressed" by what an Asus ZenFone could do for its mid-range price, for instance.

And if Intel can't compete with Atom on price, then there's NO CHANCE it will ever compete with Core M in mobile. The only reason it even exists in so-called "tablets" that go for $1000 right now is because Microsoft failed to make a good case for ARM-based Windows machines with its poor app support. But that will always remain a niche.


> And if Intel can't compete with Atom on price, then there's NO CHANCE it will ever compete with Core M in mobile. The only reason it even exists in so-called "tablets" that go for $1000 right now is because Microsoft failed to make a good case for ARM-based Windows machines with its poor app support. But that will always remain a niche.

And outside the Surface those aren't selling well either.

(IMO rightfully so, the Venue 11 Pro is the worst device I've seen in years – hypothetical hardware dickwaving aside, even a $50 OEM Android tablet has better UX and usability than Windows 10 without mouse/keyboard.)


Asus T-100 TA series work pretty well as Windows 8/10 tablets.


Yes, because they ship a keyboard and touchpad. But you wouldn't buy the Asus ZenPad instead and try to use Windows 8/10 solely with touchscreen, would you?

It's the complete opposite of how iOS/Android tablets operate.


Core M is the "same" CPU as their server and desktop CPUs. It's not based on the Atom line. https://en.wikipedia.org/wiki/Skylake_(microarchitecture)


That Moto G had great battery life, I can tell you.


That Core M will be $1xx compared to $1x-2x for ARM chips.


That's economy of scale; I don't see a reason for it to be more expensive. The Apple A8X had a higher transistor count than an 8-core Core i7 Haswell-E CPU. Also, I'm pretty sure that the high-end SoCs that end up in iPhones, Galaxy S's and the like cost more like a Core M than the $20 or so you would pay for a MediaTek SoC.


I've designed with Intel Atoms and ARM chips. Atoms are very bloated and require a lot of external support to get up and running. Intel's firmware/boot support is atrocious. They still have PC OEMs in mind. ARM chips on the other hand are much easier and cheaper to design for. The SoC is better integrated and targeted towards a low overall BOM cost. Core M is worse than Atom.


Not true at all. If Intel could do that:

1) they wouldn't have replaced their Core-based Celerons and Pentiums with Atoms to increase profits - and they sell those for $110-$160

2) they wouldn't have licensed Atom IP to Rockchip and other low-end chip makers, essentially pursuing an ARM-like IP-revenue model (which would have made them peanuts and didn't pan out anyway).


Intel Celerons, even the embedded ones, are currently based on Skylake. https://en.wikipedia.org/wiki/List_of_Intel_Celeron_micropro...

Intel licenses a lot of things; they still have ARM licenses and probably even PowerPC ones.


MediaTek SoCs are actually $4 (this is a price from 2014, when Intel had to match that $4 price while subsidizing Atoms), and that includes the PMIC. $20 is the price for a high-end quad-core chip.


The recommended customer price for older Core M's is listed as $281.


So Intel is going to start selling Core processors for $1-5 apiece?


> why Intel gave up on XScale

The gist of the Digital Semiconductor acquisition was that DEC had kicked Intel's ass in circuit design but turned out to have insufficient volume to be profitable, so Intel and DEC agreed that DEC would move onto the upcoming Itanium and get rid of its bleeding-edge but money-losing semi business. Alphas and XScales were both remarkable feats - on the high-performance and low-power ends, respectively. DEC agreed to kill Alpha and move to Intel's Itanium - but Intel had no existing ARM-style product at the time.


The Alpha's extremely weak memory ordering guarantees would have doomed it in the multicore market. It's hard enough to get multithreaded code correct on x86.
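For anyone who hasn't run into this, here is a minimal C++11 sketch (my own illustration) of the classic message-passing pattern that weak ordering breaks. With the relaxed orderings below, a reader on a weakly ordered CPU (Alpha, ARM) can observe ready before payload; you need release/acquire (or fences) to make it safe, while x86's stronger hardware ordering tends to hide the omission.

    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<int>  payload{0};
    std::atomic<bool> ready{false};

    void producer() {
        payload.store(42, std::memory_order_relaxed);
        // Should be memory_order_release: without it, a weakly ordered CPU
        // may make 'ready' visible to other cores before 'payload'.
        ready.store(true, std::memory_order_relaxed);
    }

    void consumer() {
        // Should be memory_order_acquire to pair with the release above.
        while (!ready.load(std::memory_order_relaxed)) { /* spin */ }
        // Can fail under relaxed ordering; x86's stronger model usually hides
        // the bug, which is exactly why code that "works" there breaks elsewhere.
        assert(payload.load(std::memory_order_relaxed) == 42);
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
        return 0;
    }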


Intel was essentially forced into buying XScale from Digital via a legal settlement. DEC sued Intel for anticompetitive behavior (and looked likely to win).

I don't think Intel really wanted ARM and so made half-hearted attempts at selling it.


This is meta, but you can make the quoted text more readable if you add a newline after some X number of words, like this:

    Otellini said Intel passed on the opportunity to supply
    Apple because the economics did not make sense at the
    time given the forecast product cost and expected
    volume. He told The Atlantic, "The thing you have
    to remember is that this was before the iPhone was
    introduced and no one knew what the iPhone would do...
    At the end of the day, there was a chip that they were
    interested in that they wanted to pay a certain price
    for and not a nickel more and that price was below our
    forecasted cost. I couldn't see it.

    "It wasn't one of these things you can make up on
    volume. And in hindsight, the forecasted cost was wrong
    and the volume was 100x what anyone thought."


One downside of this is that it becomes very unreadable on small devices (the user has to horizontally scroll back and forth to read each line, ugh)

Representing quoted text in italics does not suffer that downside.


Oh, I never noticed this as I use an app to read HN (forgot the name).


Oh, so that's how it's done? I always thought there was a formatting rule I was missing. Thanks.


It could be done automatically as it is indeed confusing. Each new line still needs 4 spaces though... I think!


I was an industry analyst at the time and I think the short answer is that Intel made the decision to pursue an x86-everywhere strategy. They thought 1.) it would benefit them to have the same x86 architecture from server to mobile, and 2.) Atom would meet mobile requirements.

(1) was doubtless true. But (2) didn't pan out.

One can find slides from IDFs etc. at the time with Intel trying to buttress its x86-everywhere arguments by showing that Flash ran more consistently on small devices when they were x86-based.

In retrospect, Atom was mostly wish fulfillment. I can't say whether this was an execution issue or whether the strategy was intrinsically doomed to fail.


Anything in Intel that's not x86 gets killed by internal politics. Somehow Intel didn't understand that when they bought it.


Perfect example with their best replacement for x86: the i960. It was one of the most impressive compromises I've seen from that time period in terms of speed, reliability, and security.

https://en.wikipedia.org/wiki/Intel_i960

Some variants are still in production mostly for legacy purposes but not the one (MC) I wanted. Too bad.

http://www.intel.com/design/i960/index.htm


Why have you posted that link?!?

Now I will have to research about BiiN and Intel iAPX 432. :)


Start with the classic!

Performance Effects of Architectural Complexity in the Intel 432

https://www.princeton.edu/~rblee/ELE572Papers/Fall04Readings...


Yep, yep, the other important one. Of course, one could read this along with the i960 and System/38 papers to see if any good ideas pop up.


Thanks!


I posted it a bunch. I usually post it, Burroughs B5000, and capability-systems (includes the AS/400 predecessor) links [1] together. Intel's i432 was an amazing attempt to clean-slate the machine of the future. Safe, manageable, and consistent from the ground up. It just overdid it in terms of hardware requirements, and certain components should've been firmware for easier improvements.

The i960/BiiN system improved on that by greatly reducing complexity of the hardware. It was a fast RISC system, had all kinds of error detection, supported HA configuration, and had 432's object-descriptor protections. Object- and page-oriented system. I expect our computer security and reliability situation on Wintel might have turned out differently given what highly-secure systems did with x86's shitty segment/paging/ring combo and HA with lock-step.

Note: A Slashdot article on legacy systems once knocked the F-35 for using old i960 CPUs. I thought that was a stupid idea. I think the designer was probably thinking too many steps ahead, with the market ruining it for him/her.

[1] http://www.cs.washington.edu/homes/levy/capabook/index.html

[2] http://craigchamberlain.com/library/blackhat-2007/Moyer/Extr...


My take is Intel doesn't really understand markets where they need to fight for market share. They invest in something, it doesn't become a large profit business in a few years so they kill it. Customers see this and passively/actively avoid Intel products when possible.

Passive avoidance: Customer doesn't even think to consider what products Intel is offering. Active avoidance: Tries not to spec Intel products when possible.

The internal politics part comes from this scenario. Say someone makes a good proposal: spend $20 million, and in return get a product line worth $100 million a year gross and $40 million net. The manager thinks, $100 million a year isn't going to get me a VP position at a $50 billion company.


> Say someone makes a good proposal: spend $20 million, and in return get a product line worth $100 million a year gross and $40 million net. The manager thinks, $100 million a year isn't going to get me a VP position at a $50 billion company.

You could also be describing execs at Apple.


A friend of mine worked at Apple in the period between Jobs 1.0 and Jobs 2.0. He described an environment of management 'wolf packs' slowly destroying everything. Stuff like this: a guy gets promoted as manager of a group, proceeds to force out current employees and replace them with his associates, then abuses the review process to boost one of those to a higher position in the company. Playing the game right, they all move up in 18 to 24 months. All fun and good, except the groups they pass through are trashed in the process.


Eh, don’t forget the external forces. Remember Itanium vs AMD’s x86_64?


Itanium died because it was a very niche architecture. Intel was working on their desktop/common server x64 line when AMD came out with the Athlon 64, which more or less won because of its desktop performance over the Pentium 4. Itanium was a RISC-y endeavour by Intel and it had its problems, but you can't really say it died because of x86_64.


Itanium was a trojan horse to get their rivals to give up on MIPS/Alpha/UltraSparc development. As soon as those CPUs lost all traction, Intel dumped Itanium as well. Intel was never serious about the Itanium. Considering how performance sensitive it ought to be, Intel chose to build Itaniums on older manufacturing processes. The fact that Itanium performance was hopeless should not have been a surprise.


> Intel chose to build Itaniums on older manufacturing processes

They didn't have any choice. Itaniums were big, and their defect rate on newer processes was too high to get any yield on chips that size.


Itanium was EPIC, pretty much the opposite of RISC. Intel bet on compilers being good enough to tell the CPU which instructions it could execute in parallel, and lost that bet.


Er... those are orthogonal concepts. Itanium had a simple instruction set with primarily register-register operations. The explicit ILP part has nothing to do with that.


> Itanium was a RISC-y endeavour by Intel and it had its problems, but you can't really say it died because of x86_64.

Microsoft adopted x86-64 and dropped IA-64 like a hot potato. Intel dropped Yamhill and other projects that attempted to bring IA-64 to the desktop as soon as they realized MS wanted x86-64. Performance didn't factor in, because AMD would never have had the chance to supply the world all the CPUs it wanted (supply constraints).


I think you are mixing a few things up here. Yamhill was supposed to be an x86_64 CPU (under license from AMD); it was dropped in favor of promoting Intel's own IA-64 instead. http://www.geek.com/chips/intels-otellini-says-no-to-yamhill...

Microsoft dropped support for IA-64 only a few years ago, with Windows Server 2008 R2 being the last OS that supported it, due to its limited market share. This was quite long after Intel had killed IA-64 internally on its own.


> But the thing I don't understand is why Intel gave up on XScale, their ARM-compatible effort

This isn't the first time Intel sold off a slice of itself. I remember AMD doing this many, many times. For over 40 years, the form factor didn't matter. All that mattered was having the fastest and most performant CPU in the entire world. There has been real demand to crunch numbers and move bits since the 70's, so that's the world Intel, AMD, and everyone else knew.

To sell off a division that sold low-power chips for tiny margins in an industry that never, ever, ever explodes (until it did) was just plain normal. He would have gotten pressure and funny looks from the industry and shareholders if he didn't sell it off.


Well by the same token AMD is entering the ARM chip business.

7 years ago they sold their Imageon GPU to Qualcomm, which is now going gangbusters inside every Snapdragon-based phone.


Adreno is an anagram of Radeon.


Atom is doing 'great'. Intel spent over $7 billion (1) subsidizing Atom in hopes Chinese vendors would use it; they rarely did. Finally in 2015, partially driven by fear of prosecution for price dumping against native Chinese silicon manufacturers, they entered into a 'cooperation' (2) with government-run semiconductor giants - and by cooperation I mean Intel gave them $1.5B, a free Atom license, and a commitment to spend $5.5B on flash fabs in the mainland. Intel also promised shareholders they would stop subsidizing Atoms... which didn't last that long (3).

1 http://appleinsider.com/articles/15/01/07/after-intel-spent-...

2 http://www.cnet.com/news/intel-doubles-down-on-mobile-with-1...

3 http://www.digitaltrends.com/computing/intel-to-chinese-elec...

The latest Atoms are almost usable, and you can get a passable quad-core 4GB RAM/64GB flash tablet for $160 (Chuwi Hi10).


I believe the easy answer is that they thought they could adapt x86 to all markets, and that became the strategy.

More complicated is that Intel came into the mid-90s with several "next generation" architectures, some of which weren't solely developed at Intel. XScale came from Digital and had IP from ARM. i960 was a joint venture with Siemens. And they still had the i860, which by this point was clearly never going to meet expectations in the market. So when the x86 folks said "Intel should put all its wood behind our arrow", politically they were fighting groups that were weakened by NIH or who'd already, for all practical purposes, failed. Probably didn't hurt that Pentium was doing quite well and P6 was looking good on the horizon.


For the past decade plus, Intel has been their own biggest competitor. Atom processors aren't weak and built on an old process because Intel can't make them better, but rather because Intel's greatest fear is undercutting their more lucrative markets. Their very high profit markets.

So if you go back ten years and say "what if Intel did this" (which in that case was making a processor for Apple that Apple was paying maybe $20 each for, estimating on the very high side), it is oversimplified to just imagine that it's additive. Intel has been rolling on profit margins that the hyper-competitive ARM market can only dream about. It may be time for them to adapt (and arguably they have been), but those 12,000 didn't lose their jobs because Intel didn't do something different ten years ago. They, and thousands of others, might never have had an Intel job in the first place if Intel had made different choices.


It's not just Intel's (and the iPad's) fault that PC sales are down.

I think the big mistake PC makers are making right now is that the PCs they make aren't improving from generation to generation for their mass-market products. Sure, the processors aren't doubling in MHz like they used to, but the rest of the machine isn't improving either. If I go into a shop with $3-400 today and buy a laptop, the machine I get is the same as I would have gotten 3 years ago:

1. 768 line display

2. 5400 rpm hdd

3. 2 GB of ram (4 if I'm lucky)

4. Similar weight

5. Similar poor battery life

6. loads of crapware.

The PC manufacturers aren't pushing hardware manufacturers to improve the cheapest spec. Why don't cheap new laptops have greater DPI on their LCDs than 3 years ago? Because manufacturers haven't changed their main production lines. They are saving money on retooling, but on the other hand their product isn't improving, and now they're paying the price. Apple is doing the same thing with their Air line, which is only improving the processor generation; it has the same body and screen as years ago.

If manufacturers improved their cheapest line every 3 years, people would see enough of an improvement in their price range to buy a new machine every 3 years like they used to.


I'm currently shopping for a laptop and have my eyes set on the ThinkPad X1 Carbon. I did a comparison of the current flagship model versus the flagship models from the past two years:

2014 Intel Core i7-4600U 1718 [0]

2015 Intel Core i7-5600U 1677 [1]

2016 Intel Core i7-6600U 1847 [2]

Two things stood out:

1. Single-thread performance actually decreased in 2015 for some bloody reason.

2. The single-thread performance of the 2016 flagship CPU is only 7.5% higher than the 2014 CPU, a negligible amount when it comes to purchasing decisions. Much of this improvement probably comes from the faster DDR4-2133 MHz RAM (vs DDR3-1600 MHz) rather than the CPU itself.

[0] http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-4600U+...

[1] http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-5600U+...

[2] https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-6600U...


I'd be inclined to agree. I do not know anyone who doesn't also own a laptop, even the most extreme mobile junkies. They may keep their laptop for longer, and perhaps do not bring their laptop everywhere, which means less coffee spilled on it. But they won't work, refresh their CV, plan the design of their new kitchen or build their budget for the holidays on a mobile or a tablet. Are mobiles and tablets really substitutes for laptops? They probably become obsolete more quickly. But even that is not really the case anymore, as the slowdown in iPhone sales is showing.


I went shopping for a laptop a couple of weeks back. Ended up getting the Microsoft Surface Book - not the manufacturer I expected walking in, but the specs are really good, even just ignoring the tablet aspect:

1. 3000x2000 display, touchscreen

2. 512GB SSD

3. 16GB RAM

4. Incredibly light and thin

5. 10 hour battery life

6. No crapware

Of course you'll pay for all that, but it shows there's definitely improvement happening at the high end - other manufacturers are getting there too, though they've got a way to catch up with MS (the HP Envy and the high-end Lenovos were both a lot nicer than anything I saw 3 years ago). Maybe the improvement isn't trickling down as much as it should, but there's room for it to - honestly I think you're forgetting how much more we used to pay for laptops. Time was when $1000 was a basic model. It's just that these days shops push the budget end a lot more, probably because that's what customers want.


> 6. No crapware

It didn't come with Windows?


I down voted this comment because it doesn't add value to the conversation.


I don't think 2GB of RAM is common anymore (it is not even possible with DDR4 I think). Most low end laptops now ship with at least 4GB.


Not sure why no one's selling to the general public a computer that's "twice as fast as the competition" because it has a SSD in.

I'm waiting for cheap usb-c powered laptops before buying any more


Probably because Joe Consumer only sees 1TB of storage vs 256GB (or more likely 128GB) and buys the one with more.


The biggest problem PCs face is that 90% of the stuff they are used for can be done with older-generation PCs or mobile. People don't have a reason to buy new PCs. A 2x more powerful processor etc. won't make someone who uses the PC for email, Facebook and word processing buy a new one.


You can actually blame Intel, but also the OEMs and the consumers for those crappy specs.

Intel, because it charges so much for its chips, making it a big part of a laptop's BOM - like up to 40%. ARM chips are more like 10-15% of a phone's BOM.

OEMs for putting up with it. And consumers for always wanting the fastest (and more expensive) processor in a laptop at any given price, over virtually any other specification.

It's kind of like how phones never seem to get more than 2 days of battery life, and the vast majority only get 1 day. They value chip performance or big screens or high resolutions too much before they value battery life. There are $200-$250 phones out there with 6,000 mAh batteries and 720p screens, but they come from no-name OEMs and most people aren't interested in them either.


Exactly the same complaints could be made about the bottom of the smartphone market ($100-200 range). Higher end smartphones may have improved more than high end laptops over the past few years, but mid-range to high end laptops are definitely nicer than they were 3 years ago.

You're also a bit disingenuous in your laptop complaints; the best-selling laptop on Amazon in that price range has a 1080p screen, 4GB RAM, and a decent i3 Broadwell (which is faster and uses less power than the equivalent Sandy/Ivy Bridge processor you would have gotten 3 years ago). Crapware is only a valid complaint if you choose to use Windows and are incapable of taking an hour to install a clean copy when you unbox it.


> Crapware is only a valid complaint if you choose to use Windows and are incapable of taking an hour to install a clean copy when you unbox it.

Isn't this most people?


Besides, it's forbidden to install a fresh version of Windows on an OEM machine unless you purchase a full $300 Windows license. Moreover, if you do so, you lose the warranty. PC manufacturers leave you only their shoulder to sob upon.


Is it? If you use the Microsoft provided media creation tool for Windows 10, it will automatically pick up OEM BIOS keys (including from earlier versions of Windows) and install the appropriately licensed version.

I did it last week on a laptop that I put a new SSD in and that hasn't had a Windows install in over a year (but came with Windows 8 installed when I bought it). I really don't think Microsoft would go out of their way to make this possible if it was against their terms.


> Crapware is only a valid complaint if you choose to use Windows and are incapable of taking an hour to install a clean copy when you unbox it.

Assuming you get a Laptop that actually works with GNU/Linux.

The only laptop that I still run GNU/Linux on is an Asus netbook, which was explicitly sold with Linux support.

Guess what, it took more than one year for Ubuntu to properly support its WiFi chipset and I was forced to use a network cable if I wanted any form of networking.

My first GNU/Linux kernel was 1.0.9 with Slackware 2.0, so it's not that I am that dumb in GNU/Linux land.

Nowadays I've stopped bothering and use Windows on all the other laptops that I have.


My 2014 X1 Carbon runs perfectly fine on Debian Wheezy, Jessie, and Testing. Also ran Linux Mint on it (forgot version, circa late 2014). Not a single problem, from touchpad to sleep mode to wifi. Even saw better battery life.


My HP Spectre X360 runs fine with Apricity / Arch, including touchscreen / sound / wifi.


I believe that we're on the cusp of having a home server market again. There are a number of technologies in play currently, and when someone figures out how to string them all together we're in for something new.

We are nearly at the point where we don't need to give all of our information to someone else just to have it available to us from our cellphones and laptops.


Beware, for this article includes a gem like this:

> Instead, these companies turned to a standard called ARM. Created by a once-obscure British company, it was designed from the ground up for low-power mobile uses.

Nope, their price budget required plastic instead of ceramic packaging, which had a 1 watt power budget. They were sufficiently conservative that it ended up dissipating 1/10 of a watt. The usefulness for mobile applications came later.

On the other hand, if Intel turned down an offer from Apple to supply the iPhone CPU, well, that sounds like a mistake. Then again, it's such a different business that it's not clear it would have worked for them, especially given the opportunity cost. So different, in fact, that their FPGA acquisition Altera is still having its lower-end, more price-sensitive chips fabricated by TSMC, apparently because Intel is just too expensive for that market.

And Apple could well have changed to ARM later; Macs are now on their third CPU architecture.


> The usefulness for mobile applications came later.

True, but not quite the whole story.

Some enterprising engineers wrote a cellular protocol stack in ARM assembly language that let everybody use a far cheaper core than anything else.

Very quickly, ARM became entrenched in the feature phones. Then, the evolution occurred to smartphones, and ARM was already in the phone.


Maybe it is good that Intel turned down that offer, because otherwise now we could have desktop sized iPhones with built in winter warmers.


Waaaaay back when I worked at Intel it was pretty clear they didn't stop doing things that worked. And when the going got tough they stuck with what worked. In the 80's Intel had a really remarkable set of computing products, from high-integration "SoC"-type x86 chips (80186), high-end graphics chips (8276x), and embedded chips (8051), to "server" chips (the 432 series). Plus a memory business and a whole passel of support chips.

But the chips in the PC had the best margin by far. So the more of those they made, the more profitable they became, and when the chip recession was in full swing in the late 80's and early 90's that is what they kept, shedding all the rest.

In the early 2000's, when Moore's law ran right smack into the power wall, Intel was betting they could have an "enterprise" line (Itanium), a "desktop" line (Pentium), and an embedded line (8051). They guessed wrong, and for a brief time AMD's Opteron was kicking their butt. But once they saw the writing on the wall they realigned around 64-bit in the Pentium line and got back on track.

The problem with the ARM assault is that unlike AMD, which could be killed by messing with other users of the chipset and patent attacks and contract shenanigans, killing off someone making an ARM chip does nothing but make the other ARM chip vendors stronger. And they can't kill all of them at once. And worse, to compete with them they have to sacrifice margin on their x86 line, and that is something they have never done; it breaks their business model.

It's a real conundrum for them; they don't have a low-power, reasonable-performance SoC architecture to compete with these guys. And that is what's driving volumes these days. Further, the A53 (64-bit ARM) killed off the chance of trying to use 32-bit-only Atom microarchitecture chips in that niche without impacting the value of the higher-end Pentiums.

One of the things Web 2.0 taught us was that it doesn't matter how "big" the implementation of a node is if you're going to put 50,000 of them in a data center to run your "cloud." Ethernet as an interconnect is fast enough for a lot of things.

It definitely makes for an interesting future.


> The problem with the ARM assault is that unlike AMD, which could be killed by messing with other users of the chipset and patent attacks and contract shenanigans, killing off someone making an ARM chip does nothing but make the other ARM chip vendors stronger. And they can't kill all of them at once. And worse, to compete with them they have to sacrifice margin on their x86 line, and that is something they have never done; it breaks their business model.

Someone who was in IT before MS and Intel showed up said MS and Intel got big even though they did not have the best technologies. They got big essentially with smart alliances and good business practices (which included messing with competitors, some practices that can be called dirty).


Recently I talked to a 20 year old kid who didn't know that Apple existed before the iPhone...

Microsoft and Intel got big for one and only one reason: IBM chose them as suppliers for the IBM PC. Had IBM used their own in-house chip or licensed CP/M as the operating system, computer history would have been different (the Kildall link is especially interesting; Microsoft got really lucky here):

https://en.wikipedia.org/wiki/IBM_801 https://en.wikipedia.org/wiki/Gary_Kildall

For much of the 80s the Intel chips were inferior to other designs like the Motorola 68K. That is why the Macintosh, Atari (today only known for Pac-Man, but yes, Atari made computers rivaling Apple) and Commodore Amiga used more powerful Motorolas.

http://www.skepticfiles.org/cowtext/comput~1/486vs040.htm

But the "IBM-compatible" architecture won despite its inferiority through path dependency and the clones driving price down.


Random bit, there was a story Greg Allman used to tell on the radio that he was in a record shop when a young girl picked up a Beatles album and says to her friend, "Hey look, Paul McCartney was in a band before Wings!"

I do agree that IBM's choice of the 8088 for their original PC was one of Intel's greatest design wins, but at the same time it was the execution of both Microsoft and Intel in their focus on pursuing the business market which made it truly successful. The "hobbyist" microcomputer market in the late 70's and early 80's was scattered. With small wins all over the map.


Sadly, Motorola came out with the expensive part (16 bit bus 68k) late. Had they shipped the 8 bit bus version earlier, I think things would have looked a lot different. Had they followed up with the 68000 and 020 earlier, things would be a whole lot different.

https://en.wikipedia.org/wiki/Motorola_68008


Motorola did have an inexpensive next-gen 8/16-bit chip. It was called the MC6809. That said, the whole point of the 68000 was that it's 16 bits wide, so I'm not sure a cheaper, narrow-bus version would have made any difference.


Isn't the CISC model supposed to contribute to better cache usage? As I understood it, this was something that showed its strength during the raw computing-power rush, once caches started to matter.


> if your going to put 50,000 of them in a data center to run your "cloud."

And therein lies Intel's quest to become relevant again. Make more bloated languages so more processors can be sold to servers :/


So what programming language has Intel ever designed?


Anyone remember PL/M?


I do, it was actually developed by Digital Research for Intel.

And it was better than C in terms of safety.

What influence did Intel have on the design, besides paying for it?


The most important part of the article is easily missed unless you've read The Innovator's Solution, which is the follow-up book and spends a lot of time looking inside organizations to see why it is so darned hard to catch the disruptive train.

A company with a profitable niche and a profitable technology will wind up with high internal costs. That's fine in their main business because they have a profit margin to play with. But it is surprisingly hard to trim back that "fat" to go after much lower margin revenue with a cheaper technology. (Fat is in quotes because it isn't really fat. It is necessary for the high margin business.) It is common to try, and to conclude that it is a failure.

That is why Intel made this mistake.


As long as Intel is producing chips with way more profit margin than an ARM at 100% capacity, it's not a mistake.

ARM has volume, but x86 has profit.


It can be a good business decision to run a business for maximum profit and a quick exit. However when you do that with a major corporation like Intel, it tends to make people unhappy.

Intel will continue with the high profit margin right until the chips don't sell. And then Intel gets to go out of business.


> Intel will continue with the high profit margin right until the chips don't sell. And then Intel gets to go out of business.

Or, it can switch to selling ARMs and be a fraction of the company it is right now.

Intel's revenue last year was a record $60 billion. If Intel could get $30 per ARM chip, didn't have to pay a royalty to ARM (ha ha), and had every single iPhone sold last year, that would only make them a $6 billion company. In reality, Apple would never pay that and ARM takes their cut. So, at best, Intel would only be about a $1 billion revenue company - and that was only last year, when Apple moved 200 million iPhones; the previous year was only 100 million.

So, Intel would be roughly 1/10th to 1/100th the size it currently is if it manufactured ARM's instead of x86. Yeah, I'm sure managers inside Intel are lining up for that business decision.

It is going to be better for Intel to glide to "irrelevance" for a very long time rather than switch to making ARM's.


So in your scenario, Intel stops making desktop, laptop and server chips completely. It doesn't replace them with ARM, but throws those businesses away and only gets revenue from the existing mobile ARM market.

I don't think that's actually what anyone is suggesting.


But it is. Intel runs its fab lines as near to full capacity as they can. Consequently, every ARM you make means you make fewer x86's.

So, you're telling me that a fab line manager is going to reduce his profit margin by 10% just so he has a fallback for when the x86 market collapses and decimates Intel?

This is like finance guys before the stock market crash: "I can be contrarian but it does me no good. If I'm right, my company is so invested in the stock market going up that my company is dead anyway. If I'm wrong, well, I look like an idiot and don't make money. So, I'll just try to make money and cash out."


So, just like Apple, with the same "high margin" strategy, their 19th year of record revenues and $600 billion in store?

And how exactly would them (Intel in this case) selling commoditized, low margin, products make it better for them?

Isn't it even worse, even quicker, for low margin players when their stuff doesn't sell?


The challenge of disruption is when you have a clear value proposition that everyone agrees on. The genius of Steve Jobs is that every year or three he'd introduce a new product line with a new value proposition and cannibalize his existing products in the process.

He's gone, and Apple has stopped doing that. Apple is now losing marketshare. (Android is estimated at 82%.) Their app store is globally under half of the market. They are projecting a year over year decline this quarter.

It probably won't be visible to the untrained eye in the next 5 years. It won't be missable by anyone in the next 10. But Apple's best days are behind it.


>He's gone, and Apple has stopped doing that

Actually they did just that with the Apple Watch -- which added ~$6 billion to their revenues, and even as an early v1.0 eclipsed all "wearables" to date. http://www.cnet.com/news/thanks-to-apple-watch-smartwatch-sa... http://www.macrumors.com/2016/01/26/apple-watch-apple-tv-rec...

And their services dept doesn't do that bad either: http://appleinsider.com/articles/16/04/20/as-a-standalone-co...

They've also kept improving the Apple TV (people who haven't followed Apple forget how slow and incremental the rise of the iPod was -- from 2002 to 2007 people gathered at keynotes to cheer if it got silly features like WiFi or a color screen or some smaller sibling), and they're working on a car and other things besides.

>Apple is now losing marketshare.

Barely -- from April 2015 to now, they've gone up and down, some quarters winning over Android, others losing. http://www.comscore.com/Insights/Market-Rankings/comScore-Re...

Besides, I've never understood this "let's pit Apple, a single company, against the whole of the industry put together". They've never had the "most" market share -- just the most of the most lucrative (higher-end, high-margin) segment of the market.

>It probably won't be visible to the untrained eye in the next 5 years. It won't be missable by anyone in the next 10. But Apple's best days are behind it.

I think I've read that again, in 1997, 2000, 2002, 2004, 2006, 2007, 2009, 2012, 2015 etc. A.k.a "Apple is doomed".


The iPhone meant you no longer needed an iPod. The iPad meant grandma no longer needed a Mac. The Apple Watch requires an iPhone.


What kind of bizarro metric is that?

The iPod never meant you don't need a Mac.

The iPad never meant you don't need an iPhone (and hardly ever meant you don't need a Mac/PC).

The idea that a device should replace previous devices was never much of a concern. The only thing that qualifies 100% in that story is that the iPhone was by nature also a portable music player -- and if you had one obviously you didn't need the iPod. Apart from that, all were individual lines, with their own strengths and limitations -- not supposed to replace one another.


Apple isn't selling technology, though, they're selling a brand. There are different dynamics in play.


Disruption as a concept seems to be a bit of an empirical controversy these days.

http://www.bloomberg.com/news/articles/2015-10-05/did-clay-c...


I would agree with the analysis, but I think it's missing an interesting fact: the ARM threat was non-existent until DEC Alpha engineers created StrongARM and showed the world that you could make a fast ARM. StrongARM was effectively renamed XScale around the time Intel got hold of the IP.


Former DEC Alpha engineers also proved you could make a power-efficient PPC at P.A. Semi, and were involved in the early versions of the MIPS SoC line that is now owned by Broadcom.

Really, it's hard to find a non-Intel platform from the last 20 or so years that doesn't show an influence from them.


Precisely, and let's not forget AMD64, while Intel was pushing Itanium.


Oh, good point, I had forgotten about SledgeHammer and its descendants. Intel had to pull some shady stuff to not lose a lot of market share while they extracted themselves from the NetBurst dead end.


It's sad that many places call it x86_64 while it's actually amd64. If Itanium had been a success and AMD built such chips, they wouldn't call it aa64 but ia64 like the existing name. Hence, it would only be fair to keep calling it AMD64. I don't know if it's just Intel not being comfortable with selling CPUs that implement AMD64. Maybe that's why we have ARM64 and AArch64 for the same thing, one vendor neutral, one with ARM in it. I really don't understand what's so wrong with giving credit. It's not like Intel's shareholders would care.
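To see the naming sprawl in code form (a small sketch of my own, using the predefined macros I'm aware of in GCC/Clang and MSVC), the same 64-bit x86 ISA answers to several names:

    #include <cstdio>

    // GCC/Clang define both __x86_64__ and __amd64__ for this ISA; MSVC
    // defines _M_X64. Marketing adds x64, EM64T and Intel 64 on top.
    int main() {
    #if defined(__x86_64__) || defined(__amd64__) || defined(_M_X64)
        std::printf("building for 64-bit x86 (amd64 / x86_64 / x64)\n");
    #elif defined(__i386__) || defined(_M_IX86)
        std::printf("building for 32-bit x86\n");
    #else
        std::printf("not x86 at all\n");
    #endif
        return 0;
    }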


Because AMD64 is actually distinct from Intel's implementation, which has some subtle differences - https://en.wikipedia.org/wiki/X86-64#Differences_between_AMD...

The AMD64 usage implies non-compatibility with EM64T..


Yeah, but given the various names others came up with for it, it would have been nice to have one short name. It's normal for differences to exist even within revisions of a chip, so it wouldn't have hurt anyone to use one name. When we say x86, we mean everything Intel-compatible from the 1980s until now, including Cyrix, AMD, Via, etc., 32-bit or 64-bit.


> Yeah, but given the various names others came up with for it, it would have been nice to have one short name.

Colloquially, "x64" works pretty well.


Yep, it's quite odd but works, though many dismiss it as the inferior name, no idea why.


> and showed the world that you could make a fast ARM.

My memory may be a bit hazy, but wasn't one of the demonstrations of the original ARM a program in interpreted BASIC that did in twenty seconds something for which a compiled C program on an 80386 needed thirty seconds? Or something like that?


There were two experiments that stuck in my memory from back then.

The first was them trying to understand why the ARM chip they had on a test board was working despite the fact that the power supply wasn't plugged in. It transpired that there were leakage currents from the keyboard which ended up giving the CPU enough power. (Discussed in a Computerphile YouTube video on the origins of ARM.)

The other was a demo showing an ARM chip being powered off a thermocouple running on the waste heat from a 286 (or maybe a 386... either way, an old Intel CPU).


That's demonstrating you can make a fast BASIC on an ARM. What I'm talking about is making a faster ARM CPU. According to Wikipedia, StrongARM debuted at 233 MHz. I can't remember how fast the contemporary implementations were at the time, but I remember that 233 MHz was a lot faster. (For the record: I'm a fan of neither ARM nor x86.)


IIRC, StrongARM was about 5-10x faster than contemporary (non-DEC) ARM chips of the time, which topped out around 30-40MHz.

What made BASIC blisteringly quick on it was that the BASIC interpreter that Acorn had originally written for their ARM-based micros was small enough to fit into the instruction cache on the StrongARM. (According to Wikipedia, StrongARM had 16KB I + 16KB D cache, a Harvard architecture, whereas the Acorn-designed CPUs only had a 4KB unified cache at the time.) BASIC programs (which were byte-coded) were enormously faster on the StrongARM.


It seems to have been introduced the same year as the Pentium II, which had clock rates of 233-300 MHz. I don't know about the architectural differences at that time, but at least currently Intel processors smoke ARMs at the same clock rate. The difference was probably a lot less pronounced at that time, as the architectures were simpler.


The Pentium II was an out-of-order CPU; StrongARM was in-order.


My also hazy memory recalls that at the launch of the Newton, its CPU (ARM610 I think) was described as more powerful than a 486, at the time the chip to lust after.


" The PC era was about to end."

Not bothering to read the rest. This is entirely 100% wrong. The PC era has not "ended". It's just that we only upgrade every few years instead of every year. And grandma now reads her email on a tablet instead but that was never what PCs were really for.

PCs are still just as much used as ever. We just use other things too, and don't buy a new one every year.

If they can't get this basic fact right then I have no hope for the rest of the article.


I disagree. The PC era ended when the PC was no longer the dominant computing platform. We can argue about dates, specific markets, etc.

Eras are ill-defined, but if you can assert that we once were in a PC era, you must also accept a definition that allows, in principle, for an end to an era.

The dominant platform of its era is the one with the greatest user and developer person hours, sales volume, zeitgeist and so on.

We are in the mobile era.


PCs are still used more than mobile. It's just that the average person had bought a Pentium 4 PC running XP a decade ago and is not going to upgrade it until it breaks.

PCs are now a mature technology, just like cars, and have the resulting long lifecycle.

Smartphones are also getting into this stage.


Post PC never meant "no PC," it just meant the end of the PC as a growth market. There is still money to be had in the market, but it is all rather predictable and boring now. When you are a company looking at getting your profits and stock price up, that is important, especially for Intel.


>Now 12,000 workers are paying the price

I guess there are 12,000 other workers somewhere else in the world that now have a job because they get to create what Intel doesn't. BTW, according to past statistics, most of the workers that are now "paying the price" weren't even Intel employees 10 years ago.


Sigh, the author forgets that Intel tried to leave x86 with Itanium, which was an expensive disaster, and they vowed to never make that mistake again.


Ugh, in 2005 AMD X2's were wiping the floor with any desktop processor Intel had. The only reason Intel stayed in business was that, being a much, much bigger company than AMD, they could 1) outsell AMD on an availability basis and 2) ditch NetBurst and come up with a newer architecture (which was a glorified version of their mobile/older architecture).


Don't forget the giant (possibly illegal) payouts to keep big customers (Dell in particular) Intel-only. $4.3 billion to Dell between 2003 and 2006.

http://money.cnn.com/2010/07/23/technology/dell_intel/


The article uses Clayton Christensen's theory of disruption to explain why Intel missed the mobile phone market and gave it away to ARM. I would just add that I think the same is happening in the Internet of Things.


Would be interested in hearing your analysis on IoT.


What I meant is I think that Intel is focused on producing high-profit margin chips, so it is missing the boat on IoT because its chips cost too much, and ARM is dominant. But I'm not an expert in this area, so I might be wrong.


According to the article, back in the 1990s DEC was forced out of business because they underestimated the impact that PCs would later make on the market, leaving Intel as a leader.

In the 2000s smartphones and mobile devices came to outnumber PCs. Intel missed that, and so ARM dominates the business.

Maybe that same pattern will repeat again with the rise of IoT and wearables, where smaller and cheaper chips become ubiquitous.

The development of Edison and Curie processors might indicate that Intel is betting on this, gearing up for the next "disruptive innovation".


In which case we should all buy shares in Broadcom and learn to program Raspberry Pis.

Or, possibly more likely, the next transition will be much messier, with no single winner.

The mainframe -> mini -> desktop -> laptop -> mobile -> SBC path was about miniaturisation. The devices all provide general computing facilities, but they get smaller and faster and use less power, while the UI becomes more accessible to non-expert users.

I'm not seeing how that translates into wearables, because IoT and wearables aren't general purpose computing devices. They're more like embedded hardware and/or thin network clients.

So I don't think that's where the disruption will happen. It's maybe more likely the disruption will happen in consumer AI, where the UI gets simpler still through speech recognition and NLP, and the screen/mouse/keyboard start to disappear.

My guess is traditional processors will become front-ends and glue for hardware AI systems, and the companies to bet on are the ones producing the subsystems. The hardware will follow the same path as general purpose processors, but they'll move from "mainframe" to "SBC" much more quickly.


But isn't Intel trying to cram the same technology into what is basically a smaller physical envelope, instead of looking at it with a different mindset?


Ironic, given that Intel's long-time leader famously said "Success breeds complacency. Complacency breeds failure. Only the paranoid survive."


Doesn't this mean Qualcomm should be doing splendidly? But it's not[1] :(. What gives?

[1] http://www.sandiegouniontribune.com/news/2015/sep/17/Qualcom...


Qualcomm did poorly last year, because of its failed Snapdragon 810 chip. It lost a ton of sales. Otherwise Qualcomm was and still is dominant in the mobile chip market with close to 50% market share.


Yep, those things were throttling like mad due to heat issues and performance was not on par with its competition, the A9 and the Exynos processors.


Disruption theory predicts that mobile SoCs will be more profitable than PC processors at some point in the future and by that point it will be too late for Intel/AMD to start the transition. But it's hard to predict when that point will be and it may never happen because disruption theory isn't always right.


Early on, Microsoft missed the internet revolution but were big enough and good enough to survive that early misstep. Intel is big enough and good enough to survive their mobile blunder (though, admittedly, they're taking more time than MS did to get back on the horse).


> Early on, Microsoft missed the internet revolution but were big enough and good enough to survive that early misstep.

This has become popular myth, but I don't think it's true. Sure, Microsoft wasn't far ahead of the curve on the internet, or they wouldn't have been developing proprietary online services designed to compete with AOL and CompuServe.

But the Bill Gates "Tidal Wave" internet memo was sent in May of 1995! Netscape's IPO was still months in the future, Amazon had maybe a few million $ worth of revenue, and essentially every other web-based company still in business today hadn't been founded yet.

IMHO, billg's memo should be seen as an example of great prescience, rather than a belated corrective maneuver.


I guess I'm looking at it from where I was back then (a spotty young teenager). I remember Windows 3.1 needed Trumpet Winsock; then came IE 1, 2, 3 and they were awful; when IIS appeared it took about a decade to become any good versus Apache. From my point of view things started changing with IE4 and then Windows 98 getting automatic updates. For me that's when MS turned it around (i.e. ~4 years after Mosaic).


Everyone is a pundit with the benefit of hindsight. Intel made the best decision with the best available information. Moreover, Apple is a notoriously difficult partner that will extract every penny from its suppliers. What if Intel had invested a few billion to support Apple and then Apple went ahead and did their own chip, like they do now, leaving Intel with costs that couldn't be recouped? The same pundit would say "Intel was stupid to spend so much on Apple."

Love it or hate it, Intel still has the right technology and products to appeal to a broad market and make good money. One cannot expect to win every market; you can try if it makes sense, and you should know when to walk away.


$2 billion in profits is a lot. 12K jobs cut is a lot. I can't help but find it a bit crazy that 12K people who helped make $2 billion in profits are suddenly extraneous. Are that many people really redundant within Intel?


I think the culture of Intel is such that they'll turn this around - sadly it had to come to job losses first. But it was the same (if less severe) in the late 90s when AMD nearly stole their lunch.


Don't forget that Intel also made a big bet at a critical time on a partnership with Nokia. That was another couple of years wasted... and further behind Intel fell.


Intel? What about Nokia making miserable 'smartphones' running 24MHz i386???


I don't think those 10,000 people will have problems finding jobs... if you worked at Intel, I'm pretty sure that looks good on your resumé.


I think in the long run this is one of those mistakes that we will all benefit from.

"Monopolist Missteps and Loses Monopoly Position"


A very long-winded article with a clickbait title saying… nothing more than that they missed the mobile revolution.


I disagree. This is an interesting article explaining what is happening to Intel and why. But you're not the target audience if you know this already; every piece of information in this article would be novel and interesting to someone outside of tech.



I actually think there are other factors not mentioned in the article, like the contract lost to AMD in the game console market, or the drawn-out legal battle with Nvidia that didn't go the way Intel had wished.


Is there any new information on the 3DXP technology?



Yes, I know it's a bit OT, but there were other Intel-related discussions, so I saw no harm in asking.


Intel's largest mistake was integrating graphics on their CPU. This cost them more than the entire cellphone CPU market is worth.

It ate up valuable chip real estate, RAM bandwidth, thermal headroom, etc. Worse, it cemented the idea that Intel was crap at graphics while slowing down the PC upgrade cycle.


What the hell else do you think Intel should put on those chips?

Note that ~15% of Steam Players are gaming on Intel iGPUs, as awful as they are.

http://store.steampowered.com/hwsurvey/


And I'm pretty happy about that actually. Having indie games on my laptop with good battery life is preferable to no games or really bad battery life.


Exactly. I think the more casual gamers find the iGPU to be "good enough" (even if it doesn't match my personal preferences). So if Intel's customers are seeing the benefit, good for Intel.


Pure blank space would work: they'd save on manufacturing costs and get faster CPUs (given the heat limits) and more RAM bandwidth.


No, that's not how it works.

Intel had the ability to put more transistors on the same die size with the same power requirements. This was long after they reached thermal/clockspeed limits (with the P4). They started putting additional cores in there and bumped up the L2 and L3 caches, but there was still space left on the die.

What do you do with those extra transistors? It would be absurd to "leave them blank" as you are basically throwing money down the drain.


Even more cores?


Most non-server systems are barely able to use 2 cores, let alone 4; what would they do with 8 or 16? More efficient and powerful graphics built in are a much better use of the die space, even more so considering the rise of GPGPU and hardware decoding.
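A rough illustration with Amdahl's law (the 50% parallel fraction below is just an assumption for the sketch, not a measurement of any real workload):

    # Back-of-the-envelope Amdahl's law: speedup vs. core count when only
    # an assumed fraction of the work can run in parallel.
    def amdahl_speedup(parallel_fraction, cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    for cores in (2, 4, 8, 16):
        print(cores, "cores ->", round(amdahl_speedup(0.5, cores), 2), "x")
    # 2 -> 1.33x, 4 -> 1.6x, 8 -> 1.78x, 16 -> 1.88x

Under that assumption, going from 4 to 16 cores barely moves the needle, which is the case for spending the die space on graphics instead.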


> manufacturing costs

The iGPU needs to be manufactured anyway. And with memory controllers integrated into the CPU, it seems rather complicated to push the iGPU off the chip.

> and more RAM bandwidth

Cheap iGPUs on motherboards would recycle the CPU's memory controller anyway. Did you work with computers in 2006 or so? It's definitely cheaper and more efficient to just integrate everything into the die, in contrast to the designs of the past.


You realize motherboards had shitty integrated graphics long before Intel decided to put them on-die? How did they slow down the PC upgrade cycle?


Intel basically pushed the integrated graphics chips off the motherboard, like they did with the northbridge. I think that was a smart move.


By significantly reducing how powerful the next generation of CPUs was.


Weird, I've seen figures contradicting this in specific cases.


> Intel's largest mistake was integrating graphics on their CPU. This cost them more than the entire cellphone CPU market is worth.

Er… you mean like every single mobile SoC?


These chips are 3D-stacked with RAM, but produced separately.

Which means that if one part is defective you can use a different part, boosting yields. They also get much lower latency.

Recent phone chips have included on-die GPUs, but phones are so power-limited that it's not an issue.
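A toy illustration of the yield point, with made-up numbers (just a sketch, not real fab data):

    # Toy yield model: a monolithic die only works if BOTH the logic and the
    # memory portions are defect-free, while stacking separately tested dies
    # lets you pair up the known-good parts.
    logic_yield = 0.80   # assumed fraction of good logic dies
    dram_yield = 0.90    # assumed fraction of good DRAM dies

    monolithic_yield = logic_yield * dram_yield   # 0.72: one defect kills the whole chip
    print("monolithic:", monolithic_yield)
    print("separate dies keep their own yields:", logic_yield, "and", dram_yield)
    # a bad DRAM die no longer throws away a good CPU die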


Mobile SoCs use DDR3 or DDR4 RAM, just like Desktops and Laptops.

HBM is only being used on AMD's R9 Fury Graphics Card, as far as I'm aware. A rather high-end solution that simply doesn't exist in the phone market right now.


It's not a question of ram type. Physical distance is a large chunk of desktop RAM latency.


No it isn't. Physical distance changes line capacitance and limits clock rates (or requires higher voltages, same thing). DRAM latency has always been completely dominated by precharge time inside the array; the whole idea of synchronous DRAM was to take advantage of that to stream data across many clock cycles (that is, get a very "wide" interface that can transfer a whole cache line at once on a few dozen wires) as it's happening.
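To put rough numbers on that (textbook DDR3-1600 figures, nothing measured from this thread):

    # One 64-byte cache line over a 64-bit DDR3-1600 channel:
    # 8 bytes per transfer, 1600 million transfers per second, burst length 8.
    transfers_per_sec = 1600e6
    bytes_per_transfer = 8
    cache_line_bytes = 64

    burst_transfers = cache_line_bytes / bytes_per_transfer     # 8 transfers
    burst_time_ns = burst_transfers / transfers_per_sec * 1e9   # ~5 ns on the wire
    print(round(burst_time_ns, 1), "ns to stream the whole line")

The tens of nanoseconds a random access actually costs are mostly activate/precharge and CAS delays inside the array, not the transfer itself.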


You get ~1/2 the speed of light propagating through copper, or ~150 million meters per second. Trace the wires from the CPU to the far edge of the last DRAM chip and you're looking at something on the order of 10-20 cm one way, so the round trip adds roughly 1-2 nanoseconds. https://en.wikipedia.org/wiki/CAS_latency shows latencies in the 6.3 ns to 13 ns range for DDR3, thus roughly 10-20% of latency is from distance.

PS: I could get more accurate numbers, but with such a wide range of latency numbers that's pointless.
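If you want to play with the numbers, the propagation part is a one-liner; the one-way trace length is the assumption doing all the work:

    # Round-trip wire delay at ~0.5c in copper (the figure used above).
    def round_trip_ns(trace_m, v_m_per_s=1.5e8):
        return 2 * trace_m / v_m_per_s * 1e9

    for trace_m in (0.075, 0.15, 0.30):   # assumed one-way trace lengths in meters
        print(trace_m, "m ->", round(round_trip_ns(trace_m), 1), "ns round trip")
    # ~1.0 ns, 2.0 ns, 4.0 ns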


Do you not know what HBM memory is? You're talking about "stacked memory" and stuff, so I thought you'd know.

As I noted before: Mobile CPUs are using standard DDR3 or DDR4 RAM, on a motherboard drawn with copper wires to the CPU. The ONLY use of "3d stacked memory" that I know of is the HBM RAM of the AMD Fury X.

http://www.legitreviews.com/wp-content/uploads/2015/05/HBM-D...

I'm countering your factual claims. Only high-power GPUs have stacked RAM right now.

Anyone who has opened up a phone can see the DDR3 or DDR4 RAM on the phone's motherboard, with the phone SOC designed to go "inefficiently all the way" to the DDR3 RAM over "long latency" copper wires.

Read the spec sheet yourself if you don't believe me.

http://system-on-a-chip.specout.com/l/1107/Qualcomm-Snapdrag...


Yes, it's a similar idea. These terms get reused: "SoC" used to refer to a single-chip CPU, and the term keeps being stretched as other components get integrated.

Anyway, I am referring to packaging like this: http://cdn.arstechnica.net/wp-content/uploads/2013/04/ASIC_+... Yes, HBM also does more or less the same thing. The issue is heat, which is vastly less of a problem in the mobile world: with HBM it's a stack of RAM chips next to a processor, while with mobile the CPU sits under what might only be one RAM chip, and they call it an SoC.

http://arstechnica.com/gadgets/2013/04/the-pc-inside-your-ph...

Note: CPUs are already 3D, with many layers, depending on how you count.

Anyway, after die size, heat, latency, and manufacturing costs are really the largest factors. Some PC RAM chips run hot enough that they really can't be stacked. You can slow them down and trade latency for total bandwidth (less of an issue with active cooling and a heat sink). They can still call this HBM, because with more chips, even if each chip is slower, you get higher overall bandwidth.

PS: Don't forget HBM is designed for a GPU/GPGPU without a large cache. CPUs have a huge cache, which lets them take a very different approach, aka lots of cheap but slow RAM. Benchmarks show that faster RAM really does not add much for a modern CPU with most workloads. You could use this for a CPU, but it's harder to swap in more RAM.


In your opinion, what should they have spent that die space on? Moore's law was about transistor count too. They didn't have a lot of architecture changes they could work with, and the on-die memory was pretty much maxed out.


Nothing was a real option: it would significantly reduce manufacturing costs, boost yields, and, with a better thermal envelope, allow higher clock speeds. Remember, the last few die shrinks have run into huge heat issues.


> significantly reducing manufacturing costs

I am pretty sure that if keeping the memory controller, northbridge, and iGPU separate were cheaper, Intel would have kept them separate.

My bet? I think the cheaper motherboard construction that integration allows beats out the slightly increased cost of a more tightly integrated chip. But I'm not an Intel engineer, just a hunch... considering that Apple's A9, Qualcomm's Snapdragon, AMD's A10 "APUs", and Intel have all decided to integrate everything together.

> better thermal envelope processing speeds

What, are people actually using iGPUs for difficult tasks all the time? As far as I can tell, most people surf Facebook all day and ping the CPU.

The iGPU isn't slowing down anything with Facebook surfing. Besides, a crappy integrated GPU on the motherboard would still use up RAM bandwidth.

If anything, integrating the GPU has made the process cheaper for Intel. The northbridge, memory controller, and iGPU are all integrated into modern CPUs because Intel has done the number crunching and determined that it's cheaper to do it this way.


"Nothing" (i assume you mean "blank space on the die") does not reduce manufacturing costs. Boosting yields has nothing to do with leaving "part of the die bank". Thermal envelope is inherent to the process size, not the use (or unuse) of transistors on the die.


It's more complex than that, as you want to avoid local hot spots, etc. Also, you can change the layout so a void on one chip is used as part of a second chip: https://en.m.wikipedia.org/wiki/Tessellation. They can use laser cutters which can trace out complex shapes.

It's much easier to cut rectangular chips, though. Still, even with the same overall design, just ignoring any defects in that area would boost yield.


They should instead have integrated a CPU into their graphics card, like AMD does with those APUs, which they sell to every console maker, and NVIDIA does with their Tegras for high-spec smartphones.


I think Intel was hoping to be able to license (virtually for free) Nvidia technology at some point, during the legal battle they had with Nvidia.


This can really help CPU-RAM latency, which often slows things to a crawl. However, modern Intel CPUs have ridiculous amounts of cache.


Could you elaborate on this?


Graphics cards have a small amount of very high-bandwidth dedicated RAM close to the GPU. PC RAM is further from the CPU, which increases latency due to speed-of-light lag. CPUs get around this by using a lot of L2/L3 cache, which means PCs don't really care as much about RAM bandwidth; instead they want lots of RAM.

Thus main memory is much slower than GPU memory in bandwidth terms. So now your on-chip GPU is memory-starved and is eating into the bandwidth available to the CPU. There are some advantages if you want to pass stuff back and forth, but the standard rendering pipeline is designed for GPUs with their own memory.
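For a sense of scale, peak bandwidth in round numbers (generic parts, not tied to any particular product):

    # Dual-channel DDR3-1600: 2 channels x 8 bytes per transfer x 1600 MT/s,
    # and that bandwidth is shared between the CPU and the iGPU.
    ddr3_gbs = 2 * 8 * 1600e6 / 1e9          # ~25.6 GB/s
    # A typical GDDR5 card of the era: 256-bit bus at 7 Gb/s per pin,
    # dedicated to the GPU alone.
    gddr5_gbs = (256 / 8) * 7e9 / 1e9        # ~224 GB/s
    print(round(ddr3_gbs, 1), "GB/s shared system RAM vs", round(gddr5_gbs, 1), "GB/s dedicated GDDR5")

Roughly 25 GB/s shared versus 200+ GB/s dedicated is the starvation being described.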



