Intel EOLs Atom Chip Used for Microsoft HoloLens (anandtech.com)
94 points by msh on Aug 10, 2017 | 33 comments



This isn't necessarily a death knell. If Microsoft wants to continue with the product, they can do what is called a "lifetime buy", where they buy up enough chips to carry them until their next iteration with a different CPU.

This is especially common in industrial electronics, where volume is lower and product life is longer. Some lifetime buys cover production builds for ten years or more.


Is it possible that Microsoft has an HL-2.0 in the works that doesn't rely on this chip?


It literally says that in the article. This is in no way a death knell.

>The next-generation Microsoft HoloLens will be different compared to the existing augmented reality platform, Microsoft revealed recently. While the device will run Windows 10 and will be equipped with an HPU, it will also feature an AI co-processor integrated into the latter that will use neural networks to enable object and voice recognition skills without the need for an Internet connection. The HPU 2.0 with the programmable AI co-processor will be a self-sufficient device that will run on battery power of the next HoloLens (hence, its power consumption will not be too high). The HPU 2.0 and the AI co-processor were designed entirely in-house and therefore are tailored for usage model of the HoloLens.


I also would not be surprised, given what I perceived as Microsoft's roundabout, passive-aggressive PR complaints about the Atom family, if Microsoft makes a play for HoloLens 2.0 to run on ARM instead of x86.


TFA suggests Microsoft initiated this EOL. Gonna guess that MS is way ahead of things here and is quite fine.


Technically, Intel continues to sell similar products. For example, the i5-7Y54 consumes the same 4W of power, its CPU is about 2x faster, and its GPU is about 3-4x faster.

The main downside is the price. These Atoms were sold for $20-40, the newer chips go for $280. Not sure whether that's a huge problem for the MS HoloLens, but everything cheaper than that is going to switch to ARM for sure.


For a moment I thought this kind of massive price increase on a core component would matter for HoloLens and then I remembered the dumb thing costs $3,000 USD anyways.


The x7-Z8700 and x7-Z8750 have RCPs of $37, can address 4x more memory, have twice as many cores, and consume 2W.

If Intel made a 16-core version running on 8W, I'd want a laptop using it.


> can address 4x more memory

x7-Z8700 8GB, i5-7Y54 16GB.

> have twice as many cores

Yeah, but each core of that i5 is significantly faster.

> and consume 2W

Intel invented a new metric, SDP, for these Atoms. That 2W figure is SDP, which is lower than TDP. Cpubenchmark.net says the typical TDP of the x7-Z8700 is 4W.

For the i5-7Y54 it's 4.5W, but it can be configured down to 3.5W @ 600 MHz.


It'll be interesting to see what they come up with to decrease rendering latency. From what I can see, that's one of the bigger challenges with GPUs lately: instead of focusing on raw throughput (to improve image quality), latency is now the bigger issue.

I personally would welcome an AR HMD that sacrifices whatever is necessary to get a low-latency system that very smoothly tracks the user's movements.


Have you tried the Hololens? It's exactly what you describe in the last sentence.


Exactly. The quality and FOV of the original HoloLens are lacking, but the latency and tracking are mind-blowing.


It had some rather jarring failures when I tried it in an office (lots of high-contrast edges everywhere, you'd think it'd be near-ideal...), but overall yea - quite good. When it works it's remarkably stable.

And agreed on the FOV and image quality - it's pretty terrible. And it doesn't help that the press around it has been incredibly misleading about it. That, and the abysmal gesture-detection have broadly left me at the "neat tech demo, why did you build more than a dozen or so?" level.


I wonder if it would be possible to move some processing into the display. It's not the same, but in particle physics pixel detectors, which are basically CCD sensors, people have been working to move the readout electronics into the pixels. There, the motivation is radiation hardness and cheaper production. Here, it could be decreased latency. What if you could put a super small shader in each pixel, and do a final processing step like 'timewarp' right there? Or a part of your graphics memory is directly your display, no bus in between?

That being said, I've tried a couple of VR solutions and I don't really have a problem with latency. What I find more irritating is that I cannot focus well on different planes. You do get the effect, when you cross your eyes, of merging images from different planes - although badly, because even if you don't see the pixels, you get some kind of moiré pattern when shifting planes (I can't describe it better). And what you don't have at all is depth blur. I think both can only be solved by true light field displays (if that is even possible). I hope there will be a breakthrough with holographic projection at some point.


> What I find more irritating is that I cannot focus well on different planes.

You might be interested in this Oculus VR research then:

https://www.oculus.com/blog/oculus-research-to-present-focal...


AFAIK all these VR/AR devices implement a workaround for that.

Specifically, they render to a texture larger than the displays. When that's done, they cut a rectangle from that texture to account for the head's rotation since the rendering started. If the head didn't rotate, they show the rectangle from the center of the texture; otherwise they show an off-center and/or rotated portion of that texture.

While it would be nice not to cheat this way and to truly improve rendering latency, not just head-rotation latency, that's overwhelmingly harder to do, at least on current-generation GPUs.
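
A minimal sketch of that "render oversized, crop late" scheme, assuming a naive 2D UV shift; the names here (Pose, LatePresentCrop, the overscan parameter) are invented for illustration, and real compositors (something like Oculus' asynchronous timewarp or HoloLens' late-stage reprojection) do the equivalent on the GPU through the full projection and lens-distortion math:

    #include <cstdio>

    // Head orientation in radians; only yaw/pitch matter for this 2D sketch.
    struct Pose { float yaw; float pitch; };

    // Normalized texture coordinates of the rectangle to present.
    struct CropRect { float u0, v0, u1, v1; };

    // Pick the off-center crop of the oversized render texture based on how far
    // the head turned between render start and scan-out. Small-angle
    // approximation: shift the crop window proportionally to the rotation.
    CropRect LatePresentCrop(Pose atRenderStart, Pose atScanout,
                             float fovX, float fovY,  // FOV covered by one crop, radians
                             float overscan)          // e.g. 1.2f = texture 20% larger
    {
        float dYaw   = atScanout.yaw   - atRenderStart.yaw;
        float dPitch = atScanout.pitch - atRenderStart.pitch;

        float cropW = 1.0f / overscan;                // fraction of the texture shown
        float cropH = 1.0f / overscan;

        float du = (dYaw   / fovX) * cropW;           // UV shift from the rotation delta
        float dv = (dPitch / fovY) * cropH;

        float u0 = 0.5f - cropW * 0.5f + du;
        float v0 = 0.5f - cropH * 0.5f + dv;
        return CropRect{u0, v0, u0 + cropW, v0 + cropH};
    }

    int main() {
        // Head turned ~2 degrees to the right while the frame was rendering.
        CropRect r = LatePresentCrop({0.0f, 0.0f}, {0.035f, 0.0f}, 1.5f, 1.2f, 1.2f);
        std::printf("crop: (%.3f, %.3f) - (%.3f, %.3f)\n", r.u0, r.v0, r.u1, r.v1);
        return 0;
    }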


This whole comment section is basically people who didn't read the article, and then people copy-pasting article contents as replies.


It was a shitty chip anyway. No SIMD of any kind. Good riddance, I hope Microsoft picks something more respectable next time.


The Cherry Trail Z8x00 chips are listed as supporting SSE too; are you saying the Z8100P is a custom design with the SIMD units removed and MMX disabled in the x86 FPU? Interesting idea, but it would call for a reference, I think.


I mean real SIMD, the kind you need these days: AVX, AVX2, FMA. And to add insult to injury, MS was running the chip in 32-bit mode, which further restricts the usable instruction set.
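
To make the gap concrete, here's a rough C++ dot-product sketch (illustrative only, tail elements ignored): Cherry Trail tops out around SSE4.2, so on that chip you're stuck with the 128-bit path below, while an AVX2/FMA-class core gets 256-bit lanes and fused multiply-add:

    #include <immintrin.h>

    // 128-bit SSE path: 4 floats per step, separate multiply and add.
    // Roughly the widest vector path a Cherry Trail Atom offers.
    float dot_sse(const float* a, const float* b, int n) {
        __m128 acc = _mm_setzero_ps();
        for (int i = 0; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
        float t[4];
        _mm_storeu_ps(t, acc);
        return t[0] + t[1] + t[2] + t[3];
    }

    #if defined(__AVX2__) && defined(__FMA__)
    // 256-bit AVX2 + FMA path: 8 floats per step, fused multiply-add.
    // This path simply doesn't exist on the x5-Z8100P.
    float dot_avx2_fma(const float* a, const float* b, int n) {
        __m256 acc = _mm256_setzero_ps();
        for (int i = 0; i + 8 <= n; i += 8)
            acc = _mm256_fmadd_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i), acc);
        float t[8];
        _mm256_storeu_ps(t, acc);
        return t[0] + t[1] + t[2] + t[3] + t[4] + t[5] + t[6] + t[7];
    }
    #endif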


Sounds like what happened to the BeBox back in the day. They put their bet on the Hobbit chip initially (multiple hobbits iirc, hence the emphasis on multi-process/multi-thread) and had to delay quite a while to revamp their architecture.


The article quite clearly suggests that the discontinuation of the chip was initiated by Microsoft because they're not going to use it any more...


> The article quite clearly suggests

Are you asking too much?


Not sure where you're getting that from; x86 chips are swappable with few design considerations apart from socket, TDP, and, in rare edge cases, IO.

As for hololens, the current version is approaching EOL anyway.


x86 is not easily swappable by any means: voltage supplies and rail timing, crystal requirements, different device drivers, FW rewrites, etc. Changing CPUs on a modern design is a huge reset.

Source: PM for a hardware product line


The Hobbit was _not_ an x86 chip, hence the delay its discontinuation introduced.

I was not suggesting that an architecture change was necessary for HoloLens, but is there a large range of low-power/mobile x86 SoCs out there to choose from? I know ARM has sucked up most of the oxygen in this particular room, but surely someone other than Intel is working on a similar line?


> The Hobbit was _not_ an x86 chip, hence the delay its discontinuation introduced.

Exactly my point.

> I was not suggesting that an architecture change was necessary for HoloLens, but is there a large range of low-power/mobile x86 SoCs out there to choose from? I know ARM has sucked up most of the oxygen in this particular room, but surely someone other than Intel is working on a similar line?

ARM and x86 both have readily available implementations for Windows OneCore.


> As for hololens, the current version is approaching EOL anyway.

Does it make sense to call it "EOL" for a product that never took off? When was the "L"? Wikipedia says it was released in March 2016 as a dev version, and at an unspecified time this year as a consumer version. That is a rather short lifetime.


Products at Intel that are not laptop, desktop, or server x64 CPUs are cut with extreme prejudice when the bean counters look at them roughly 6 months to a year after the initial announcement and realize "hey, these are not i7 margins".

It's incredibly damaging to everything they do. You can't cut a chip platform that your customers literally solder into embedded products with an announcement 3 months before last order.


> Intel asks its customers to place their final orders on the Atom x5-Z8100P SoC (belonging to the Cherry Trail family) by September 30 and says that the final shipments will be made on October 30. Given the fact that Intel seems to have only one customer using the microprocessor, the short amount of time between the announcement of the product discontinuance and the actual EOL was probably negotiated before. Moreover, since we are talking about a semi-custom chip, Microsoft was probably the initiator of the EOL, which indicates that the company is on track with its next-gen HoloLens.


From the article: Given the fact that Intel seems to have only one customer using the microprocessor, the short amount of time between the announcement of the product discontinuance and the actual EOL was probably negotiated before.

They worked it out with MS a while ago, and are only now announcing that this product, which they will sell to literally nobody, is discontinued.


For designers who need to be able to order products for a long time, Intel does offer an embedded design roadmap. But, ultimately you're just paying higher prices to get them to carry inventory instead of you. ;)

IIRC that roadmap is definitely available for super high margin Xeons. I'd be kinda surprised if they didn't have some Atom SKUs on there though.


Yeah, they did the same with their IoT products. My former roommate had to use one for a university project and he was seriously pissed because it was not of very high quality, so there may be a connection...



