Intel Launches 8th Generation CPUs, Starting with Kaby Lake Refresh (anandtech.com)
121 points by satai on Aug 21, 2017 | 111 comments



> Intel’s big aim with the new processors is, as always, to tackle the growing market of 3-5+ year old devices still being used today, quoting better performance, a better user experience, longer battery life, and fundamentally new experiences when using newer hardware. Two years ago Intel quoted 300 million units fit into this 3-5+ year window; now that number is 450 million.

Yep, Intel's problem is that most folks don't need a new CPU, especially for a computer that's always plugged in.

I'm refurbishing a 6-year-old system with a Pentium E5800 for a friend, and initially it felt dog slow. However, once I swapped the mechanical hard drive for a solid-state disk, it instantly felt like a zippy little machine. It already had enough processing power for everything they wanted (browsing, office, youtube, etc.)


>> It already had enough processing power for everything they wanted (browsing, office, youtube, etc.)

Today's Javascript-packed web pages and HD YouTube content are pushing people to upgrade from their Core 2 Duo and early i5 machines.


My $130 Lenovo Chromebook runs everything on the web, such as the Cloud9 IDE, except Facebook; that doesn't work.


The big grief I have with "general computing" platforms is their insistence on sticking with the traditional form factor.

ATX, ITX, PCI, DDR ... outdated, overbuilt, clunky designs for most people.

Take a Mac Mini-like design, and make modules that can stack or otherwise attach to expand capabilities. IMO, this is what Apple should do and be done with the whole "But Mac Pro users ...!"

A Project Ara-like desktop, in both size and modularity, would probably offer more than enough computing power for most users (browsing, office, youtube).


Plenty of PC makers do that, even Intel:

https://www.intel.de/content/www/de/de/products/boards-kits/...

No idea what any of that has to do with ATX, PCI (long deprecated technology), ITX (form factor), DDR (that's like USB).



Cute! Shame the pricing is "elite" too.

Slice >£1000 inc tax: http://store.hp.com/UKStore/Merch/Offer.aspx?p=b-pc-hp-elite...

Comparable spec small PC £520: http://www.misco.co.uk/product/2688486/HP-280-G2-SFF-Desktop...


Those photos are so fake it isn't even funny. Check out that array of plugs at the back and then where the cables ought to be in the picture.


There's probably an updated PC/104 [1] standard which might fit.

Being designed for industrial/embedded environments, though, you typically will not get the latest and greatest chips/chipsets.

[1] https://en.m.wikipedia.org/wiki/PC/104


Still LPDDR3, with a 16GB RAM limitation. What an embarrassment; all phone SoCs today use LPDDR4(X) and technically support more RAM than the desktop Intel CPUs.


Does anyone know if Ryzen will support LPDDR4 in its mobile chips? I tried Googling around for it but couldn't get an obvious yes or no.

Seems sort of unlikely, but if they do support it a lot sooner than Intel, that would be a big win for AMD.

Even more unlikely would be a Ryzen powered MacBook Pro with 32GB of LPDDR4 RAM...but I'd be willing to pay a lot of money for that. I know Apple tends to prioritize single core performance on their own chips, but almost all of the desktop software their pro users are using would run better on Ryzen than on Intel's current offerings.

Plus, with Intel's Iris Pro gone, Ryzen might allow them to have better integrated graphics, and bring back a 15" model with no dGPU.


Wait, they aren't making chips with Iris Pro graphics anymore? That seems like an odd decision.


That's a bummer. The upside is that I will continue to have no reason to upgrade my current MBP.


Good to see lower power usage but the one thing I still feel is missing is widespread support for ECC on the desk(lap)top.


Funny thing is, back in the days of the PC XT (8086/8088) and the 286/386 ATs, all (or most?) computers had parity-checked RAM (9 bits per byte). I'd rather have a hard halt than silent corruption.
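For anyone curious what that 9th bit actually bought you, here's a toy sketch in Python (not what the memory controller literally does): a single parity bit catches any one-bit flip and lets the machine halt, but a two-bit flip slips through, and parity can only detect, never correct, which is where full ECC goes further.

    import random

    def parity(word):
        # even parity over an 8-bit word: the "9th bit" stored alongside the byte
        return bin(word & 0xFF).count("1") % 2

    data = 0b10110010
    stored = parity(data)

    single = data ^ (1 << random.randrange(8))   # one bit flipped by a memory error
    double = data ^ 0b00000011                   # two bits flipped

    print(parity(single) != stored)   # True  -> detected, machine halts with a parity error
    print(parity(double) != stored)   # False -> undetected, i.e. silent corruption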


Curious what your use case is that entails ECC? Are you currently being held back without it?


It's insane that we're still using systems without ECC RAM. As memory shrinks, bit errors get progressively more common. The more memory you have, the better the chance of corruption as well, of course.

Literally everything else that holds "data" has been using some form of error correction forever. Hard drives, SSDs, USB flash drives, file systems, databases, even network packets. Even HDMI uses error correction, and how important is momentary pixel corruption on a screen???

It's totally insane that we're not using ECC with such large amounts of RAM built on tiny processes. It's definitely just a cartel artificially maintaining a situation that's bad for everyone not selling server chips.


Exactly. A "one in a billion" event now happens with great regularity on a system with over a hundred billion bits of memory.
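Back-of-the-envelope, taking the "one in a billion" figure at face value (the per-bit, per-day rate here is made up purely for illustration; real rates vary wildly between studies and DIMMs):

    bits = 16 * 8 * 10**9          # a 16 GB machine holds roughly 1.3e11 bits
    p_flip_per_day = 1e-9          # hypothetical "one in a billion" chance per bit per day
    print(bits * p_flip_per_day)   # ~128 expected flips per day at that assumed rate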


Integrity of your data. Without ECC, data in memory can become corrupted at any point.

It's usually just a single bit, but say you are working with images: do you care if a single pixel changes its RGB value because of a memory error? Or a character in the metadata?

I do.

Unfortunately there is a hardware cartel which deliberately limits ECC to enterprise/server products so that they can inflate the price and their profit margins.

ECC RAM is more expensive to manufacture than non-ECC RAM, but the price difference would be fairly minimal if ECC RAM were used everywhere, as it should be.


Also any kind of file that is easy to corrupt into a non-decodable state, such as a binary save/config file, or any sort of file conversion or transfer. The data you transfer from location to location will always pass through memory, and in the case of converting that data to another format, it may not be possible to validate the destination format against the original data.

Hypothetically, even with hash checks when transferring files: if the chunk of data read from the source file changes in memory, that corrupted data is used both to calculate the hash sum and to write the destination file, meaning the hash sum will match the destination file anyway. You could also get a wrong hash sum and think a good transfer had failed.

Really, when memory can just 'change', anything can happen and there's no good way to get around it. ECC memory should just be everywhere.
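A toy illustration of that point (hypothetical file name, and an explicit XOR standing in for a real memory error): because the checksum is computed from the same in-memory buffer that gets written out, a flip that happens before hashing "verifies" just fine.

    import hashlib

    buf = bytearray(b"chunk read from the source file")
    buf[5] ^= 0x04                                # bit flip in RAM before hashing/writing

    checksum = hashlib.sha256(buf).hexdigest()    # hash of the already-corrupted buffer
    with open("dest.bin", "wb") as f:             # hypothetical destination file
        f.write(buf)

    # later verification re-reads dest.bin, hashes it, and it matches the checksum,
    # so the corruption goes unnoticed unless you re-hash the original source too
    with open("dest.bin", "rb") as f:
        print(hashlib.sha256(f.read()).hexdigest() == checksum)   # True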


Well, I just had data corruption on my Intel NUC due to a stick of RAM failing. Had I had ECC, the fault would most likely have been spotted right away. Instead it dragged on for a few months. Glad I kept multi-month backup sets.

Firefox crashed every now and then, but that's not something which raised any flags with me. Other than that the box seemed just fine. Then one day I couldn't boot anymore as the filesystem had been severely corrupted.

Ran memtest86 and sure enough, a span of addresses invariably generated errors in all tests.


This shouldn't be downvoted, it's a fair question. Just a few years ago ECC was widely considered an unnecessary belt-and-suspenders thing that made enterprise hardware expensive. I guess the general perception changed with the Rowhammer attack.


I don't think so. Well before Rowhammer, Google published their paper showing the high rate of memory errors they see.

What's changed is higher memory densities, making it even more important.


It only made things expensive because of low, segmented volume. Otherwise it's a ~10% bump on RAM cost and free on everything else.
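Where that ~10% figure roughly comes from (sketching the silicon overhead only; the actual retail gap is mostly segmentation and volume):

    # an ECC DIMM is 72 bits wide (64 data + 8 check bits), i.e. 9 chips per rank vs 8
    data_bits, check_bits = 64, 8
    print(check_bits / data_bits)   # 0.125 -> 12.5% more DRAM silicon per module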


Is the lack of a secure door to your property something that would hold you back? I doubt that. Still, secure doors are important.


Do you have blast doors installed at home?

Anyway, not sure how exactly ECC relates to security. Is there any specific attack vector where it helps?


A blast door protects against explosions, a very unlikely scenario that will happen to practically nobody. ECC protects against bit flips in RAM, which happen significantly more often (https://www.cnet.com/news/google-computer-memory-flakier-tha...)

Your comparison is inadequate.


Rowhammer, though it apparently can work around ECC.


I'm kind of disappointed in this. While they are upping the core count, the overall clock speed is being decreased across the board. This means that single-threaded processes will theoretically run slower (I know it still turbos up).

Honestly, I just upgraded to Ryzen from a 3770K. My 3770K ran all cores at 4.2GHz (overclocked, obviously), and the only reason I upgraded was because I wanted NVMe and DDR4. That chip was 4 years old and I had no CPU-bound performance issues. I really think Intel needs to start innovating more rather than being complacent, or AMD is actually going to steal the show.

Super happy for the competition though!


I wonder how companies like Apple, which have quite stagnant and stable release cycles (compared to other brands), will handle that situation. Does it mean their customers will have to sit on 'old' CPUs again for another generation or two? The latest MacBooks were released ~80 days ago and their release cycle is ~300 days on average. I ask because I was about to order a new Apple machine for myself, and now I'm not sure whether I should (the same problem over and over again) just wait a bit longer.


Considering how recently the Apple Kaby Lake bump was, and that the 8th generation Coffee Lake parts for the "real" Touch Bar Pros won't be released for several months, I'd be shocked if this wasn't one of the better times to buy.


I don't think the regular MacBook gets these chips. These are the ones for the entry-level 13-inch MBP models.


I thought of 'MacBook' as the MacBook family and not as the MacBook 12". To be more precise, when I said I wanted to buy a new machine, I was thinking about the 15" MBPR.


After seeing those internal emails from Microsoft regarding the failures they had on the Surface products caused by problems in the then-recently launched Skylake chips, my guess is Apple is just fine with their delayed release schedule.


Er, if you read those closely you'll realize that it was Microsoft's fault and they were trying to blame Skylake. No other manufacturer had problems like Microsoft's with the same chips.


Oooh moving to a baseline of 4 cores; this means we'll see quad core Lenovo X1 Carbons / Ultrabooks soon. :)

I for one am excited.


Even if you don't use multi-threaded applications much, the doubling in L3 itself should help a lot.


Maybe I'll finally be able to buy a Mac Mini that is faster than the 2012 model I use for grinding up data.

Edit: maybe not. Looks like the Mini is about a 45 watt CPU and these are the 15 watt line. Oh well, it's been 1040 days since the last update (downdate? Maybe that is the term for a product update that releases a slower computer). I can wait for the 45 watt CPUs. Probably, I am past my half-life.

(The current 2 core and 4 core processors that make sense in a Mini have different footprints, so Apple just did the two core to keep costs down. There hasn't been a quad since 2012.)


I believe these are also the chips that will come to the 13" MBP w/o TouchBar. We'll see how the thermals look, but call me excited.


Do you happen to have any idea how long we'll have to wait for that? Might they be out in a month, or might it take a few more?


I'm guessing January next year with February availability, going by previous releases. (This is purely my own speculation, considering CES/previous releases.)


Thanks! :) Do you think this would be the case for the first quad-core ultrabooks/notebooks as well, or just these particular Lenovo products would take that long? I haven't been keeping up with CPU releases so I don't recall how long it takes for them to reach the portable market...


This article [1] contains some info: "Intel tells us that we should start seeing laptops using the new CPUs hit the market in September."

[1] http://www.anandtech.com/show/11738/intel-launches-8th-gener...


Oh, awesome, thanks!


I am really looking forward to seeing how next-generation Intel Ultrabooks compare to the upcoming Ryzen APU laptops.


But look at how slow the GHz are. Crazy slow chips.


What does the term "lake" represent in this family of CPUs?

Apparently asking this makes me an idiot to some... while I'll admit to simple laziness...

I assume that it would tie a technology together as a code name for this family of procs, but in the case of "lake" they use it in multiple differing technologies...

So was curious if it meant something else non-obvious to me.


It doesn't mean anything; Intel has a bunch of unrelated products that all have lake codenames.


Similar microarch.


Wait, isn't 7th gen already essentially a refresh of 6th gen?


Yeah, Intel isn't following the tick-tock pattern anymore; they're doing more revisions on the same node, and some of the revisions are slighter than others. AnandTech had an article on it a while back.


Correct. About a year and a half ago, Intel announced they were ditching tick-tock for a three step model of process-architecture-optimization.


Sure, but this time AMD forced Intel to show real progress with an 8-core ULV CPU (yes, to show, as I guess Intel had it ready-made but was not going to release it for the time being). Before this year Intel didn't have to show real progress, as it had an almost-monopoly on the CPU market.


> with 8 cores ULV CPU

4 cores obviously


How about some consumer desktop ones with ECC RAM support?


Never gonna happen. ECC is a "pro" feature for Intel, reserved for Xeons only. They have to justify the high price of Xeons somehow.


You mean server chips. The server Atoms also support ECC. For example: https://ark.intel.com/products/97927/Intel-Atom-Processor-C3...


If I'm not mistaken there are a few Pentiums and i3 chips that support ECC. For example the G4560.


Even AMD think of it as such. Ryzen doesn't have it disabled on the desktop, but AMD haven't validated it either.


Didn't AMD employees confirm it a few times on the web already?


ECC works with AMD if the motherboard supports it, but you can't always be sure that the motherboard supports it correctly. You need to rely on user reports and what the motherboard maker promises, instead of it being a default feature that always works.

Still a lot more than what Intel offers in that space.


Thanks


I didn't say ECC doesn't work. It is enabled on Ryzen. But it's not something AMD goes to the effort of validating to make sure it works properly, and it doesn't get official support. It's left in there as a footnote for enthusiasts.


What's the price difference between consumer (no ECC) and workstation (ECC support) (mobo+cpu+memory)?


Xeon E3-1230 varies by generation (v1 to v6 or so), but is typically slightly cheaper or slightly lower clocked than the top-of-the-line i7.

The motherboards are about another $50, and the DIMMs are another 25% or so more than non-ECC DIMMs.


Why is ECC sought after on the desktop?


Some people run >32GB of RAM with long uptimes, and there the chance of a random bit flip might not be acceptable. Imagine working on some deep learning model, training it for 30 consecutive days, and then hitting a memory error during computation.


Not your everyday requirement; also, ML is somewhat tolerant of small faults like that.


Depends. If a bit is flipped in a dataset you are likely fine; if it's in code, your computation might crash. If you use enterprise-grade software like the ZFS filesystem, which keeps a lot in memory, it's much better to have ECC and accept slightly slower memory access for a bit more protection.


ZFS without ECC is pretty useless ...


ZFS is no more vulnerable to corruption when running without ECC than any other file system.

The developers of zfs suggest ECC because ECC is a worthwhile thing for those who care about their data.

You should stop spreading misinformation.


Rowhammer protection, for one thing.


ECC does not protect against Rowhammer attacks.


It seems you are correct, but surely it must make a practical attack much harder?


I'm not so sure, but it's not an area I know much about. Practically speaking: if you're trying to change the memory contents of an area, that means you already have software running on the target machine. Does it matter much then whether you need more time because of ECC?


I remember seeing claims of 15-30% improved single-threaded performance. Does anyone know how seriously I should take these? They sound way too good to be true...


They have pretty freaking high turbo frequencies, up to 4-4.2GHz. I don't think they had 15W processors going quite so high before.


Typically the claims of heavily improved single-threaded performance are "up to x% faster", and the only time you see those peak improvements is during uncommon benchmarks.

Until full reviews come out it's hard to say how much of an improvement across the board we'll see, but recently a 2-3% IPC improvement on average, plus whatever boost to frequency, seems to be standard per release.


Maybe per watt?


That's not the impression I got, but I'm not well-versed in the marketing terminology. Is that the impression you get from here? https://arstechnica.com/gadgets/2017/05/intel-claims-30-perf...


I might be missing something in that article, but a 15-30% performance increase when pitting a dual core against a quad core is pretty bad. I don't see any mention of single-thread performance; it talks about overall benchmark performance.


I'm pretty sure they mean single-threaded, but I've seen different numbers floating around. Here I see between 11-29% depending on the model: https://videocardz.com/72112/intel-claims-i7-8700k-to-be-11-...


Okay. Well, you should wait for benchmarks. If, as the AnandTech article mentions, the clock rate gets decreased (which would be very normal when adding more cores), then a single-thread performance increase is very unlikely. In the last launch Intel did not get close to those numbers, and that was without a core increase.

Also, there seems to be some confusion about whether these processors are a Kaby Lake refresh or the new Coffee Lake architecture. The videocardz article mentions Coffee Lake (and some other news articles call these processors that as well), but the AnandTech article defines them as a Kaby Lake Refresh. A new architecture would make a single-thread performance increase more likely.


The table in the article shows a ~5% increase in boost clocks for the high-end models. Those are what matter for single-core performance, not the base clocks.


I think that would be correct for the desktop, but in laptops the turbo clock normally(?) does not hold for a sufficiently long time to mean much.


It does in well-designed machines, although usually not in the ultraslim ones. The ThinkPad T470 can sustain full turbo indefinitely according to notebookcheck. Lenovo's premium line (X1 Carbon/Yoga) cannot, though, as they're too thin and light for a sufficiently capable cooling system, and will throttle after a while.


Would you have a link to the page you're referring to? I'm wondering if that's also true for the T470p.


The T470 review is at https://www.notebookcheck.net/Lenovo-ThinkPad-T470-Core-i5-F... - but keep in mind that's a 15W i5. The 35W CPUs in the T470p produce a lot more heat. I'm sure notebookcheck has a review of that, too.


Makes sense, yeah. Skeptical here as well.


That claim is for Coffee Lake. Intel have recently taken the opportunity to make their line even more confusing; _these_ 8th generation CPUs are "Kaby Lake Refresh". Coffee Lake will be along later.


I wonder how many programmers here using a MacBook Pro need Iris graphics? Compared to this newest UHD 620 (which really is just HD 620 with HDMI 2.2 support), the Skylake Iris graphics is roughly 50% to 60% faster. But with Kaby Lake Refresh you get quad core instead of dual core.

I wonder how many would prefer to have a quad-core 13" MacBook Pro instead.

* These 15W parts can be TDP-configured up to 25W, which fits the MacBook Pro's usage.


Integrated graphics are great for the power savings, but a 2015 MacBook Pro cannot drive a 4K display at more than 24 frames per second.


That's an HDMI limitation. With DisplayPort, my early 2015 MBP drives my Dell P2715Q at 60Hz.
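The rough bandwidth math behind that (ignoring blanking intervals, and assuming the 2015 machine's HDMI port is an HDMI 1.4-class one, which is what the 24-30 Hz ceiling suggests):

    pixels_per_second = 3840 * 2160 * 60        # UHD "4K" at 60 Hz
    bits_per_second = pixels_per_second * 24    # 24-bit colour
    print(bits_per_second / 1e9)                # ~11.9 Gbit/s of raw pixel data

    # HDMI 1.4 carries roughly 8.2 Gbit/s of video payload, hence 4K only at ~30 Hz or less;
    # DisplayPort 1.2 carries roughly 17.3 Gbit/s, which is why DP handles 4K60 fine.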


My 2013 15" MBP drives my 4K display at 60Hz...


I'm assuming this is with DisplayPort?


Yes.

My comment was just an extra anecdote to the grandparent comment... if you want 4K at 60 fps, why aren't you plugging in via DP instead of HDMI?


Plug the monitor into the right port! (the mini display port/thunderbolt one)


Just not in time for back to school - good for Intel's margins, bad for all the students stuck with dual-core i5's/i7's.


The turbo/base ratio is getting interesting. The previous generation saw a 1.6x turbo max, but this generation sees 2.2x: a clear testimony to how the four cores, alas, are mostly for show. Obviously there will be a little improvement, but I wouldn't expect earth-shattering results.


You get four cores at less than half speed. Sort of makes you think: could you just get 2 cores at 4/5ths speed and get about the same?

Very strange scaling this chip has.


Isn't it better to have stronger single-thread performance for developing in single-threaded languages? Looks like a step backwards then? Double the core count and more L3 cache sounds good, even though they crippled the base clock speed.


They lowered the base clock speed, yes. That's the minimum clock you can count on, assuming a correctly designed laptop, even if all four cores are going flat out.

In practice, the clock is set to limit power usage and thermal load. A better-cooled system will automatically run faster (not really applicable to laptops), and if you're only using a single thread then you'll see the same clock rate you did before, or a bit above.


Cooling limitations are extremely applicable to laptops! You can easily have two different machines with identical CPUs and 10%+ performance difference because one has a proper cooling system while the other doesn't. Check the notebookcheck rankings if you want to see some specific numbers.


Sorry, I meant that in the sense that no laptop is "properly cooled". There definitely can still be variations. :P


That's not actually true! From https://www.notebookcheck.net/Lenovo-ThinkPad-T470-Core-i5-F...:

"Our stress test with the tools Prime95 and FurMark (at least one hour) on mains is not a big challenge for the ThinkPad T470. Thanks to the increased TDP limit, both components can maintain their maximum respective clocks over the course of the review. [...] The two CPU cores maintain the full Turbo Boost at 3.1 GHz and the graphics card 998 MHz."


Also, this will only further increase the value of maintainable machines. A machine with good and accessible/serviceable cooling means that redoing the thermal paste after 3-4 years will be both feasible and helpful.


At this point I don't think there should be any single-threaded languages (I'm not sure which ones you're thinking about, since it's mostly about libraries and OS primitives). Even Python is multithreaded (even if the GIL makes it better to just use multiprocessing). I'd say if you're after 10% improvements, the level those kinds of CPU upgrades can offer on a single thread, you'd be better off changing languages than staying stuck on a single thread. If your problem is difficult to parallelize, well, that's another story.
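For what it's worth, a minimal sketch of what "just use multiprocessing" means for CPU-bound work in Python (the crunch function is only a stand-in): threads would serialize on the GIL here, whereas a process pool actually spreads the work across cores.

    from multiprocessing import Pool

    def crunch(n):
        # stand-in for a CPU-bound task
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool() as pool:                         # one worker process per core by default
            results = pool.map(crunch, [10**6] * 8)  # eight chunks of work, run in parallel
        print(sum(results))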


Even if the language supports threads, that doesn't mean your application is magically parallel. No language will give you free parallelism. Besides, most software you run was written by somebody else.


Sure, that was my point about the problem being parallelizable or not. Of course the program has to make use of it and be multithreaded and CPU-bound, or not; that was not the point. The OP talked about "developing in single-threaded languages", which is 1) about new development and 2) about the language being multithreaded or not. I believe we both agree it shouldn't be a question of language in 2017.


According to the article, single-core turbo has increased from 4.0 GHz to 4.2 GHz.


Aaand with Linux, Bay Trail is still an issue, even though 4+ Intel engineers are working on cracking that nut.



