Intel Core i9-7900X review: The fastest chip in the world (arstechnica.com)
139 points by hvo on July 6, 2017 | 116 comments



Another issue, beyond the X299 boards being problematic in terms of how many and which slots you can and can't use, seems to be excessive heat and stability problems in some of the reviews I've seen.

I'm still running an i7-4790K at home, and though I'd like something with more cores... nothing is compelling enough to make me switch given the costs involved. If I were building new, I would most likely go with an AMD solution.


CPU gains for the last few gens have mainly been in perf/watt and, now, core count in lower-class chips thanks to AMD. The biggest benefit of a system upgrade for the last few gens, from a user perspective, has been for the chipset's gains - NVMe, USB3, USB-C, etc. more than raw performance. That's slowly trickled up but the real-world gains in many tasks haven't been enough to really justify it.

I know many people still sitting on anything from a 2xxx-gen to a 5xxx-gen chip who just don't feel compelled to upgrade from a CPU perspective. Those who eventually do, do so for the motherboard features more than the CPU - that's just a necessary cost for a small benefit.

This is marginally different for laptops, where perf/watt becomes more important, of course. However, for desktops I certainly wouldn't be troubled by a 4xxx gen. I upgraded last year, and it was from a 920 to a 6600K. Even the 920 did much of what I wanted; honestly, it was more of a luxury upgrade.


> NVMe

While NVMe benchmarks are stunning, I really am curious whether in an ordinary setting the difference to SATA can be felt at all. Not measured -- felt. When the era of swap ended because CPUs / chipsets finally could handle enough RAM, and when we went from HDD to SSD, those could be felt for sure.

> This is marginally different for laptops

In the laptop world, the problem is that with the spread of the thinness craze, most laptops now run 15W CPUs instead of 35W like in the old days, so there is little performance increase and no core-count increase.


I switched from a 2012 15'' MacBook Pro to a 2017 15'' Macbook Pro and the difference is staggering. The older laptop had a regular SSD, while the newer one is using NVMe. Although it also has a considerably newer CPU, which likely plays a huge role in the performance difference.

I sometimes tweak stuff on both laptops at the same time, and doing a side-by-side restart or even waking up from sleep makes the older model's lesser performance painfully obvious


Additionally "thin craze" matters - I remember having an old 15" laptop in the mid/late '00s that would actually give me backache after carrying it in my backpack for a while.

Today I can't even tell if my 13" dell xps is in it or not.


Interestingly, I cannot really tell if the notebook is in the backpack, but a water bottle with about the same total weight can totally be felt. Probably it's the very beneficial weight distribution of a flat slab that's affixed in an upright position directly against my back, vs. a cylindrical bottle that's either moving around in the backpack or stuck to its side and therefore exerting a sideways torque on my back.


NVMe definitely feels faster.

I have an NVMe and a SATA SSD in my workstation. I do most of my work off the NVME, but occasionally work on projects that live on the SATA disk. I definitely notice a difference in things with a lot of disk access.


About the only place I've noticed it is in some of my Node projects so far... for general use, not nearly so much.


Just upgraded to an NVMe drive as my primary... my motherboard had an x2 NVMe slot, but it didn't work, so luckily I got a PCIe x4 adapter. Frankly, I know it's a bit faster than the SSD, but in general use I haven't noticed much of a difference, aside from Node builds being a bit faster (they tend to touch a lot of random files on disk).


As you say, the thinness craze was enabled by lower-power CPUs that performed no worse than previous gens. They need smaller batteries and less extreme cooling which means smaller sizes. Alternatively, keep the same size and you can enlarge the battery. Either way, perf/watt gains have enabled that in a way that's visible to consumers. With a plugged in PC, that's not visible.


Last year I went from a Corsair Neutron 240GB to a Samsung 950 Pro.

Not much difference in daily usage under Windows 7.


Similarly, I only just upgraded from an i7 860 (2009) to a 6700K at the end of last year. In performance benchmarks, it's only twice as fast as my old one, and that's over seven years. A big difference from how things felt in the past. Imagine if your 66 MHz Pentium still ran well enough that you weren't sure it was worth upgrading to a 500 MHz Pentium III. Same time span.


This was mostly only true in the '90s though, when we saw the hockey-stick part of the S-curve: between 1981 and 1987 your comparable choice would have been upgrading a 5 MHz 8088 to a 10 MHz 286.

A 386 machine would have been ~$10k (in '80s dollars!) and more like a high-end multi-chip Xeon system or something today - my first job actually had a Compaq 386 of that vintage, and it was still in active service in 2001.


In that case I guess my problem is we bought our first computer in '95, so my kind of baseline experience is that super fast world.


Try to imagine it since 1980. Kind of hard to explain to my boy, actually. He has my old computer from about seven years ago - at least I think it was seven. An Intel 2600K? Still works a treat. It's almost as old as he is.


I also have an Intel 2600K. Yes, it's still going very strong. No reason to switch until my motherboard breaks. Speed gains would be minimal, yet getting a new processor would be quite expensive.


I've got a 2600K with 32GB of DDR3 RAM, and it's the cost of DDR4 RAM that is stopping me from upgrading. Well, that and the fact that I've no compelling reason to upgrade beyond keeping up with the Joneses.


I had an i5-2500K that my nephew got a couple of years ago, simply because we moved to a different country and I didn't want to lug a big-ass desktop with me. I suspect most people wouldn't notice the difference between it and a brand new CPU.


I just replaced my work desktop i5-2500 with an i5-4570. Same SSDs, 16GB RAM instead of 24GB. (Not a voluntary move.)

There's no perceivable difference.

If you have enough RAM to avoid swapping and cheap SATA SSDs, anything that requires a fast/large CPU or graphics card is a special application of some sort. Browsers and editors and terminal windows and video playback don't count.


Even with a very intense overclock, Sandy Bridge is starting to fall behind the pack in gaming. Not obsolete by any means but 5 years of even incremental IPC gains do add up, and Sandy Bridge is still running PCIe 2.0. The 2500k is starting to come out 30-50% behind the 7600K in many titles let alone a 7700K, which is a very perceptible difference.

http://www.gamersnexus.net/guides/2773-intel-i5-2500k-revisi...

http://www.eurogamer.net/articles/digitalfoundry-2016-is-it-...

For "work" workloads, Sandy Bridge is still quite potent though. Honestly in that segment the gains have mostly been coming from moving to more cores. A hexacore or octacore with hyperthreading/SMT knocks the stuffing out of an i5.


Yep. It was for that reason that I upgraded from the 2600K. My entire workspace is virtual machines, and the 2600K started falling down there. VMware without hyperthreading is kind of suboptimal. For everything else, it is still a cracking PC.


Oh it's a common (& valid) comparison :)

Just wanted to add a little historical context.


I'm still on my i7 860 with nothing compelling me to upgrade. Intel's biggest competitor in 2017 is Intel from 2010.


It really depends on your applications/field. If you are in machine learning or data analysis, AVX can give a welcome performance boost. Also, a maximum of 16GB RAM is quite limiting.


If you're in ML, can't you just stick a new GPU in the PCIe slot? A GTX 1070 can be had for below 500 €, which would still be a bit less than a new motherboard, new RAM and a new i7.


My i5-750 has run 32GB RAM for years (4x8GB). I haven't had any issues.


> I upgraded last year, and it was from a 920 to a 6600K. Even the 920 did much of what I wanted; honestly, it was more of a luxury upgrade.

The 920 is a solid performer, but oh my does it need efficient cooling. At launch it almost felt like a return to the Pentium 4 era in terms of power consumption.

If I were to upgrade my 920 anytime soon, it would be to reduce the need for cooling fans and to get a quieter system. As a programmer, I don't need more ultra-cores or jigga-hertz.


When I upgraded to my 4790K (from an AMD FX-8350), the main driver was my electric bill. I actually had a couple of hotter computers running, and my HTPC had similar power usage... I went to the newer i7 on the desktop and a Brix (similar to a NUC) for the HTPC, and recently to an Nvidia Shield TV in place of the HTPC.


Indeed, especially with Intel still trying to upsell users to the highest-level X chips to get more PCIe lanes. Fortunately, AMD has more lanes than Intel available from their cheapest chip to their most expensive.


I'm still running on an i5-3550k. I really don't feel compelled to upgrade because I haven't seen a performance drop in gaming yet.


Just curious, what do you do that will make a difference with more cores?


Software dev... I usually have one or more DBs and/or services running at once, whether containerized, in a VM, or directly. Mainly it's a matter of the number of background processes that run as intensely as the front/dev side, depending on what I'm doing.

I also have multiple SSDs and an NVMe I recently put in... that seems to have made the most difference. I'm running 32GB ram now, and don't tend to bump into that as a limitation.


... until ThreadRipper.

I'm not a fan of either megacorp (though I prefer Intel because their stuff works much better historically on GNU/Linux), but it should be painfully obvious that "i9" is just a reaction to the threat of ThreadRipper from AMD (which is still going to be more powerful and far more affordable than i9 when it launches).

EDIT: To be fair, they do mention this in TFA:

> That these chips are currently little more than a product name and a price [...] is a strong indication that Intel was taken aback by AMD's Threadripper, a 16-core chip due for release this summer.


> it should be painfully obvious that "i9" is just a reaction to the threat of ThreadRipper from AMD

This is incorrect. As Ryan Shrout from PcPer notes,[1]

> In some circles of the Internet, the Core i9 release and the parts that were announced last month from Intel seem as obvious a reaction to AMD’s Ryzen processor and Threadripper as could be shown. In truth, it’s hard to see the likes of the Core i9-7900X as reactionary in its current state; Intel has clearly been planning the Skylake-X release for many months. What Ryzen did for the consumer market was bring processors with core counts higher than 4 to prevalence, and the HEDT line from Intel has very little overlap in that regard. Threadripper, having just been announced in the last 60 days (even when you take into account the rumors that have circulated), seems unable to have been the progenitor of the Core i9 line in its entirety. That being said, it is absolutely true that Intel has reacted to the Ryzen and Threadripper lines with pricing and timing adjustments.

[1] : https://www.pcper.com/reviews/Processors/Intel-Core-i9-7900X...


The ≤10-core stuff was planned already, yes, aside from perhaps the pricing.

But it's the new, very-high-core-count CPUs Intel hastily announced with no real details and without even warning their partners first that look like a reaction to Ryzen, specifically Threadripper.

Those CPUs might be a problem for Intel. They can only make so many high-core-count Xeons and now some of the best chips will be headed to the HEDT market rather than servers. Also, thrusting chips with almost twice as many cores as originally planned onto motherboards which aren't designed for them and which already have seen overheating problems might not end well.


I was not referring to the R&D, rather the way it was announced and is being marketed. I would be surprised if they didn't rush development and go straight to marketing as a reaction to AMD. Intel has been resting on their laurels due to the lack of serious competition; ARM is struggling, from what I've seen.

I probably should've been clearer, but I assumed it was obvious you can't do R&D on new silicon in 60 days and already have an announcement for it.


> I would be surprised if they didn't rush development

I thought the development cycle for a new CPU was 2-3 years (hence the two teams, and the tick-tock thing).

It's like adding new features late in the game for a webapp. Some things would be very easy (we need a new page for X). Some things would be very hard (I want you to rewrite the entire app from angular to react). My guess is something like changing core count is very very hard to do late in the game. Changing price, maybe changing overclock settings, those would be fairly easy.

So I suspect that even if Intel didn't see Threadripper coming (which I doubt), the thread counts were set years ago. However, Threadripper is making Intel drop the prices a bit.


Intel already makes these high-core-count CPUs for the Xeon market. What's different here is their presence in the HEDT lineup - but that is basically just blowing different feature-fuses in the chip and slapping them in a different box. There is no actual R&D required.

Intel is afraid of these HCC chips cannibalizing their sales of more expensive Xeons, so they're holding back as long as possible and crippling key features. If you want ECC, for example, you have to buy a Xeon - or more realistically for many people, a Threadripper.


> it should be painfully obvious that "i9" is just a reaction to the threat of ThreadRipper from AMD

Maybe the branding but the HEDT lineup has been around since 2010. Shockingly, this lineup escaped the notice of AMD loyalists, or was assumed to be equivalent to Bulldozer's CMT (which is much more like a SMT/hyperthread than most people would admit at the time) - however they have always been excellent at gaming, particularly compared to Bulldozer's miserable IPC.

The high-core-count lineup are probably a reaction to Threadripper but these chips have been around in the Xeon lineup since forever and really should have stayed there. These chips (including Threadripper) are really multi-socket-in-a-package and don't perform particularly well in gaming workloads (despite AMD advertising them for such). Games don't scale well given the latency. They will be nice for things like CAD rendering workstations but that's a much narrower niche.

The i9 branding also slices off ECC, which is a significant feature for many things these will actually be good at. Threadripper will be at a significant advantage for actual server usage (even home-server) as a result. Intel is trying to avoid cannibalizing sales in their more expensive Xeon lineup but it does kill a bunch of the utility of the processor as a result.


The HN title feels like an editorial by admission. The full title is

> Intel Core i9-7900X review: The fastest chip in the world, but too darn expensive

> When eight-core Ryzen costs £300, do any of these new Intel chips make sense?


s/admission/omission ?


Between the Intel Core i9-7900X and AMD Ryzen 7 1800X there is a $540 price difference.

But for that extra cost you get four more threads at a higher clock rate, twenty extra PCIe lanes, a 500 MHz higher turbo clock speed, and double the memory bandwidth.

Even with the lesser Intel Core i7-7820X, you will get the same thread count but at a higher clock rate, four extra PCIe lanes, still a 500 MHz higher turbo clock speed, and double the memory bandwidth for only $140 more.

Now, of course, the AMD Ryzen Threadripper 1950X comes much closer to the i9-7900X price point.

However, you will sacrifice single-core performance to gain twelve more threads at a lower clock rate. But you will receive twenty more PCIe lanes, over twice as much L3 cache, and the same memory bandwidth as the i9-7900X.

So if your plan is to build a 3D render farm, the Threadripper seems quite appropriate.

Although, if you plan to build a workstation on which to model 3D assets and perform preview renders, the Intel i9 series seems more apt.


Not to take away from your numbers, but you should consider the cost of the motherboard, too. The HEDT Intel motherboards tend to be more expensive than what the Ryzen ones are going for and much harder to find replacements for down the line. Microcenter seems to only sell one X299 and it's at $310. Depending on your desired feature set, you can get a Ryzen motherboard for < $100.


The MSI X299 RAIDER is the cheapest X299 board I can find at NewEgg and costs $219.99.

The GIGABYTE GA-AB350M-Gaming 3 is the cheapest Ryzen board I can find at NewEgg and costs only $94.99 by comparison.

Therefore, there's an obvious savings of $125. Although, you can easily spend $189.99 or more on a higher-end Ryzen board given NewEgg's offerings.

So while there is a discrepancy, it's not massive.


Fair enough. I listed Micro Center because they routinely have the best deals on CPU prices and often have incredible CPU & motherboard combo prices. For a while you basically could get a Ryzen motherboard for free with the CPU and they took $50 off the MSRP of the CPU. I ultimately didn't go for it, but I was looking at an 1800X + motherboard for $500, which is $100 less than the i7-7820X alone.

But I agree that a $200 - $400 difference may not be substantial for a workstation in heavy use. Personally, I do still have concerns about availability over the life of the CPU. I currently have an X79 and was looking to replace the motherboard in its 2nd year of ownership. eBay is really the only option available and with a scarce 2nd-hand market, the boards don't depreciate much. It's gotten better over time, but I was looking at paying $400+ for a used motherboard. I decided to just deal with the quirks of my current one. The X299 is early in its lifecycle, of course, so I'd hope availability for a few years.


> ... they routinely have the best deals on CPU prices and often have incredible CPU & motherboard combo prices.

Agreed.

> The X299 is early in its lifecycle, of course, so I'd hope availability for a few years.

Same, here. I've seen ASRock Intel boards vanish from the market only a year after they debut.

It's disturbing. And moreover, it's detrimental to the lifetime of the board as driver and BIOS updates cease.

Hopefully this chipset and socket will last a bit longer.


Are there really that many people doing 3d model rendering?

My guess would be PC users by count are

gamers > programmers > 3d renderers

For most gamers, seems like i7 or maybe i9 wins in current benchmarks. For programmers maybe ryzen is a better fit, but I bet it depends on your language.


> but I bet it depends on your language.

Not as much as you would think. I have a Ryzen 1700 I use for dev, mostly PHP (not a language that threads well) in an enterprise environment.

Those extra cores/threads come in incredibly handy for virtualization, both dev environments and things like running windows for testing

Even VS2017 inside VirtualBox absolutely flies when given 16GB of RAM and 4 cores (8T).

For my day to day the cores are useful even if my primary language isn't using all of them.

I'm currently in the strange situation where my desktop, running the main system in VirtualBox, is faster than our production/spare servers.


If you're a gamer, then yes, the i7-7820X is precisely what you'd be considering in opposition to the 1800X.

I invoked the example of 3D rendering as it's the origin of most of these discussions among my co-workers.


You're forgetting the CGI industry, as seen in movies.


I am not forgetting it.

Google right now, tells me there are:

155 million "gamers" in the US

3.6 million programmers in the US

I can't imagine that the CGI industry is larger than either of these numbers.

Now, they may very well pay for bleeding edge and spend more dollars on hardware than programmers. But I am at least 95% confident there are fewer people in the US running a 3D program on their desktop compared to running Eclipse/Visual Studio/Atom/Vim.


What about prosumer video editors?


Sure those exist.

1) How many are there?

2) It seems like you are usually not CPU-bound. The people I know doing this spend 20 minutes waiting for a video to render and then 6 hours uploading it to YouTube. In most cases their bandwidth is a limiter 10x over their CPU. (Plus most of those people are on Mac anyway, so they don't even get this choice.)


How does AR impact this need? Will it by default make everyone a gamer with regard to hardware needs? That is, if someone can make a compelling case for it becoming widespread.


Graphic designers are. My friend is moving off a Mac and onto a PC just to do 3D rendering at home.


The real problems with Skylake-X are chipset cost, power consumption, shitty partner boards, and TIM. All of these are forgivable given the performance - except the TIM.

Chipset cost will come down in 6-12 months after launch like it always does. This is par for the course; at launch, X370 boards for Ryzen were going for well over $250 as well.

Power consumption is a consequence of AVX512 and the mesh interconnect along with raw core count. Everyone wants higher clocks, more cores, and more functional units. There are no easy efficiency gains anymore, and this is the price - power consumption. This is the "everything and the kitchen sink" processor and it runs hot as a result - but it absolutely crushes everything else on the market. This is no Bulldozer.

Board partners putting insulators on top of their VRMs was going to come to a head sooner or later. This is the natural outgrowth of form over function: RGB LEDs on everything and stylized heatsink designs that insulate the board instead of actually cooling it. The terrible reviews on those boards will sort this problem right out; they will be unusable in their current form.

Intel has been cruising for issues with their TIM for years (since Ivy Bridge), this time they finally have a chip that puts out enough heat they can't ignore it. Intel can get away with making you delid a $200 i5 or a $300 i7, it's not acceptable on a $1000 processor.

There is still a market for a 6-12C HEDT chip that can hit 5 GHz overclocked. This thing absolutely smokes Ryzen in gaming at stock clocks let alone OC'd - single-thread performance is still a dominant factor in good gaming performance and this chip delivers in spades. Combining its leads in IPC and clocks, it's fully 33% faster than Ryzen in single-thread performance. This is just a brutal amount of performance for gaming. Unfortunately without delidding you're not going to hit good OC clocks given the current TIM. And delidding is a dealbreaker on a $1000 CPU.

TIM is the actual core problem with Skylake-X - everything else will sort itself out. Skylake-X with solder would be a winner and Intel would be wise to turn the ship as fast as possible. The 6C and 8C version are priced much more reasonably and will sell great as long as they fix the TIM problem.

Intel claims they have problems with dies cracking, but AMD manages to solder much smaller dies, so IMO Intel just doesn't have a leg to stand on here. This is not something that should be pushed onto the customer with a $1000 processor - you're Chipzilla, figure something out.


I had to google it so I don't think im alone here:

TIM = Thermal Interface Material


Yup. TIM goes on the die to transfer heat to the Integrated Heat Spreader (which is the lid you normally see). Intel has an issue with their thermal paste, it doesn't properly fill the void and doesn't contact the IHS well, so heat just builds up in the die. Check out this amazing chart from the Tom's Hardware review:

http://i.imgur.com/7BIJmxS.png

http://www.tomshardware.com/reviews/intel-core-i9-7900x-skyl...

Heat just is not getting to the IHS properly on these chips, and heavy overclocking just makes the whole thing worse.

It's really been a slow-burning problem since Ivy Bridge, where Intel switched from soldering the lid to TIM + an adhesive. Solder has long been preferred for its superior heat transmission, but Intel says smaller dies have problems with cracking over time due to thermal cycling. However, AMD has been happily soldering the lid on much smaller Ryzen dies, so apparently it's not all that much of an issue in practice.

Well, you can get away with that on a processor that puts out 50 watts during normal operation. It's been an issue for a while on the unlocked/overclockable SKUs, particularly on the latest 7700Ks, but even an OC'd 7700K only puts out ~100W, so it was relatively manageable. Extreme overclockers could delid and replace the thermal paste with something better (often liquid metal like Conductonaut), which does help performance quite a bit.

But, with the higher TDP of Skylake-X, this has become a pressing issue just for normal operation. Things change when we're talking about a $1000 processor that needs to be delidded to sustain boost at stock settings. That's just not acceptable.

I almost would rather have a bare die at this point. Mounting pressures are no longer insane, so it wouldn't be as terrible an ordeal to mount as Athlon XPs were back in the day (god forbid your screwdriver slip on that bracket; with 50+ pounds of force you are guaranteed to gouge something).


"This thing absolutely smokes Ryzen in gaming at stock clocks let alone OC'd - single-thread performance is still a dominant factor in good gaming performance and this chip delivers in spades."

That is, if you are limiting yourself to 1080p... At the resolution I game at (3440 x 1440), those performance differences disappear very fast. And even at 1080p, 150 vs. 180 frames per second doesn't matter that much for the majority of people.

The cost of this chip alone is the same as some Ryzen builds. There is a point where it financially doesn't make sense (price/performance) even if it's the fastest chip around.


The 6C version is priced directly against Ryzen (~$350), and the 8C version is only modestly more expensive (~$500), and they have a huge lead in gaming performance. That's perfectly affordable for the performance they give - virtually the same prices Ryzen launched at, in fact.

With this much of a lead in single-thread performance (~33%) a 6C Intel is actually outperforming an 8C Ryzen even in multi-thread performance and it's stomping it in games because single-thread performance is still so critical.
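To make the arithmetic behind that claim explicit, here's a crude core-count times per-core-speed estimate. The ~33% figure is the lead claimed above, and the sketch deliberately ignores SMT scaling, memory effects, and how well a given workload parallelizes:

  # Crude throughput estimate: cores * relative per-core speed.
  # Ignores SMT, memory bandwidth, and workload scaling behaviour.
  intel_cores, ryzen_cores = 6, 8
  single_thread_lead = 1.33   # assumed Intel per-core advantage (figure from above)

  intel_units = intel_cores * single_thread_lead   # ~7.98 "Ryzen-core equivalents"
  ryzen_units = ryzen_cores * 1.0                  # 8.0

  print(intel_units / ryzen_units)   # ~1.0 -> roughly parity in multi-threaded throughput

Under those assumptions, 6 cores with a 33% per-core lead come out to roughly the same aggregate throughput as 8 slower cores.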

And the 8C Intels are just 33% faster than Ryzen across the board.

High-refresh gaming requires excellent single-threaded performance regardless of resolution, and 144 Hz is basically the new standard for midrange/high-end gaming builds at this point. A 144 Hz monitor starts at literally $150 and a very nice IPS 144 Hz FreeSync/GSync monitor can be had for $400-600.

It's not just 1080p - CPU single-thread-performance requirements scale with the framerate, it's just easier to hit higher framerates at lower resolutions. So 1080p benchmarks are a "leading indicator" of future high-refresh gaming performance as GPU tech improves and you upgrade in a year or two.

On the flip side, 4K benchmarks really mean almost nothing for CPUs. A Pentium G4560 is within a stone's throw of a 7700K at 4K because everything is GPU-bottlenecked at a very low framerate that virtually any processor can deliver. But that G4560 will fall behind in no time at all as GPU performance continues to improve and its actual performance (or lack thereof) is laid bare.


Or I guess put another way for gamers, saving $500 on your CPU lets you go from 1 GPU to 2 GPUs. For a lot of games, that is going to be a pretty big performance win.


It's silly to even look at 1080p for high-end gaming, beyond having some "rule-of-thumb" number to compare performance historically.

I remember I used to advise people to spend half on the monitor and half on the PC system (and out of that, maybe close to half on the GPU). That used to mean a ~1000 USD monitor (or monitors) and a ~500 USD GPU - for a total system price of ~2000 USD.

Today, most people would probably aim for a lower total cost, but it's still silly to sink a lot of cash into getting a great system, only to have a crappy monitor ruin the experience.

(Another caveat, I'd guess a high-end monitor should be able to survive/remain usable for closer to a decade than to 3-5 years -- which would be more typical for a pc. Of course, part of the reason for getting a pc system would be the possibility of ~incremental upgrades)


27" 144 Hz IPS GSync/FreeSync like the XF270HU or XB271HU is the place to aim for a general-purpose monitor right now ($400-600 depending on model and new/refurb). People inevitably are a bit dubious on them at first given the cost, then they try them and agree it's worth every penny.

Unless you want ultrawide that is - but there are some caveats there with game compatibility due to the aspect ratio.


At the desktop level, I don't get why people care that much about power consumption. It means you have to dissipate more heat, okay, that means you can't use a cheap cooler. But AFAIK even an extra 100 W is cheap in the most expensive areas, especially when contrasted against productivity, or cigarette breaks, or people sometimes being 20 minutes late to work...


> that means you can't use a cheap cooler.

Actually, air-cooling is marginal with these CPUs, even at stock frequencies. There is no air-cooler which allows them to sustain their frequencies under load.

Water-cooling is pretty much required, and an AIO does not cut it. Still, even with water-cooling you can't really overclock these. They are pretty much at their limit out of the factory.

This could conceivably be solved by Intel switching away from silicone TIM to e.g. solder, since the Rth(jc) of these CPUs is much worse at ~0.3 K/W than the thermal resistance of a big CPU air cooler (~0.1 K/W).
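As a rough illustration of why that Rth(jc) figure dominates, here's a back-of-the-envelope sketch using the thermal resistances quoted above; the 250 W package power is an assumed heavy load for illustration, not a measured figure:

  # Back-of-the-envelope temperature-rise estimate from the figures above.
  # The 250 W load is an assumption for illustration.
  package_power_w = 250      # assumed heavy all-core/AVX load
  rth_jc = 0.3               # K/W, die -> IHS through the TIM (figure quoted above)
  rth_cooler = 0.1           # K/W, large air cooler (figure quoted above)

  delta_t_tim = package_power_w * rth_jc         # ~75 K dropped across the TIM alone
  delta_t_cooler = package_power_w * rth_cooler  # ~25 K across the cooler itself

  print(f"TIM: {delta_t_tim:.0f} K, cooler: {delta_t_cooler:.0f} K")

In other words, under those numbers most of the temperature rise happens before the heat ever reaches the cooler, which is why a better cooler alone can't fix it.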

The overclocking problems wouldn't be fixed though; there is no easy fix for a CPU that jumps to 400+ watts.

-

An entirely separate issue is that you need to get the heat out of your office. A human dissipates around 50-100 W; you can imagine that a small office crowded with half a dozen people is not pleasant in the summer.


I fully agree, and what's more Intel has the performance to back it up. This chip pulls a lot but it's wicked fast, it's a massive step forward in framerates. It combines the minimum-framerate improvements of HEDT/Ryzen with the single-threaded performance of Kaby Lake. Oh yeah and AVX512 too.

For a sense of perspective here, going from a circa-2012 2600K to a current 7700K is a 40% jump in performance, so it's roughly equivalent to 4-5 years of gains at Intel's usual tempo - only you also have 10 cores on this platform. This thing is an absolute monster for gaming or other tasks that lean heavily on single-threaded performance.

But the power consumption is really the triggering issue for the problems with shitty partner-boards overheating and the TIM. The TIM is really the showstopper right now.


It's more like 20-25% (depending on what you use it for).

https://www.hardocp.com/article/2017/01/13/kaby_lake_7700k_v...


For me it's purely a noise issue. Less heat to dissipate means less fan noise. (Only speaking about home use here. In an office setting the difference would be unnoticeable.)


> In an office setting the difference would be unnoticeable

If you have a hall full of developers, having lots of noisy PCs can be annoying.

Where I work, we optimized for more silent PCs, because all these things do add up, and given the perf/watt-ratio you can get out of modern CPUs, there's no reason for people to need to have noisy PCs.

Even at home, optimizing watt usage, even for a desktop build, is not completely without merit. All my future projects are planned to be as fanless as possible, and I know others who do the same. And if Intel can't deliver that, they'll just go buy something ARM-based, like an RPi 3, which these days is getting good enough to actually do production loads.

I'm not even getting close to wanting a system where the CPU alone can draw 400+ watts.


> an RPi 3, which these days is getting good enough to actually do production loads

They really aren't, CPU performance is really irrelevant given the RPi's architectural weaknesses. USB was never meant as a system bus and everything has to loop through the kernel stack. Having every single peripheral hanging off a single USB 2.0 bus is crippling for performance. A Pi can't even serve a share at full 100 Mbit speed due to bus contention, let alone do anything more intensive.

It's very similar to one of Apple's more famous goofs, the Performa 5200/6200 with its left-hand/right-hand bus split that forces the CPU to handle everything.

http://lowendmac.com/2014/power-mac-and-performa-x200-road-a...

Some of the clone boards have USB 3.0, SATA, gigabit ethernet, etc and are much better performers in practice despite having slower CPUs "on paper". Or there are little mini-PCs using 5-15W laptop processors that are really nice and run x86 distros/binaries.

All of these are at roughly comparable TCOs to a Pi (they include things like AC adapters that must be purchased separately for the Pi). The RPi is a bad choice for server usage.


> They really aren't, CPU performance is really irrelevant given the RPi's architectural weaknesses

Obviously "production loads" is an undefined term and as such we can discuss infinitely back and forth exactly how much these cheap ARM machines can actually handle.

I also didn't mean to single out the Rpi3 as a universal performer, optimal for everything out there.

My point was that I'm seeing an increased amount of people who are happy with what these cheap boards can do, who 10 years ago would have been forced to buy a server of sorts to cover the same needs.

So now they don't buy servers. Instead they buy cheap, tiny and fan-less ARM-based machines and they're perfectly happy. They even think running ARM is cooler than running Intel, so it's something they brag about.

I'm absolutely not saying I'm going to replace my company's server-farm or my dev-computer with these anytime soon, but Intel cannot completely ignore the power-efficiency aspect either if they want to keep their dominance in the market.


I bought my Ryzen chip on its release day. I don't need some X370 board; I got myself a B350M board, which doesn't limit me at all for my applications. It cost me $100 delivered.

There is no such alternative for Skylake-X - Intel charges you an arm and a leg for its half-decent products.


Overclockers are vocal but I can't imagine they're more than a tiny portion of the market these days? For most use cases modern CPUs are plenty fast enough stock. Businesses won't do something unsupported. Games will be written to run properly on supported chips. Maybe with a lot of effort you can get your games looking slightly nicer, sure, but for how many people is that worth it?


> Unfortunately without delidding you're not going to hit good OC clocks given the current TIM. And delidding is a dealbreaker on a $1000 CPU.

Yeah, absolutely agree. What's sad is there are fanboys defending this and saying it makes direct-die cooling easier through delidding, which saves 1-2 °C over solder, and is the best idea Intel has had in recent years.


What I really wonder is when this chip would have been released if AMD hadn't come out with the Ryzen lineup in 2017.


I'll answer your subtle comment quite explicitly: never.

Intel clearly rushed this out to prevent AMD from having the perception of leading the space, at least in terms of the largest core count. The article even references this.

I just hope that it stays competitive, as this is clearly a win for consumers.


Well, no. A 10-core HEDT Skylake follow-up to the 10-core Broadwell was basically a given, and would have shipped right about now regardless. Maybe Ryzen caused them to push the default clocks up, but probably not, since it's unlocked anyway. Most likely the only thing Ryzen changed about this chip is knocking $800 off the price (and maybe the i9 marketing name).

The rest of the i9 lineup, thus far unreleased with only the core count actually known, is Intel's rushed response to Ryzen. Not this chip.


Skylake-X certainly would have existed regardless of Ryzen, Intel wants to sell these chips to Google/Amazon/Facebook as Xeons. The idea that this uarch is a reaction to AMD is absurd, this uarch has been on the roadmap for years and at most was pulled forward a few months.

People will attribute anything one of AMD's competitors does to fear of AMD - 780 Ti, 980 Ti, 1080 Ti, Skylake-X, you name it. It's frankly a little comical given the actual amount of competition AMD put up with Bulldozer and Fiji/Polaris/Vega against Haswell/Skylake and Maxwell/Pascal - which is to say, hardly any. NVIDIA and Intel both have their own yield strategies and release schedules that are largely independent of what AMD does. Intel and NVIDIA are tweaking prices and specific launch dates - that's about the extent of AMD's impact on their competitors so far.

(although Intel is definitely paying attention to Threadripper/Epyc now, and will be pushing core count up on consumer chipsets starting with Coffee Lake and presumably Cannonlake as well)

The real problems with Skylake-X are chipset cost, power consumption, shitty partner boards with insulators on the VRMs, and TIM. None of those have anything to do with a "rushed launch", all of those are issues that have been slow-boiling for years now.


> Intel wants to sell these chips to Google/Amazon/Facebook as Xeons

They have already been selling vastly superior Skylake-SP (and -EP) Xeons to Google/Amazon/Facebook since 2015.


Reading the text, I think they were talking about the chips with a core count larger than 10, which were never on any roadmap until AMD released Ryzen (a 12-core Skylake appeared on the roadmap) and then announced Threadripper (14-, 16-, and 18-core Skylake-X paper launched).


It's amazing what a little healthy competition can do!


I'm ignorantly guessing Intel always keep something in their back pocket ever since AMD's Athlons bested Intel's P4, just in case AMD ever tried to wrest the frown back. Or, I would, anyway.


> just in case AMD ever tried to wrest the frown back.

Intel: "why so serious!?"


Lol. Autocomplete. "performance, performance /watt" _crown_.


Yes, it's amazing what Intel can pull out of the hat when they finally get a little competition again.


I'm all for double the memory bandwidth (and more importantly double the memory channels) as long as it's not too expensive. But I'm holding off on buying a new desktop till the AMD Threadripper hits in a few weeks.

I suspect that benchmarks do a really poor job of measuring worst-case performance... which is what users notice. Things like UI lag and audio skipping. I suspect memory bandwidth (assuming a nice fast M.2 SSD) is the limiting factor for heavy workloads made up of independent tasks.


UI lag and stuttering audio is still mostly caused by bad IO scheduling.


> fastest chip in the world

Impressive as this is for x86, some POWER and SPARC users may disagree with this assessment. In fact, some Xeon users will doubtlessly scratch their heads too.


Only reason they'd scratch their head is to try to find the hair they lost explaining why they are on POWER or (especially) SPARC.

Power9 is competitive in performance/Watt, and in some weird benchmarks which no one cares about. I haven't seen anything competitive in any way from SPARC for a long time.


The article claims "fastest", not "most efficient" or "best per dollar". It all depends on the workload - I remember SPARC has some interesting tricks integrated into its silicon that make things like HANA and Oracle a good deal faster.



But once again they have included a second "processor" on each chip with a bunch of restricted backdoors that cannot be removed [1]. There have already been bugs [2] and exploits [3] found, and therefore no Intel or AMD chip can be used if you care about security and/or privacy.

If I think Microsoft isn't free enough for me, then I can remove Windows and install Linux (or BSD). If I think Chrome is sending my data to Google, then I can remove it and install Firefox.

But if I don't like that Intel can take over my PC at any time, watch my screen, log my keystrokes, prevent me from installing another operating system, manipulate what I see on the screen, and much, much more, then there is nothing I can do. I cannot remove the second chip or remove the code. I must have a proprietary blob [4] (whose source code no one can see to audit) running on my Intel PC.

But the worst thing has to be Intel and AMT's complete refusal to provide a clean chip to companies that are trying to provide backdoor-free computers. Look at http://puri.sm [5] for example. They are trying to provide a PC that does not restrict what operating system or BIOS you run, and have repeatedly contacted Intel to ask them to provide a batch of chips with no ME or AMT installed. Even Google, which sells millions of Chromebooks (coreboot preinstalled), has been unable to persuade them. [6]

As Intel and AMT are the biggest players and arguably a monopoly of the microprocessor market, they have a responsibility to provide safe and clean processors that customers can truly own. Please try your best not to buy these products until they resolve these issues.

[1] https://libreboot.org/faq.html#intelme

[2] https://www.theregister.co.uk/2017/05/01/intel_amt_me_vulner...

[3] https://www.intel.com/content/www/us/en/architecture-and-tec...

[4] http://boingboing.net/2016/06/15/intel-x86-processors-ship-w...

[5] https://puri.sm/learn/intel-me/

[6] https://libreboot.org/faq.html#intel-is-uncooperative


Thing is they could have implemented that in a half-open manner. Sign and hash releases, and post the code openly.

So you can view the code and verify it's what is installed, but for security purposes can't change it.

The way this reads, it seems deliberately suspicious. Having an HTTP server so low in the hardware stack feels wrong.
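For illustration, here's a minimal sketch of the hash-verification half of that idea (a real scheme would also check a vendor signature over the published digest with the vendor's public key). The file name and expected digest are hypothetical placeholders, not real Intel artifacts:

  # Minimal sketch: verify a downloaded firmware image against a published
  # SHA-256 digest. File name and digest are placeholders.
  import hashlib
  import hmac

  EXPECTED_SHA256 = "0" * 64  # would come from a signed, publicly posted release manifest

  def verify_image(path: str) -> bool:
      digest = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              digest.update(chunk)
      return hmac.compare_digest(digest.hexdigest(), EXPECTED_SHA256)

  print(verify_image("me_firmware_release.bin"))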


Well, it doesn't have to be HTTP but you do need some sort of server low in the hardware to control the machine remotely. Such things have existed for many years in servers, now Intel integrates them in chipsets for corporate PCs too.


Who is AMT? I thought AMT is just Intel's marketing name for certain remote management features.

Also, I'm under impression that [6] refers not only to firmware blobs running on various auxiliary coprocessors but also the machine's firmware, i.e. BIOS, and in particular the CPU initialization component which had been provided only in binary form by both Intel and AMD for a few years now.


Sorry, my mistake. I should take better care of my writing. I should have written AMD instead of AMT. I cannot edit my post.

AMT is the poorly written and proprietary firmware Active Management Technology. This allows you to remotely control a computer.

AMD is a company and competitor to Intel.


What you say is absolutely true, but I don't think there's any reasonable alternative if one wants a fast x86-compatible processor nowadays.

I myself am thinking of buying one of these new AMD processors.


Careful with these benchmarks. The 6700K and the 6700T show the same Cinebench R15 score on https://www.notebookcheck.net/Intel-Core-i7-6700T-Processor-... -- click on Show comparison chart below Cinebench R15 - CPU Multi 64Bit. I do not think anyone believes those two CPUs perform the same -- they are the same Skylake architecture, but one is 4-4.2GHz with a 91W TDP while the other is 2.8-3.6GHz with a 35W TDP. The difference is decidedly not 1%.

It's not that notebookcheck is benchmarking something outlandish: it shows 668 while this Ars article claims 637 for the 6700K, Notebookcheck benchmarked the 6950X to 1859 while Ars has 1786, both are very close.


I think you may have some model numbers and/or benchmark numbers mixed up. I don't see the 6700K or 6700T in the charts in this Ars article.

I see a 7600K with the 637 score, but that lacks hyperthreading and has 25% less L3 cache compared to the 6700T, so it makes sense that the 17% frequency advantage is mostly balanced out (there's little IPC difference between Kaby Lake and Skylake).

You don't have notebookcheck's numbers matched up with the right CPUs either: https://www.notebookcheck.net/Mobile-Processors-Benchmark-Li...

To be fair, Ars has the i5-7600K listed as an i7, and Notebookcheck has the cache sizes wrong: http://ark.intel.com/compare/88200,97129,97144,88195

So there is plenty of confusion to go around.

Edit: Actually the frequency difference may be a bit off from 17%, that was based off the max single core turbo frequencies. I don't know what the all core turbos are.


I'm out of the loop when it comes to the latest processor technology. How does this chip maintain the same TDP as a 6850K but with 4 more cores? Same size (lithography), roughly the same frequency, memory size/types.


TDP numbers from Intel don't tend to mean much. They group a wide range of CPUs under the same TDP.

The CPUs will generally use far less power than the TDP might suggest.


Power/frequency scaling is quadratic since voltage can decrease with frequency, so even the 10% difference in base clocks could explain it, depending on where in the voltage/frequency curve they are.

Plus, this is Intel's 3rd iteration on the same process; even if the feature size doesn't change you can extract a bit more power efficiency with 2 years of feedback.
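A rough sketch of that relation: dynamic power goes roughly as P ≈ C·V²·f, so if voltage can drop roughly in proportion to frequency (an optimistic simplification; where each part sits on its voltage/frequency curve determines the real number), per-core power falls much faster than the clock does:

  # Illustrative only: per-core dynamic power under P ~ C * V^2 * f, assuming
  # voltage scales roughly linearly with frequency (a simplification).
  f_6850k = 3.6   # GHz base clock
  f_7900x = 3.3   # GHz base clock

  clock_ratio = f_7900x / f_6850k
  per_core_power_ratio = clock_ratio ** 3   # one factor of f, two of V (V ~ f)

  print(f"per-core power at 3.3 GHz vs 3.6 GHz: {per_core_power_ratio:.2f}x")  # ~0.77x

Under that assumption each core at 3.3 GHz draws about 23% less power; how much of the 10-vs-6-core gap that actually covers depends on the real voltage/frequency curve and on the process maturity mentioned above.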


No integrated GPU.


Er, neither has an integrated GPU. The i9-7900X has the benefit of a generation of tuning and runs at a slower base clock (3.3 vs 3.6 GHz for the 6850K). A slightly smaller L3 and a much larger but also higher-latency L2 help as well.

I suspect moving from a ring bus to a mesh helps as well.


> fastest chip in the world

What a huge load of biased nonsense! I have a machine pretty similar to the one mentioned in the article below; it is using Intel processors released ages ago - in fact, they came from decommissioned servers from some random data centres. I'd be willing to bet that the machine with "the fastest chip in the world" is significantly slower/more expensive than mine when it comes to my long list of day-to-day development tasks.

Oh, don't forget to mention the fact that the "fastest chip in the world" can reach >100 degrees when fully loaded. Maybe Intel should pay some review sites to claim it to be the processor most suitable for cooking a meal.

https://www.techspot.com/review/1218-affordable-40-thread-xe...

In case you want to argue that my machine has two Xeon - you can actually order one single Xeon from newegg.com which is more recent, put it into a consumer motherboard and beat the xxx out of i9-7900x. There is no way a 10-core Intel processor could possibly be the "fastest chip in the world".


The 20+ core Xeons run at 2.1 or 2.2 GHz, while this runs at 4 GHz with a newer architecture. I don't think it's a big stretch to call this the fastest chip, especially since we're referring to consumer chips and not server chips that cost 9000 eurodollars (as is the case for those 20-core Xeons).

Also, have both of these setups run a mixed workload that isn't absolutely parallelizable and watch the Xeon struggle.


1. The Xeon I am using can turbo to 3.1 GHz. Sure, it is slower than 4 GHz, but the sheer core count makes it much faster in the day-to-day development tasks many people face. Consumer grade or not, it doesn't matter when there are no special requirements; you just buy components from your favourite vendors and put them together, that is all.

2. You can buy a pair of such 10-core Xeons for almost the same price as a single i9-7900X.

3. The i9-7900X faces the exact same problem when the workload cannot be parallelized - you can buy a much cheaper quad-core Intel processor that overclocks well, push it to say 4.5 or 5 GHz, and beat the "fastest chip in the world".


You are comparing 2 chips to a single chip and trying to argue against the claim that the single chip is the fastest yet.

I'll humour you however.

Your Xeon turbos to 3.1 GHz if only 1 core is stressed, but its all-core boost is 2.4 GHz, as per https://www.pugetsystems.com/blog/2015/07/09/Actual-CPU-Spee...

I don't see how your 10-core part boosting to 2.4 GHz is "much faster in the day-to-day development tasks" given that it's two CPU architectures behind and clocks at almost half the 4.0 GHz achieved by the 7900X (all-core boost).

But even a pair of those 10 cores is probably slower (caches are not shared, all-core boost almost half while roughly 5% slower in IPC performance due to the jump from Broadwell to Skylake).

So I don't think you're actually trying to argue that this isn't the fastest chip yet, but that it's a bad deal compared to looking around and buying some used server parts.

And yes, that's a better deal, but also a used i9-7900x is a better deal than a new i9-7900x...


The link I provided contains detailed benchmark results of the mentioned system against the 6950X. In quite a few workloads, e.g. SPECwpc, it beats the 6950X by up to 42%. See page 4. The maths is really simple here - the i9-7900X needs to beat the i7-6950X by 40% to match that performance.

The i9-7900X is _NOT_ the fastest chip in the world, not even the fastest Intel chip. The consumer/server difference is purely a marketing thing; my Xeon-based workstation running CS:GO on a daily basis is not a server.


I'm a bit disappointed - it said "review" in the title, but there are no interesting details or tests inside.


It has multiple benchmark results.


Am i the only person to read that and think "a couple of reasons to buy and a whole bunch to not"?


Why does the title say exactly the opposite of the article?

maybe if it ended with "for very specific cases" it would be more true.


"I wanna go fast!" - Ricky Bobby


What do you think this is, Reddit?


:(



