Intel Unleashes Its First 8-Core Desktop Processor (intel.com)
197 points by joaojeronimo on Aug 29, 2014 | 145 comments



Even the 5960X, the $999 8 core part, has a maximum memory size of 64gb, unchanged since Sandy Bridge E.

That's disappointing, because while the CPU will likely remain close to state of the art for quite some time to come, you'll most likely max out the memory on day one and be stung by an inability to upgrade.

Of course, this was probably by design, so that they can sell you another, virtually identical 8 core processor in two more years for another $999.

http://ark.intel.com/products/82930


You could buy a slower Xeon for around the same price if you really needed more than 64 gigs of memory.

http://ark.intel.com/products/75269/Intel-Xeon-Processor-E5-...

And it supports ECC.


The Xeon isn't overclockable, which is a big part of the niche this processor sits in.

If you read my post again, I'm not saying that 64gb is too little right now. It's probably the right match for the processor for most workloads, today. 32gb would seem weak with 8c/16t (I have that much in my 4770 system), and 128gb could be excessive.

But in two years, swapping in 128gb would be the no-brainer upgrade to this thing. That this is being ruled out ahead of time is not a good thing.

(Barring an Intel microcode revision, as is being speculated by the sibling commenters. But I'm not holding my breath, as Intel Ark is pretty definitive.)


Idk... I'm struggling to see why an average user in the overclocking/high-end pc market would run into the 64 gig limit assuming the high end market has a relatively short part lifetime. I mean if you're in it for the video editing then the sky is the limit but for an average user? A user could ram-cache 4 hard drives with a 4gb buffer each, power up the entire adobe suite including Illustrator, Photoshop, start a browser session with 100 tabs and 10 video streams, torrent client, email client, backup client, vpn, a couple modest ftp and web servers, a transcoding media streaming server AND crysis 3 and still likely have 10-20 gigs to play with. I think if you need much more than that running concurrently you probably should be starting to think about server hardware.

If you think 64gb will be an easy limit for an average user to hit in the near future I would love to hear your envisioned use case.


I think it's going to start becoming reasonable to package up applications in VMs and distribute those VMs as "appliances" to run, instead of installing software directly in the OS. I think this is going to start happening regularly in the consumer space sooner rather than later (and already has in some cases, like with XP Mode). This is pretty much the modus operandi in the service space today.

There's lots of really good reasons to do this (sandboxing, ease of installation, compatibility, snapshots/state saving, etc.) and VM tech at the consumer level is good enough for most applications. Doing so also enables you to distribute the same application for different host architectures relatively easily (swap out the virtualization core with an emulation core).

VM technology basically will allow consumer software vendors to start treating your computer like a set-spec videogame console instead of worrying about millions or billions of possible complications from how your computer is set up. Once VMs in the consumer space get good enough to really run high-end games, imagine everybody just writes to some Valve defined Linux spec that just happens to match some Steam Box, but you can install the VM for that game on your Mac or Windows or whatever and get to gaming.

If this happens, VMs will chew through RAM faster than just about anything out there.

So instead of installing and running Adobe Suite, you start up the Adobe Suite VM and boom, 8GB of your RAM vaporizes. Fire up your web browser VM and boom, there goes another 4GB. Your e-mail client annihilates 4GB more and now we've eaten up 16GB of RAM to run a handful of applications. Open up an MS-Office component and there goes another 8-16GB. Run a non-virtualized legacy app? Why, those all just get sandboxed into an automatic "old shit" VM to keep the viruses out.
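A back-of-the-envelope tally of that scenario, as a Python sketch (the per-appliance figures are just the guesses above, not measurements):

    # Rough tally of RAM reserved up front by hypothetical appliance VMs.
    # The sizes are the illustrative guesses from this comment, not benchmarks.
    appliance_ram_gb = {
        "adobe_suite_vm": 8,
        "web_browser_vm": 4,
        "email_client_vm": 4,
        "ms_office_vm": 12,      # somewhere in the 8-16GB range guessed above
        "legacy_sandbox_vm": 4,  # the automatic legacy VM (assumed size)
        "host_os": 4,            # whatever the host itself keeps (assumed)
    }

    total = sum(appliance_ram_gb.values())
    print(f"Reserved: {total} GB of a 64 GB machine "
          f"({64 - total} GB left for everything else)")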

This isn't inconceivable, and I wouldn't be at all surprised if it were already on the drawing board somewhere.


Containerization could offer close to the same level of isolation as VMs without the insane memory bloat. Plus, VMs might be able to share common memory pages if it becomes necessary.
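On Linux the page-sharing part already exists as KSM (kernel samepage merging), which KVM guests can opt into so identical pages get deduplicated. A minimal sketch for checking whether it's doing anything, assuming the standard sysfs paths (enabling it needs root):

    from pathlib import Path

    KSM = Path("/sys/kernel/mm/ksm")  # standard KSM sysfs directory on Linux

    if KSM.exists():
        running = int((KSM / "run").read_text())            # 0 = off, 1 = merging
        sharing = int((KSM / "pages_sharing").read_text())  # mappings backed by shared pages
        print(f"KSM running: {bool(running)}")
        print(f"~{sharing * 4096 / 2**20:.1f} MiB deduplicated (assuming 4 KiB pages)")
    else:
        print("This kernel doesn't expose KSM")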


Funny, this occurred to me more than ten years ago (as a result of seeing Knoppix, actually), but it still hasn't come to pass. Given the increasing importance of mobile, I doubt many users will sacrifice battery life or laptop comfort for the dubious benefit of having their applications partitioned into VMs.

Using VMs for apps does make sense for some pro apps, especially those with idiotic system requirements and/or copy protection. And obviously for testing.


I can see it being spun pretty hard as an anti-virus initiative at some point, or a "guarantee your software investment into the future" kind of thing.

No consumers really care that it makes things easier for app developers, or about most of the other benefits, but consumers can be scared into all kinds of weirdness.

Bonus for PC makers, it would give a huge boost to kick off the upgrade cycle again for home PCs. More cores, more RAM, more disk space needed for all these dozens of VMs (each with their own multi-GB OS and software install).

Heck, I know of at least half a dozen people who do a variant of this right now in order to run a single Windows only application on their Macs.


If this gets popular, I can see them stripping the OS and other cruft down so that the application almost runs on bare (virtual) metal. A complete desktop OS with user software and drivers for all the unused hardware sounds unlikely.


This is basically what OSv is. It's stripped down virtualization environment meant to only run a single application on bare (virtual) metal.


Proof of concept viruses are already out for this architecture, so it just becomes a bigger management headache.


The primary reason you are correct about this assumption is that the going trend is to package up applications and run them as SaaS services. Those 'appliances' you are talking about will be web applications running on a more highly decentralized hosting model, occasionally hosted on the user's computer and more frequently on a neighborhood-wide deployment. This newer model of hosting will likely resemble true cloud computing more than what we consider cloud today: 8-9 data centers running 80% of the public cloud in an offering called AWS.


>I think it's going to start becoming reasonable to package up applications in VMs and distributing the VMs "appliances" to run instead of installing software directly in the OS.

Full VMs for regular desktop apps? I don't think so. We already have sandboxes.

And in any case, this won't happen within the span of this processor being relevant (say, 5 years), so it can't be an issue that necessitates more than 64GB.


Isn't the OP referring to what happens after the 'lifespan' of this processor?

I'm still happily running a Mac Pro I maxed out in 2008 and expect a couple more years out of it at the least.

It would be nice if this kind of machine could last a similar 6-8 years instead of entering (and I think that was the OP's point) 'engineered' obsolescence in 4-5 years.


>Isn't the OP referring to what happens after the 'lifespan' of this processor?

No, he's referring to what will happen in "2 years". I quote: "But in two years, swapping in 128gb would be the no-brainer upgrade to this thing. That this is being ruled out ahead of time is not a good thing".

And there's just no way that normal users will run their desktop apps in VMs in two years -- which is what was offered as a justification for needing more than 64GB.


I feel, though, that consumer hardware is going to be a lot more standard from now on. The wild-west hardware age may be coming to an end, so VMs-everywhere would be trying to solve a problem of the past rather than being a solution for the future.


This was what was revolutionary about Quake III, no? Its game logic ran inside id's own VM (QVM)...


He is talking about using VMs for real architectures (OS + apps).


Um, no.


Your use case would not use more than 64GB of memory, no; but it would also run just fine on the CPU side with a $339 4c/8t 4770K. A user with that workload wouldn't need 128GB, but they'd also not need an 8-core CPU.

Put it this way: a $120, dual core Core i3-2100 released in 2011 has support for 32gb of ram. But a $1000 eight core processor, released more than three years later, for nearly ten times the price, supports just twice as much memory.

I believe this is imbalanced. And expecting to be able to upgrade a workstation tomorrow, when purchased today for likely well over $2000, is not unreasonable.


I doubt many people pair an i3 with 32gb RAM.


Adobe products (looking at you After Effects) consume way more memory than you are giving them credit for. 64GB is not enough memory to support the computation this part is capable of. Next year, this chip will support at least 128GB.


Many people today still use 8GB; pro users might use up to 32GB, but it's not as if RAM usage has been rising rapidly over the last 4-5 years. In 2009, 8GB was pretty standard for high-end desktops, like 16GB is today. This is a desktop CPU, mind you; workstation and server CPUs obviously support much more RAM.


If someone needs more than 64 gigabytes, buying a new processor should not be a huge deal.


No overclocking support though, which is a huge deal if you're interested in top-end performance right now for cheap. If it's anything like the 6-core parts from the previous generation, it can go up to 4.5GHz reliably, and possibly higher. Having only a 3.4GHz turbo also leaves you with pretty poor single-threaded performance in general.


Samsung's stacked-die modules enable 64GB per channel, i.e. 256GB per processor.

http://techreport.com/news/26985/samsung-ddr4-modules-for-se...

I'm not sure why you wouldn't get a Xeon version if you wanted to work with so much RAM?



According to the AnandTech article, they expect 16GB DIMMs to be certified soon, which would allow 128GB.


Considering the Haswell parts don't support ECC, how many random bit flips per hour is your 128GB of DDR RAM going to be taking, and how many instances of unrecoverable data loss per year does that translate into on average?
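For a rough sense of scale: DRAM soft-error rates are usually quoted in FIT (failures per 10^9 device-hours) per Mbit, and published field studies report figures that vary enormously. A sketch of the arithmetic, with the rate picked purely as an illustrative assumption:

    # Back-of-the-envelope soft-error estimate for 128 GB of non-ECC DRAM.
    # FIT = failures per 10^9 device-hours. The per-Mbit rate below is an
    # assumption for illustration, not a measured figure for any specific DIMM.
    FIT_PER_MBIT = 25_000            # assumed error rate, FIT per megabit
    RAM_GB = 128

    mbits = RAM_GB * 1024 * 8        # 128 GB expressed in megabits
    errors_per_hour = FIT_PER_MBIT * mbits / 1e9
    print(f"~{errors_per_hour:.1f} errors/hour, "
          f"~{errors_per_hour * 24 * 365:,.0f} errors/year at the assumed rate")

Field studies found errors heavily concentrated in a minority of DIMMs, so a typical machine should see far fewer than a fleet-average rate suggests; the arithmetic is only meant to show why ECC starts to matter at these capacities.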


That's it, I'm running my RAM in a ZFS pool from now on! Now I just need 128GB of RAM to run ZFS... /s


You joke, but running ZFS on disk w/o ECC RAM is not a good idea. See https://groups.google.com/forum/#!topic/zfs-macos/qguq6LCf1Q...


I've heard this from several sources, but is it worse than any other FS? I mean it seems obvious that data corrupted in RAM will be corrupted when you write it to disk. I think people are worried about software RAID vs hardware RAID, since all hardware RAID platforms have ECC cache and software RAID should too.


It is worse, as RAM corruption can cause the loss of the entire ZFS pool. There are no ZFS recovery tools available, so data recovery can be next to impossible or very expensive.


5


ummmmmm, twice the number I get from my current 64GB non-ecc system?


The workstation Xeon parts are almost identical, allow much more RAM, and don't cost a whole lot more.


I agree that buying Xeon over i7 is often a good choice, but despite the similar names, the available workstation chips are not directly comparable to the newer chips. Intel's naming scheme intentionally makes it hard to decipher, but all the available Intel processors that allow more than 64GB of RAM are from a previous (Sandy Bridge/Ivy Bridge) generation rather than the current (Haswell) generation.

The clock speeds are similar, but there are lots of differences under the hood. Whether these matter depends on your use case, but in many situations with integer compression algorithms, I'm finding that I can get 50%-150% better performance per core with Haswell. Partially this is due to the AVX2 instruction set (which adds integer operations for 32B vectors), but more than I'd expected this is due to the BMI/BMI2 instructions and improved memory throughput.
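If you want to check which of these extensions a given chip actually advertises, the flags are visible in /proc/cpuinfo on Linux; a quick sketch:

    # Check which relevant ISA extensions the CPU advertises (Linux only).
    with open("/proc/cpuinfo") as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break

    for ext in ("avx", "avx2", "bmi1", "bmi2"):
        print(f"{ext:5s} {'yes' if ext in flags else 'no'}")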


The equivalent Xeons to these i7s, the E5-16xx v3 models, should be out very soon[1][2]. v3 means Haswell. If you need ECC, wait a month or two.

[1] http://www.chiploco.com/intel-haswell-ep-e5-1600-v3-35072/ [2] http://www.cpu-world.com/news_2014/2014080502_Xeon_E5-2600_v...


WOW. That is a huge improvement. Most uses will never see more than a 5-15% gain from Haswell.


I agree, it's certainly higher than it would be in most cases. I have the luxury of being able to design the algorithms to match the strengths of the processor. You rarely see this much improvement unless you are optimizing specifically for Haswell with intrinsics or assembly, although occasionally Intel's compiler manages some magic vectorization I never would have anticipated. And you almost certainly won't see this amount of gain if you are running an older precompiled binary.


Yes, but that kind of thing always happens with new gens and won't last long.


I would love to see more than 32 GB RAM in regular desktops and 16 GB in laptops while we are at it too!


Lenovo has supported 32 GB of RAM in laptops since the W510; that model came out about five years ago.


People use desktops with 64GB of RAM these days?

I haven't owned a desktop PC in a decade, but I seem to recall my brother still got by with 8 GB.


It is interesting how tepid the default specs on desktop machines are. I bought a high-spec workstation a few months ago, and the default spec from the big suppliers was <8GB of RAM and no SSD.


"People" rarely use $1k CPUs either.


Intel's disclaimer says at the end of the page: "products do not contain conflict minerals (tin, tantalum, tungsten and/or gold) that directly or indirectly finance or benefit armed groups in the Democratic Republic of the Congo (DRC) or adjoining countries."



And the recently discovered erratum that led Intel to disable the TSX instructions.

http://www.anandtech.com/show/8376/intel-disables-tsx-instru...


And where did they unveil this new processor?

At Penny Arcade Expo.

Times really have changed.


I thought to myself "No way PAX* is related to those other PAXes." Turns out it is. That's pretty funny.


To my knowledge, PAX started out because E3 lost its way. It's cool to see that they effectively solved the problem and made PAX what E3 was supposed to be, at least for gamers and game journalists.

Perhaps a note for those who don't know: Penny Arcade here refers to the webcomic Penny Arcade. In the comic, the main characters play and criticise video games as they come out, as well as going through general life stuff. For the past decade its extreme popularity has made it one of the industry's biggest media outlets.


As someone ignorant of such matters, what does "E3 lost its way" mean?


In 2006 E3 was open to press and the public, with 60,000 attendees. In 2007 it was invited-press-only and had 10,000 attendees. In 2008, only 5,000 attendees. [1]

Meanwhile, PAX had 9000 attendees in 2005, and 58,500 by 2008 [2]. The most recent attendance figures I can find are 2011, with 70,000 attendees reported.

E3 later changed back to being open to the public; in 2014, their attendance was 48,900. So it's far from completely dead as a show.

[1] https://en.wikipedia.org/w/index.php?title=History_of_E3&old... [2] https://en.wikipedia.org/w/index.php?title=Penny_Arcade_Expo...


Too bad this processor is pointless for gaming. Maybe they should have announced it at IBC.


Depends on what you're playing. The flight sim I play (Prepar3D) will use every core you throw at it.


Thanks for mentioning that; I see there's an oculus plugin available. Might get that under the academic license :-)


Developer license here ;)

It's a monthly fee for the pro version, but it's about a two-year break-even versus just buying it at $199.

If you're unfamiliar with the history, Prepar3d is an evolution (by Lockheed Martin) of the old FSX codebase - but they've updated it with multi-core support, DX11, etc. Most (but not all) FSX addons are compatible, and it gives better frames at MUCH better visual quality than FSX.


I would love to hear about that, if you don't mind following up. Seems like Prepar3D is my killer app for Oculus.


Elite: Dangerous also uses every core.


Yep, it's multithreaded. And it helps, too - things like the galaxy map are somewhat CPU-bound because of the large amount of procedural generation. (Not that it would be impossible to offload some of that to compute shaders too, but ED is still in Beta 1.0 and I wouldn't expect to see really huge, complex optimisations like that at this point.)

Actually, I'd expect most new 3D engines to be prepared for heavy multithreading by now, but many new games at the moment are still based on older engines. 2015 and on, maybe not so much, when people start releasing things backed by UE4, CryENGINE (4), Source 2, FOX Engine, etc in earnest. I think the majority would target a quad-core, and might be surprised to find themselves running on a 6 or 8 core.


It's great to give real-time ray-tracing more potential.


4real dog hardcore gamerz only use arm procs. Angry birds4life.

Seriously though, what are you talking about?


GPUs have been the real source of gaming performance for PCs since the nineties, and this is true even for arm-based smartphones and tablets, which have had their own dedicated graphics chip since the first iPhone.

Even the puny Raspberry Pi (ARM v6 at 700Mhz!) is able to stream video in FullHD thanks to its dedicated GPU.

A slow CPU can still be a bottleneck if paired with a high-end GPU, but in general a cheap CPU with an expensive GPU is a much better setup for gaming than the other way around.

(Of course there are exceptions, but this is generally true for your average AAA title).


The RasPi is a particularly unusual example here. The BCM2835 has a tiny ARM1176JZF-S taking a piggyback ride on a comparatively huge (and, despite some releases from Broadcom, still only very lightly documented) 2D vector processor, the Broadcom VideoCore IV. It's the VC4 which actually runs the show and boots the ARM (the proprietary firmware uses the commercial ThreadX microkernel, though fully open firmware is being developed) - it's really a SoC built around a GPU, with the ARM broadly speaking a coprocessor, posting requests to the VC4's kernel to please do stuff.

The video encoder/decoder has a little fixed-function logic, but the VC4 is rather good at vector operations itself, particularly 2D 16x16 and 32x32 ones, and probably has at least as much general compute muscle as the ARM, I'd say? It is not easy to program efficiently, however, and its pipeline doesn't seem to like branches very much. And trying to do ChaCha20 on it is tricky because I can't seem to find a way to address the diagonals...
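For anyone wondering what "the diagonals" means here: ChaCha20's state is a 4x4 matrix of 32-bit words, and each double round applies the quarter-round first down the columns and then along the diagonals. A plain-Python sketch of the indexing (this is the standard reference layout, nothing VideoCore-specific):

    # ChaCha20 quarter-round plus the column/diagonal index patterns.
    # State is 16 x 32-bit words, laid out row-major as a 4x4 matrix.
    MASK = 0xFFFFFFFF

    def rotl(x, n):
        return ((x << n) | (x >> (32 - n))) & MASK

    def quarter_round(s, a, b, c, d):
        s[a] = (s[a] + s[b]) & MASK; s[d] = rotl(s[d] ^ s[a], 16)
        s[c] = (s[c] + s[d]) & MASK; s[b] = rotl(s[b] ^ s[c], 12)
        s[a] = (s[a] + s[b]) & MASK; s[d] = rotl(s[d] ^ s[a], 8)
        s[c] = (s[c] + s[d]) & MASK; s[b] = rotl(s[b] ^ s[c], 7)

    COLUMNS   = [(0, 4, 8, 12), (1, 5, 9, 13), (2, 6, 10, 14), (3, 7, 11, 15)]
    DIAGONALS = [(0, 5, 10, 15), (1, 6, 11, 12), (2, 7, 8, 13), (3, 4, 9, 14)]

    def double_round(state):
        # The diagonal pass is the part that's awkward to vectorize without
        # cheap lane rotations: the usual SIMD trick is to rotate rows 1-3 by
        # one, two and three lanes so the diagonals line up as columns.
        for idx in COLUMNS + DIAGONALS:
            quarter_round(state, *idx)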


Hardly any games are able to use more than 4 cores and most can not even do that.


That's not true anymore. Most of the AI "thinking" can be multithreaded, physics can be multithreaded, rendering can be multithreaded; even core systems like resource loading are heavily multithreaded. It's not easy, but it's a reality for most gamedev now. However, there is a limit to how much can be multithreaded and how well it'll scale. There are a lot of interdependencies between objects and systems that force some level of serialization, just like in any other multithreaded application.


This does not contradict what the previous commenter said.


If you can find a game developer who cares enough to split their game's logic down into more than just a "render thread" and a "logic thread", then maybe an 8-core would be useful.


Actually, the problem isn't developer laziness, it's just common sense. Most games are GPU and/or bandwidth bound, and CPUs don't factor in beyond a certain point. Furthermore, if you wake up too many cores, Intel CPUs slow down, so if you do have a monolithic render thread, threading everything else is counterproductive.


I seriously doubt if this is still true. Multicores have been common for more than a decade now.

Edit: first Google hit to satisfy parent, since replying is disabled. Valve goes multicore [2006] http://techreport.com/review/11237/valve-source-engine-goes-...


And yet I notice you didn't provide an example. Honestly I can't think of one.


DICE is one such developer. BF4 scales pretty well across cores, and so should any game that uses that engine unless crippled artificially. There are other developers as well but most make console only games (Killzone, Uncharted, etc... are all heavily multithreaded).


Look at the reviews. It's a few percent faster than a 4790K but three times the price. It's not for games.


PAX hasn’t stood for Penny Arcade Expo for some time now.


Is this akin to how IBM doesn't stand for International Business Machines? That is, is it no longer the same crew?


It is probably mostly the same crew, but the point is that they did the rebranding because they wanted to distance themselves from the Penny Arcade site and comic:

“[…] You’ll notice that it is no longer the Penny Arcade Expo. It’s outgrown us and it belongs to the gaming community at large now not just PA fans. Someday I expect to attend a PAX and not even be recognized. […]”¹

Therefore, I believe it is incorrect to say that Intel revealed this product at “Penny Arcade Expo”.

1) http://www.penny-arcade.com/news/post/2014/01/01/resolutions


Ah.. I see what you mean. I think the main point of the original poster was more that this is at a gaming convention, though.

That is, I doubt many of us make a huge distinction between PAX pre and post the direct affiliation with PA.

Edit: Also, thanks for that link. Really nice essay by Gabe that I'm glad to have read now.


> I think the main point of the original poster was more that this is at a gaming convention, though.

And my point was to simply correct the name used; PAX used to stand for Penny Arcade Expo; now it doesn’t, so we shouldn’t call it that. I’m still not sure what all the downvoters think I meant.


I think it's fine to still call it that descriptively; I don't agree that an official rename means we "should" do whatever the corporate branders desire. It still is the Penny Arcade Expo in the sense that it is an Expo, and it is owned/operated by Penny Arcade. It's true that the Penny Arcade corporate marketing team has officially redefined the initials as not standing for anything, but that's only really binding on their own marketing materials, not on everyone else.


My guess is that they think you were implying it is a different expo. Not just in name, but in content and such, as well.


It's still the same crew with the same attachment to Penny Arcade.


Also, Parallax has just open-sourced theirs!

http://www.parallax.com/microcontrollers/propeller-1-open-so...

8-core microcontroller in 2006, not bad. They're releasing a better one later this year, so they've opened up the Verilog design for the current one.


This is awesome, but off topic. Give me a few days to refresh my Verilog, and I can design you a 16 core CPU; this of course says nothing of the quality and performance of that CPU.

I don't mean to bash Parallax but to make a point that making an N-core processor is itself not impressive. Making an N-core architecture that performs like this is.


8 cores, yes, but this is about desktop CPUs.


Could someone give me a simple explanation of what exactly hyperthreading does? They tout 16 logical cores and 8 physical cores in this new chip. I've read the Wikipedia page on it, but it gets too technical.

I do molecular dynamics simulations with LAMMPS, and I've noticed performance on my laptop is best with 4 cores. Using all 8 "virtual cores" is actually quite a bit slower.


What CPU? Some of the earlier hyperthreaded systems were notably less effective than the current stuff.

The simple explanation is that you have a core with its set of execution resources. Instead of using those resources to satisfy just a single execution context, the processor has two execution contexts which run independently of each other, sharing the resources. This can potentially result in large gains when you have a workload which often leaves execution stalled waiting on RAM, though less than you might guess, because there are overheads and because modern processors are already able to extract a fair amount of parallelism out of a single thread.

It works out less well for software that sees non-trivial overhead when running more threads, or when more threads increase cache pressure too much.


A Haswell core can execute four instructions per cycle, but sometimes a thread doesn't have four instructions that are ready to execute because they're waiting for something (like a cache miss). In that case, SMT allows the processor to use that idle capacity to execute instructions from a different thread.


A core is a mostly independent processing unit within a larger package. Some hardware resources (like the memory controller, at least in non-NUMA devices) are shared between all cores, but many are duplicated for each core. Some examples of core-local resources would be their separate integer, floating point, and sometimes vector execution units (boxes that you can stick some data into and get a result out some number of cycles later), and some (but not all, depending on the chip) of the various layers of caches that sit between each core and main memory.

In hyperthreaded processors, each core can be further split into two "threads". These threads share most of their hardware resources; you can think of them as a thin veneer over a single core. These threads execute simultaneously, making use of whatever resources their partner isn't using at the moment.

Some examples (assume a single core processor with 2 hardware threads for each): Imagine you're running a thread, and it needs to access main memory before it can continue. Depending on the chip, this will take hundreds or even thousands of cycles before the thread can continue. Hyperthreading is one way to make use of this time; the other thread can run at full steam while the first is waiting to get its results back from memory.

Another positive example: you're running some floating point DSP code (perhaps your music player's equalizer) at the same time that you are compiling a new build of a program. The DSP code will make use of a mix of integer and floating point resources, while the compiler will probably not need to use the floating point units at all. Hyper threading allows the music player to use those resources that would otherwise be idle while the compiler is running. The DSP code will slow down the compiler because it is competing for things like integer resources (which are needed for pointer arithmetic, for instance), however there will still likely be an improvement over normal multitasking on a single hardware thread.

Now, for a negative example: you are running two very demanding threads. These threads are painstakingly programmed to use almost every resource they possibly can at any moment, they very rarely need to stall to access memory, etc. In this case, the two threads will only waste time fighting over the same resources, kicking each other out of cache, etc, and it would ultimately be more efficient to disregard hyper threading and run each thread sequentially.

Another negative example: you are running two instances of the same thread. This will result in good utilization of some resources (such as code cache, because each thread is executing the same program) but practically guarantees contention over the execution units, even if the program isn't that demanding.

To sum it up, hyperthreading is usually a net positive for desktops where you have a very heterogenous (and often not anywhere close to optimally programmed) mix of programs that need to run at once, and usually a net negative for high performance computing programs like your molecular dynamics simulation where every thread is executing the same extremely demanding program at once.

EDIT: And to go a bit further and explain what makes GPUs special: they're basically the inverse of a hyper-threaded CPU, great at running a lot of homogeneous threads. Instead of having independent threads sharing the same resources, they have the same logical thread (many designs share the instruction pointer among many hardware threads, causing each to execute the same instruction at any given moment with different inputs) shared across cores that have their own independent execution units.
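For HPC-style runs like the parent's LAMMPS jobs, the practical upshot is usually to launch one worker per physical core and leave the sibling hardware threads idle. A sketch of discovering the topology and pinning to it on Linux (psutil is assumed to be installed; the sysfs paths are the standard topology ones):

    import os
    from pathlib import Path

    import psutil  # assumed installed; only used for the physical-core count

    print(f"{os.cpu_count()} logical CPUs, "
          f"{psutil.cpu_count(logical=False)} physical cores")

    # Pick one logical CPU per physical core by deduplicating on
    # (package id, core id); assumes all listed CPUs are online.
    chosen, seen = set(), set()
    for cpu_dir in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        core = (cpu_dir / "topology/core_id").read_text().strip()
        pkg = (cpu_dir / "topology/physical_package_id").read_text().strip()
        if (pkg, core) not in seen:
            seen.add((pkg, core))
            chosen.add(int(cpu_dir.name[3:]))

    # Pin this process (and anything it spawns) to those CPUs. Linux-only.
    os.sched_setaffinity(0, chosen)
    print("Pinned to logical CPUs:", sorted(chosen))

MPI and OpenMP runtimes have their own binding knobs (e.g. Open MPI's --bind-to core, or OMP_PROC_BIND), which is usually the cleaner route for something like LAMMPS.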


That is a great explanation, I'll be saving that one for the next time someone asks me! Thanks!


This is a great explanation, thanks.


Hyperthreading is basically a way to present multiple logical cores while sharing a single core's execution units (like the floating-point units) between them. This way, a normal application can use the multiple logical cores and actually run in parallel much of the time. You save a lot of silicon area, but when both threads need the same units at the same time, they can't run in parallel.

The problem with molecular simulation is that it's almost entirely composed of floating-point instructions. Thus, hyperthreading can hardly run the two threads in parallel at all.
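A crude way to see the effect on your own machine: run the same floating-point-heavy function across a pool sized to the physical core count and then to the logical count, and compare wall time. A sketch (the kernel below is an arbitrary FPU-bound stand-in, not LAMMPS, and the halving assumes 2-way SMT):

    import math
    import os
    import time
    from multiprocessing import Pool

    def fp_work(n: int) -> float:
        # Arbitrary floating-point-heavy loop standing in for simulation work.
        acc = 0.0
        for i in range(1, n):
            acc += math.sqrt(i) * math.sin(i)
        return acc

    def timed_run(workers: int, jobs: int = 32, n: int = 2_000_000) -> float:
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(fp_work, [n] * jobs)
        return time.perf_counter() - start

    if __name__ == "__main__":
        logical = os.cpu_count()
        physical = logical // 2  # assumes 2-way SMT; adjust if your chip differs
        for w in (physical, logical):
            print(f"{w:2d} workers: {timed_run(w):6.2f} s")

On SMT machines the logical-count run is often not much faster than the physical-count run for this kind of kernel, which is the effect being described.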


If you want the academic answer, here's a survey of the literature that I wrote on this topic back in 1996. (Section 3 is where it starts to talk about hardware).

http://oirase.annexia.org/multithreading.ps


Yeah, I suppose that for your case, HT does nothing at best.

The more CPU-bound and more similar (to each other) the tasks are, the less of a difference HT is going to make.

It made sense for single-core P4s and Atoms, but for an 8-core processor, the efficacy of HT is debatable.


Why just "client"? Why not use it in a server? What am I missing?

Cost per operation? You can get an AMD 8-core processor, 125 watts, 4.0 GHz clock, for about $180. So, is $1000 for an Intel processor with 8 cores and hyper-threading cost effective? In what sense?


An Intel 4-core processor for about $180 is going to be roughly comparable to the AMD 8-core, given sufficient threading; for single-threaded work, the Intel will probably be faster because of higher IPC, and the power/heat will be much lower. This site has a pretty reasonable comparison: http://cpuboss.com/cpus/Intel-Core-i5-4430-vs-AMD-FX-8350

OTOH, AMD will let you use ECC, which would be nice; the Intel processor includes a video card on board, if that's of interest.


The server version is called Xeon and it's the same chip with ECC uncrippled.

One Haswell core is equivalent to two AMD cores. But yeah, AMD is dramatically cheaper than Intel for equivalent performance.


That leads me to a question that I was going to ask: what is it that justifies the massive markup on Intel chips vs AMD? Is it just the name? Is there an advantage in intel performance or power usage? If so, does it really make up for the price? Because as someone considering building a computer from scratch (I haven't in quite a while), that AMD price tag is very appealing.


I think many people would agree that the price discrepancy does have a lot to do with branding and marketing strategy.

Intel is a larger, wealthier company, and they presumably pour a lot more money into R&D than AMD. If you go purely by market capitalization, Intel is about 50-60 times larger than AMD. That's not necessarily a fair measure and ignores a lot of variables, but it does help shed a little light on the situation. In addition, AMD's business model has them focusing a lot of their attention on niches that Intel doesn't seem as interested in. For instance, AMD continues to develop new ARM technologies that could provide a very important market edge for them in the future as small "Internet of Things"-like devices start to emerge and become a part of people's daily lives.

Here's a decent article on the subject: http://analysisreport.morningstar.com/stock/research?nav=no&...


Intel isn't about performance per dollar, it's about absolute performance. On performance per dollar, I think the 8-core AMD parts still lose to mid-range CPUs and maybe even ARM chips, just due to their insanely low cost, but it would take a bunch of those to get enough performance to use effectively in a workstation or server environment.


Well, AMD's power usage also affects reliability (motherboards fail way more often). That is, if you're buying a top of the line chip, which you should because it's almost as fast as Intel's midrange Haswells :-)...

But IMO, just go with Intel.


Thanks.

Last night I went shopping and for a mobo looked at the Asus M5A97 as at

http://www.asus.com/Motherboards/M5A97/specifications/

and that Web page says that the mobo supports ECC (error-correcting code; as I recall, it uses 72 bits for each 64 bits the programmer sees).

Thanks for confirming that the AMD processor also supports ECC.

Next I'll wonder what Windows Server does in case of ECC detected errors, correctable or not! Is the OS willing to disable blocks of memory that are giving ECC errors and report the situation?


Not in any benchmark I have ever seen, and that's not counting performance per watt, which is where the new X99 platform with DDR4 and these i7s have a nice improvement.


Equivalent being the key word there. You can get much more performance in a single Intel chip than you can in an AMD, but you're going to pay for it.


AMD chips support ECC RAM even in the desktop models. My FX-8320 supports ECC.


And usually IOMMUs and so on that Intel disables on many models (even servers!).


Name that $180 AMD processor before you imply objective superiority. The $180 Intel processor is probably as fast or better. Keep in mind, Intel's $1000 processors aren't meant to be 5x as fast as the $200 ones.

Intel has had better performance per watt for years, and an "8 core" AMD processor and "8 core" Intel processor are not equivalent.


I've had a Vishera 8-core for several months and am happy with its performance in some ML tasks I'm doing for fun.


A 4-core Intel processor running at the same clock rate as a 4-core AMD processor will be way faster, and more efficient both per watt and overall.


I really don't keep up on this stuff much, but why is this still Haswell based? Why not just do this on Broadwell?


Intel has a pattern of releasing variations of microarchitectures with higher core counts after their immediate successors have been announced/launched. Think Sandy Bridge-E after Ivy Bridge, Ivy Bridge-E after Haswell, and now Haswell-E.

They probably do this because it's easier to manage the larger die size and greater complexity once the process has matured and yields have improved.



Also, these are essentially rebadged Xeons. Xeons have higher requirements for maturity, testing, stability, et cetera.


And a lot of the time, many of these CPUs are almost identical to one another. They will take a single CPU and brand it in 10 different ways depending on how it does during manufacturing testing.

For a given CPU that comes off the production line, the max frequency it can run at will vary from chip to chip. As well, if there are dead cores on that chip, they can just disable them and sell the chip at a lower price (though Intel may not do this; Nvidia has in the past with graphics chips).


"Extreme Edition" is normally what they've done at the end of a lifecycle. Broadwell's been delayed it appears, due to process difficulties possibly? This may fill the gap... a little.


> Intel's first client processor supporting 16 computing threads and new DDR4 memory will enable some of the fastest desktop systems ever seen.

Not necessarily -- as AMD fans (I'm one) have seen, "more cores is better" is not always true -- it heavily depends on the workload, and frankly, most games and programs are not utilizing these CPUs fully (yet). Now, put something like 2 x 16-core Opterons in a server and you have yourself quite a powerful virtualization platform.

With that said - I'm interested in seeing its price point and performance compared to AMD's offerings.


Price point (or at least MSRP) is given in the article: $1000 USD for the 8 core model.


How well do commonly quoted benchmarks (PassMark, Geekbench, Cinebench, etc.) measure a processor as a VM host? Obviously single-core benchmarks are somewhat representative, but they miss things like cache sizes at different levels and hyperthreading. Are there benchmarks that would take those into account, or would otherwise be good for planning a VM host use case?


For VM hosts, the number of cores (plus their respective resources like cache, etc.) is more important than how performant each core is individually. Usually for hosting companies, density is more important than raw performance, making 32 cores in one physical host very attractive.


This thing supports 8 DDR4 slots. Finally we are moving beyond the 32GB RAM limit.


X79/Socket 2011, which this is the successor to, already supported 64GB with 8 DIMMs, so nothing has changed yet. They've yet to announce a Socket 1150 successor; most likely it will have a 4-DIMM max. However, I expect 16GB DIMMs to appear for DDR4 soon.


Can't wait to ditch my hard drives and run a RAM Drive only plugged to a portable RTG.


> Finally we are moving beyond 32GB RAM limit

We sure are living in a sci-fi millennium


I'm both excited and not. This is more power in a CPU and that's great progress, but for a desktop? I mean, servers, games and graphical applications would be faster, but the majority of our waiting time when using a computer is spent on single-threaded calculations. As someone who doesn't game a lot and uses GIMP only for the most basic of purposes, I would much rather have an improved dual-core CPU that produces less heat in total (compared to 8 cores) and can be clocked higher because of that.


Well, yeah. This is their very highest-end processor, and costs more than the entire desktop+monitor+peripherals that most people need. Not sure what point you're trying to make. Do you think companies shouldn't continue pushing the envelope of what's possible?

Edit: "very highest-end processor" should read "very highest-end PC processor". I'm excluding the workstation-class Xeon.


Haswell-E is Intel's high-end consumer CPU series and there's nothing wrong with that. However, only buy it if you regularly use software that is able to utilize that many cores/threads.

The dual-core CPU you are looking for is called the "Pentium Anniversary Edition: Intel Pentium G3258" and was released in July.


The tradeoff between fewer, faster cores and more, slower cores has mostly been solved by turbo mode: if only a few cores are in use, they will turbo to a high frequency and the others will power down. Unfortunately, in practice the turbo isn't very good on Haswell-E.
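You can watch that behaviour from userspace: load one core and the reported frequency jumps to the turbo bin, load them all and it settles lower. A sketch that just samples cpufreq's view of each core on Linux (standard sysfs path, values in kHz):

    import time
    from pathlib import Path

    CPUS = Path("/sys/devices/system/cpu")

    # Sample a few times; vary how many cores you keep busy in another terminal.
    for _ in range(5):
        freqs = [int(p.read_text()) / 1000  # kHz -> MHz
                 for p in CPUS.glob("cpu[0-9]*/cpufreq/scaling_cur_freq")]
        if freqs:
            print(f"max {max(freqs):4.0f} MHz  min {min(freqs):4.0f} MHz "
                  f"across {len(freqs)} CPUs")
        time.sleep(1)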


I'd still rather have a 6GHz 4-core, but I guess that isn't going to happen (anytime soon, for a reasonable price).


Why didn't they do this sooner?

AMD already has a 16-core Opteron processor. I'm not saying that AMD is any better, but I thought Intel would have started selling these long ago, judging by the pace of the computer industry.


I take that back, I see that Intel has ten-core commercial processors on the market already.


More, in fact.

http://ark.intel.com/#@ServerProducts

But your original point is a valid one: AMD introduced desktop 8-core processors a couple of years ago, while this is Intel's first 8-core desktop processor.


Intel is getting disrupted by the book (they keep moving upmarket now). The funny thing is they know it. But they can't stop it at this point. So they just go along with it.


This is a pretty naive comment, but it's really intended to be totally serious: what's up with cores? Like, why do we really need cores? Is it really a fundamentally better architecture to have a RISC-like engine sitting at the front of the instruction pipeline distributing x86 instructions to some internal set of units (particularly w.r.t. power consumption), or do we in fact just have cores in order to increase fab yield? [/ootbcomp.com-bootcamping]


Are you proposing a 32-issue processor instead of eight 4-issue cores? One problem with that is that most software doesn't have enough instruction-level parallelism to use such resources. Such a complex processor would also be likely to become bottlenecked on central control structures like the issue window (which can be solved with clustering, but then you're almost back to multicore). But check out EDGE/TRIPS/E2 for some ideas in this area.


I'm talking about something I don't understand....


I wonder if Apple will announce anything that uses this processor in the Sep. 9th event? I could possibly see it being used in a refreshed Mac Pro or iMac.


This, alongside two high-end NVIDIA chips (GeForce Titan), in a Mac Pro would be insanely good. Not sure whether it's possible thermally, though.


Haswell-EP will be used in a Mac Pro refresh, but I would bet against Apple using any preciousss keynote time to discuss such a minor refresh.


How does this compare to a 3.0GHz 8-core Xeon E5?


About 5% faster than a E5 v2.


Of course HP will now include it in a desktop with half the features disabled, and no option in the BIOS to enable them.


And the FX 8120 eight core CPU ??


5yrs late


Finally!


Title Caps And "Unleaches". Intel Unleaches 8-Core Paralel Marketing On News.Ycombinator




