I am... excited. Sadly, three grand is a lot of money for prgmr.com right now, so I'll have to consider carefully whether I want to wait for production or buy a dev kit and start experimenting ahead of time, but ARM servers are something we have been talking about selling for a while.
Even if their performance per watt turns out to be not as great as expected, there is a fair bit of interest in the architecture, as far as I can tell.
These things look beefy enough that I might even be able to virtualize and sell Xen VMs. There is some dev work involved with that, but srn has expressed interest, so maybe.
I'm pretty sure Xen on ARM has been a thing longer than KVM on ARM.
But in general, why Xen over KVM? The primary reason is that I understand Xen better.
I have a secondary reason, and it's a long-term bet against KVM.
KVM has a lot of features that, while interesting in the corporate space, make oversubscription too easy in the hosting space. Set up a KVM host with a bunch of RAM and a bunch of swap, and start handing out "memory" to your KVM guests? By default each guest will get RAM or swap as availability and usage dictate, just like any other process.
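As a rough sketch of how easy that is to do without noticing (the guest sizes here are made up, and the point is just that each qemu guest is an ordinary host process):

    # Sum what we've promised to guests and compare it to physical RAM.
    # Guest sizes are hypothetical; nothing stops the sum from
    # exceeding MemTotal, because each guest is just a process.
    guests_mib = [8192, 8192, 8192, 8192, 8192]  # the "-m" given to each guest

    # Linux-only: first line of /proc/meminfo is MemTotal.
    with open("/proc/meminfo") as f:
        mem_total_kib = int(f.readline().split()[1])

    promised_kib = sum(guests_mib) * 1024
    print(f"promised {promised_kib} KiB vs physical {mem_total_kib} KiB "
          f"(ratio {promised_kib / mem_total_kib:.2f})")
    # Anything over 1.00 is backed by swap, or by hope.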
To be absolutely clear, as far as I can tell, none of the current KVM VPS providers are using these over-subscription methods right now. I'm not accusing anyone of anything.
However, the primary reason Xen beat OpenVZ and the other containerization systems was that it is really quite difficult to oversubscribe RAM on a Xen host, and your page cache was in your own RAM, not shared with others.
Sure, if the manager of an OpenVZ host allocates resources responsibly, it can be more efficient than Xen. And some hosts did that. But because some hosts did not, Xen developed a reputation: buy a Xen host and you are getting honest RAM; buy an OpenVZ host and you are relying on the reputation of the host.
KVM, now, can be configured to run the same way Xen does, with no sharing of memory, page cache or otherwise.
However, it is simple to give KVM guests memory that is RAM /or/ swap, and other than performance, it would be impossible to tell, from the guest, which one I gave you.
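A minimal sketch of that "other than performance" caveat, runnable from inside a guest (the buffer size is arbitrary): touch every page in a buffer and record the slowest touch. RAM-backed pages come back in microseconds; a page the host has to pull back off disk takes milliseconds.

    import time

    PAGE = 4096
    buf = bytearray(256 * 1024 * 1024)  # 256 MiB of guest "RAM"

    worst = 0.0
    for off in range(0, len(buf), PAGE):
        t0 = time.perf_counter()
        buf[off] = 1  # touch/fault the page
        worst = max(worst, time.perf_counter() - t0)

    # Microseconds if it was real RAM the whole time; milliseconds
    # if the host had paged it out behind your back.
    print(f"worst page touch: {worst * 1e6:.1f} us")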
Now, of course, KVM currently enjoys the same reputation as Xen, because all the current KVM providers seem to be doing the responsible thing and handing out only RAM, not a RAM/swap mix. But at some point, someone is going to change that.
The irritating thing is that the Xen developers seem to be running as hard as they can to put in KVM-style features to share more memory. Isolation, when you are in a multi-tenant environment, is often more important than overall efficiency.
You can use either. Both KVM and Xen developers have had this hardware available for quite some time, and before that we'd been developing on the Foundation Model (very slow) simulator for over a year.
By the time this hardware is widely available, all your favourite Linux distros will Just Work, with ~99% of the functionality/packages/languages of x86. In fact that's pretty much the case already.
Finally, an ARM platform with ECC support. Well, and reasonably performant networking, one presumes. And PCI-Express!
I think ARM has a lot of potential in the "smartphone priced" server market, particularly in an age where physical isolation (vs VMs or containers) and legal ownership of the server might theoretically provide some advantages against state surveillance. At least post-facto in the sense of "one could conceivably build a lawsuit on this" vs. "now I need DigitalOcean/Amazon/Google to sue the govt for me."
That may be correct, but I chose the word "platform" very carefully. Most ARM platforms to date, whether dev kits or final products, have been aimed at smartphones, tablets, consumer devices (wireless routers, NAS, etc.) or fully embedded systems (SD cards?, hard disk controllers, etc.). Apparently in those applications, nobody cares enough about ECC to bother.
Well, that, and the fact that the memory is soldered on in most cases and the total device cost is extremely low (compared to servers). So you are expected to simply replace your system entirely in the event of memory errors you notice. And hopefully nothing goes wrong that you don't notice.
Also, I suppose technically, if you were only worried about bit flips due to solar activity, then a tiny amount of RAM on a tiny chip has less surface area and fewer bits TO flip than your standard full-sized DIMM.
ECC doesn't typically yield that much of an advantage in the nominal case until you get into very large amounts of memory; it's just a more expensive part. I think it was calculated that cosmic rays are likely to cause a single-bit error in a system with 8 GB of memory only once a decade, and most embedded ARM situations (also carefully avoiding the "p" word) have far less than 8 GB of memory. So if single-bit errors are causing your embedded device to crash, it's okay, it'll just roll over and restart... once every 50 years...
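Back-of-the-envelope, taking the one-flip-per-decade-at-8GB figure above as given (it's an assumption, not a measurement) and scaling linearly with memory size:

    # Scale an assumed rate of ~1 cosmic-ray bit flip per decade
    # at 8 GiB linearly with the amount of memory.
    GIB = 1 << 30
    flips_per_year_at_8gib = 0.1  # ~1 per decade, assumed

    def years_between_flips(mem_bytes):
        rate = flips_per_year_at_8gib * mem_bytes / (8 * GIB)
        return 1.0 / rate

    for size in (512 << 20, 1 * GIB, 8 * GIB, 64 * GIB):
        print(f"{size / GIB:5.2f} GiB: one flip every "
              f"{years_between_flips(size):7.1f} years")

A 512MB device comes out to one flip every 160 years, the same ballpark as the 50-year figure.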
There haven't really been any usable ARM server chips. There was supposed to be the Armada XP, but by the time it came out it was obsolete. There was supposed to be Calxeda but they went out of business. Maybe the nth time's the charm.
AMD has the size to make it happen more easily than a startup could. But they are also hungry for something, anything, to give them an edge against Intel. Fingers crossed.
No. The Red Hat announcement is a variant of RHEL. However you can just run Fedora 21 on ARM64 -- I'm running it on my dev machine (which is not AMD, but X-Gene based).
It's a footnote to the larger thing, but it mentions "compression and crypto co-processors". I'm familiar with hardware crypto acceleration, but I'm sort of curious about the compression part--is it gzip or one of the fast algos like Snappy/LZO/LZ4 or something proprietary? How fast?
Besides compressing network traffic, hardware compression could be interesting for applications like zram--somewhat expands what you can store in RAM with (perhaps surprisingly?) less random page-read latency than even an SSD.
Of course they'll tend to pick cases where it helps a lot, but they claimed 50% wall time and 25% power savings on a Hadoop job, and they emulate the zlib API.
If I'm reading right this was with a Sandy Bridge-based Xeon; time saved by the coprocessor could be greater when the CPU is slower.
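For what it's worth, "emulates the zlib API" means code written against zlib shouldn't have to change at all; only the library underneath would. Python's zlib module wraps the same C library, so here is a sketch of the kind of call that would get transparently offloaded (the data and compression level are arbitrary):

    import time
    import zlib

    # Arbitrary compressible data; any sizable input works.
    data = b"the quick brown fox jumps over the lazy dog\n" * 200_000

    t0 = time.perf_counter()
    packed = zlib.compress(data, 6)  # the call an accelerator would take over
    elapsed = time.perf_counter() - t0

    print(f"{len(data)} -> {len(packed)} bytes "
          f"(ratio {len(packed) / len(data):.3f}) in {elapsed * 1e3:.1f} ms")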
It's a dev board, not really something you'd use in production. It's a small run of reference hardware for devs to get started on; by the time it's ready for release, OEMs will have much more affordable kit for end users to buy.
That's my point of comparison. Best case, if Seattle can be slightly faster than Avoton with better I/O and the customers get high on ARM hype, AMD could charge at most the same price as Avoton. Most likely it will be cheaper.
Interesting, except this is "head to head" with AMD. It is part of the reason Intel has been making massive investments across their portfolio to fend off ARM incursions.
AMD showed with the original AMD64 architecture that they could out-compete Intel. It was fascinating at the time to see enterprise-level folks having to choose AMD over Intel because Intel was insisting that if you wanted 64 bits you went with Itanium. But it also showed how futile it is to try to compete with Intel using an ISA and ecosystem where they have all the advantages: chipsets and front-side buses and a million other ways that Intel keeps a lock on their bread and butter.
ARM from AMD means that to compete, Intel either has to cut margins on their server chips to ARM levels, or make x86-64 server chips that are competitive. For a lot of places, CPUs per cubic foot of rack space is an important number: the monthly cost of a rack in a colo is (or can be) fixed, so the more stuff you can run in it while staying at 80% power, the lower your monthly cost per instance, and the higher your profit per instance. An extra penny per core per hour can be a huge difference.
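To put toy numbers on that (every figure here is made up: the rack cost, the chassis density, the core counts):

    # Fixed monthly colo cost spread over however many cores fit in
    # the rack's space/power budget. All numbers are hypothetical.
    HOURS_PER_MONTH = 730
    rack_cost_per_month = 2000.0  # USD, assumed fixed

    configs = {
        "x86 chassis": 20 * 32,    # 20 boxes x 32 cores, assumed
        "dense ARM sleds": 40 * 64,  # 40 sleds x 64 cores, assumed
    }

    for name, cores in configs.items():
        per_core_hour = rack_cost_per_month / (cores * HOURS_PER_MONTH)
        print(f"{name:15s}: {cores:5d} cores -> "
              f"${per_core_hour:.4f} per core-hour")

The absolute numbers are fiction, but the shape is the point: with a fixed rack cost, the cost per core-hour falls linearly with how many cores you can pack in.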
So this is an opening salvo in the next battle. I am predicting it will be just as interesting as the time AMD showed the world you could do 64 bits in a 'commodity' processor.
Maybe this is what you're alluding to, but Intel used a whole host of anti-competitive practices to prevent AMD from gaining a foothold in the market at a time when AMD had faster and better CPUs. And this was part of the reason why AMD ended up buying ATI (IIRC they were after NVIDIA for a while): they wanted another angle from which to attack Intel.
But it seems like they overpaid for ATI, and then Barcelona happened and Bulldozer was delayed beyond belief and so the last few years have been very tough for AMD. Let's hope this gets them back into the game.
It's not that they can't or won't compete (actually, when it comes to large-scale virtualization, a 16- or 32-core Opteron is much preferable to a 4- or 8-core Xeon, and it's sometimes less expensive too).
AMD is seeing the writing on the wall: ARM is coming into the server market in a big way. It has a much better power-consumption-to-performance ratio. You can pack a lot more cores into a server and still consume less power, and at scale they can compete with traditional dual-CPU x86 machines. They are also cheaper to manufacture and purchase.
A great many server companies are getting into ARM -- most notably HP. AMD has said this is a move for the 2016 market (at the earliest). I think it's the right move and welcome it.
It really depends on the workload. For some use cases, like databases or analytics, ARM is often less efficient than the latest Intel chip almost any way you measure it. The increased operational throughput per CPU, which can be integer factors, offsets any differences in cost or power consumption.
ARM's sweet spot has traditionally been for CPUs that are otherwise going to be underutilized, saving both power and CPU costs in those circumstances. As cloud platforms become better at ensuring maximum utilization of CPUs, and a larger portion of workloads become throughput intensive, it moves things a bit more in Intel's direction. ARM will probably find a better market in the Internet of Things than in the data center.
If that's true, not only is AMD wrong, so are all the other "datacenter ARM" entrants: HP, Dell, and Samsung to name a few.
It's also unwise to cite the companies who tried and went bust (like Calxeda) since the tech is still very early. A large entrant may be able to establish a niche some time before a startup could do the same.
Assuming these chips come in TCO-competitive for a decent proportion of workloads (and in theory they should be clearly better for enough of them), Intel will have to respond either with new technology, which they either have on a back burner or don't have at all, or by cutting costs, which will destroy their margins and R&D capacity.
It's very hard to see how Intel can maintain their position here at all. Once that spiral starts they're stuffed, and normally the only way out of such situations is to belatedly combine the competing product with your historic strength, so it seems inevitable that Intel will, once again, be making ARM chips before too long.
Intel's data center strategy for ARM is greater integration density.
For example, in less than a year Intel will be selling a Xeon-socket-compatible chip with 72 Atom cores in the same power envelope as a normal Xeon server chip. And each core will have two 512-bit vector pipelines (AVX3?), so the throughput for compute-intensive operations should be impressive. It will undoubtedly be expensive, but if you look at the overall dollars per unit of compute, and how many VMs and similar you could run on it simultaneously, it starts to look very attractive in terms of data center economics.
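Ballpark peak throughput for a part like that (the core and pipeline counts are from above; the clock speed and the FMA assumption are guesses on my part):

    # Theoretical peak, single precision. Clock and FMA are assumed.
    cores = 72
    pipes_per_core = 2
    sp_lanes = 512 // 32   # single-precision lanes per 512-bit pipe
    flops_per_lane = 2     # 1 fused multiply-add = 2 FLOPs, assumed
    clock_ghz = 1.3        # assumed

    gflops = cores * pipes_per_core * sp_lanes * flops_per_lane * clock_ghz
    print(f"~{gflops:,.0f} GFLOPS single precision (theoretical peak)")

That's on the order of 6 TFLOPS in a single socket, if the guesses hold.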
It's impossible to really compete head-on. It's ridiculously expensive and even if your processors are better people will still buy Intel because of brand and relationships. AMD wants to swim in a pond which doesn't have Intel in it, and I don't blame them.
Seems like a bad idea to price your dev kit this high. If I were in AMD's position, I'd heavily subsidize the dev kits -- maybe $500-1000 -- to try to get people to build on my platform.
At $3k, it's only going to be appealing to hardware OEMs or big teams. At $500, random people within companies would buy them (i.e. me), or people might get them for personal projects.
The idea of something with working TrustZones (vs. the abortion which is TCG/TPM) is intriguing, on top of ARM power savings.
These surely aren't ramped up on a full production line; more likely they are being made in prototype houses that can't handle volume, with extensive testing of each unit.
You need to get the OEMs and big teams on board first, and can get the developer-centric experimental products out later.
"The whole thing has an expected power usage of 25W." - from an Ars article from Jan this year.
On the desktop front, the recent Kaveri A10s (low-cost quad-core CPUs with built-in Radeon R7 graphics) coupled with 2.4GHz memory give very playable frame rates for almost all the big releases.
ARM Opteron development kits are targeted at servers (notice the Opteron moniker), which means Red Hat is a natural partner, and Fedora is Red Hat's testing OS.
(Why not an old stable OS? Because ARM64 is brand-new.)
Red Hat tests new things in Fedora before they move into RHEL/CentOS. This makes Fedora a fantastic development platform as well as a "preview" of what may be coming in the future RHEL/CentOS releases.
Erm... a good Xeon Dell with a dual 10GbE extension is $2000... and the Xeon can scale up basically indefinitely.
Don't believe me? Go configure a Dell PowerEdge R220 with dual 10GbE ports. It's only $2,021.37.
Hopefully, the real hardware will be significantly cheaper. AMD has an issue competing against itself (i.e. the Opteron 4360), let alone against Intel, if these are the prices they're looking at.