The biggest take-away for me is that this gives you an opportunity to (somewhat) rebalance the performance equation of the server.
Currently, your typical dual-socket, quad/six-core box is quite top-heavy: LOTS of CPU performance, only a satisfactory amount of RAM bandwidth (and in certain cases limited not by bandwidth but by the DRAM command rate), along with piddling disk and network performance.
You could create a dual- or quad-core ARM Cortex-A9 chip with a dual-channel memory controller (more memory bandwidth/ops per unit of CPU performance), single or dual GigE (more relative network bandwidth per node), and finish it off with a single SSD (lots of IOPS). Since these nodes are fairly affordable (probably $500) and relatively low power (probably 10-15 watts), you're left with nodes that 1) offer 1/3rd the performance but at 1/5th the price and 1/20th the power, and 2) can be scaled cheaply and predictably, particularly as it pertains to laying out your datacenter power and cooling infrastructure.
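A rough back-of-envelope check on those ratios (the big-server baseline figures here are assumptions I'm plugging in for comparison, not measured numbers):

    # Hypothetical comparison: one microserver node vs. a dual-socket box.
    big = {"perf": 1.0, "price": 2500.0, "watts": 250.0}      # assumed baseline box
    micro = {"perf": 1.0 / 3, "price": 500.0, "watts": 12.5}  # ~1/3 perf, 1/5 price, 1/20 power

    perf_per_dollar = (micro["perf"] / micro["price"]) / (big["perf"] / big["price"])
    perf_per_watt = (micro["perf"] / micro["watts"]) / (big["perf"] / big["watts"])

    print(f"perf per dollar: {perf_per_dollar:.1f}x the big box")  # ~1.7x
    print(f"perf per watt:   {perf_per_watt:.1f}x the big box")    # ~6.7x

So even at a third of the raw performance, each node comes out well ahead on both metrics if the price and power assumptions hold.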
Also, for people who don't have complete control of their data center (they're renting rack space), it eliminates a significant amount of headache finagling your datacenter into providing enough amperage to your rack, and eliminates surprise deficiencies in their cooling infrastructure... lessons I've learned the hard way. Micro-servers let you scale at a much more granular level, so you aren't rolling out several kilowatts of servers at a time, and you avoid carrying excess capacity while waiting to grow into it (only to scramble to scale once you're full up again).
As an aside, a reminder to everyone whose system shows spare CPU and bottlenecked IO: are you compressing everything you could?
It can be hard to get over the sense that always compressing and decompressing everything that goes to disk is wasteful. But available CPU cycles left unused are marginally free and can't be stored up for use after the moment passes; they should be "wasted" to achieve even a slight improvement in overall throughput.
Unfortunately, the transparent compression built into WinXP, and I assume (perhaps wrongly) into Vista and Win7, makes everything slower, even on a system with lots of CPU and a slow disk.
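If the OS-level feature is a dud, it's easy enough to do at the application layer when you have CPU to spare. A minimal sketch with Python's standard gzip module (the file names are made up):

    import gzip
    import shutil

    # Write compressed: spend idle CPU cycles to cut the bytes hitting the disk.
    # compresslevel=1 is the fast setting and is typically still a big win on text/log data.
    with open("records.log", "rb") as src, gzip.open("records.log.gz", "wb", compresslevel=1) as dst:
        shutil.copyfileobj(src, dst)

    # Reading it back is just as transparent to the caller.
    with gzip.open("records.log.gz", "rb") as f:
        data = f.read()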
This is so not my area of expertise. However, I wonder: wouldn't more small servers imply more hardware failures? Replacing one HD in a big server might be less work than replacing 10 HDs in small servers. Of course, the HD in the big server might take down more "applications", but I think that is taken care of in other ways (hot-swappable HDs, redundancy?).
Overall I wonder if maybe a new OS for the cloud is called for? It seems inefficient to have separate VMs running a full OS for every tiny application. Maybe in the future not only storage will be a service (like S3), but CPUs and RAM could also be plugged together at will? Like there wouldn't be lots of small or big server instances; there would be farms of CPUs, farms of RAM, farms of storage, that could be combined at will. Maybe networking would be too much of a bottleneck, though :-/
The number of disks you have should be determined by your workload, not by the number of servers. Think of it as 12 disks in one big server vs. one disk in each of 12 small servers -- the total number of disks (and disk failures) is the same.
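The expected-failure arithmetic works out the same either way (the AFR here is just an assumed figure for illustration):

    afr = 0.04      # assumed 4% annualized failure rate per disk
    disks = 12      # 12-in-one-chassis or one-per-node, the count is what matters

    expected_failures_per_year = disks * afr
    print(expected_failures_per_year)   # 0.48 failures/year in both layouts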
I'm hoping that these will shortly a) exist and b) be produced in enough volume that the unit cost comes down to <$100 levels. I'd like to buy a small, low-power cluster for hobbyist use, but the current low-water mark is about $200 plus memory (which is limited to 2GB). Hopefully there would be some standard for DC-DC power distribution, but that would just be gravy.
I doubt they'll be that micro; the optimal point looks closer to $500. Also, the minimum order tends to be a full rack. You should look at mini-ITX boards.
$50 for RAM and <$100 on storage for an HDD, or $200 for a small SSD. Bulk pricing would probably take that down by 10-20%, so you could likely do this for ~$300 today. I believe Dell is already selling systems like this as private integrations, just not at retail; I'd imagine their costs are substantially lower.
Bump the CPU to a 330 or whatever, bump the RAM, sure, but you're still talking ~$300 when a normal 1U HP server is <$1k list and is vastly more bang for the buck. The cooling and power-density arguments aren't going to be very persuasive (IMO) unless the price is there as well, hence my hope for very low per-unit costs, even if there are chassis/bladecenter components as part of the package.
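Roughly the per-node math being argued over, using the board price from the ~$200 low-water mark mentioned above and the component figures from the previous comment (all of them assumptions, not quotes):

    board, ram, hdd = 200, 50, 100        # assumed retail list prices, USD
    list_total = board + ram + hdd        # $350 per node at list

    for discount in (0.10, 0.20):         # assumed bulk discount range
        print(f"{discount:.0%} off list: ${list_total * (1 - discount):.0f}")
    # 10% off list: $315
    # 20% off list: $280   -> call it ~$300/node, vs. <$1k list for a 1U HP box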
Traditionally, Intel and its partners have prevented the microserver market from taking off by limiting their Atom (and other low-power) platforms to small amounts of RAM, usually 2-4GB.
It sounds like they're finally willing to open this up and allow a decent amount of RAM (8GB and beyond) on low-power, tiny motherboards.
The upcoming ARM Cortex-A9 processor will own the microserver market if Intel doesn't let Atom compete. So if this is coming, then it's coming no matter what Intel does.
I need to do more research, but you could build one of these for about $200-$300 depending on volume. Assume that the storage would be an SSD connected to a larger storage block.