
This is an interesting announcement for a few reasons:

1) It's true that VMs generally provide poor performance for high-I/O applications, particularly databases. There are various ways to mitigate this, such as using high-IOPS storage like AWS EBS provisioned IOPS (PIOPS) volumes or SSD-backed storage from Google/DigitalOcean (see the sketch after this list).

2) It's built on Open Compute hardware, and the software is all open source and going to be released, so they can take advantage of the efficiencies of that hardware architecture.

3) It gives you quick access to physical servers connected to your cloud environment: the flexibility and scalability of cloud with the raw performance of dedicated hardware. This seems to be how they're differentiating from SoftLayer, which has bare metal servers available via API, but those are billed monthly and take a few hours to provision (which is still pretty impressive).
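For point 1, here's a minimal sketch of what provisioning high-IOPS storage looks like on AWS, assuming boto3 and using made-up volume size, IOPS, and instance ID:

    # Sketch: create and attach a provisioned-IOPS (io1) EBS volume.
    # Assumes boto3 credentials are configured; all values illustrative.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=500,          # GiB
        VolumeType="io1",  # provisioned-IOPS SSD
        Iops=4000,         # IOPS you pay to guarantee
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
        Device="/dev/sdf",
    )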

However, no pricing has been announced yet; that will be key.

Also, will this start to eat into their managed services business? Are they anticipating disruption?




Brute force is a pretty expensive way to get more I/O. Coincidentally, the other day I measured 50,737 mixed random IOPS under KVM and 104,518 IOPS under Docker on the same hardware. Could I have bought even more SSD to improve performance under KVM? Maybe, but at what cost?
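For context, here's a sketch of how a mixed random IOPS figure like that could be gathered, assuming fio is installed; the job parameters and target path are illustrative, not necessarily what was used above:

    # Sketch: mixed 70/30 random read/write benchmark via fio, parsing
    # its JSON output. Target file path and parameters are made up.
    import json
    import subprocess

    result = subprocess.run(
        ["fio", "--name=mixed", "--rw=randrw", "--rwmixread=70",
         "--bs=4k", "--ioengine=libaio", "--iodepth=32", "--direct=1",
         "--size=1G", "--runtime=60", "--time_based",
         "--filename=/mnt/test/fio.dat", "--output-format=json"],
        capture_output=True, text=True, check=True,
    )

    job = json.loads(result.stdout)["jobs"][0]
    iops = job["read"]["iops"] + job["write"]["iops"]
    print(f"mixed random IOPS: {iops:,.0f}")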

Rackspace is definitely disrupting themselves with their cloud product, and today we can see that they're not doing it halfway.


Checking IOPS of a fully virtualized system (even with virtio, and it's unclear from your post whether you were using it or not) is not an apples-to-apples comparison with something that is literally just a process on the host system.
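For what it's worth, one quick way to tell whether a Linux guest's disks are on virtio-blk is to check whether their sysfs paths pass through a virtio bus node; a minimal sketch, assuming the usual sysfs layout:

    # Sketch: flag block devices that sit on the virtio bus (Linux).
    # virtio-blk disks typically appear as /dev/vdX.
    import os

    for dev in sorted(os.listdir("/sys/block")):
        real = os.path.realpath(os.path.join("/sys/block", dev))
        print(dev, "virtio" if "/virtio" in real else "other")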


ime non-VM servers are an enormous win over EC2, and probably over VMs in general. A previous employer moved a large data processing pipeline from EC2 to SoftLayer; we saw costs fall from ~$100k/mo to the high $40k/mo range while tripling throughput, for roughly a 6x price/performance improvement.

The one area that this announcement doesn't address, however, is contended networking. Rack-local servers on a 2Gb (or ideally 10Gb) non-blocking switch (full simultaneous port-to-port bandwidth) did amazing things for our app's performance. When you're looking at whole-app performance, you're sensitive to contention across CPU, I/O, and networking. Hadoop is particularly sensitive to contended I/O, and secondarily sensitive to networking.
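To make the contention point concrete, here's a crude single-stream throughput probe between two hosts; iperf is the proper tool for this, and the port number here is arbitrary:

    # Sketch: point-to-point throughput test. Run "server" on one host,
    # "client <server-ip>" on the other. Requires Python 3.8+.
    import socket, sys, time

    PORT = 5201        # arbitrary
    CHUNK = 1 << 20    # 1 MiB per send
    SECONDS = 10

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            total, start = 0, time.time()
            while (data := conn.recv(CHUNK)):
                total += len(data)
            elapsed = time.time() - start
            print(f"{total * 8 / elapsed / 1e9:.2f} Gbit/s")

    def client(host):
        payload = b"\x00" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            end = time.time() + SECONDS
            while time.time() < end:
                conn.sendall(payload)

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])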

edit: s/big data/large data/ -- people who say bigdata are tools


As we discuss here: http://developer.rackspace.com/blog/how-we-run-ironic-and-yo..., each instance is provisioned with redundant 10Gbit network links, with minimal network oversubscription in each cabinet.


oh awesome; did I miss that in the article? Reading is hard...


This is why I just say "bigger than last week's data."




