Comparing Filesystem Performance in Virtual Machines (mitchellh.com)
123 points by Sevein on Jan 10, 2014 | 45 comments



There is a lot of caching involved, and it looks like the VM writes are not synchronous - they do not wait for the data to actually reach the disk. Normally nothing can beat native access, but in a VM the "disk" is actually a sparse file that can be efficiently cached in RAM. I see the same behavior/speeds in my VMs if the virtual disk has a lot of free space and I have a lot of free RAM on the host. The speeds get "down to earth" if you fill up the host's RAM.
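An easy way to see this from inside the guest (a rough sketch, file name made up) is to compare a plain cached write against one that is forced out to disk:

  # Plain write: lands in the page cache, so the reported speed is mostly RAM speed
  dd if=/dev/zero of=testfile bs=1M count=1024
  # Force the data out before dd reports a speed
  dd if=/dev/zero of=testfile bs=1M count=1024 conv=fsync
  # Or bypass the guest page cache entirely; the hypervisor may still cache on the host side
  dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct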


3GB/s writes on a single SSD should raise more eyebrows. I dunno what was actually benchmarked, but there's a problem somewhere...


You're assuming the writes are actually going to a physical disk. As I mentioned in the post, the hypervisors are very likely just writing to RAM and not ever committing it to disk. Even when you `fsync()` from a VM, there is no guarantee the hypervisor puts that to disk.

If you look at the graphs, they corroborate this. The "native" disk never really exceeds 500 to 600 MB/s, which is about as fast as my SSD goes. The hypervisors, however, are exceeding multiple GB/s. It must be RAM.

Also, re: "I'm not sure what was actually benchmarked" The method of benchmarking is covered at the bottom of the post. I realize it isn't extremely detailed. If you have any questions, I'd be happy to answer.


The problem is that the tests are flawed, and native coming out slower than virtual is a pretty big red flag for that. That's OK, because writing a benchmark that tests what you are actually after is very difficult. Small assumptions can create big differences. If you can't guarantee that the data has actually been written to the disk, then you're testing caching mechanisms, something you already point out in your article, but then you're no longer testing filesystem performance as the article claims it is benchmarking. The problem is that we don't even know which caching mechanisms are involved (guest OS, hypervisor, hard disk driver) or whether the conditions are always the same.

A typical thing that performance benchmarks do to negate guest OS caching is to process significantly more data than the available RAM. For example, if your guest OS RAM is set to 512 MB, process 10 GB of random data. Of course, then the question is how to get random data, as you don't want to end up testing the random generator ;) or your host OS caching.
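Something like this keeps the random generator out of the timed part (a rough sketch; sizes and paths are made up):

  # Generate 10 GB of random data once, before timing anything
  dd if=/dev/urandom of=random.dat bs=1M count=10240
  # Timed run: copy it to the filesystem under test and force it to disk
  time sh -c 'cp random.dat /mnt/under-test/random.copy && sync'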

Another way to make sure you test data committed to disk could be to include a "shutdown guest OS" step and measure the total time until the guest has fully shut down.

I know that at least VMware has the ability to turn off disk caching (in Fusion, select Settings, Advanced, and disable "Hard Disk Buffering"). I am not aware of a similar feature in VirtualBox, though it might exist.

Even though you tested the same guest OS, we don't even know whether the hard disk adapters were both using the same hard disk drivers. Performance differs between IDE/SATA/SCSI drivers. SCSI drivers have queue depths; IDE drivers do not.


While the article uses only 64 KB/MB files for the analysis, the full Excel workbook contains data for file sizes up to 512 MB. The VMs only had 256 MB of RAM, so I did indeed test the RAM-spillover cases. The results were very similar, though unsurprisingly the VMs didn't perform quite as well (they did still beat native, though).

I never tested going over the native's RAM.


The amount of memory allocated to the VM does not include the host's filesystem cache, so it is still easily possible for a 500 MB file to fit in that.


I meant to say you've not benchmarked disk access, and we have no idea what each filesystem is actually doing. Caching performance is "nice", but it says nothing about the actual performance we'd get in real use. Maybe the "slow" fs just exhibits less aggressive caching, which might prove just as efficient depending on the workload. It's definitely interesting to note the huge differences, but I'd really like to see how it goes in "real" conditions...


How are these not real conditions?


I don't know what you are doing in real conditions, but I am for sure not running everything in RAM alone (that'd be great, I wouldn't even need a hard disk anymore). The benchmark needs to be revised to account for the VM caching, or it's just useless. Or rename it to "benchmark of memory access and caching algorithms".


Writing to /dev/null is even faster I bet.


  >dd if=/dev/zero of=/dev/null
  ^C32646111+0 records in
  32646110+0 records out
  16714808320 bytes (17 GB) copied, 17.881 s, 935 MB/s
Running on the host OS of my laptop.


Hint: always use a non-default block size (unless you have a reason not to)

   $ dd if=/dev/zero of=/dev/null bs=1M
   ^C55034+1 records in
   55034+0 records out
   57707331584 bytes (58 GB) copied, 6.04479 s, 9.5 GB/s

And this is a 7-year-old laptop. I could get blisteringly fast speeds writing to /dev/null on my latest 6-core i7 ;-)


This benchmark is bogus because the iozone -I flag is missing. -I uses O_DIRECT to avoid the page cache.

Due to page cache usage it's hard to say what this benchmark is comparing. The I/O pattern seen by the actual disk, shared folder, or NFS may be different between benchmark runs. It all depends on amount of RAM available, state of cache, readahead, write-behind, etc.

Please rerun the benchmark with -I to get an apples-to-apples comparison.
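Something roughly like this, I'd guess (sizes and the file path are just examples):

  # -I = use O_DIRECT, -a = full automatic mode, -s = file size, -r = record size,
  # -i 0/1/2 = write, read, and random read/write tests
  iozone -I -a -s 512m -r 64k -i 0 -i 1 -i 2 -f /vagrant/iozone.tmp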


It will avoid page cache in the VM, but will not avoid cache on the host, right?


Depends on configuration, but most setups propagate O_DIRECT all the way through - otherwise it would be impossible to run many apps, such as DBs in VM.


Is it just me or do the graphs not match up to the text in several places? For example, in the 64MB random file write graph (http://i.imgur.com/iGxn2H1.png), green is the VMware native filesystem according to the legend, and it is clearly the highest bar across the board, yet he says "VirtualBox continues to outperform VMware on writes".


He's probably talking about the Shared Folders performance.


That's the conclusion I drew, but it's really unclear. He said at the end that VMware blows VirtualBox out of the water. The graphs show that for shared folders, but as far as disk access in general, they look fairly evenly matched (at least, that's what the graphs I saw depicted).


Yeah, and who cares about shared folder performance anyway?


It's the nicest way to keep and edit your work on the host and run it in the isolated environment of the guest. VirtualBox shared folders have completely unacceptable performance here [1], and VMware handles itself much better.

[1] 10-15 second delay on returning a response in a Rails app, in my experience.


It would have been interesting to see a comparison with Xen etc. too.


It's primarily a development environment test, where the host runs OS X. It would be interesting to extend the test to Parallels on Macs and to add a Linux host where KVM and LXC could be used.


Just a note: Mitchell Hashimoto is the mastermind behind Vagrant and Packer.


Would love to see KVM and XenServer in there; you know, stuff that actual clouds run on.


This is pretty clearly a test of developer-related tools, not production cloud server infrastructure. I'm not even sure there's an equivalent of VirtualBox/VMware shared folders in KVM or Xen, because guests and hosts don't usually share folders in the same way that you do with these workstation virtualization tools.

...

Spoke too soon. A Google search shows there are some methods [1], but their use cases are different.

[1]: http://www.linux-kvm.org/page/9p_virtio


bradleyland is correct: This test was focused primarily on using VMs for development tools. This test was done on a local machine with desktop virtualization software. The opening paragraph mentions I was investigating performance for development environments. This post should not be used for any production applications, since it would make no sense.


I think you mean KVM and Xen. The Xen hypervisor is an open source project just like KVM, while XenServer is a product that uses the Xen hypervisor.

Just think of the Linux kernel versus Linux distributions.


I actually meant XenServer (which is open source, too), but you do have a valid point. Xen's in use a lot.


Interesting and timely article! On an Ubuntu guest (Windows host), I install the Samba server and then use the native Windows CIFS client to connect to the Ubuntu guest. This gives me the advantage of the VM's (VirtualBox) native filesystem while letting me use my Windows machine to open files on the guest.

Perhaps this support can be added to some later version of Vagrant.
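For anyone curious, the guest-side setup is only a few steps (share name, user, and paths below are just examples):

  # On the Ubuntu guest
  sudo apt-get install samba
  sudo smbpasswd -a vagrant    # set a password for the CIFS user
  # Add a share to /etc/samba/smb.conf, e.g.:
  #   [project]
  #     path = /home/vagrant/project
  #     read only = no
  sudo service smbd restart
  # On the Windows host, map \\<guest-ip>\project as a network drive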


This is what I would do when I was on Windows. The biggest (really big) downside is that the files live inside the VM and are only accessible when the VM is up and running.


How is it that native is slower than virtual I/O in his tests? I don't get it... if it's only reading some cached data, it's not a real test scenario, is it?

So I suppose the host system caches the reads. Also, how could it possibly be true that native writes are slower than virtual writes?


From the article:

It is interesting that sometimes the native filesystem within the virtual machine outperforms the native filesystem on the host machine. This test uses raw read system calls with zero user-space buffering. It is very likely that the hypervisors do buffering for reads from their virtual machines, so they’re seeing better performance from not context switching to the native kernel as much. This theory is further supported by looking at the raw result data for fread benchmarks. In those tests, the native filesystem beats the virtual filesystems every time.


It doesn't explain why at all. He just measured the performance of memory access and different caching strategies. From my point of view, the "benchmarks" say nothing at all about actual disk I/O performance in virtual and native environments.


You wouldn't expect the actual disk I/O to be different. The VM has overheads in transporting the data; that's what the article is trying to measure.

And with that in mind, the "native" bars are pretty much useless in this article. They should always be higher than the VM bars unless you are specifically trying to test fsync - and since it seems fsyncs are ignored, that isn't what's being tested either.


That says exactly why.

You read from a file and tell the OS to not buffer it (or you tell the OS to flush the caches before you start) on a native system, and it does exactly that.

You read from a file and tell the OS to not buffer it (or you tell the OS to flush the caches before you start) on a VM, and the OS thinks it does that, but the hypervisor living below the VM buffers some of the data anyhow, so you're really just reading from memory.
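For example, on a Linux guest you can flush and drop the page cache before a read test, but that does nothing about whatever the hypervisor or host has cached on its side (a rough sketch):

  sync                                        # flush dirty pages to the (virtual) disk
  echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop the guest's page cache, dentries and inodes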


On write, the VM probably reports that data is written to disk as soon as it's written to an in-memory cache, then writes it to the actual disk later. So writes are faster because the application is being lied to, not because of actual performance. That wouldn't explain the reads, though.


Benchmarks are flawed. Combine that with 'virtual' devices and you're bound to get amazingly weird results.


It could be because native is running OS X but they're running Ubuntu inside the VM.


In the past, industry threw hardware at things. Virtualization reduced this wastefulness somewhat, but now developers are fighting back against unreliable performance. If you are developing a performance-sensitive system, executing similar tests routinely but with real workloads should be part of your test process... and certainly occur before deployment. Third party tests on some hardware with some version of some code on some kernel, such as what we see here, are really neither here nor there.


With our team, we also found shared folder performance to be too low. Our Python framework/app is very read-heavy and stat()s a lot of files (the Python module loading system isn't your friend).

We ended up using the synchronization feature in PyCharm to continually rsync files from the native FS into the VirtualBox instance. Huge perf improvement, but a little more cumbersome for the developers. So far it has been working well; PyCharm's sync feature does what it is supposed to.
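The manual equivalent is roughly this (the guest address and paths are made up):

  # One-way sync of the working copy from the host into the VM
  rsync -az --delete --exclude .git ./ vagrant@192.168.33.10:/home/vagrant/app/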


I would love to see MS Hyper-V added to this benchmark or something similar.


No KVM / Xen ... :/


On a big repository, if you want to use zsh you will have to use NFS; otherwise my VirtualBox just hangs for 30 seconds until it can show me "git status" in the prompt. So the only option for me is NFS (for VirtualBox).
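Setting that up by hand looks roughly like this (IPs and paths are made up):

  # On the OS X host: export the project directory by adding a line to /etc/exports, e.g.
  #   /Users/me/project -network 192.168.33.0 -mask 255.255.255.0
  # then reload the exports:
  sudo nfsd update
  # In the guest: mount the export where the shared folder used to be
  sudo mount -t nfs -o vers=3 192.168.33.1:/Users/me/project /vagrant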


Another user on lobste.rs posted this Phoronix article comparing VirtualBox vs QEMU-KVM.

Thought it might be of interest on HN as well.

http://www.phoronix.com/vr.php?view=19551


I thought it was a test of the performance of different filesystems within the guest OS, like for example btrfs with lzo compression vs ext4.



