In the midst of Jaunty testing, I decided to try out paravirtualized disks with KVM. I switched to virtio for networking a while ago with good results. I had already built the guest I’m testing with, so I wanted to modify the libvirt XML file, but didn’t see anything of note relating virtio to storage. I did, however, deduce that the trick is to change the bus attribute in the target element from ‘ide’ to ‘virtio’. Because Ubuntu mounts filesystems by UUID instead of by device path, the change from ‘sda’ to ‘vda’ didn’t affect startup. I was confused at first, though, as mount still showed ‘/dev/sda1’, but the dmesg output clearly lacked an ‘sd’ device and had a newly acquired ‘vd’ device.
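For reference, the change in the guest definition looks roughly like the sketch below (the image path is illustrative, not the one from my setup):

```xml
<!-- Before: disk exposed as an emulated IDE device -->
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/guest.img'/>  <!-- illustrative path -->
  <target dev='sda' bus='ide'/>
</disk>

<!-- After: the same disk exposed as a paravirtualized virtio device -->
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Edit with `virsh edit <domain>` (or edit the file and re-define), then restart the guest for the new bus to take effect.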
Bonnie++ was run on a single-CPU guest with 768MB of RAM. The guest is Ubuntu Jaunty 9.04 with all of today’s packages, using virtio for both network and disk; otherwise it’s a fairly standard install. The host is also Jaunty, running on a Dell 1955 blade with a couple of Xeons and about 7GB of RAM. KVM is ‘1:84+dfsg-0ubuntu7’, and libvirt is ‘0.6.1-0ubuntu1’.
The numbers aren’t all that interesting: virtio was a little bit faster. I’m not that familiar with bonnie, and caching may well have skewed these tests; I didn’t research disabling it. The performance testing was mostly an afterthought of setting it up, as I see no reason not to use virtio now.
With ide disk driver:
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
it-util01     1496M 37162  94 56108  20 39997  14 42209  89 247590  59  4706  93
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
it-util01,1496M,37162,94,56108,20,39997,14,42209,89,247590,59,4705.6,93,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
With virtio disk driver:
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
it-util01     1496M 39831  88 56480  13 41427  14 45489  86 291109  57  7915  90
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
it-util01,1496M,39831,88,56480,13,41427,14,45489,86,291109,57,7914.5,90,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
The backend disk image format you use can make a difference – lvm, raw or qcow2.
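The format (or block-device backend) is also selected in the disk element; a rough sketch, with illustrative paths:

```xml
<!-- A file-backed image: the driver type selects raw vs qcow2 -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>  <!-- or type='raw' -->
  <source file='/var/lib/libvirt/images/guest.qcow2'/>  <!-- illustrative path -->
  <target dev='vda' bus='virtio'/>
</disk>

<!-- An LVM logical volume is a block device rather than a file -->
<disk type='block' device='disk'>
  <source dev='/dev/vg0/guest'/>  <!-- illustrative volume path -->
  <target dev='vda' bus='virtio'/>
</disk>
```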
I’ve put up some testing I did: http://stateless.geek.nz/2009/10/13/kvm-disk-performance-with-different-backends/