I recently switched my production infrastructure from VMware Server to KVM and libvirt, and I’ve been working on moving from ubuntu-vm-builder to python-vm-builder (now just vm-builder). While we were talking about the lack of a bridging option, Nick Barcet made a tree that adds a ‘--net-virtio’ option, so I started using virtio for networking on a new libvirt guest.
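If you’re not using vm-builder, the same thing can be done by hand in the libvirt domain XML; the interesting bit is the interface model, and a stanza roughly like this (the bridge name br0 is just a placeholder for whatever your host bridge is called) tells KVM to expose a virtio NIC instead of an emulated one:

<interface type='bridge'>
  <!-- br0 is a placeholder for the host bridge name -->
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>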
On the guest, lspci will show this device when using virtio:
00:03.0 Ethernet controller: Qumranet, Inc. Unknown device 1000
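A quick sanity check that the guest is actually bound to the paravirtualized driver, rather than an emulated e1000/rtl8139, is to look at the interface from inside the guest (assuming the interface is eth0):

ethtool -i eth0      # driver should show up as virtio_net, if the driver reports it
lsmod | grep virtio  # the virtio, virtio_pci and virtio_net modules should be loaded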
From the host, simple tests (‘ping -c100 guestname’) aren’t all that different and are pretty much statistically useless anyway.
with virtio:
100 packets transmitted, 100 received, 0% packet loss, time 99012ms
rtt min/avg/max/mdev = 0.113/0.387/0.821/0.065 ms
without virtio:
100 packets transmitted, 100 received, 0% packet loss, time 99007ms
rtt min/avg/max/mdev = 0.143/0.200/0.307/0.032 ms
Running iperf with the server on the guest and the client on the host produces:
with virtio:
------------------------------------------------------------
Client connecting to virtio, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.1.172 port 54168 connected with 10.0.1.33 port 5001
[ 3] 0.0-10.0 sec 1.34 GBytes 1.15 Gbits/sec
without virtio:
------------------------------------------------------------
Client connecting to novirtio, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.1.172 port 34414 connected with 10.0.1.13 port 5001
[ 3] 0.0-10.0 sec 375 MBytes 315 Mbits/sec
So that’s better. Both guests are Ubuntu 8.04.1 running 2.6.24-19-server SMP x86_64 with 1 vCPU and 768MB of RAM. The host has 8GB of RAM, the same kernel and distro, and 8 CPUs (Xeons with HT and all that crap).
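In case anyone wants to reproduce this, there’s nothing fancy about the iperf invocation; it’s essentially stock defaults, with guestname standing in for whatever the guest resolves to:

# on the guest
iperf -s
# on the host (guestname = the guest's hostname or IP)
iperf -c guestname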
I assume the improvements exist even when communicating with another machine via the bridge?
@Scott:
virt04 -> ops02 and virt04 -> mon02, with ops02 and mon02 running as guests; mon02 has virtio.
I ran that a few more times and ops02 is consistently just under 400 Mbits/sec, while mon02 is consistently just over 600 Mbits/sec.
Which is half of that 1.3 Gbits/sec figure. Host to host is about 940 Mbits/sec:
ethtool reports:
So there is still some performance loss from virtualization itself (versus bare metal; this isn’t specific to virtio). I haven’t really done any tuning. It is interesting, though, to consider running guests that transfer a lot of data between each other on the same host.
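For reference, a typical bridged host setup on Ubuntu 8.04 looks roughly like this in /etc/network/interfaces (addresses and interface names are placeholders; my actual config may differ slightly):

# placeholder addresses and interface names
auto br0
iface br0 inet static
    address 10.0.1.10
    netmask 255.255.255.0
    gateway 10.0.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0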
Between two Debian etch hosts (running VMware) on the same switch:
From said Debian/VMware host, to a Debian/VMware (i686) guest:
It’s interesting to compare those numbers:
780 Mbits/sec : VMware Host -> Guest
315 Mbits/sec : KVM Host -> Guest w/o virtio
1.15 Gbits/sec : KVM Host -> Guest w/ virtio
Of course I could be missing something, but it’s interesting data at least, and I can’t find any comparable data out there. Moral of the story? KVM networking sucks without virtio, but kicks ass with it.
Ya, the moral is that fully virtualized I/O of any sort is slow; HVM Xen suffers the same problems. virtio and paravirt drivers solve this. I imagine KVM uses virtio for block I/O as well, so you should see similar improvements there.
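In KVM/libvirt terms that would be a disk stanza along these lines in the domain XML (the image path is just an example), which shows up inside the guest as /dev/vda:

<disk type='file' device='disk'>
  <!-- example image path; use whatever backing file the guest actually has -->
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>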
Yes, KVM networking should be used with virtio; the other option is only for when you have some broken system that has to be fed a “real” interface with a “real” driver.
The realistic test would be between virtio and VMXNET, since both use a paravirtualized NIC and both require changes to the guest OS.
Right, well, there are a lot of things one could test. I didn’t switch to KVM because of the virtio speed; primarily, the change was because it is open source. While testing KVM with and without virtio, it seemed prudent to try some VMware and Debian combinations since I still had them lying around.
I believe at the time I couldn’t find any numbers, so my goal was to produce some data rather than to write a complete article covering virtualized networking on many different platforms.