KVM Virtio network performance

I recently switched my production infrastructure from VMware Server to KVM and libvirt, and I’ve been working on moving from ubuntu-vm-builder to python-vm-builder (now just vm-builder). While we were talking about the lack of a bridging option, Nick Barcet made a tree that adds a ‘--net-virtio’ option, so I started using virtio for networking on a new libvirt guest.
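
For reference, virtio networking in libvirt is selected per interface in the guest’s domain XML; a minimal sketch of the stanza (the bridge name and MAC address here are made up):

<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:12:34:56'/>
  <model type='virtio'/>
</interface>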

On the guest, lspci will show this device when using virtio:

00:03.0 Ethernet controller: Qumranet, Inc. Unknown device 1000
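
To double-check from inside the guest that the paravirtual driver is actually bound, something like this works (assuming virtio_net is built as a module rather than compiled in):

lsmod | grep virtio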

From the host, simple tests (‘ping -c100 guestname’) aren’t all that different and are pretty statistically useless.

with virtio:

100 packets transmitted, 100 received, 0% packet loss, time 99012ms
rtt min/avg/max/mdev = 0.113/0.387/0.821/0.065 ms

without virtio:

100 packets transmitted, 100 received, 0% packet loss, time 99007ms
rtt min/avg/max/mdev = 0.143/0.200/0.307/0.032 ms

Running iperf with the server on the guest and the client on the host produces:

with virtio:

------------------------------------------------------------
Client connecting to virtio, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.1.172 port 54168 connected with 10.0.1.33 port 5001
[  3]  0.0-10.0 sec  1.34 GBytes  1.15 Gbits/sec

without virtio:

------------------------------------------------------------
Client connecting to novirtio, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.1.172 port 34414 connected with 10.0.1.13 port 5001
[  3]  0.0-10.0 sec    375 MBytes    315 Mbits/sec

So that’s better. Both guests are Ubuntu 8.04.1 running 2.6.24-19-server SMP x86_64 with 1 vcpu and 768MB of RAM. The host has 8GB of RAM, the same kernel and distro, and 8 CPUs (Xeons with HT and all that crap).
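
For reference, those throughput numbers are just iperf’s default 10-second TCP test; the invocation amounts to:

iperf -s            # on the guest (server)
iperf -c virtio     # on the host, pointing at the guest’s hostname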

7 thoughts on “KVM Virtio network performance”

  1. btm Post author

    @Scott:

    ops02 and mon02 are guests on virt04; mon02 has virtio. Testing from virt03:

    bryanm@virt03:~$ iperf -c ops02
    ------------------------------------------------------------
    Client connecting to ops02, TCP port 5001
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 10.0.1.171 port 47372 connected with 10.0.1.13 port 5001
    [  3]  0.0-10.0 sec    469 MBytes    393 Mbits/sec
    bryanm@virt03:~$ iperf -c mon02
    ------------------------------------------------------------
    Client connecting to mon02, TCP port 5001
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 10.0.1.171 port 53974 connected with 10.0.1.33 port 5001
    [  3]  0.0-10.0 sec    737 MBytes    618 Mbits/sec
    

    I ran that a few more times and ops02 is consistently just under 400 Mbits/sec, while mon02 is consistently just over 600 Mbits/sec.

    That’s about half of the 1.15 Gbits/sec figure from the post. Host to host is about 940 Mbits/sec:

    bryanm@virt04:~$ iperf -c virt03
    ------------------------------------------------------------
    Client connecting to virt03, TCP port 5001
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 10.0.1.172 port 34888 connected with 10.0.1.171 port 5001
    [  3]  0.0-10.0 sec  1.10 GBytes    941 Mbits/sec
    

    ethtool reports:

    Settings for eth0:
    	Supported ports: [ FIBRE ]
    	Supported link modes:   1000baseT/Full 
    	Supports auto-negotiation: Yes
    	Advertised link modes:  1000baseT/Full 
    	Advertised auto-negotiation: Yes
    	Speed: 1000Mb/s
    	Duplex: Full
    	Port: FIBRE
    	PHYAD: 2
    	Transceiver: internal
    	Auto-negotiation: on
    	Supports Wake-on: g
    	Wake-on: d
    	Link detected: yes
    

    So there is still some performance loss from virtualization itself (as opposed to bare metal; this isn’t specific to virtio). I haven’t really done any tuning. It is interesting, though, to consider running guests that transfer a lot of data between each other on the same host.

  2. btm Post author

    Between two Debian etch hosts (running VMware) on the same switch:

    bryanm@vmware02:~$ iperf -c vmware01
    ------------------------------------------------------------
    Client connecting to vmware01, TCP port 5001
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 10.0.1.162 port 46332 connected with 10.0.1.161 port 5001
    [  3]  0.0-10.0 sec  1.09 GBytes    940 Mbits/sec
    bryanm@vmware02:~$ uname -a
    Linux vmware02 2.6.18-6-amd64 #1 SMP Sun Feb 10 17:50:19 UTC 2008 x86_64 GNU/Linux
    bryanm@vmware02:~$ lsb_release -a
    LSB Version:	core-2.0-noarch:core-3.0-noarch:core-3.1-noarch:core-2.0-amd64:core-3.0-amd64:core-3.1-amd64
    Distributor ID:	Debian
    Description:	Debian GNU/Linux 4.0 (etch)
    Release:	4.0
    Codename:	etch
    

    From said Debian/VMware host to a Debian/VMware (i686) guest:

    bryanm@vmware02:~$ iperf -c web-stage02
    ------------------------------------------------------------
    Client connecting to web-stage02, TCP port 5001
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 10.0.1.162 port 39826 connected with 10.0.1.62 port 5001
    [  3]  0.0-10.0 sec    941 MBytes    789 Mbits/sec
    

    It’s interesting to compare those numbers:
    789 Mbits/sec : VMware Host -> Guest
    315 Mbits/sec : KVM Host -> Guest w/o virtio
    1.15 Gbits/sec: KVM Host -> Guest w/ virtio

    Of course I could be missing something, but it’s interesting data at least, and I can’t find any comparable data out there. Moral of the story? KVM networking sucks without virtio, but kicks ass with it.

  3. Scott

    Ya, the moral is that fully virtualized I/O of any sort is slow; HVM Xen suffers the same problems. Virtio and para-virt drivers solve this. I imagine KVM uses virtio for block I/O as well, so you should see similar improvements there.
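
    For reference, selecting a para-virtualized disk in the libvirt domain XML looks much like the network case; a sketch (the image path and device name are made up):

    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>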

  4. btm Post author

    Right, well, there are a lot of things one could test. I didn’t switch to KVM because of virtio’s speed; the change was primarily because it is open source. While testing KVM with and without virtio, it seemed prudent to try some VMware and Debian combinations since I still had them lying around.

    I believe at the time I couldn’t find any numbers, so my goal was to produce some data rather than to write a complete article covering virtualized networking on many different platforms.
