This document discusses using paravirtualized devices in Linux guests running on Xen in HVM (hardware virtual machine) mode. PV on HVM combines the benefits of HVM guests, such as installing the same way as on native hardware, with access to fast paravirtualized devices. The initial implementation added xenbus and grant table support; later enhancements added ballooning, spinlocks, and interrupt/MSI remapping. Benchmarks show that PV on HVM guests perform close to PV guests in PV-favored workloads and far ahead of them in workloads that favor nested paging. PV on HVM thus provides a spectrum of performance between HVM and PV guests.
1. Linux PV on HVM
paravirtualized interfaces in HVM guests
Stefano Stabellini
2. Linux as a guest: problems
Linux PV guests have limitations:
- difficult ("different") to install
- some performance issues on 64 bit
- limited set of virtual hardware
Linux HVM guests:
- install the same way as native
- very slow
3. Linux PV on HVM: the solution
- install the same way as native
- PC-like hardware
- access to fast paravirtualized devices
- exploit nested paging
4. Linux PV on HVM: initial features
Initial version in Linux 2.6.36:
- introduce the xen platform device driver
- add support for HVM hypercalls, xenbus and grant tables
- enable blkfront, netfront and PV timers
- add support for PV suspend/resume
- introduce the vector callback mechanism
13. Benchmarks: the setup
Hardware setup:
Dell PowerEdge R710
CPU: two quad-core Intel Xeon E5520 CPUs @ 2.27GHz
RAM: 22GB
Software setup:
Xen 4.1, 64 bit
Dom0 Linux 2.6.32, 64 bit
DomU Linux 3.0 rc4, 8GB of memory, 8 vcpus
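As an illustration (not taken from the slides), a domU of this shape could be described with an xm/xl-style guest config along these lines; the name, disk path and bridge name are hypothetical:

```
# Illustrative HVM guest config; names and paths are made up.
builder = "hvm"
memory  = 8192        # 8GB, matching the benchmark domU
vcpus   = 8
name    = "pv-on-hvm-guest"
disk    = [ "phy:/dev/vg0/guest,hda,w" ]   # hypothetical volume
vif     = [ "bridge=xenbr0" ]              # hypothetical bridge
# The guest kernel's PV drivers take over from the emulated devices
# once the xen platform PCI device is detected.
```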
14. PCI passthrough: benchmark
PCI passthrough of an Intel Gigabit NIC
CPU usage: the lower the better:
[Bar chart: CPU usage in domU and dom0, y-axis 0–200, with and without interrupt remapping]
15. Kernbench
Results: percentage of native, the lower the better
[Bar chart, 90–140% of native: PV on HVM 64 bit, PV on HVM 32 bit, HVM 64 bit, HVM 32 bit, PV 64 bit, PV 32 bit]
16. Kernbench
Results: percentage of native, the lower the better
[Bar chart, 90–140% of native: PV on HVM 64 bit, PV on HVM 32 bit, KVM 64 bit, HVM 64 bit, HVM 32 bit, PV 64 bit, PV 32 bit]
17. PBZIP2
Results: percentage of native, the lower the better
[Bar chart, 100–160% of native: PV on HVM 64 bit, PV 64 bit, PV on HVM 32 bit, PV 32 bit]
18. PBZIP2
Results: percentage of native, the lower the better
[Bar chart, 100–160% of native: KVM 64 bit, PV on HVM 64 bit, PV 64 bit, PV on HVM 32 bit, PV 32 bit]
21. Iperf tcp
Results: gbit/sec, the higher the better
[Bar chart, 0–8 Gbit/s: PV 64 bit, PV on HVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit, HVM 32 bit]
22. Iperf tcp
Results: gbit/sec, the higher the better
[Bar chart, 0–8 Gbit/s: PV 64 bit, PV on HVM 64 bit, KVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit, HVM 32 bit]
23. Conclusions
PV on HVM guests are very close to PV guests in benchmarks that favor PV MMUs
PV on HVM guests are far ahead of PV guests in benchmarks that favor nested paging