This is a short paper describing and evaluating our work earlier this year on direct device assignment in KVM, using Intel's VT-d IOMMU. There's not much new here if you've read our other IOMMU papers, but it does make two contributions. First, it is (IMHO) the best, and indeed the only, available description of KVM's direct device assignment code; second, it provides yet another data point on the relative performance of device emulation vs. virtual I/O drivers vs. direct device assignment. As always, comments are appreciated. The abstract follows.
The I/O interfaces between a host platform and a guest virtual machine take one of three forms: the hypervisor emulates hardware devices for the guest, the hypervisor provides virtual I/O drivers, or the hypervisor assigns a selected subset of the host's real I/O devices directly to the guest. Each method has advantages and disadvantages, but letting VMs access devices directly has particularly interesting benefits: it requires no guest VM changes and, in theory, provides near-native performance.
In an effort to quantify the benefits of direct device access, we have implemented direct device assignment for untrusted, fully-virtualized virtual machines in the Linux/KVM environment using Intel's VT-d IOMMU. Our implementation required no guest OS changes and---unlike alternative I/O virtualization approaches---provided near-native I/O performance. In particular, a quantitative comparison of network performance on a 1GbE network shows that with large enough messages, direct device access throughput is statistically indistinguishable from native throughput, albeit with slightly higher CPU utilization.