The KVM approach is reuse, reuse, and reuse.
- Reuse Linux code as much as possible. Enjoy the sophisticated I/O stack, the various management tools, the Completely Fair Scheduler, the rich device driver base, power management, and so on. Integrate well into the existing infrastructure, codebase, and mindset.
- Leverage hardware virtualization features such as FlexPriority, VT-d, nested paging, etc. For example, using nested paging a VM can reach near-native performance with only a tiny amount of code devoted to virtual MMU management.
- Focus on virtualization and leave other concerns to their respective developers, benefiting from semi-related advances in Linux; there is no need to reinvent the wheel. Linux supports a vast range of hardware, and new hardware becomes transparently available to KVM, as do bridging, routing, iptables, VLANs, IPv6, SELinux security enforcement, SAN and NAS storage, and more.
The KVM and Linux combination as a hypervisor (see Figure 4) is thinner and more efficient than older approaches. Since the hypervisor itself is standard Linux, you gain the following:
- There is no additional layer for the hypervisor, which means fewer context switches, reduced overhead, no priority inversion, and so on.
- Standard processes and Linux kernel modules can run directly on the host, avoiding the dedicated Linux management VM that other architectures demand for them.
- The host can run Linux primarily for embedded or real-time applications while the virtual machines are piggy-backed onto it at best-effort priority. KVM does not add significant latency, which makes it possible to run real-time Linux as the hypervisor OS.