While you probably won’t run Vista as a virtual machine on your cell phone, there are many viable use cases for virtualization in embedded applications. The simplest, cheapest, and most feature-rich option is using Linux and KVM.
Servers and desktops are not alone; virtualization is a perfect fit for embedded devices too.
The benefits of virtualization are well understood for traditional server consolidation. Hypervisors, also known as virtual machine monitors (VMMs), are common in almost every data center, saving equipment, power and management costs. Embedded applications, ranging from terabit routers, hardware appliances and media-rich set-top boxes to cellular phones and media players, can all benefit from virtualization.
At first glance it may look like overkill to run virtual machines on embedded systems (see Figure 1). Embedded systems are often resource-constrained, with tight limits on memory, CPU, latency and scheduling. They are also often tailor-made to match specific hardware/software combinations. A deeper look reveals many advantages of virtualization for embedded:
- Consolidation–Expensive custom-made hardware increases the motivation to consolidate several physical devices into a single one. Consolidation also reduces complexity in distributed environments: all the virtual machines live on the same physical server, so there is no risk of network partitioning, hardware failures are atomic, and a common high availability (HA) solution deals with them while collapsing many failure scenarios.
- Security–Breaking into the cellular phone’s management or Java stack won’t jeopardize the communication stack if each of them runs in a different virtual machine (VM). The VM environment is a big sandbox for untrusted code.
- Reliability–Isolating privileged code prevents or reduces whole-device failures.
- Management and rapid development–Even if an RTOS manages the hardware, there is no need to settle for its limited management capabilities. A management VM running Windows can provide the user interface, making life easier for both users and developers.
- Hardware virtualization–A VMM is an exact fit for hardware virtualization, dynamically dividing and uniting physical resources along with their virtual controllers. Large, distributed embedded machines such as routers can be split and united along with the routing engine, executed as a VM.
- Efficiency–In the multi-core era many physical cores are under-utilized, some not even initialized, because the embedded software was written for a single processor.
- New exciting features–Sophisticated features like snapshots, live migration and external hibernation can enhance embedded products, which tend to demand high availability, upgradability (even remote kernel upgrades) and more.
- Law–Unlinking GPL code from proprietary code can easily be achieved using virtualization.
Linux is one of the most common operating systems for embedded applications: it is widely available, high performing, reliable, inexpensive and well supported, with a vast feature set.
It has out-of-the-box support for many architectures, including x86, MIPS, ARM and PowerPC. Linux has a rich networking stack, including IPv4, IPv6, firewalls, bridging, routing, wireless and Bluetooth, plus many commercial applications. Features ranging from video and sound to encryption, file systems and MMU support are all there. It’s becoming the operating system of choice in a wide array of embedded applications, and it even has real-time support.
For the past two years, Linux has had its own hypervisor.
The Kernel-based Virtual Machine (KVM) hypervisor is supplied with every standard kernel today. Its footprint is minimal, mainly composed of a kernel module and some hooks integrating into the Linux scheduler and the Linux memory manager (see Figure 2). Virtual machines are standard processes and can be managed as such: they can be prioritized using ‘nice’, killed using ‘kill’, paused using a stop signal, etc.
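For example, a guest can be handled with ordinary process tools. In the sketch below, a `sleep` process stands in for a real qemu-kvm guest PID, purely for illustration:

```shell
# A KVM guest is just a process, so ordinary process tools apply.
# 'sleep' stands in here for a qemu-kvm guest (illustrative only).
sleep 60 &
VM_PID=$!

renice -n 10 -p "$VM_PID"   # lower the guest's CPU priority
kill -STOP "$VM_PID"        # pause the guest
kill -CONT "$VM_PID"        # resume it
kill -TERM "$VM_PID"        # shut the process down
```

The same commands work unchanged on a real guest, because nothing about the guest is special from the host scheduler’s point of view.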
KVM uses the virtualization hardware extensions of modern CPUs. In order to virtualize the complete server/desktop/development-board environment, all of the hardware components must be virtualized or emulated. Emulated components are implemented entirely in software; examples are the various interrupt controllers (PIC, APIC, IOAPIC), CD-ROM, USB bus, PCI bus, IDE and SCSI controllers, real time clock, etc. All of them (except for some performance-critical components) are implemented in userspace; KVM uses the QEMU emulator for that.
The performance-critical emulated components are implemented within the KVM kernel module; currently the shadow MMU (Memory Management Unit) and the interrupt controllers reside in the kernel. Virtualized components are those that have specific hypervisor support. The CPU itself is virtualized by leveraging the physical CPU’s hardware extensions. Besides the CPU, other components are virtualized (or para-virtualized), like network I/O, block I/O and memory (ballooning).
Every virtual CPU (vcpu) is implemented as a Linux thread. The threads are scheduled by the standard, yet sophisticated, Linux scheduler. A thread can be in one of the following modes:
- User mode
- Kernel mode
- Guest mode
Standard execution consists of userspace preparing the emulated environment and calling the ‘run’ ioctl. The KVM module executes the ioctl by switching from host mode into guest mode. Guest mode begins native instruction execution. The guest (virtual machine) is automatically switched out of guest mode when certain events occur; examples of such events are a physical interrupt, or execution of a privileged instruction like port I/O or a write to the cr3 register. The KVM module then examines the nature of the event and decides whether it can continue executing guest code or needs to exit to userspace in order to complete device emulation (see Figure 3).
Virtual machine memory is allocated by the KVM userspace process using the standard mmap mechanism.
A KVM process has a slightly larger virtual memory range in order to hold the hypervisor device state and the memory-mapped ranges of the virtual machine’s devices.
Because it uses standard mechanisms, VM memory may be swapped, shared with other guests or processes, backed by large pages or a disk file, and even copy-on-write for memory overcommit.
The KVM approach is reuse, reuse, and reuse:
- Reuse Linux code as much as possible. Enjoy the sophisticated I/O stack, various management tools, the O(1) and completely fair schedulers, the rich device driver base, power management and so on. Integrate well into existing infrastructure, codebase and mindset.
- Leverage hardware virtualization features like FlexPriority, VT-d, nested paging, etc. For example, using nested paging a VM can reach near-native performance while requiring only tiny code for virtual MMU management.
- Focus on virtualization; leave other things to their respective developers. Benefit from semi-related advances in Linux; there is no need to reinvent the wheel. Linux supports all hardware, and new hardware is transparently supported by KVM: bridging, routing, iptables and VLANs, IPv6, SELinux security enforcement, SAN and NAS, etc.
The KVM and Linux combination as a hypervisor (see Figure 4) is thinner and more efficient than older approaches. Since the hypervisor itself is standard Linux, you gain the following:
- There is no additional layer for the hypervisor, meaning fewer context switches, reduced overhead, no priority inversion, etc.
- Standard processes and Linux kernel modules can be executed on the host, saving the need to add an additional Linux VM for them as other architectures demand.
- The host can run Linux primarily for the embedded application, while the virtual machines are piggy-backed on the host with best-effort priority. KVM does not add significant latency and enables running real-time Linux as the hypervisor OS.
Embedded applications have specific requirements that arise from limited resources, proprietary hardware and tight scheduling. KVM answers all of them:
- Memory footprint–Like any embedded Linux, KVM can be very thin. Linux running X has been demonstrated with a 1MB binary; KVM’s additions are minimal.
- PCI pass-through–Sometimes the hardware is not virtualized and should be redirected into the VM for direct control. This feature is called PCI pass-through. It is a work in progress for KVM, and there are several flavors of it, ranging from 1:1 memory mapping, through para-virtual PCI, up to hardware support using VT-d.
- Real time–Linux has a separate branch for real time. Even while running virtual machines, the Linux host can still provide a mere 50-microsecond latency. All KVM code paths are preemptible, and if a VM is in guest mode, a standard IPI or signal will trigger a host exit and the VM will be preempted in favor of higher-priority tasks.
- Virtual machine management–Managing VMs is simple; there is no need for external tools like web servers or Python running in privileged VMs or in the hypervisor.
So if you need to cut expenses, reduce complexity, improve security, and increase reliability, then embedded virtualization is the solution and KVM is the perfect match.
Dor Laor is a software director at Qumranet, where he runs the virtualization, remote desktop and Windows drivers teams. Prior to Qumranet, Dor managed the core group at BladeFusion. Before that, Dor was one of the key developers of Charlotte’s Web Networks’ terabit router product, where he developed industry-leading highly available core routing capabilities and virtual routers, and implemented a meaningful part of the router’s internal management.
This article was first published on LinuxPlanet.com.