2007-09-15 12:00

KVM (Kernel Virtual Machine) Forum in Tucson, Arizona

By: Gerd König

The first year of an amazing success story


The KVM Forum

The first KVM Forum was held in Tucson, Arizona, from August 29 to 31, 2007. The organizer of the forum, Qumranet Inc., a software company founded in 2005 with offices in Sunnyvale, USA and Netanya, Israel, is the sponsor, maintainer and catalyst behind the Kernel-based Virtual Machine (KVM) project. This global Open Source project focuses on integrating robust virtualization capabilities into the Linux kernel. The KVM virtualization solution has been officially included in the mainline Linux kernel since version 2.6.20. For the time being, KVM requires CPU hardware virtualization support such as vmx (Intel's virtual machine extensions) or svm (AMD's secure virtual machine). Most modern Intel and AMD processors provide this capability.

The forum's keynote speaker and KVM maintainer Avi Kivity looked back on an extraordinary and successful first year and mentioned two highlights: i) when Linus Torvalds agreed to merge KVM into the Linux mainline kernel, and ii) when a stable vanilla kernel with KVM was released for the first time in February 2007.

Why was it possible to develop KVM so fast and why was it so successful? The answer Avi Kivity gave is short and evident: KVM is the smallest of the available virtualization systems, which makes it possible for a single person to understand and lead the development of the entire project. On the other hand, Avi is also realistic about the weaknesses of KVM: it arrived very late in the market and is therefore less mature and has fewer management tools than other virtualization solutions. The KVM community, however, is working hard to overcome these shortcomings as soon as possible - as was clearly visible during the KVM Forum.

Avi also said that he was much surprised by the huge interest of the embedded and automation industry in KVM, and to see engineers from companies such as Montavista, Siemens Automation & Drives and Kontron attending the forum.

KVM Lite: no hardware support required, fewer calories

The well-known kernel maintainer Rusty Russell who is working at IBM's Canberra Linux Technology Center in Australia introduced his idea that it should be possible to use KVM also on processors that do not provide hardware virtualization support. In that case, KVM would use more of the Qemu components according to the formula:

 kvm-lite = kvm-qemu + lguest 

lguest implements a virtual machine monitor for Linux that runs on any x86 processor. A user space utility prepares a guest image in such a way that it includes an lguest kernel and a small chunk of code above the kernel to perform context switching from the host kernel to the guest kernel. lguest uses the paravirt_ops interface to install trap handlers of the guest kernel that branch into the lguest switching code. The required paravirtual drivers are already under development. Rusty is maintaining a blog, where he talks about the progress of his work: http://ozlabs.org/~rusty/

KVM paravirtualized guest drivers

I/O operations of a virtual guest system are usually relatively slow, because emulating an I/O access requires exiting from guest mode, which is a fairly time-consuming operation compared to accessing real hardware. A common solution is to introduce so-called paravirtualized devices that establish a direct connection between the guest system and the host hardware. Ingo Molnar provided a patch some months ago that introduced an ad-hoc paravirtualization hypercall API between a Linux guest and a Linux host. This created the basis for writing KVM paravirtualized guest drivers.

The Qumranet developer Dor Laor is currently working on a paravirtualized Ethernet driver, which already runs fairly stably in his lab. It uses VirtIO, developed by Rusty Russell. VirtIO implements network and block driver logic. Like Xen, it uses a backend implementation (running on the Linux host) and a very tiny frontend driver. Through the use of VirtIO, virtualization solutions like KVM, Xen and lguest can reuse a common backend similar to the existing VirtIO guest implementation. This is a chance to unify the efforts of getting a common and more efficient virtual I/O mechanism. Dor said that there is still some minor work to do, but the driver and the enhanced VirtIO interface will be released soon. He measured 620 Mb/s network throughput, compared to 55 Mb/s in an emulated I/O environment. As a disadvantage, paravirtualized KVM will only run on virtualization-aware operating systems such as Linux.

KVM Live Migration

One of the most-wanted virtualization features is the ability to move a virtual machine from one physical host to another without interrupting execution of the guest system for more than a few milliseconds. This is called live migration and allows virtual machines to be relocated to different hosts to balance load and performance requirements in a server park. Live migration works by copying the memory of a guest system to a target server while the source system is still working and executing code. A memory page of the guest system that is modified after being copied must be copied again later. To keep track of this, KVM provides a so-called dirty page log facility to monitor such pages. This is made possible by mapping guest pages as read-only and only mapping them as writable after the first write access, which also provides a hook point to update the modified-page bitmap.

Uri Lublin, another Qumranet developer, presented various algorithms to implement live migration in KVM, along with benchmarks of several use cases, e.g. using TCP sockets with and without built-in SSH support or saving the image to a file. He used Qemu monitor commands to achieve live migration. He also compared the KVM live migration feature with that of other hypervisors and pointed out that KVM live migration has a number of advantages: it is short and simple, has built-in security through SSH, does not involve the guest, is hardware independent, supports compression and encryption, and the guest will still continue to run on the source system in case of a failure. Uri announced that he will implement more features in the near future: support for more migration protocols, implementation of live checkpoints and fine tuning of parameters, to name some of them.

Intel and the future of virtualization

Sunil Saxena from Intel gave a short outlook on the future of virtualization and the role of KVM. He introduced Intel's new technology to virtualize I/O devices, called VT-d, which ensures improved isolation of I/O resources to achieve greater reliability, security and availability. Its key features are DMA remapping and the use of virtualized interrupts. According to Sunil, Intel is making huge bets on virtualization, which plays a key role in Intel's future strategies. He mentioned the requirement for increased graphics and multimedia performance in guest systems as an example of future challenges. But Intel has already contributed to improving KVM's performance. The Intel developer Eddie Dong, for example, ran a number of benchmark tests and found that the I/O performance of kvm-16 was only about one third of that of a Xen installation. This led to improved management of shadow page tables in KVM. Intel has also implemented a huge test infrastructure and is executing extensive installation tests of the KVM main development branch on a daily basis, which has proved to be very useful. Thus, Intel is working hand in hand with the KVM community to achieve the best results.

A KVM friendly IOMMU API for Linux

Jörg Roedel (AMD OS Research Center) introduced a new approach to an IOMMU API for Linux. The currently available DMA mapping API only provides simple functions to map host addresses to bus addresses. But an IOMMU supports much more than just address remapping, so he decided to implement a new API. This API will also support protection domains, I/O Translation Lookaside Buffers (IOTLBs) and a way to handle paravirtualized mappings.

KVM security and virtual machine access control

Hadi Nahari from Montavista wanted to sharpen the audience's awareness of security topics. In particular, he misses access control mechanisms and secure isolation enforcement for virtualized guest systems, along with many other security mechanisms. This led to a very vivid discussion with the audience, in particular with Avi Kivity, Chris Wright, H. Peter Anvin and Rusty Russell. According to Avi Kivity, KVM does not provide any new mechanism; it simply uses mechanisms that have been available in the Linux kernel for a long time, so there is no need for a special security discussion with respect to KVM. Security should rather be a topic of the guest operating systems. But Hadi insisted on his view that the KVM hypervisor is, in fact, a new mechanism that can be manipulated, and he cited the Iron Law of Exploits: everything that can be exploited will be exploited. The discussion came to the conclusion that an exploit of the hypervisor by the guest OS is probably not possible, but exploits are certainly possible on the host system. Therefore, it is important to properly protect the host system.

Implementing KVM in embedded PowerPCs

The Book E specification of the PowerPC architecture does not contain hardware extensions for virtualization, although dedicated mainframe PowerPC processors apparently already have such extensions. Hollis Blanchard, a PowerPC Linux developer working at IBM's Linux Technology Center in Austin, Texas, recognized that virtualization is of huge interest not only for mainframes but also for embedded systems. Since PowerPC processors do not suffer from instructions with privilege-dependent behavior as in the x86 world, it is possible to execute guest kernels at user privilege level. All supervisor instructions executed in user mode then trap into the host, where they can be decoded and emulated. Hollis pointed out that this project is still in its infancy and that he would like to receive input from the community. This is his email address - in case you would like to provide such input.

The future of KVM

Avi Kivity announced a number of features to be implemented in the very near future, e.g. additional architecture support (s390, PowerPC and IA64), support for new hardware features such as IOMMU and NPT/EPT, paravirtual drivers and device pass-through. As could be seen at the forum, the KVM community is very active, and a lot of work is being done by contributors from all over the world to reach the desired targets.

Links

Qumranet
KVM Wiki
Agenda and slides of many presentations of the KVM Forum 2007