Kernel space: KVM on the fast track

While Xen hackers are still working on paravirtualization code, KVM seems to be on the fast path

Progress in the virtualization world sometimes seems slow. Xen has been the hot topic in the paravirtualization area for some years now -- the first "stable" release was announced in 2003 -- but the code remains outside of the mainline Linux kernel. News from that project has been relatively scarce of late -- though the Xen hackers are certainly still out there working on the code.

On the other hand, KVM appears to be on the fast path. This project first surfaced in October 2006; it found its way into the 2.6.20 kernel a few months later. On February 25, KVM 15 was announced; this release has an interesting new feature: live migration. The speed with which the KVM developers have been able to add relatively advanced features is impressive; equally impressive is just how simple the code which implements live migration is.

KVM starts with a big advantage over other virtualization projects: it relies on support from the hardware, which is only available in recent processors. As a result, KVM will not work on the bulk of currently-deployed systems. On the other hand, designing for future hardware is often a good idea; the future tends to come quickly in the technology world. By focusing on hardware-supported virtualization, KVM is able to concentrate on developing interesting features to run on the systems that companies are buying now.

The migration code is built into the QEMU emulator; the relevant source file is less than 800 lines long. The live migration task comes down to the following steps:

-- A connection is made to the destination system. This can currently be done with a straight TCP connection to an open port on the destination (which would not be the most secure way to go) or by way of ssh.

-- The guest's memory is copied to the destination. This process is just a matter of looping through the guest's physical address space (which is just virtual memory on the host side) and sending it, one page at a time, to the destination system. As each page is copied, it is made read-only for the guest.

-- The guest is still running while this copy process is happening. Whenever it tries to modify a page which has already been copied, it will trap back into QEMU, which restores write access and marks the page dirty. Copying memory thus becomes an iterative process; once the entire range has been done, the migration code loops back to the beginning and re-copies all pages which have been modified by the guest. The hope is that the list of pages which must be copied shrinks with each pass over the space.

-- Once the number of dirty pages goes below a threshold, the guest system is stopped and the remaining pages are copied. Then it's just a matter of transmitting the current state of the guest (registers, in particular) and the job is done; the migrated guest can be restarted on its new host system.
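The iterative pre-copy scheme described above can be sketched in a few lines. The following is a minimal, hypothetical simulation, not QEMU's actual code: all of the names (`precopy_migrate`, `guest_step`, `send_page`) are illustrative, and the real implementation deals with write-protection traps, page tables, and a wire format that this sketch ignores.

```python
def precopy_migrate(guest_ram, dirty, send_page, guest_step, threshold=2):
    """Copy guest_ram (a dict: page number -> bytes) to a destination.

    dirty is the set of pages modified since they were last copied;
    initially every page is dirty.  guest_step() simulates the guest
    running concurrently and returns the set of pages it dirtied.
    send_page(n, data) transmits one page.  Returns the number of
    copy rounds performed while the guest was still running.
    """
    rounds = 0
    # Live phase: keep re-copying pages the running guest dirties,
    # hoping the dirty set shrinks with each pass.
    while len(dirty) > threshold:
        rounds += 1
        for page in sorted(dirty):
            send_page(page, guest_ram[page])  # copy, then write-protect
        dirty.clear()
        dirty |= guest_step()                 # guest runs, dirties pages
    # Stop-and-copy phase: the guest is paused, so the remaining
    # dirty pages can be sent without being modified underneath us.
    for page in sorted(dirty):
        send_page(page, guest_ram[page])
    dirty.clear()
    return rounds

# Usage: 16 pages of guest RAM; the "guest" dirties fewer pages each round.
ram = {n: bytes([n]) * 4096 for n in range(16)}
dirty = set(ram)
sent = []
writes = iter([{1, 2, 3}, {2}, set()])
rounds = precopy_migrate(ram, dirty, lambda n, d: sent.append(n),
                         lambda: next(writes))
```

With this workload the dirty set shrinks from 16 pages to 3 to 1, so two live rounds run before the final stop-and-copy pass sends the last page. In the real code, the threshold trades total copied data against guest downtime: a lower threshold means a shorter pause but more re-copying.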

As it happens, guest systems can be moved between Intel and AMD processors with no problems at all. Moving a 64-bit guest to a 32-bit host remains impossible; the KVM developers appear uninterested in fixing this particular limitation anytime soon. A little more information can be found on the KVM migration page.

The other feature of note is the announced plan to freeze the KVM interface for 2.6.21. This interface has been evolving quickly, despite the fact that it is a user-space API; this flexibility has been allowed because KVM is new, experimental, and has no real user base yet. The freezing of the API suggests that the KVM developers think things are reaching a stable point where KVM can be put to work in production systems. Perhaps that means that, soon, we'll find out how Qumranet, the company which has been funding the KVM work, plans to make its living.
