Friday, April 10, 2009

Migrating VMs between Hypervisors – the hypervisor

The key items to be aware of when migrating Linux virtual machines between hypervisors are:

1) What controller does the respective hypervisor present as the boot controller?

2) How are other devices emulated (video, mouse, keyboard, NIC)?

3) Are virtual machine tools from the source hypervisor installed?

a. Do these present custom devices or device drivers?

4) Is the Linux virtual machine using a paravirtualization-aware kernel?

The most critical of these considerations for recovering a virtual machine without reinstalling is the presentation of the disk interface. The operating system can be repaired by modifying the boot devices identified in the boot loader and in the file system table.

The presentation of the boot disk can be very different between hypervisors. This is analogous to removing a hard drive from an IDE controller, attaching it to a SCSI or SATA controller, and expecting the installed operating system to boot and run properly. With a Windows operating system the usual result is the Blue Screen of Death (BSOD) with a 0x0000007B error. This error generally indicates that the critical boot device driver for the bus the disk is on is not available.

With Linux operating systems the issue is not a critical boot device driver, but rather the path to the root (/) partition. The most common Linux boot loader is GRUB. Depending on the distribution, GRUB has the path to the root device defined in each boot loader entry. Another item used by many Linux distributions is the file system table, or fstab. fstab is simply a file that maps operating system volumes such as root, swap, and var to their devices. All of these entries can be edited after mounting the installation volume.
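As a minimal sketch of those edits, the commands below fix up throw-away copies of fstab and a GRUB menu.lst in /tmp. The device names (hda becoming sda) are assumptions for a move from an IDE-presenting host to a SCSI-presenting one; on a real migrated guest you would mount the installation volume and edit the files in place.

```shell
# Sample fstab as it might look before the move (IDE naming).
cat > /tmp/fstab.example <<'EOF'
/dev/hda1  /     ext3  defaults  1 1
/dev/hda2  swap  swap  defaults  0 0
EOF

# Sample GRUB boot loader entry with the root device on the kernel line.
cat > /tmp/menu.lst.example <<'EOF'
title Linux
    root (hd0,0)
    kernel /vmlinuz ro root=/dev/hda1
EOF

# Point both files at the device names the new hypervisor presents.
sed -i 's|/dev/hda|/dev/sda|g' /tmp/fstab.example /tmp/menu.lst.example
```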

The most resilient of the operating systems to migration is Red Hat. Current Red Hat distributions appear to use logical volume names as pointers to the disk volumes. Instead of /dev/hda for IDE 0 or /dev/sda for SCSI 0, I notice /dev/VolGroup00, a more generic descriptor for the first boot disk.
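This is why such a guest survives a controller change: the fstab entries reference the volume group rather than a physical device name. The lines below are a sketch of what a default-named (VolGroup00) layout might look like, not output from a real system.

```
/dev/VolGroup00/LogVol00  /      ext3  defaults  1 1
LABEL=/boot               /boot  ext3  defaults  1 2
/dev/VolGroup00/LogVol01  swap   swap  defaults  0 0
```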

Virtual machine tools may have secondary impacts and errors related to supporting applications, such as X Windows or a VNC server, that are running within the virtual machine. There are also downstream situations where a particular set of virtual machine tools has a negative impact on a new environment (e.g. the VMware Tools have a negative performance impact when they run within a VM hosted on Hyper-V or XenServer).

In the case of a paravirtualized kernel the result depends on the host system being migrated to. The VM could continue to function, or it could fail to boot with a kernel error because the new host does not support the kernel used on the previous host. This is a more complicated case and will need to be considered on a case-by-case basis.
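Before migrating, it helps to know whether the guest is running such a kernel at all. The helper below is a rough, illustrative heuristic based on the kernel release string (many distributions of this era appended "xen" to paravirt kernels); it is not a definitive test.

```shell
# Heuristic check for a Xen paravirt kernel from the release string.
is_xen_kernel() {
    case "$1" in
        *xen*) return 0 ;;
        *)     return 1 ;;
    esac
}

# Check the running kernel of this guest.
if is_xen_kernel "$(uname -r)"; then
    echo "this guest appears to run a Xen paravirt kernel"
else
    echo "no Xen marker in the kernel release string"
fi
```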

Differences between hypervisors:

From a user perspective all hypervisors provide similar services – they host and run virtual machines. However, the architecture and implementation of each is very different.

The remainder of this writing focuses on how a hypervisor presents virtual disks to a virtual machine. If a virtual machine cannot boot after migration, the exercise is considered a failure.


Hyper-V

The Hyper-V model is that the boot volume of an operating system must be installed on a virtual IDE controller. For a Windows virtual machine this is what we commonly refer to as the C:\ or C:\Windows volume. In the Linux world, this is where the boot loader and the root volume reside.

Therefore, when Hyper-V is the source an IDE interface can be assumed.

VMware ESX

The ESX model has always been SCSI. All disks of a virtual machine appear to the installed operating system as residing on a SCSI controller. Historically, this presented installation challenges as some operating systems, such as Windows XP and Windows Server 2003, did not include a compatible SCSI driver.

Only in vSphere 4 has this model changed, allowing the option of IDE. However, in the Release Candidate the default controller is still SCSI.


Citrix XenServer

In the XenServer world the disk devices can begin to appear very strange. XenServer virtual machines can see disk devices as IDE, SCSI, or a SCSI-like controller. Much of this behavior is defined by the template that was used to create the virtual machine, combined with the paravirtualization awareness of the OS.

A Windows operating system will behave as if the boot device is IDE. However, a Linux operating system can identify a variety of devices such as /dev/hda, /dev/sda, /dev/xvda, or a QEMU-emulated device.
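Those naming schemes map directly to the controller family the guest kernel believes it is talking to. The helper below is my own sketch (not part of any XenServer tooling) of that mapping, which is what you consult when deciding how to repair fstab and the boot loader.

```shell
# Map a Linux block-device name to the controller family it implies.
classify_disk() {
    case "$1" in
        /dev/hd*)  echo "IDE" ;;
        /dev/xvd*) echo "Xen paravirtual" ;;
        /dev/sd*)  echo "SCSI or SCSI-like" ;;
        *)         echo "unknown" ;;
    esac
}

classify_disk /dev/hda    # IDE
classify_disk /dev/xvda   # Xen paravirtual
```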

Moving virtual machines in practice

In practice a Windows virtual machine will move between XenServer and Hyper-V rather easily. There is usually the recognition of new devices on the receiving platform, but generally nothing that cannot be repaired, nor anything that prevents booting the operating system.

Moving from or to VMware was the historic problem because of the need to change from a SCSI controller to something else. For Windows operating systems this requires the injection of a critical boot device driver that supports the new platform.
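One widely used way to pre-stage that driver is to set the generic IDE/ATAPI services to start at boot in the guest's registry before conversion; Microsoft documents this class of fix for the 0x0000007B stop error. The fragment below is a sketch of the approach, not a complete merge file, so verify the service list against Microsoft's published guidance for your Windows version.

```
Windows Registry Editor Version 5.00

; Illustrative fragment: set the generic IDE/ATAPI drivers to start at
; boot (Start = 0) so the guest can find an IDE boot disk after the move.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\atapi]
"Start"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\intelide]
"Start"=dword:00000000
```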


Anonymous said...

So how do I convert a Windows 7 VM from VMware to Hyper-V?
I converted it using SC VMM 2012 R2 and Microsoft Virtual Machine Converter and everything failed. The new VM on Hyper-V cannot load the system. I think there is a problem with SCSI and IDE after conversion.

Thanks for help :(

BrianEh said...

What you have hit is the classic hardware change scenario.
On ESX all disk devices are SCSI.
And most likely the conversion utility created a Generation 1 virtual machine for you, which has an IDE boot device, not a SCSI boot device.

I would actually suggest that you try running Disk2VHD within the VMware VM.

MSFT has never had a great conversion story. And part of that is due to the technical issue that artifacts get left behind from the previous platform / hardware.

It is by far the safest and longest-lived approach to install a clean OS on the new platform, then migrate any applications.
I can tell you that from years of experience.