
Friday, February 1, 2013

Reincarnating - Exporting - Importing an Azure Virtual Machine and Service

There are times when I really enjoy having to figure stuff out.  Then I have to do it again, and then for a co-worker.  Before you know it, I have to do it in a really serious way.

You could also describe this as Exporting and Importing an entire multi-machine Service.

Why might you want to do this?

  • Moving a Service from one Subscription to another.
  • Moving a Service from one Storage Account to another.
  • Rebuilding a Service in place to force the machines to new hardware.
  • Recover a ‘misbehaving’ machine.
  • Fixing / changing configurations of the Virtual Network or subnet (okay, you would have to modify the XML files, but still).
  • Bringing your machine / service configuration down to earth (without some other tool).

In this case there are the traditional IaaS / datacenter reasons, of which there are a few.  But from the PaaS side there might be lots of reasons.  Coming from a datacenter background, the PaaS thinking around this simply seems foreign and strange, but it really is the ‘cloud’ way of thinking about it – when you are working at an abstracted layer.

In my case, this script began with having to destroy and reincarnate Virtual Machines in Azure (over and over again, for various reasons).  Then I had to move my entire Service to an entirely different Account, and thus Subscription, so the script evolved to handle that too.

To complicate things I use Virtual Networks (the Azure software defined networking abstraction) and its Subnets.

Why would you need to reincarnate a Virtual Machine?  (And why are you using this reincarnate word?)

I love the term ‘reincarnate’ when describing changing the configuration of an Azure machine.  Given Azure’s PaaS beginnings, it is a fitting description of what actually happens: the virtual disk stays intact; only the configuration and the deployed machine go away.

For example, you want to move a VM to a different Virtual Network, or Subnet.  You need to destroy it and make it again.  If you want to change the Affinity Group, you need to destroy and make again.  If something on Azure is going all pear-shaped – what do they recommend?  Delete and recreate.

What happens when you reincarnate?  Well, for starters you end up on a different stamp.  Your machine will literally be some place else in the infrastructure.  If you are like me and constantly play with Beta releases, this can be a great thing (or a not so great thing).

Anyway, let’s get to preserving my script where I can find it in the future when I need it again.  The other thing this will demonstrate is walking through various things in Azure and their dependency chains.

 

#Azure cmdlets v0.6.9

Import-Module -Name Azure

# Download the settings file.  This only needs to be done once, until the certificates expire.
# Get-AzurePublishSettingsFile # (this opens an IE window.  Logon.  A certificate is generated in all accounts that your Microsoft Account has access to and embedded in the settings file.  Download the settings file.)

Import-AzurePublishSettingsFile 'C:\Users\Public\Documents\MyServices.publishsettings'

###### start Azure Subscription Selector  ######
Get-AzureSubscription | ft
$sourceSub = [string](Read-Host -Prompt "Type the name of the SOURCE Subscription")
Select-AzureSubscription -SubscriptionName $sourceSub

Get-AzureStorageAccount | ft
$sourceStor = [string](Read-Host -Prompt "Type the name of the SOURCE Storage Account")
$sourceStorKey = Get-AzureStorageKey -StorageAccountName $sourceStor
Set-AzureSubscription -SubscriptionName $sourceSub -CurrentStorageAccount $sourceStor

Get-AzureStorageAccount | ft
$targetStor = [string](Read-Host -Prompt "Type the name of the TARGET Storage Account")
$targetStorKey = Get-AzureStorageKey -StorageAccountName $targetStor

Get-AzureSubscription | ft
$targetSub = [string](Read-Host -Prompt "Type the name of the TARGET Subscription")
Select-AzureSubscription -SubscriptionName $targetSub
###### end Azure Subscription Selector ######

###### start Azure Service VM export ######
Get-AzureService | ft ServiceName, Description, AffinityGroup, URL
$sourceSer = [string](Read-Host -Prompt "Type the name of the SOURCE Service")
Get-AzureVM -ServiceName $sourceSer | ft
$myVm = Read-Host -Prompt "what is the VM name you want to export?  (type '*' for all)"

If ($myVm -eq "*") {
    $path = 'C:\users\public\Downloads\azure\' + $sourceSer
    if ((Test-Path -Path $path -PathType Container) -eq $false) {New-Item -ItemType directory -Path $path}
    Get-AzureVNetConfig -ExportToFile ($path + "\vnetconfig.xml")
    Get-AzureVM -ServiceName $sourceSer | foreach {
        $vmpath = $path + '\' + $_.Name + '.xml'
        Export-AzureVM -ServiceName $sourceSer -Name $_.Name -Path $vmpath
    }
}
else{
    $path = 'C:\users\public\Downloads\azure\' + $sourceSer 
    if ((Test-Path -Path $path -PathType Container) -eq $false) {New-Item -ItemType directory -Path $path}
    Get-AzureVNetConfig -ExportToFile ($path + "\vnetconfig.xml")
    $vmpath = $path + '\' + $myVm + '.xml'
    Export-AzureVM -ServiceName $sourceSer -Name $myVm -Path $vmpath
}
###### end Azure Service VM export ######

##### Start Service and Deployment deletion #####
$vms = Get-AzureVM -ServiceName $sourceSer

###### start Azure Service Removal ######
If ($myVm -eq "*") {
    foreach($vm in $vms){
        Remove-AzureVM -ServiceName $sourceSer -Name $vm.name
    }
    Start-Sleep 10
    Remove-AzureDeployment -ServiceName $sourceSer -Slot Production -Force
    Start-Sleep 10
    Remove-AzureService -ServiceName $sourceSer -Force
}
###### end Azure Service Removal ######

# Remove-AzureDisk to de-register from the VHD Library and remove the lease before the blobs can be copied to another Storage account
If ($myVm -eq "*") {
    foreach ($vm in $vms) {
        $osDisk = Get-AzureOSDisk -VM $vm
        $dataDisk = Get-AzureDataDisk -VM $vm
        If ($osDisk -ne $null){ Remove-AzureDisk -DiskName $osDisk.DiskName }
        Foreach ($dat in $dataDisk){
            If ($dat -ne $null){ Remove-AzureDisk -DiskName $dat.DiskName }
        }
    }
}

##### end Service and Deployment deletion ######

###### start Azure VM VHD copy ######


At this writing this step is all hand-wavy, because the PowerShell cmdlets don’t yet support copying blobs between Storage Accounts.  I used a third-party GUI that did the copying using the Azure Storage API.

 

## Start VHD registration with the VHD Library ##
Add-AzureDisk -OS Windows -MediaLocation 'http://mydemo.blob.core.windows.net/vhds/myDemo-Foo-2013-1-8-762B.vhd' -DiskName 'myDemo-Foo-0-2013010819464B' -Label Foo
Add-AzureDisk -OS Windows -MediaLocation 'http://mydemo.blob.core.windows.net/vhds/MyDemo-Bar-2013-1-8-761B.vhd' -DiskName 'myDemo-Bar-0-2013010819450B' -Label Bar
## End VHD registration with the VHD Library ##

###### end Azure VM VHD copy ######

 

##### start create new Service / VMs #####
# Query the new Service name.  (Note: the export folder on disk is named after the
# SOURCE Service, so for a move, rename or copy the folder to match the target.)
$targetSer = [string](Read-Host -Prompt "Type the name of the TARGET Service")
# $targetSer = $sourceSer  # for reincarnation in place

$vms = @()
$path = 'C:\users\public\Downloads\azure\' + $targetSer
$vmsToImport = Get-ChildItem $path

foreach ($vm in $vmsToImport) { 
    if ($vm.Name -match 'vnetconfig'){
        [xml]$vNetConfig = Get-Content -Path $vm.FullName  # keep the config in memory in case it needs modification later
        Set-AzureVNetConfig -ConfigurationPath $vm.FullName -ErrorAction Continue  # when adding a VNet to a Subscription with an existing VNet, this can error falsely.  Check that the VNet exists after this step, before adding the VMs.
    }
    else {
        $vms += Import-AzureVM -Path $vm.FullName
    }
}

# Choose the VNet to properly fill the following.  This is at the Subscription level
Get-AzureVNetSite | ft Name, AddressSpacePrefixes, AffinityGroup, Subnets, DnsServers, InUse
$myVNet = Read-Host "What is the Name of your target Virtual Network for the Domain Controller / DNS Server?"
$myVNet = Get-AzureVNetSite -VNetName $myVNet
# no need to query the subnet because the VM configurations contain that.

If (Test-AzureName -Service $targetSer){
    # the Service name is already in use - assume it is ours and import into the existing Service
    New-AzureVM -ServiceName $targetSer -VMs $vms -VNetName $myVNet.Name
}
else {
    # the name is free - assume creating a new Service
    # (specify -Location OR -AffinityGroup, not both; I did not query these)
    New-AzureVM -ServiceName $targetSer -VMs $vms -VNetName $myVNet.Name -AffinityGroup "SorryIDidn’tQuery"
}

##### end create new Service / VMs #####

Wednesday, July 1, 2009

Linux vm from VMware to XenServer the videos

If you have been following, you will note quite a few posts related to importing / migrating Linux virtual machines from VMware to XenServer.

I realize that many folks don’t use XenServer – but the basic steps of repairing after migration apply to Hyper-V just as well as to XenServer – the steps are the same if you want your VM to boot. However, PV enablement does not exist on Hyper-V; you just need to install the VM tools.

Here are the links in case you missed them:

I have turned three of these into short (less than 10 minutes) video presentations, just to add a bit more information than in the articles.

Thursday, April 16, 2009

Migrating SLES from VMWare ESX to XenServer

For SuSE Linux Enterprise Server I require a “Helper” virtual machine to mount and repair the file system, because the SLES recovery console does not include an editor.

After migrating SuSE, booting fails in the boot loader at: “waiting for device /dev/sda2.” This is expected, because /sda refers to a SCSI bus, and on XenServer SuSE actually sees an /hda (IDE) boot device.

[screenshot: boot hangs at “waiting for device /dev/sda2”]

The Helper VM can be created using the XenServer provided Debian Etch template virtual machine (this template includes the media, making it practically ready to go). The included Debian distribution also works with the SUSE reiserFS that is installed by default.

It is also of note that SuSE has a full Xen aware kernel and can be further optimized by presentation of the boot devices as Xen Virtual Disks and by loading a paravirtualized kernel. These optimizations are outside of this article; this is specifically focused on having a running virtual machine.

Import the VM to XenServer:

In my examples I am using XenConvert 2.0 to consume the VMware OVF virtual appliances, however Citrix Project Kensho can also be used.

Creating the Helper virtual machine:

In XenCenter select VM -> New

Choose ‘Debian Etch 4.0’ as the template (this template provides the installation template plus the operating system, nothing to download).

Name the virtual machine “HelperVM” and complete the New VM wizard accepting the defaults, allow the VM to boot, and open the console of this VM.

At the console of HelperVM enter a new root password, VNC password, and a host name (‘HelperVM’ is my suggestion).

Mount the SLES virtual disk to HelperVM:

In XenCenter select the SuSE virtual machine, and then select the Storage tab.

Select the virtual disk (make a note of the disk name) and then click Detach.

[screenshot: XenCenter Storage tab – Detach]

*Note: the VM must be powered off to detach a virtual disk.

Select HelperVM, then the Storage tab, and then click Attach.

[screenshot: XenCenter Storage tab – Attach]

Select the SuSE virtual disk from the Storage Repository and click Attach.

[screenshot: selecting the SuSE virtual disk to attach]

Return to the HelperVM console.

Note that HelperVM should have auto-mounted the volume (in this example HelperVM was running when I attached the virtual disk). My example added the controller device xvdc with the partitions of xvdc1 and xvdc2.

[screenshot: xvdc1 and xvdc2 visible in the HelperVM console]

This can also be seen in the Storage tab of XenCenter.

[screenshot: XenCenter Storage tab showing the attached disk]

Return to the console of HelperVM and create a path to mount the volume and mount the first volume.

mkdir /mnt/suse

mount /dev/xvdc1 /mnt/suse

[screenshot: mount output]

Note the error: this partition looks like swap. I will try to mount the other partition, /dev/xvdc2, instead.

[screenshot: mounting the second partition]

Switch to the mounted file system and list to verify that this appears to be the root volume.

[screenshot: listing of the mounted root volume]

Repairing the boot loader:

From this point forward the process is fundamentally no different than repairing Debian or modifying the Grub menu and fstab of any Linux distribution.

I will begin by repairing fstab.

From the root of the mounted SuSE virtual hard disk (/mnt/suse) change to the /etc directory and open the fstab file in an editor.

It should look similar to my nano editor screen below:

[screenshot: original fstab in nano]

I am going to modify the two entries that point specifically to a SCSI presented boot device to IDE.

Previously, in the XenCenter Storage tab for the SuSE virtual machine I observed that the virtual disk was presented on an IDE controller.

The new fstab should resemble this:

[screenshot: modified fstab]
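For repeatability, the same fstab edit can be scripted instead of done by hand in nano. A sketch, run here against a sample file (on the real system the target would be /mnt/suse/etc/fstab, and your partition layout may differ):

```shell
# A sample fstab standing in for /mnt/suse/etc/fstab (the path from my example).
cat > fstab.sample <<'EOF'
/dev/sda2  /      reiserfs  acl,user_xattr  1 1
/dev/sda1  swap   swap      defaults        0 0
proc       /proc  proc      defaults        0 0
EOF

# Rewrite /dev/sdaN -> /dev/hdaN in place, keeping a backup of the original.
sed -i.bak 's|/dev/sda|/dev/hda|g' fstab.sample

cat fstab.sample
```

The backup suffix on sed's -i option leaves the original as fstab.sample.bak in case the edit needs to be reverted.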

Now, to proceed to the Grub boot loader menu.

One way to approach this is to copy an existing entry to a new entry and make the necessary modifications to the new entry. In this example I am modifying the existing entries for the new hypervisor.

Change to the /boot/grub directory ( cd /mnt/suse/boot/grub )

And open menu.lst in an editor.

[screenshot: menu.lst in nano]

Find the entries that refer to /dev/sdaX and change them to /dev/hdaX. In the screenshot above this is /dev/sda2.

[screenshot: modified menu.lst]

Then save the modifications.
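The menu.lst change is the same substitution. Another sketch against a sample file (the real file is /mnt/suse/boot/grub/menu.lst; the kernel line shown is merely illustrative of a typical SLES entry):

```shell
# A sample Grub menu entry standing in for /mnt/suse/boot/grub/menu.lst.
cat > menu.lst.sample <<'EOF'
title SUSE Linux Enterprise Server
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/sda2 resume=/dev/sda1 splash=silent
    initrd /boot/initrd
EOF

# Rewrite the SCSI device paths to IDE, keeping a backup.
sed -i.bak 's|/dev/sda|/dev/hda|g' menu.lst.sample

# Confirm the kernel line now points at the IDE device.
grep 'root=' menu.lst.sample
```

Note that the Grub `root (hd0,1)` line is unaffected: Grub's own disk naming is BIOS-relative and does not care whether Linux later calls the disk sda or hda.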

To safely continue I need to un-mount the SuSE virtual disk from the HelperVM.

Return to the root of the file system ( cd / ) and use the umount command to un-mount the xvdc2 device ( umount /mnt/suse ).

[screenshot: un-mounting the volume]

To continue to repair the SuSE virtual machine, the virtual disk needs to be detached from the HelperVM and attached back to the SuSE virtual machine.

Mount the SLES virtual disk to the SuSE VM:

Begin by shutting down HelperVM.

[screenshot: shutting down HelperVM]

Select the Storage tab of HelperVM. Select the SuSE virtual disk and select Detach.

[screenshot: detaching the SuSE virtual disk]

Select the SuSE virtual machine, select the Storage tab, and click Attach.

[screenshot: attaching the virtual disk to the SuSE VM]

Select the correct virtual disk and Attach.

[screenshot: selecting the virtual disk]

Open the console of the SuSE virtual machine and power it on.

Additional repairs:

As with other Linux distributions, if X server was used to present a graphical console it will require repair due to the capabilities of new video devices.

[screenshot: X server failure on boot]

X server is then disabled.

[screenshot: X server disabled]

To repair X server, log on as root and run SaX2.

At the command prompt execute the command sax2 -f

At the completion of the wizard X server can be started by executing startx or rebooting.

The one thing that you will notice is that there is no mouse support. Setting up a VNC server within the virtual machine and connecting to a graphical console over VNC resolves this.

Friday, April 10, 2009

Migrating VMs between Hypervisors – the hypervisor

The key items to be aware of with migration of Linux virtual machines between hypervisors are:

1) What controller does the respective hypervisor present as the boot controller

2) How are other devices emulated (video, mouse, keyboard, NIC)

3) Are virtual machine tools installed from the source hypervisor

a. Do these present custom devices or device drivers

4) Is the Linux virtual machine using a paravirtualization aware kernel.

The most critical of these considerations for successfully recovering a virtual machine without reinstalling is the presentation of the disk interface. The operating system can be repaired by modifying the boot devices identified in the boot loader and in the file system table.

The presentation of the boot disk can be very different between hypervisors. This is analogous to removing a hard drive from an IDE controller and attaching it to a SCSI or ATA controller and expecting the installed operating system to boot and run properly. With a Windows operating system the usual result is the Blue Screen of Death (BSOD) with a 0x0000007B error. This error is generally associated with a critical boot device driver not being available for the bus the disk is on.

With Linux operating systems the issue is not a critical boot device driver, but rather the path to the /root partition. The most common Linux boot loader is Grub, and depending on the distribution, Grub has the path to root defined in the boot loader entry. Another item used by many Linux distributions is the file system table, or fstab. fstab is simply a file that maps operating system volumes such as root, swap, and var to their locations. All of these entries can be edited after mounting the installation volume.

The most resilient of the operating systems to migration is RedHat. The current RedHat distributions appear to use virtual pointers to the disk volumes. Instead of /hda for IDE 0 or /sda for SCSI 0, I notice /VolGroup00, a more generic descriptor for the first boot disk.
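One way to see this difference is to extract the device paths a guest's fstab actually depends on. A sketch using a sample RedHat-style fstab (the file name and entries here are illustrative, not taken from a real system): the LVM volume group names survive a controller change, while the direct /boot entry does not.

```shell
# A sample RedHat-style fstab: root and swap sit behind LVM names,
# and only /boot references a controller-specific device directly.
cat > fstab.rhel.sample <<'EOF'
/dev/VolGroup00/LogVol00  /      ext3  defaults  1 1
/dev/sda1                 /boot  ext3  defaults  1 2
/dev/VolGroup00/LogVol01  swap   swap  defaults  0 0
EOF

# List every /dev path the boot configuration depends on.
grep -o '/dev/[A-Za-z0-9/]*' fstab.rhel.sample | sort -u
```

Anything in that list that names a bus-specific device (sda, hda) is a candidate for repair after migration; the VolGroup entries can be left alone.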

Virtual machine tools may have secondary impacts and errors related to supporting applications such as XWindows or VNC Server that are running within the virtual machine. There are also downstream situations where a particular set of virtual machine tools has a negative impact on a new environment (i.e. the VMware tools have a negative performance impact when they run within a VM hosted on Hyper-V or XenServer).

In the case of a paravirtualized kernel the result depends on the host system being migrated to. The result could be that the VM continues to function, or the VM could fail to boot with a kernel error because the new host does not support the kernel of the previous host. This is a more complicated case and will need to be considered on a case by case basis.

Differences between hypervisors:

From a user perspective all hypervisors provide similar services – they host and run virtual machines. However, the architecture and implementation of each is very different.

The remainder of this writing focuses on how a hypervisor presents virtual disks to a virtual machine. If a virtual machine cannot boot after migration, the exercise must be considered a failure.

Hyper-V

The Hyper-V model is that the booting volume of an operating system must be installed on a virtual IDE controller. For a Windows virtual machine this is what we commonly refer to as the C:\ or C:\Windows volume. In the Linux world, this is where the boot loader and the root volume reside.

Therefore, when Hyper-V is the source an IDE interface can be assumed.

VMware ESX

The ESX model has always been SCSI. All disks of a virtual machine appear to the installed operating system as residing on a SCSI controller. Historically, this presented installation challenges as some operating systems, such as Windows XP and Windows 2003, did not include a compatible SCSI driver.

Only in vSphere 4 is this model changed, to allow the option of IDE. However, in the Release Candidate the default controller is still SCSI.

XenServer

In the XenServer world the disk devices can begin to appear very strange. XenServer virtual machines can see disk devices as IDE, SCSI, or a SCSI-like controller. Much of this behavior is defined by the template that was used to create the virtual machine, combined with the paravirtualization awareness of the OS.

A Windows operating system will behave as if the boot device is IDE. However, a Linux operating system can identify a variety of devices such as /hda, /sda, /xvda, and /QEMU.

Moving virtual Machines in practice

In practice a Windows virtual machine will move between XenServer and Hyper-V rather easily. There is usually the recognition of new devices on the receiving platform, but generally nothing that cannot be repaired, nor prevents booting the operating system.

From or to VMware was the historic problem because of the need to change from a SCSI controller to something else. For Windows operating systems this requires the injection of a critical boot device driver that supports the new platform.

Saturday, March 14, 2009

Migration types de-mystified

Recently I have been trying to help folks out with understanding the infrastructure required for various types of migration using SCVMM and Hyper-V.

There is Network Migration, Quick Migration, SAN Migration, and soon - Live Migration.

Most people get confused when they start talking about infrastructure and what is required for each to work. Then someone mentions the VDS Hardware Provider and Windows Storage Server, and the discussion usually goes to 'Why do I require Storage Server? I don't get it.'

My quick and dirty response is:

The hitch is SAN Migration. Quick / Live Migration is easy - Failover Clustering does that.

SAN Migration requires that you have a SAN and SAN management software that hooks into VDS (the Virtual Disk Service).

You install the SAN management agent on the SCVMM server.

It is here where most folks begin talking about Storage Server - as it has a VDS Hardware Provider (SAN management agent) that is VDS capable.

Like I mentioned - Quick / Live Migration is easy - it is built in, it is SAN Migration that requires infrastructure.

Now, let me get into the greater details of each type of migration and who is performing it. (yes, these are marketing terms)

Network Migration and SAN Migration are specific to SCVMM.

Network Migration is the act of copying a VM over the wire, using BITS between two points (either between the Library and a Host, or a Host and a Host).

SAN Migration is the act of moving a VM between two points by detaching and reattaching a LUN (Library / Host or Host / Host)

The requirements for SAN Migration are: one VM per LUN; a SAN that can be managed by SCVMM through VDS Hardware Provider software installed on the SCVMM server; and all entities able to talk to the SAN. SAN Migration generally involves Fibre Channel SAN connections.

Quick Migration and Live Migration are specific to Hyper-V.

Both use Failover Clustering to move a VM between two Hosts. (The SCVMM Library is not involved at all). All the requirements of Failover Clustering apply (shared storage, similar config, similar hardware, etc.).

In both cases the VM must be managed by Failover Clustering (which is included in all flavors of Hyper-V); this is also referred to as making the VM Highly Available.

Quick Migration is available with the v1 of Hyper-V. When a Quick Migration is triggered, Failover Clustering saves the VM, moves the ownership to the failover host, then starts the VM on the new host.

Live Migration will be available in the R2 release of Hyper-V. It is very similar to Quick Migration except that the VM is not saved - it is running during the entire operation. I am not going to go into the details in this post.

I hope that helps a few folks with clearing up the confusion between the terms, the high level technicals, and the infrastructure that you might need.