A lot of people don't realize that Citrix XenDesktop supports more than XenServer as a backend.
In fact, it will support any backend infrastructure (or infrastructure-less) environment that has a plug-in. XenDesktop ships with plug-ins for XenServer (of course), SCVMM, and Virtual Center.
Since most folks don't know that SCVMM support is available, I took some time to put a little something together. I even have my VMs running on Hyper-V.
One caveat: your XenDesktop Desktop Delivery Controller server must be 32-bit.
Working with the "back-end" of I.T. systems. Learn. Apply. Repeat.
The things learned as an IT Professional turned software tester, researcher, and product manager.
Wednesday, April 30, 2008
Friday, April 25, 2008
Hyper-V + Failover Clustering .. the fight for the managed workload
Wow, I just had quite an experience with my Highly Available Hyper-V virtual machine.
First of all, the HA feature with Hyper-V using Failover Clustering works REALLY well.
Second, I have to spend a bit more time thinking about what the heck I am doing before I go nuts again!
Okay, here I am, playing with snapshotting - taking snapshots, reverting, etc. Trying to document how things work and change.
Taking snapshots was a no brainer, everything worked fine.
When I deleted a snapshot I diligently shut down the VM and started monitoring the volume, waiting for the merge to happen. I stared at the screen for 20 minutes, watching, switching back to the Hyper-V console, watching the volume.
Finally, I tried to refresh the volume - boom! The volume is gone. Oh, cr**! iSCSI must be having problems - poke, troubleshoot, poke some more.
Suddenly, in a fit of frustration - duh, I have failover clustering set up. Check failover clustering - my VM is now running on Host 2. GAAA!!!
Okay, move the workload back. Get the merge to happen properly.
Now, revert. I do a revert - boom, communication to the VM is lost (since the host is serving up the console over RDP). I check Failover Clustering again - there is my VM on the other host again, and not properly reverted either.
Wow. The things to think about now.
Microsoft has done a brilliant job of using other Windows Server 2008 features WITH Hyper-V (think about it, Hyper-V is not much at all without all of the WS08 and System Center add-ons), but the complexities of interplay between these components are not for the admin with a weak constitution.
Keep those skills up to par and be the admin that thinks out of the box.
Thursday, April 17, 2008
Determining versions under Hyper-V
[Update: Just to let everyone know, the method below still works; however, over time the various drivers have drifted and the release builds are no longer consistent among the various components on one side. By that I mean that all the server-side components might not be at the same build number, and all the client-side drivers may not be at the same build number. What should be consistent is that the build of the server service and its corresponding client driver should align. So, just think.]
Every now and then we need to determine the release level of a component that we are running.
In the case of Hyper-V, if you have the role installed you can check the version of vmms.exe
There is the GUI way:
Browse to \windows\system32\vmms.exe and check Properties -> Details tab
There is the command line way:
wmic datafile where name="c:\\windows\\system32\\vmms.exe" get version
You should see a build version either way - here is the breakdown:
Beta Version of Hyper-V installed == 6.0.6001.17101
RC0 version of Hyper-V installed == 6.0.6001.18004
Now, suppose you don't have the Hyper-V role installed and you want to check the release level of the Integration Components. (This also works within a VM to check the version installed in that VM.)
We just look at different files.
Open Device Manager and find one of the VMBus devices (VMBus Network Adapter, VMBus HID Miniport, VMBus Video Device), open its properties, and on the Driver tab check the driver version.
The version will end in 18000 if you are running Beta Integration Components
The version will end in 18004 if you are running the RC0 Integration Components
...
The version will end in 18016 if you are running the RTM Integration Components
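As a side note, the build-number breakdown above lends itself to a small lookup helper. This is just an illustrative sketch in Python (the function name and the script itself are mine, not any Microsoft tooling); the version strings are the ones listed above:

```python
# Map the final build-number component to the Hyper-V release it indicates.
# Build suffixes are the ones listed above; anything else is unknown.
BUILD_SUFFIX_TO_RELEASE = {
    "17101": "Beta",  # Beta vmms.exe
    "18000": "Beta",  # Beta Integration Components
    "18004": "RC0",   # RC0 vmms.exe and Integration Components
    "18016": "RTM",   # RTM Integration Components
}

def hyperv_release(version):
    """Given a full version string such as 6.0.6001.18004,
    return the corresponding Hyper-V release level."""
    suffix = version.rsplit(".", 1)[-1]
    return BUILD_SUFFIX_TO_RELEASE.get(suffix, "unknown")
```

Feed it the version from the wmic query (or the driver version from Device Manager) and it tells you which release you are on.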
If you need to update Hyper-V or its integration components under Windows Server 2008, install patch KB949219 to move to the RC0 release. To update any other supported guest, insert the Integration Services Setup Disk from the Connect client window and let autorun do its work (assuming a Windows guest).
Thursday, April 10, 2008
Hyper-V RC0 Patch is on Windows Update
Yay, the Hyper-V RC0 patch is available on Windows Update!
Okay, stop now, relax, and think.
What does this mean to me as an admin?
If you have built a nice fresh and clean install of Windows Server 2008 AND have not put any virtual machines on it - go ahead, patch freely and with reckless abandon.
If you are running Hyper-V as the WS08 RTM and have running virtual machines - stop!
Read this most excellent post from John Howard (all of it).
In a nutshell: if you have snapshots, delete them now (this merges the snapshots into the base VHD), shut down the VM, and wait for the snapshots to be merged into the base VHD.
This can take a while, so you have to monitor the AVHD files of the VM - when they are all gone (poof, they disappear when they get merged in), then you have a VHD you can walk away with.
The snapshots are not 100% compatible between the RTM and RC0.
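If you would rather not stare at the folder, a tiny polling loop can watch for the AVHD files to disappear. A minimal sketch in Python, assuming you point it at the folder holding the VM's disks (the function name and poll interval are mine):

```python
import glob
import os
import time

def wait_for_merge(vm_dir, poll_seconds=10):
    """Poll vm_dir until every .avhd differencing disk has been
    merged away (the files vanish when the merge completes)."""
    while True:
        remaining = glob.glob(os.path.join(vm_dir, "*.avhd"))
        if not remaining:
            return
        print("still merging: %d avhd file(s) left" % len(remaining))
        time.sleep(poll_seconds)
```

Run it after shutting the VM down; when it returns, the base VHD is the one you can walk away with.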
Document any non-default network settings you have in your VMs (the host, too).
The networking is upgraded in this release AND you will end up creating new NICs within your VMs, which means hardware detection kicks in and gives you a new NIC with default settings.
(And if you have an application installed in a VM that binds itself to the MAC address - make sure you know that MAC address so you can set it again)
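One low-tech way to capture those settings before patching is to dump them to a file you can read back afterwards. A sketch, again in Python; the adapter name and values below are hypothetical placeholders you would fill in from ipconfig /all:

```python
import json

def save_nic_settings(path, adapters):
    """Persist per-adapter settings so they can be re-applied after
    hardware detection hands you new NICs with default settings."""
    with open(path, "w") as f:
        json.dump(adapters, f, indent=2)

# Hypothetical example values -- record your own from ipconfig /all.
save_nic_settings("nic-backup.json", {
    "LAN": {
        "ip": "192.168.1.10",
        "mask": "255.255.255.0",
        "gateway": "192.168.1.1",
        "mac": "00-15-5D-01-02-03",
    },
})
```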
After the VMs are ready and shutdown you can patch the Host.
After reboot you may need to recreate a VM configuration file, recreate virtual switches, or attach a VM network adapter to a new virtual switch.
As the hypervisor and the virtualization stack are modified, the GUIDs assigned to devices change. In the Windows world we know that if you move a NIC from slot A to slot B, we end up with a new device in Device Manager (a new device GUID).
The same thing is happening under the hood in this case. Only in this case you are left pulling your hair out.
Oh, and if your VMs won't boot after the upgrade you might want to explore this TechNote:
http://support.microsoft.com/kb/949222/
Wednesday, April 2, 2008
Networking under Hyper-V (who moved my network settings?)
There is a lot of discussion around networking and network changes on a Windows Server 2008 server after the Hyper-V role is installed.
Part of this issue goes back to what happens (what is done) to your WS08 server when the Hyper-V role is installed.
As I mentioned in a previous post, when the Hyper-V role is installed the WS08 server is fundamentally changed. As an administrator you log in to a console and you see WS08 - it looks like nothing changed; however, it has.
Ben Armstrong has a quick posting here that gives some basics.
Let's take a more architectural look at the server and what has happened / is happening.
During the process of adding the Hyper-V role the WS08 installation was turned into a virtual machine itself and it is running on top of the hypervisor as the parent partition. Unlike other hypervisors where you see a Linux based console, in this case you see your WS08 server - a nice, friendly GUI interface that really does not look any different than it did before.
Now, what does this have to do with the networking? It has to do with modifications that were made to the network interfaces of the WS08 server when the Hyper-V role was installed.
Quite honestly, the modifications that are made are no different than what happens when modifying any other Windows server in a similar way.
Most likely, if you set a manual IP address (or any manual settings), they were lost.
If you look at Network Connections in the WS08 server you notice new Virtual Network Adapters and your original network connections were changed.
Thinking back: your WS08 server was turned into a VM (its hardware was changed) - the original network connection (which you can still see) was turned into a virtual switch (the WS08 server no longer owns that NIC).
Your WS08 server (WS08 parent partition - that is what it is now) was given new virtual network card(s). And as with adding any new NIC to any Windows server it gets the default settings (DHCP for example).
Now, there is also talk about performance (the parent partition performance is terrible, but a VM runs great - or the other way around).
Now we have to begin looking at the NIC driver and driver configuration.
First of all, good old TCP offloading, long a performance issue in many Windows environments, might need to be turned off. Mind you, this issue seems to be environment specific.
The other is the NIC driver itself. You are pretty safe using the included WS08 drivers.
Some troubleshooting questions:
Did you install a non-Windows delivered driver?
Did you install a teaming driver?
Did you configure teaming?
Was there management software installed with the teaming driver or was only the driver installed? (some experience has shown that the driver itself might be fine but the management software causes problems - as it is trying to monitor a NIC that the parent partition no longer owns)
My goal with this post was to help an administrator understand what is going on, and with that where to look to solve his/her problems.