Friday, December 24, 2010

Fabric is infrastructure

Azure is this platform. It is a bunch of VMs, but it isn't a bunch of VMs. There are roles and instances of roles (which are technically VMs).

Then there is this mysterious thing called 'fabric'. The fabric is the secret sauce, it is the thing that makes it all work.

For those of us who have been in enterprise IT - we know all about fabric. We have built it, we have managed it, we have fixed it.

This is not an Azure-specific term, by the way – fabric is infrastructure, and you will see the term appearing in more and more places. The Azure fabric is a bit more than that, but expect to encounter 'fabric' more and more frequently in the future.

In the simplest sense, fabric is infrastructure. It is the server and the hypervisor that runs on it. It is the storage and storage management. It is the network: the load balancers, the subnets, the VLANs. It is also the management layer that brings all of these things together.

It is this management layer that provides the real value - the orchestration of events: provisioning storage, placing a VM on a hypervisor, booting that VM, configuring its firewall and networking, installing / instantiating / injecting / inserting / configuring the application, setting the virtual interfaces of the load balancers, and verifying the "health" of your application.

All of this is what the fabric does and it is really valuable, important, and useful.
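To make that sequencing concrete, here is a toy sketch of the orchestration flow in PowerShell. The function names are my own invention for illustration – this is not any real fabric API – but the ordering is the point: each step depends on the one before it.

```powershell
# Illustrative only: these function names are invented, not a real fabric API.
# Each step records itself in $script:log so the ordering is visible.
$script:log = @()

function Provision-Storage      { $script:log += 'storage' }
function Place-VM               { $script:log += 'placement' }   # pick a hypervisor
function Boot-VM                { $script:log += 'boot' }
function Configure-Network      { $script:log += 'network' }     # firewall, VLAN, IP
function Install-Application    { $script:log += 'app' }
function Configure-LoadBalancer { $script:log += 'lb' }
function Test-ApplicationHealth { $script:log += 'health'; $true }

# The fabric controller's job is the sequencing:
Provision-Storage
Place-VM
Boot-VM
Configure-Network
Install-Application
Configure-LoadBalancer
$healthy = Test-ApplicationHealth

"{0} steps orchestrated, healthy={1}" -f $script:log.Count, $healthy
```

The real fabric controller does all of this across thousands of nodes, and re-runs it when hardware fails – that is the part no toy sketch captures.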

If you want to geek out (in an IT Pro sort of way) and learn more, there are presentations by Mark Russinovich from PDC 2010 describing the entire thing in deep detail.

Thursday, December 16, 2010

Pale Blue Cloud that is Azure, part 1: VM Role

Lately I have had my head in the Microsoft Cloud – Azure to be specific.

It is pale blue and pretty.  It scales, and it is definitely intended for developers.

I know there has been lots of speculation about Azure, the VM Role, and the possibility of Infrastructure as a Service.  I can tell you that the VM Role is not intended to let a person run their entire datacenter in the Azure Public Cloud. 

On the other hand, once you work things out – the VM Role is actually pretty cool but not always necessary.  In fact, unless you have a total lack of creativity you really don’t need the VM Role for doing much.  The Web Role and the Worker Role have a great deal of flexibility. 

Now, this sounds like marketing speak because it comes across as really easy – it isn’t – but I will tell you that if you can write code and understand installation and deployment, you can do most of what you might want to with Worker and Web Roles.  There are lots of options available – and they are very traditional deployment options – but you have to structure them in code, so you must really understand your application and your application dependencies.

This is really where the VM Role comes into play.  It is designed for the scenario where you have a legacy application that you want to get into the cloud but it is too costly or not feasible to re-write it specifically for Azure.  Now, that being said – Azure has some really cool features that developers can use to make really big and really resilient and really grand applications.  But the reality is that software actually lives for a long time.  And not everyone upgrades to the latest and greatest version.  And, it takes time to re-write an application – it can take over a year for an enterprise application to go through a revision – let alone a major re-write for a new platform.

I plan on going into some of the features of Azure that will be of interest to IT Pros – as most likely at some point in the future you will come into contact with an Azure application.  Azure is not the exclusive playground of developers; they might think it is, but it really isn’t.

I hope that at some time in the future I can share the interesting things that I am playing with.

In the meantime – if you want an easy Getting Started with the VM Role guide – here it is:

There is no reason for me to write information a second time that already exists (and is pretty decent).

Thursday, November 18, 2010

What is the Private Cloud?

What is this “Private Cloud” thing anyway?

The simple definition is that it is a pool of compute resources that runs on hardware that you own (it is in your datacenter). 

But, that just means that I am running virtual machines, right?

Not exactly.  It means that there is some type of automation framework in front of the hypervisors and the virtual machines that is providing some type of orchestration.  This could be XenServer + VMLogix or Hyper-V + SCVMM.

At the same time, virtual machines are not required – but it is easier with virtual machines.

HPC from Microsoft, the old fashioned mainframe, the Cray – these are examples of private clouds that have been around, performing massive calculations, they are orchestrated; they meet the definition.  But they require custom written applications so they work well in very specific situations.

The recent “Private Cloud” term tossed out by Microsoft I think can be summarized in this way:

  • Private Cloud is owned by you.
  • Private Cloud is dynamic (in some way)
  • Private Cloud is at least Hyper-V (but it can be more as you need it to be).
  • Private Cloud has OEM reference hardware designs (these are tested and known to work, but they are not the only option)
  • Private Cloud is NOT all boxed up and sitting on a shelf somewhere. (it is not a SKU that you purchase)
  • Private Cloud is not defined by a specific set of prescribed technologies.
  • Private Cloud must be pieced together by you (the architect, the enterprise, the planner – you must know what your requirements are).
  • Private Cloud is NOT Microsoft only technologies.

Private Cloud is your enterprise datacenter, enabled to be nimble and adaptive through technologies that make it dynamic.  Some of these technologies are: Hyper-V, SCVMM, the VMM Self-Service Portal (SSP), Operations Manager, Configuration Manager (basically any of the System Center family added on top of your infrastructure that happens to be running as virtual machines).

In the Citrix world I could describe the same thing, or I could have a mash-up of Citrix products with System Center products.  The end result would be the same – a Private Cloud.

This is where the big disconnect – and all of the current confusion – is happening.  Private Cloud is a term, a term that summarizes a conglomeration of technologies working together to make an IT shop more efficient, more nimble, more self-service.

Wednesday, November 17, 2010

Cloudy in the Private Public and Hosted worlds

Today there was a thread in the forums that brought up the new buzz word: “Private Cloud”

There is currently a lot of hype, and plenty of questions and confusion, about this term – which is not a new term, by the way, since it was used at TechEd 2010 in Berlin.

The original poster appears just as confused as the majority of folks as to what this term might mean.  And in the industry so far this seems to be the current state of things.  The question:  “What is a cloud?” 

My take on “cloud” is that it is a pool of compute resources that works in some way and it is in some place.  That seems to be the cross vendor understanding of cloud.

Today there are a few “cloudy” terms floating around.  Private, public, and hosted.

Private Cloud is a cloud that you (as an enterprise) own and run.  It is your traditional datacenter, but set up and managed in a way where it does not matter where the application runs – as long as the application runs.

Public Cloud is a cloud that is “out there” and runs at a facility owned by someone other than you.  And the key is that you purchase compute resources on that cloud.  Azure is a really good example of this.  Amazon EC2 as well.

Hosted Cloud is a cloud that is “out there” and runs at a facility owned by someone other than you as well.  The key here is that it is provisioned and billed more like a co-location facility.  You have “machines” that you purchase – not highly generic capacity.  This is more like the Rackspace or upcoming Azure VM Role models.


The most interesting part of the entire thread that caused me to begin this post in the first place is that HPC was mentioned by the original poster.  You know, you could think of HPC as a private cloud.  Just as a good ‘ole mainframe could be a private cloud.

Here is how I responded to the thread – simply to broaden the thinking beyond a simple hypervisor:

I wanted to jump in to this thread as there are two distinctly different technologies being discussed. And since the OP is coming from the education industry both or neither could apply.

The term "Private Cloud" is simply a set of processing resources that you own and therefore control. It is not remote. That is all that we really know today. Beyond that, it is not clear if this is a combination of technologies, a SKU bundle, a pre-installed hardware package, etc.

HPC, Azure Appliance, and Hyper-V + SCVMM are all options that would give you something that you could refer to as a private cloud.

HPC is a platform designed for distributed processing - many individual nodes that provide CPU and contribute back to the primary job. This can be thought of as very mainframe-ish. I over-simplify the concept and think of it as the SETI@Home or Folding@Home model - as it is one large compute job. This requires custom written applications.

Azure is a platform - one that is making inroads in the education realm right now - but it is not private. It is a pool of processing resources that can be scaled out and back on demand. It is HPC-like in concept, but Hyper-V-like in large scale implementation. MSFT owns and manages the hardware, you own the application - you purchase capacity during a slice of time on the Azure platform. This requires custom written applications.

Azure Appliance - this is Azure, but you own it: it is in your datacenter, you house the physical hardware - all the other Azure rules apply. You simply don't pay MSFT for time, as you own the hardware.

Hyper-V + SCVMM - this is the current model of using virtual machines to run enterprise applications. This allows you to run off-the-shelf applications installed on to operating systems - just as you would with physical hardware. Hyper-V is the hypervisor, SCVMM is the management stack for the virtual machines. Adding the VMM SSP 2.0 in front of SCVMM provides additional functionality.

Each one of these fits in different places and for different reasons. And they could all be referred to as "cloud" in some way or another. As it is very unclear what "cloud" really means beyond being a pool of compute resources.

In the end, it really comes down to what you require. Large complex calculations would gravitate to HPC or Azure. More traditional workloads would gravitate toward Hyper-V + SCVMM.

Tuesday, October 5, 2010

Migrating a XenDesktop environment from VMware to Hyper-V

I have had a recent need to migrate a working XenDesktop environment from VMware over to Hyper-V. One thing that is important to this process is SCVMM.

Following are some guidelines that can be used by most anyone who is in this situation.

The (fully) Virtual Environment:

  • XenDesktop 4 Desktop Delivery Controller (Windows 2003 R2, x64, 2 vCPU, 4GB RAM, 1 NIC)
  • Licensing Server (Windows 2008, x86, 1 vCPU, 2GB RAM, 1 NIC)
  • Web Interface server (Windows 2008 R2, 2 vCPU, 4GB RAM, 1 NIC)
  • Provisioning Server (Windows 2008, x64, 4 vCPU, 16GB RAM, 1 NIC)
  • Master VM (the VM where the desktop image is installed, Windows 7, 1 vCPU, 1GB RAM, 1 NIC)
  • PvS Template (the VM template that is cloned to create new Desktop virtual machines)
  • Desktop VM (the actual virtual machines that the Provisioning Server image is streamed to and that run the user's applications)

Manage vCenter from SCVMM:

  1. Use the “Add VMware VirtualCenter server” option in SCVMM to place the vCenter environment under the management of SCVMM
    1. Don’t fear, this does not take full control of the environment, by default it will just enumerate everything.
  2. Update the Security Settings of each ESX host.
    1. Right click each ESX host and add the security settings for the “root” account.
    2. If these are new installations of ESXi, the root password is blank by default; SCVMM does not accept a blank password, so one must be set.
  3. Wait a bit for “bake time” to happen
    1. This is a bit of time for all of the elements in the environment to be enumerated and fully recorded properly.

Store each VM in the SCVMM Library:

  1. Store a VM in the Library
    1. Power Off a VM
    2. Right Click and Select “Store in Library”
      1. This will remove your VMs from the VMware ESX servers – they are moved into the Library.
  2. Set the MAC address of the VM
    1. From the Library View select “VMs and Templates”
    2. Right click the VM and select Properties
    3. Select the Hardware Configuration tab and select the Network Adapter
    4. Change the MAC address assignment to static (note that the Current address is set) and select OK
    5. If the VMs are diskless and boot from a Provisioning Server be sure that the network adapter type is “Emulated”.

Deploy each VM to a Hyper-V hypervisor:

  1. Convert each VM to deploy it on Hyper-V
    1. From the Library View select “VMs and Templates”
    2. Right Click the VM and select Convert Virtual Machine
    3. Complete the wizard with one of your Hyper-V servers or clusters.
  2. Repeat as necessary.
    1. The conversions will have a job status of “Completed w/ Info” because these VMs do not contain boot volumes, so the conversion process does not find one.

Application Configuration Update:

  1. Double check DNS and IP address settings
    1. The conversion process will install the Integration Components into the VM and thus the OS in the VM will detect a new NIC. If the IP addresses are manually set, then they will need to be set again. If the IP addresses are assigned by DHCP the same IP address should be granted because you carried the MAC address forward (step 5).
    2. The XenDesktop installation uses Fully Qualified Domain Names and therefore relies on DNS, so make sure that all of the machine names are resolving properly.
  2. Double check all Provisioning Server Settings
    1. If the IP address of the Provisioning Server VM has changed be sure to make that configuration change in the Server configuration of your Site.
    2. Don’t forget that your VM images will need an “Emulated” network adapter for the network adapter that Provisioning Server will use to stream to the VM.

Update the Desktops and Desktop Groups

  1. Update the vDisk Image from the Master VM
    1. Be sure to set the vDisk to read / write mode in its settings in Provisioning Server.
    2. A gotcha to be aware of: the PVS driver binds to the NIC driver of the machine when it is installed. Converting the VM to Hyper-V installed the Integration Components and removed the VMware Tools, causing the NIC and / or driver in the VM to change and therefore breaking the PVS image driver. To resolve this, uninstall and re-install the PVS “Target Device” software on the Master VM.
    3. Run XenConvert to update the vDisk with the latest working Master VM image.
    4. Set the vDisk back to Standard Image mode.
  2. Create new Desktop Groups
    1. New Desktop Groups need to be created using the XenDesktop management console.
    2. Or the old Desktop Groups need to be deleted and created again using the new Hosting Infrastructure.
  3. Verify your Desktop Group is working
    1. Now that all settings are in place; wait for the DDC to refresh the desktop groups and to spin up virtual machines for the idle desktop pool.
    2. Check the console of a few VMs to see that the PVS image is streaming properly and connect to a desktop.

The summary

With one practice run I would feel confident in understanding the process to do this with a production environment – I would probably handle it in stages – but I would know enough to be able to properly plan.

The key to this process is that the VMs undergo a two-step process: first storing to the SCVMM Library, then converting to a Hyper-V hypervisor. The bonus is that the MAC address can be carried forward from the VMware environment, which means no changes are required in the setup of the Provisioning Server groups.

In Summary:

  1. Manage vCenter from SCVMM.
  2. Store the VM(s) in the Library
  3. Set the MAC address of the VMs to static (to not change settings in Provisioning Server)
  4. Convert the VM(s) to a Hyper-V hypervisor
  5. Update the vDisk Image
    1. Uninstall and re-install the PVS client due to the virtual NIC driver changing.
  6. Re-create / create new desktop pools

Thursday, September 30, 2010

Hyper-V Dynamic Memory as a tool for understanding your Applications

In case you missed it, MSFT is making a bit of noise about the new Dynamic Memory feature that is coming in the SP1 release for Server 2008 R2 / Hyper-V Server R2.

My angle on this is a bit different than most.

The way that Dynamic Memory has been implemented by the MSFT folks is rather interesting.  It is memory ballooning under the hood – but it is hooked into the operating system of the VM in a way that only Microsoft could (as it owns the OS in this case).

What I find most interesting is the way that the Dynamic Memory feature responds to the needs of the application that is running in the virtual machine.

If the application begins needing additional RAM, Dynamic Memory attempts to give it some; once the application frees up the RAM, Dynamic Memory takes it back away.

This has the net effect that your machine is always running in an optimal ‘sweet spot’ as far as RAM is concerned.

The most interesting part is that you can perform actions against your VM and you can see how Dynamic Memory responds and in turn understand how your application demands additional RAM.  You can load it to a certain level and get a really good baseline.

Considering that managing a virtual environment is still a bit of a black art this is highly useful in really understanding the demands of your workload in a way that means something.  This directly correlates to a resource – RAM of all things – a very finite resource in most cases.

The end result is that you get a really good indicator of the range of RAM that your workload really needs.

I have said for years that most folks give their servers too much RAM (especially VMs) – and this is a great tool to prove it and to give a really excellent indication of the true RAM utilization of your workload.

Yes, you have to observe the numbers (in the UI or through polling WMI) but I think the return that you get for your time spent doing that is well worth it.
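Here is a sketch of that observation loop. The sampling logic is generic, so the actual measurement is passed in as a script block; on a Hyper-V host the sampler would wrap a WMI or performance counter query (the counter path in the comment is an assumption to check against your build). The demo feeds it canned numbers so the output shape is visible.

```powershell
# Sketch: repeatedly sample a RAM figure and report the observed range.
# On a Hyper-V SP1 host the sampler might wrap a perf counter query, e.g.
# (counter path is an assumption - verify the name on your build):
#   { (Get-Counter '\Hyper-V Dynamic Memory VM(MyVM)\Physical Memory').CounterSamples[0].CookedValue }
function Get-MemoryRange {
    param(
        [scriptblock]$Sampler,       # returns the current RAM figure (MB)
        [int]$Samples = 10,
        [int]$IntervalSeconds = 0    # 0 = no pause between samples
    )
    $values = for ($i = 0; $i -lt $Samples; $i++) {
        & $Sampler
        if ($IntervalSeconds) { Start-Sleep -Seconds $IntervalSeconds }
    }
    [pscustomobject]@{
        Min     = ($values | Measure-Object -Minimum).Minimum
        Max     = ($values | Measure-Object -Maximum).Maximum
        Average = [math]::Round(($values | Measure-Object -Average).Average, 1)
    }
}

# Demo with canned numbers standing in for the real counter query:
$queue = [System.Collections.Queue]::new(@(512, 640, 768, 704, 512))
$range = Get-MemoryRange -Sampler { $queue.Dequeue() } -Samples 5
$range
```

With a real sampler and a longer interval you let the workload run through its load pattern and read the Min / Max back as your true RAM envelope.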

Tuesday, September 28, 2010

Migrating a XenDesktop environment from VMware to XenServer

I have had a recent need to migrate a working XenDesktop environment from VMware over to XenServer.

Following are some guidelines that can be used by most anyone who is in this situation.

The (fully) Virtual Environment:

  • XenDesktop 4 Desktop Delivery Controller (Windows 2003 R2, x64, 2 vCPU, 4GB RAM, 1 NIC)
  • Licensing Server (Windows 2008, x86, 1 vCPU, 2GB RAM, 1 NIC)
  • Web Interface server (Windows 2008 R2, 2 vCPU, 4GB RAM, 1 NIC)
  • Provisioning Server (Windows 2008, x64, 4 vCPU, 16GB RAM, 1 NIC)
  • Master VM (the VM where the desktop image is installed, Windows 7, 1 vCPU, 1GB RAM, 1 NIC)
  • PvS Template (the VM template that is cloned to create new Desktop virtual machines)
  • Desktop VM (the actual virtual machines that the Provisioning Server image is streamed to and that run the user's applications)

The Export:

  1. Use vApp containers to export OVF packages as groups of machines
    1. Organize the XenDesktop application tier machines into a vApp container named “Infrastructure”. This is so they can be exported as a group of machines in a single OVF package.
  2. Power off the Infrastructure vApp (this can also be done individually by virtual machine).
  3. Export the Infrastructure to an OVF package
    1. Be sure to select the Infrastructure container and choose File, Export, Export OVF Template. Leave the option to export to an OVF (a folder of files) – this speeds up the process since there is no requirement for creating a single OVA file (which takes longer to export and import).
  4. Export the Master VM
    1. To keep the Master VM as uncluttered and clean as possible it is recommended to uninstall the VMware Tools prior to export.
  5. Export the PvS VM Template
    1. Export it, if for no reason other than documentation, because the OVF contains all of the settings of the original template. With XenServer it is recommended to use an included base template for optimal configuration.
  6. Export the virtual desktops - an additional export consideration
    1. Before exporting the currently deployed virtual desktops there are two options to think about: do you export your existing virtual desktops (highly useful if they have local cache disks on the hypervisor), or do you not? In both cases you need to create new desktop groups on the Desktop Delivery Controller.
    2. If your Desktop VM images were derived from a template (for example: using the XenDesktop Setup Wizard) the deployed desktop virtual machines probably have dynamic MAC addresses assigned to the VMs.
      1. The VMware OVF does not record the currently assigned MAC address if it is set to dynamic. This means that when imported these machines will be assigned new MAC addresses.
    3. If Provisioning Server is used, the “auto add” feature can be implemented to automatically add the imported desktop VMs to Provisioning Server, or the MAC addresses of each VM can be updated in the XXX configuration of Provisioning Server.
    4. If you choose to not export the existing Desktop virtual machines you can create new Desktop groups using the PvS Template with the XenDesktop Setup Wizard on the XenServer environment.

In this example I elected not to export the virtual desktops. Primarily because they are pooled and there is little benefit in doing so.

The Import

  1. Import the OVF packages into XenServer using the fix-up option
    1. From XenCenter select a target XenServer, Resource Pool, or folder and launch the Appliance Import Wizard from the File menu.
    2. Browse to the Infrastructure OVF package and begin the import. Be sure to use the “fix-up” option because these machines were built on ESX server.
  2. Install XenTools into each VM in the environment
    1. Before powering on each VM be sure to check that the original VM is powered off (or disconnected from the network). Otherwise there will be two of the same machine on the network at the same time.
    2. Log into each VM and install the XenServer tools, reboot, and proceed to manually fix any network settings. Don’t forget the Master VM.

Application Configuration Update

  1. Make any setting changes to the XenDesktop and Provisioning Server applications
    1. If you didn’t already do so, log in to the XenDesktop console and disable the Desktop Groups specific to the VMware environment
    2. The XenDesktop installation uses Fully Qualified Domain Names and therefore relies on DNS, so make sure that all of the machine names are resolving properly.
  2. Double check all Provisioning Server Settings
    1. If the IP address of the Provisioning Server VM has changed be sure to make that configuration change in the Server configuration of your Site.

Update the Desktops and Desktop Groups

  1. Update the vDisk Image from the Master VM
    1. Be sure to set the vDisk to read / write mode in its settings in Provisioning Server. Also, modify the machine configuration in Provisioning Server with the new MAC address of the Master VM.
    2. A gotcha to be aware of: the PVS driver binds to the NIC driver of the machine when it is installed. Migrating the VM to XenServer, installing XenTools, and uninstalling VMware Tools all cause the NIC and / or driver in the VM to change and therefore break the PVS image driver. To resolve this, uninstall and re-install the PVS “Target Device” software on the Master VM.
    3. Run XenConvert to update the vDisk with the latest working Master VM image.
    4. Set the vDisk back to Standard Image mode.
  2. Use the XenDesktop Setup Wizard to create new Desktop Groups
    1. Create a new VM in your XenServer environment (using a XenServer template)
    2. Convert this to a Template
    3. Run the XenDesktop Setup Wizard to build out a new set of replacement desktop images.
  3. Verify it is working
    1. Now that all settings are in place; wait for the DDC to refresh the new desktop groups and to spin up virtual machines for the idle desktop pool.
    2. Check the console of a few VMs to see that the PVS image is streaming properly and connect to a desktop.

The summary

With one practice run I would feel confident in understanding the process to do this with a production environment – I would probably handle it in stages – but I would know enough to be able to properly plan.

In Summary:

  1. Use vApp containers to export OVF packages that contain groups of machines.
  2. Import the OVF packages into XenServer and use the fix-up option.
  3. Install XenTools into each VM in the environment.
  4. Double check network settings (including DNS) of each infrastructure VM.
  5. Make any modifications needed in Provisioning Server settings for new VM MAC addresses and Server IP address changes.
  6. Update the vDisk Image
    1. Uninstall and re-install the PVS client due to the virtual NIC driver changing.
  7. Re-create the desktop pools

The Video Companion

For the impatient among us I have outlined the basic steps and gotchas in the following video:

Friday, August 20, 2010

Machine domain accounts and snapshots

Every now and then this particular issue creeps into the forums.  A VM is reverted to a previous snapshot and domain membership is broken.  Or a new VM is created using a base image that is domain joined.  Or some other related scenario.

The first point to always remember is that a snapshot is a moment in time.  When you revert to a snapshot you go back to that previous moment in time and all of the settings that were active at that point in time (as far as the OS in the VM is concerned).

This most commonly manifests itself with machines that are domain joined at the time the snapshot was taken, then the machine runs for a period of time, during this time the machine account password is changed.  Thus when the VM is reverted to a previous snapshot the machine account password in the VM no longer matches the machine account password in Active Directory.

The machine is denied access to domain resources, but it runs, and it still has a machine account in Active Directory.  Nothing looks out of place until the access denied behavior is noticed, either by users or through trolling Event Logs.

There is some good background information from the Directory Services blog about machine accounts and Active Directory that you can review here:  Machine Account Password Process

There are two ways to deal with this situation:

  1. Un-join the VM from the domain, delete the computer account from AD, and then re-join the domain.
  2. Prevent machine account password from changing prior to taking any snapshots: Disable machine account password changes

Keep in mind – the machine account passwords are designed to change silently in the background for a reason.  To prevent an un-trusted, malicious machine from impersonating a trusted machine and thus gaining access to your domain.  So don’t take changing this default behavior lightly.  By modifying this default behavior you are making a conscious decision to increase risk and decrease security in your environment.
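For reference, option 2 maps to the Netlogon DisablePasswordChange value (the same setting exposed by the Group Policy “Domain member: Disable machine account password changes”). As a registry fragment, applied inside the VM before taking snapshots:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters]
"DisablePasswordChange"=dword:00000001
```

Set it back to dword:00000000 when the snapshot testing is done.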

Tuesday, August 3, 2010

Installing and configuring Test Agents when not in the same domain as the Test Controller

I think that the title says a lot in this case. 

I have a Team Foundation Server and I have a Test Controller registered to it. 

I also have Test Agents – they hang out in the lab (not in the production domain with the TFS Server or the Test Controller) so I need to have authenticated communication between the Agents and the Controller.

The most likely scenario is; the Test Controller is in the same domain as the TFS application server (assuming the TFS application server is in your production domain) and the Test Agents are not, nor is there a domain trust.

The Agents could be / should be in most any domain and most likely will execute their tests within an isolated test domain.

Installation and configuration of the Test Agent always requires local administrator rights, it is registration of the Test Agent with a remote Test Controller that can involve unique combinations.

If the Test Agent machine and the Test Controller machine are not joined to the same Active Directory domain or they are not joined to fully trusted domains (or the Agent machine is not domain joined) then:

1) Create a local user account on the Test Controller and each Test Agent machine to provide authentication.

2) This local account should have the same username and password across the Test Controller and Test Agent machines.

3) The “password never expires” check box should be checked.

4) When configuring the Test Agent username do not include the name of the local machine (machinename\username) use only the username (username).

Installation using the common local user account:

a. If this local user account is used to install the Test Agent software, then the account should be assigned to the local administrators security group of the Test Agent machine.

b. If this local user account is used to configure the Test Agent and register it with the Test Controller, then the account should be assigned to the local administrators security group on both the Test Agent and Test Controller machines.

Installation using an administrator account (not the common local user account):

1) The local administrator account can be used to install the Test Agent on the agent machine(s) as long as the local administrator account password is the same on the Test Controller and Test Agent machines.
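The mirrored-account setup (steps 1 through 3 above) can be scripted. Here is a sketch using net.exe and the WinNT ADSI provider – the account name and password are placeholders, and the calls are guarded because they only work on Windows. ADS_UF_DONT_EXPIRE_PASSWD (0x10000) is the flag behind the “password never expires” check box.

```powershell
# Sketch: create the matching local account (name/password are placeholders -
# pick your own, and use the identical pair on the Controller and every Agent).
$name     = 'TestAgentAcct'
$password = 'P@ssw0rd!'

# ADS_UF_DONT_EXPIRE_PASSWD - the flag behind "password never expires".
$DONT_EXPIRE = 0x10000

if ($env:OS -eq 'Windows_NT') {    # guard: these calls are Windows-only
    net user $name $password /add
    $user = [ADSI]"WinNT://$env:COMPUTERNAME/$name,user"
    $user.UserFlags.Value = $user.UserFlags.Value -bor $DONT_EXPIRE
    $user.SetInfo()
    # Local administrators membership where required (see above):
    net localgroup Administrators $name /add
}
```

Remember rule 4: when configuring the Test Agent, enter only the bare username, not machinename\username.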

Monday, July 26, 2010

WSMAN Namespace Handling in PowerShell

For some time now I have been working on handling XML with PowerShell – not XML that I make, mind you; that appears to be relatively easy, as the plethora of examples out there keeps showing me.

I am handling XML that I get back as a blob from a call to a WS-MAN provider.  It has Namespaces – that changes the game big time.

The best general reference I have found is Dr. Tobias Weltner (he is the brilliant person behind PowerShellPlus – an IDE that I simply don’t know how people write complex PowerShell scripts without).  His article talks about XML and PowerShell, but it misses the one thing that I needed: Namespace handling. 

A bit of digging led me to a C# article about XPath and XML namespaces – and that sent me to the real tidbit I needed: Select-Xml.

First I needed to work out what my namespace selection problem really was.  Here is the mess that I get back:

<n1:SomeCimMethod_OUTPUT xmlns:n1= xmlns:wsa= xmlns:wsman= xmlns="" xml:lang=""> <n1:ThingOne><wsa:Address></wsa:Address> <wsa:ReferenceParameters><wsman:ResourceURI></wsman:ResourceURI><wsman:SelectorSet><wsman:Selector name="__cimnamespace">root/cimv2</wsman:Selector><wsman:Selector Name="CreationClassName">Image</wsman:Selector><wsman:Selector Name="ID">2c8ba04e-53b8-504d-f616-061a43bb46bf/969f4a72-4a0d-4044-b41e-f3025377d067</wsman:Selector><wsman:Selector Name="CreationClassName">Creator</wsman:Selector><wsman:Selector Name="Name">2c8ba04e-53b8-504d-f616-061a43bb46bf</wsman:Selector></wsman:SelectorSet></wsa:ReferenceParameters></n1:ThingOne><n1:ThingTwo>57702fd0-9e92-43dc-9ac6-537719b73473</n1:ThingTwo><n1:ThingThree>4e4449df-8710-4358-8290-44d7b4264d46=403ef95b-0309-417e-86d8-c75066439419,c735019c-2198-4d53-a6ac-668d38e6a81d=eb15c741-5a05-4377-91b5-7bd95ab21f3d,2b2ad08b-ecdf-42de-9f03-1050862b99fb=e2aae65c-dd64-49f3-a796-e12fecdc2b46,97c47f43-55af-438f-83b9-2d4a01733ce7=fff39bf0-9d21-4475-b7c7-9e96eb35e8d8,ee8e54e2-b499-438f-a62f-67c024e5921a=ebc63996-6399-4ffd-a5ad-6bd0dcf2036f,6fa65d8c-7cbb-438f-a2ea-35e498c525c5=ae960929-4cd6-42b3-9159-f4e0119cae92,80855f0f-1e22-44bf-892c-c8ca1fd7af59=30dd2807-5566-4655-822a-4f6780f0fdaa,57702fd0-9e92-43dc-9ac6-537719b73473=969f4a72-4a0d-4044-b41e-f3025377d067</n1:ThingThree><n1:ThingFour>57702fd0-9e92-43dc-9ac6-537719b73473</n1:ThingFour><n1:ReturnValue>0</n1:ReturnValue></n1:SomeCimMethod_OUTPUT>

If you look into this blob (there is a good reason developers call these blobs) you will see that each element is preceded by the namespace prefix “n1”.  However, if you simply cast this with $blob = [xml]$blob it looks entirely different, and you don’t really realize that each element is part of the “n1” namespace.

PS > $blob.SomeCimMethod_OUTPUT

n1            :
wsa           :
wsman         :
xmlns         :
lang          :
ThingOne  : ThingOne
ThingTwo     : 57702fd0-9e92-43dc-9ac6-537719b73473
ThingThree  : 4e4449df-8710-4358-8290-44d7b4264d46=403ef95b-0309-417e-86d8-c75066439419,c735019c-2198-4d53-a6ac-668d38e6a81d=eb15c741-5a05-4377-91b5-7bd95ab21f3d,2b2ad08b-ecdf-42de-9f03-1050862b99fb=e2aae65c-dd64-49f3-a796-e12fecdc2b46,97c47f43-55af-438f-83b9-2d4a01733ce7=fff39bf0-9d21-4475-b7c7-9e96eb35e8d8,ee8e54e2-b499-438f-a62f-67c024e5921a=ebc63996-6399-4ffd-a5ad-6bd0dcf2036f,6fa65d8c-7cbb-438f-a2ea-35e498c525c5=ae960929-4cd6-42b3-9159-f4e0119cae92,80855f0f-1e22-44bf-892c-c8ca1fd7af59=30dd2807-5566-4655-822a-4f6780f0fdaa,57702fd0-9e92-43dc-9ac6-537719b73473=969f4a72-4a0d-4044-b41e-f3025377d067
ThingFour : 57702fd0-9e92-43dc-9ac6-537719b73473
ReturnValue   : 0

In my example I am looking for the element “ThingTwo”, which is really “n1:ThingTwo”.  The detail is that it exists within namespace “n1”, and because of that $blob.SelectNodes and $blob.SelectSingleNode were totally failing me (without a namespace mapping, the XPath query has no idea what “n1” means).

So, how do I find a single element within this?

First, my $blob has to be an XML document.  The return from the WS-MAN provider is already well-formed XML; I just need to cast the string to an XML document.

$blob = [xml]$blob

$blob.GetType() should return “XmlDocument” as the Name.

Then I have to make the XML parser aware of the namespace and pass that into the Select-Xml cmdlet.

$namespace = @{ n1 = "<the URI from the xmlns:n1 attribute of the response>" }

Now I can use Select-Xml to find my element.

Select-Xml -Xml $blob -XPath "//n1:ThingTwo" -Namespace $namespace
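Putting it all together, a minimal sketch (the namespace URI below is a placeholder – substitute whatever actually appears in the xmlns:n1 attribute of your response):

```powershell
# Cast the raw WS-MAN response string to an XML document
$blob = [xml]$blob

# Map the 'n1' prefix to its URI (placeholder value - copy the real
# URI from the xmlns:n1 attribute of the response)
$namespace = @{ n1 = "http://example.org/your-cim-class" }

# Find the element and read its value
$result = Select-Xml -Xml $blob -XPath "//n1:ThingTwo" -Namespace $namespace
$result.Node.InnerText
```

Select-Xml returns a SelectXmlInfo object; the matched XmlElement hangs off its Node property.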

Friday, July 23, 2010

Where has hardware virtualization come and gone

The virtualization model that both Hyper-V and XenServer use is a model of paravirtualization.  This is, to a certain degree, dependent on the capabilities of the hardware to run a workload.

This is particularly true in the case of virtualizing Windows operating systems on both platforms.  XenServer refers to these as HVM-type virtual machines – hardware virtualized machines.

Also, the ever evolving trend is increased offloading of the work of virtualization to the hardware itself.

I recently viewed a webinar on XenClient (the Citrix type 1 hypervisor that is designed for the mobile user).  The entire XenClient project has been an interesting evolution of puzzles and solutions.

Now, back to hardware virtualization.  What is it?  How does it work?  Where is the enablement?

The webinar that I mention: you can find it here:  (yes, you must sell yourself to the marketing folks)

Why do I mention a webinar about XenClient?  Because part of this presentation is by Richard Uhlig, Intel Fellow & Chief Virtualization Architect at Intel.  He does some good justice to the evolution of hardware virtualization (yes, Intel’s perspective – but it is interesting stuff).  He does get into some detail pretty quick, if you don’t pay attention you can get lost pretty easily.

I thought some of you might enjoy it, and might enjoy hearing this information from a source who really knows it.

In the meantime you also learn a little bit about XenClient – I don’t think they cover the management layer in the presentation, though.  That is interesting stuff as well.

Thursday, July 15, 2010

On being an MVP - 2 years later

I got all retrospective today after helping out a fellow MVP.  What does it mean to be a Microsoft MVP?

First, being an MVP is an interesting thing (to say the least).  It is an honor, don’t get me wrong, to be recognized as both a knowledgeable person as well as someone who gives back to that same IT community.

I am not the type of MVP that constantly waves the Microsoft flag and touts its virtues.  I struggle with the software just like everyone else.  I am just willing to help others with their struggles.

Really, that is it.  A total nutshell.

Well, at the same time I get to interact directly with the product folks and (I hope) help make better software.  I sometimes find it hard to keep my opinions to myself.  Whether or not those opinions cause change is not my decision – I only own the viewpoint.

I am now in the software business myself, so I really understand how both good and bad software happens.  It is surprisingly easy to recognize where one feature team stops and another starts when you look at a large and complex software product.

Yea, it is kind of cool.

One other thing that I have fallen into is forum moderation.  Yes, I am a TechNet forum moderator for two products now.  That is totally different.  Some days it is a pleasure and others it is a chore.

My perspective on forum moderation is simple and goes back a long way.  Some of us old folks knew the days of UseNet and BBS.  Back when folks were online and it was a small group of folks.  And (the most important part) we were all civil to each other.  Flame wars were few and far between, but at the same time they were elevations of the art of argument (not debate, not that civil, but at the same time – no name calling).

Why did I feel compelled to write this?  Sometimes I just need to write this stuff to clear my head and move on to the next project.  We all need a little dusting every now and then.

Tuesday, July 13, 2010

Is System Protection in a VM necessary?

I just happened to be working through the set-up of a new virtual environment, walking through my standard steps, when it occurred to me that I always log in to my VMs, disable System Protection, and delete any restore points.

I do this for a couple reasons.  One is to reduce the storage requirements of the VM, another is to just take that overhead out of the system.

I might be silly for doing this, but it is one of the practices that I consider standard in my environments (as well as redundant and unnecessary). 

I mean, if I want to be able to restore my VM, don’t I use a snapshot (checkpoint)?  So, if I do that I have storage requirements, and then on top of that the OS in the VM is basically doing the same thing so it can roll itself back.

Actually, if I left it turned on it would give me the ability to pluck that patch back out when things go south and I forgot to take a snapshot.  It could be one of those stealth features that we don’t normally think about when managing VMs.  We always focus on what we can do at the hypervisor and forget what we can already do within the operating system of the VM.

Hmm..  Quite the puzzle.

I brought this up as it is something that just happened to pop into my head as being unusual, strange, not required, however strangely comforting.  You know, that whole ‘I do it my way’ type of thing for no right or wrong reason.

I would love to hear comments on this one.

Monday, July 12, 2010

Visual Studio ALM Test Agent setup in a nutshell

I am working through a distributed installation of the Test Agents (with Test Controllers) for Visual Studio 2010 and boiled down the configuration gotchas into this nutshell:

(I assume that anyone can run an installation wizard, why walkthrough that..)

Two Modes:

1) Service

a. Supports automated testing

2) Interactive Process

a. Supports video capture

b. Supports coded UI

c. Supports automated testing


1) The user account that is used to run tests must have been logged on locally to the console of the machine – thus forcing a user profile to be created.

2) When registering an Agent to a Controller the logged on user who is running the configuration tool must be an administrator on both the Agent machine and the Controller machine.

a. Use a domain user account assigned to the local administrators security group

b. Use a local user account that is identical in username and password on both the Controller and Agent machines assigned to the local administrators security group.

3) The account that the Test Agent is configured to run as requires membership in the local administrators security group of the Test Agent machine when:

a. IntelliTrace or Network Emulation is used.

b. When the operating system of the Agent machine has UAC enabled.

4) Installation of the Test Agent on Windows XP does not include the Performance collection agents.

5) When configuring the Test Agent the Agent “run as” account is automatically added to the TeamTestAgentService security group of the Test Controller machine.

Friday, July 9, 2010

Visual Studio ALM Test Controller setup in a nutshell

I am working through a distributed installation of the Test Controllers for Visual Studio 2010 and boiled down the configuration gotchas into this nutshell:

(I assume that anyone can run an installation wizard, why walkthrough that..)

Two modes:

1) Registered with a Team Foundation Server Collection

a. The test controller and its associated agents can be managed (configured and monitored) through TFS using the Test Controller Manager in the Lab Center of MTM; no additional configuration of the Test Controller is necessary.

2) Not registered with a Team Foundation Server Collection

a. The test controller and its associated agents must be managed from Visual Studio, using the Test menu and selecting Manage Test Controllers.

b. This requires additional security configuration on the Test Controller

i. any user that is allowed to execute automated tests (through Visual Studio) or create environments must be added to the security groups of the Test Controller as outlined here:


1) When registering a Controller to a TFS server the logged on user who is running the configuration tool must be a member of the TFS Collection Administrators group and a local administrator on the Test Controller machine.

Remote Desktop Connections for Windows 7

Okay, I am so slow on this one it is not funny.

Many of us grew to love “Remote Desktop Connections” (note the “s” at the end – it is what made it special).  It allowed us to run one RDP application and have multiple RDP sessions to multiple servers defined and within very easy reach.

I don’t know about you but I have RDP windows open all the time, and I frequently waste time sorting through the list that hangs out in my task bar.

Well, the answer has finally come.  Remote Desktop Connection Manager is for the Windows 2008 family of operating systems and higher – that is, Vista and above.

You can find it here:

Now, the web site is a bit confusing as it also mentions Server 2003 and Windows XP (but in a vague sort of way – it is unclear in the phrasing what is meant).

It also installs and runs on Windows XP and Server 2003 if you first apply the RDP/RDS update. (I tested that).

Tuesday, July 6, 2010

Ping is dead on Windows Server – stop using it

Long live Ping!

For many years we have relied on Ping as a quick and easy measure of a server being ‘alive’ or not.

I have been stating in the TechNet forums since the release of Server 2008 that we have to get off the Ping train.  It is no longer a real measure.  We cannot expect it to be open and on.

Just today, I am installing a new test environment with Server 2008 R2 (all Enterprise edition, all built from scratch, all domain joined).

I began installing my applications – all fine until I tried to connect to my SQL database server (it is a VM of course).  What is the problem?  Had I added a firewall rule?

Without even thinking, I pull out Ping.  hmm.. no response.  <All machines are domain joined, I expect the domain firewall rule to let me ping…>

Hmm.. again, no response.  I check the domain controller, I check DNS, I run out of ideas.  So I go into the firewall settings and, one by one, I disable each firewall profile while I have Ping running (just to make sure that my traffic is being detected as Domain traffic).

I began with Public, then Private, then Domain.  Well, yep, the traffic is being correctly detected as Domain traffic and Ping is blocked by default!

Just goes to show you that as an operating system gets secured tighter and tighter, that some quick and easy tools fade into the background.  If you want Ping then set a GP firewall exclusion for Ping, or simply move on to using something different…and focus on the fact that Windows Firewall actually works really well.
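If you do want Ping back, the firewall rule itself is easy enough to add – a sketch of the local netsh version below (the rule name is my own; the same rule can be pushed out through Group Policy):

```
netsh advfirewall firewall add rule name="Allow ICMPv4 Echo Request" protocol=icmpv4:8,any dir=in action=allow
```

ICMPv4 type 8 is the echo request; scope it to the Domain profile if you only want it answering on the corporate network.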

Saturday, June 5, 2010

Importing the XenApp EVA is really this easy.

I’m going to import the XenApp 6 Evaluation Virtual Appliance into Citrix XenServer 5.6 using a new feature of XenServer 5.6 and XenCenter 5.6 called the Disk Image Import Wizard.

You can also watch this in action here:

I will assume that you have already downloaded and installed XenServer 5.6 and XenCenter 5.6.

First, go to the download website for the Citrix XenApp EVA.

A login with a MyCitrix user account is required. Be sure to download two items: the Getting Started Guide and the Evaluation Virtual Appliance package for XenApp 6.

After downloading the Evaluation Virtual Appliance, execute the CitrixXA6EVA.exe to expand the self extracting archive into a folder that contains a VHD and licensing documents.

Now, open (and read) the getting started guide. The import wizard requires some important information regarding the configuration of the virtual machine. On page four, the guide states that the EVA requires at least 2GB of RAM and the VHD requires at least 30GB of storage space. So make sure that your XenServer has a Storage Repository with at least 30GB of available space.


Open XenCenter and connect to a XenServer 5.6 host (using version 5.6 is required). In the left hand tree view select the host, then right click for the context menu and choose Disk Image Import.


In the import wizard browse to and select the VHD and choose next.

Enter the information for the virtual machine. The VM needs a name, the number of virtual processors to assign and an amount of memory.

Note: I am unselecting the Run Operating System Fixups option because I know that this VM contains Server 2008 R2 and that it was built for Hyper-V. Server 2008 R2 can handle hardware changes pretty well (single to multiple processors, critical boot devices, etc.). XenServer also presents a critical boot device controller similar to the Hyper-V IDE specific boot device.


Then begin the import.

Once the import completes, select done.

At this time: power on the virtual machine and log in at the logon screen. The domain, user account, and password are in the Getting Started Guide.

If the screen of the VM flickers, Windows Server 2008 R2 is busy setting up some of the new hardware devices that it is detecting. You should still be able to login and complete the setup of the Evaluation Virtual Appliance environment.

If you would like to optimize the virtual machine for XenServer, install the XenServer tools within the virtual machine.

There is no “smoke and mirrors” involved here, it is literally this easy using the Disk Image Import wizard.

Wednesday, June 2, 2010

OVF packages and virtual disk import now native in XenCenter

The latest update of XenServer (5.6) and its GUI management interface XenCenter includes the addition of OVF package Import and Export and virtual Disk Image Import, courtesy of Citrix Labs in Redmond.

This means that in the XenCenter application there is now the option to Import and / or Export (Create) packages using the OVF standard from the DMTF.

There is also the added feature of Disk Image Import – directly importing a VHD (Microsoft), VMDK (VMware), or VDI (VirtualBox) virtual disk or a WIM (Windows Imaging Format) disk image into a VM.

The OS Fixup feature from Project Kensho is also included.  This is a basic routine that handles the most common interoperability issues that folks can run into when importing an OVF package to XenServer when the VMs (the operating systems within the VM actually) have been installed on a hypervisor other than XenServer.

This feature is labeled as experimental in XenCenter but I hope that you all find it useful in bringing OVF packages into XenServer and also in producing OVF packages for internal and other purposes.

More later, in both the XenCenter forums and on my blog.

XenServer 5.6 can be downloaded from the Download link (in the top navigation bar) at – you must login with your MyCitrix ID and filter by selecting XenServer to get version 5.6.

Quick demos of the new features here:

Tuesday, May 18, 2010

P2V is not a panacea it is a Pandora's box

I monitor many forums where folks are constantly dealing with problems caused by converting a physical computer to a VM or taking a VM from one hypervisor and moving it to a different hypervisor.

I have one statement for these folks:  P2V is not a panacea, it is a Pandora's box.  P2V brings baggage.  It is this baggage that causes problems both during and after the conversion.

Now, don’t get me wrong, I totally understand why folks want to perform a P2V and I also understand why folks want free tools.

In regards to free tools – you get what you pay for; I don’t think I need to say any more.  This means that there will be bad experiences; your chances of having a problem are high.

Let me back up and talk about V2V a little.  In this case it all comes down to hardware.  The simple example is the boot disk interface: VMware presents a SCSI disk, Hyper-V presents an IDE disk, and XenServer presents what looks like an IDE disk during boot.

If you have ever taken a Windows Server backup (Windows 2003 or older) and then tried to apply that to new hardware you quickly learn that device drivers are the big problem.  And this can be simply new SCSI arrays, let alone converting from SCSI to SATA, or IDE to SCSI.

The other big issue is the hypervisor ‘tools’ that are installed into the VMs.  There is a total mixed bag here.  The optimum would be that the tools could be installed and cause no problems if that VM is moved to a different hypervisor, but that is not the case.  For best results, remove the hypervisor tools BEFORE migrating the VM.

I have talked about this before:

Back to P2V.

We still have to deal with hardware changes as described above.  And there are device drivers and agents as well, very similar to the ‘tools’ situation described above.

Many folks install hardware monitoring agents when an OS is installed on a physical box.  These agents can cause all kinds of problems following a P2V.  As with the ‘tools’ uninstall the agents prior to conversion.

Now, there are also ghost issues that will crop up over time.  This is just inherent in any system when the hardware is changed.  There are small device driver mismatches, MAC address problems, application problems, etc.

It has always been the BEST case to build the entire server on the new hardware and then install the application and migrate any databases or other requirements.

Personally, I see two reasons for this:  1) you really understand your application and you can document a full rebuild for DR reasons.  2) you will get the absolute best performance of the application in the VM, no baggage.

Friday, May 7, 2010

OVF vs OVA the saga continues

It has been over two years since I began working with the OVF standard from the DMTF.

Repeatedly during this process I have had to educate folks about what an OVF package is, and quite frequently what an OVA is and when to use it.  Just yesterday I corrected a person in a conference call as this is still a relatively new thing to most folks.

In very simple terms, an OVA is a single file.  It is a TAR archive of an OVF.  It simply has the file extension “.ova” to give some indication that it is an OVF; otherwise it would be just any other “.tar”.

The OVF is the real important part that the DMTF keeps defining and expanding upon.

The OVF package is basically two things. 

  1. It is an XML file that describes the virtual environment (vCPU, vRAM, vNetwork config, and it lists references to any other parts of the OVF package).
  2. It is a collection of files.  Virtual disks of virtual machines, .ISO installation media, any other file attachments that a person could dream up.

Use an OVA when you need to take an OVF package somewhere, or when you want to give it to someone as a single download.

Use an OVF for all of your internal purposes.
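Since an OVA is just a TAR of the package, building one by hand is a one-liner (a sketch – the file names here are made up; note that the .ovf descriptor is expected to be the first file in the archive):

```
# package the contents of an OVF package into an OVA (descriptor first)
tar -cf MyAppliance.ova MyAppliance.ovf MyAppliance.mf disk0.vhd

# unpack an OVA back into an OVF package
tar -xf MyAppliance.ova
```

Any tar tool works; the .ova extension is the only hint the consumer gets that an OVF package is inside.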

Is OVF a promise that an appliance can be imported to and run on any hypervisor?  Absolutely not.

OVF is NOT a way to convert a workload from one platform to another.  There is no reason why it could not be used that way (it makes sense) – but that is not why it exists.  Eventually I see companies implementing conversions around OVF packages.  The closest thing I know of today is the “fixup” that is in the Citrix Kensho implementation.

There are two reasons why: the OS installed in the VM and its built-in driver support, and the fact that different hypervisors present virtual hardware in different ways. 

To a lesser extent there is a third reason: proprietary VM tools.  The tools from vendor A can prevent a VM from running on a different hypervisor (or at least make it really difficult) or they can cause the performance of the VM to be very poor on the new hypervisor.

So, the moral of the story: use OVA appropriately and don’t expect an OVF package built on one hypervisor to “just work” on a different hypervisor.  And be wary of VM tools installed within the VM.

Wednesday, May 5, 2010

PowerShell DateTime to CIM_DateTime

Obviously no one that is using PowerShell is using WSMAN against a remote Linux system.  Everything assumes WMI, simple enough.

Use WSMAN against Linux and you enter into an insane land of XML and properly formatting your XML.

Take for example the simple act of sending an XML string that queries a time period.

In PowerShell you type Get-Date and you get a nice, human friendly value back: Wednesday, May 05, 2010 10:24:14 AM

Now, try to send that to a CIM provider of any type (in my case a CIM provider that sits behind a WSMAN interface) and you immediately get a value invalid error.

off to Bing-land.. searching, searching, searching – absolutely nothing.  Wait, there are a couple useful things…

on MSDN the Scripting API Objects, the SWbemDateTime Object.  The what?  you say.  Isn’t it obvious? (I didn’t think so).

Here is the kicker: the CIM_DateTime format.  It expects a really strange format that looks like this: yyyymmddHHMMSS.mmmmmmsUUU (where “s” is a + or – sign and “UUU” is the offset from UTC in minutes).

So how do I take this: Wednesday, May 05, 2010 10:24:14 AM and turn it into this: 20100505102414.000000-420

I have to play with objects in PowerShell, here is my script part:

$startTimeWindow = (Get-Date).AddMinutes(-9)
$objScriptTime = New-Object -ComObject WbemScripting.SWbemDateTime
$objScriptTime.SetVarDate($startTimeWindow)
$startTime = $objScriptTime.Value

I first set my time window to begin 9 minutes before ‘now’.  I then create a SWbemDateTime object from the WbemScripting class and use SetVarDate to load the friendly formatted time into the object.  Then I retrieve the Value of the object and I have a CIM_DateTime to send off to my Linux system CIM interface (through WSMAN).
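The same COM object works in the other direction, too – assign a CIM_DateTime string to Value and read a friendly date back out (a sketch; the string is the example value from above):

```powershell
# Parse a CIM_DateTime string back into a normal DateTime
$objScriptTime = New-Object -ComObject WbemScripting.SWbemDateTime
$objScriptTime.Value = "20100505102414.000000-420"   # a CIM_DateTime string
$friendlyDate = $objScriptTime.GetVarDate()          # back to a friendly date
```

Handy when the CIM provider hands timestamps back to you in the same strange format.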

Thursday, April 8, 2010

Creating a Windows 7 or Server 2008 R2 image for VM deployment

Creating a Windows 7 image for deployment to a virtual machine is not as straightforward as you might think. If you simply perform a standard installation, image that to a WIM with ImageX and then attempt to deploy that image, you will be left with a system that requires that the boot manager be recreated.

Just in case anyone is confused as to why the boot partition is there: it is so that you can use BitLocker to encrypt your system drive on the fly.  BitLocker needs the unencrypted boot partition to work.

If you create a VM, insert your Windows 7 media, and proceed to click Next through the entire installation wizard, you will end up with a system that is installed across two partitions and that cannot be deployed back from a WIM without requiring repair.

One tool that can be used is the Wim2Vhd tool. This is useful if you meet the client requirements and have a way to import the resulting VHD into a new VM. It performs the required repair for you.

However, I would like to avoid the situation of needing to perform the BCDBoot repair all together.

Say that you want to use ImageX (from the Windows Automated Installation Kit ) to image the Windows 7 installation to a WIM and then deploy that Windows 7 image to other virtual systems. I do not want to repair each and every time – I want to build the reference system in a way that this is not required.

The key is to prevent the default Windows 7 installation behavior of creating a System Reserved partition (which contains the necessary boot loader files) and a separate partition where the OS is installed (where there are no boot loader files).

Method One – pre-partition the virtual disk:

This is the most direct approach; in the end nothing looks unusual.

1. Create the VM from a template

2. Boot to WinPE (or mount the disk to another (helper) VM)

(An alternative to this is to boot using the installation media.  At the "Install Windows" screen (the first screen of the installer) type Shift+F10 to open a command prompt)

3. Run diskpart

a. List disk

b. Select the disk

c. Create a primary partition (you must use the entire volume)

d. Make the partition Active

e. Format the volume


f. Exit

4. Power off the VM ( or detach the virtual disk from the helper vm)

5. Attach the Windows 7 installation ISO

6. Boot to the ISO

7. Begin the installation wizard

8. Select a “Custom” installation

9. At the “Where do you want to install Windows” screen accept the volume that was previously formatted


10. Complete the wizard

11. Apply all installations and patches to your VM

12. Prepare your VM with sysprep

13. Boot the VM to WinPE

14. Create your VM image using ImageX.
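The diskpart portion of step 3 boils down to something like this (a sketch – disk 0 is assumed; save it to a text file and run diskpart /s file.txt, or type the commands interactively):

```
rem step 3 of Method One as a diskpart script (disk 0 assumed)
list disk
select disk 0
create partition primary
active
format fs=ntfs quick
exit
```

With the whole disk already a single active, formatted partition, the Windows 7 installer has nowhere to carve out a System Reserved partition.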

Method Two – confuse the installation wizard:

I could not think of a better title for this, because really what you are doing is messing with the options allowed in the installation wizard until you get the behavior you want.

1. Create the VM from a template

2. Attach the Windows 7 installation ISO

3. Boot to the ISO

4. Begin the installation wizard

5. Select a “Custom” installation

6. At the “Where do you want to install Windows” screen

a. Select the virtual disk

b. Select “Drive options (advanced)”

c. Select “New”


d. The default should be the entire volume – Select “Apply”


e. Acknowledge the warning

f. Select and Delete Partition 2


g. Acknowledge the warning

h. Select and Extend the System Reserved partition


i. The default should be the entire volume – Select “Apply”


j. Acknowledge the warning

k. Select Next

7. Complete the wizard

8. Apply all installations and patches to your VM

9. Prepare your VM with sysprep

10. Boot the VM to WinPE

11. Create your VM image using ImageX.

The primary difference between the results of the two methods is:

· Method Two results in a deployed image where the C: volume name is “System Reserved”

Method One:

Method Two:



Tuesday, April 6, 2010

BOOTMGR is missing - Repairing a Windows 7 or 2008 R2 image after VM deployment

Deploying a Windows 7 image to a virtual machine is not as straightforward as you might think.
The primary issue resides in how the reference Windows 7 system is installed.
If you create a VM and insert your Windows 7 media and proceed to click Next through the entire installation wizard, you will have a working Windows 7 installation, that can be templated, it can be copied, you can do just about anything you like to it – as long as all copies involve the entire virtual disk.
Say that you want to use ImageX to image the Windows 7 installation and then deploy that Windows 7 image to other virtual systems.
If all that you did was click Next through the installation wizard or you did not customize your unattend.xml at all – you actually have a VM that has been installed with two volumes – a System Reserved volume and the volume where the actual system is installed.
The standard process is that you boot your system into the WinPE environment and you use ImageX /capture to create a WIM from a particular volume. Note the word “volume.” ImageX is a volume based tool, not a disk based tool.
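For reference, the capture and apply commands look something like this (a sketch – drive letters, paths, and the image name are examples; run from WinPE with the WAIK’s ImageX on the path):

```
rem capture the Windows volume (D: here) into a WIM
imagex /capture D: C:\images\win7ref.wim "Win7 reference"

rem apply image 1 of that WIM onto a freshly formatted volume
imagex /apply C:\images\win7ref.wim 1 D:
```

Note that both commands name a volume, not a disk – which is exactly why anything outside that volume (like the System Reserved partition) never makes it into the WIM.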
The problem comes when we try to deploy this image. We boot into WinPE, ImageX /apply the image – then we reboot the virtual machine and we end up with the error: “BOOTMGR is missing”
The boot manager resides on the System Reserved volume, that is the extra partition that is created but was not captured by ImageX.
According to Microsoft TechNet documentation How to Perform Common Deployment Tasks with Virtual Hard Disks we need to use BCDBoot.exe to “configure the boot entry in the BCD store to be on the volume inside the VHD.” If you follow the link, the portion that we are concerned with is: “Prepare a VHD image to boot inside a virtual machine.”
To repair the VM where the image was applied we need some type of recovery environment. We can get to a prompt to fix the applied image in any of the following ways: boot to a WinPE 3.0 ISO, boot to a Recovery Console using the OS installation media, or attach the virtual disk to another known working Windows virtual machine.
For my example I have used the Windows Automated Installation Kit and created a WinPE 3 ISO. However, in testing I first simply attached the failing virtual disk to a working VM and performed the same commands. I also found references that booting into the Recovery Console of the installation media also works, but I did not test this. It is important to note that if you use WinPE the bit-ness of your WinPE image must match the bit-ness of the OS in the VM (32-bit to 32-bit / 64-bit to 64-bit).
The repair process:
1. Boot the virtual machine into WinPE (or attach the virtual disk to a running VM).

(An alternative to this is to boot using the installation media.  At the "Install Windows" screen (the first screen of the installer) type Shift+F10 to open a command prompt)

2. Discover the drive letter assigned to the virtual disk that is failing to boot.
3. Execute the following command (pay attention to use the correct drive letter – in my example “C:” is the letter that was assigned to the virtual disk, the WinPE system volume is “X:”).
C:\windows\system32\bcdboot C:\windows /s C:
4. Simply type “exit” to cause WinPE to shut down, then reboot into the VM. If the virtual disk was mounted in another VM, then power off that VM and detach the virtual disk before booting the imported VM.

Thursday, March 11, 2010

WireShark broke XP Mode

I know right now, that Ben Armstrong over at MSFT is reading this post and I have no idea what he is thinking to himself except I am sure that it is something along the line of - “no, it didn’t”  [but I made him look  ;-)  ]

And, technically, he is right, it really didn’t but the initial perception is that it did.

Okay, the background – XP Mode is the new Win7 feature which is actually the new VPC 7 (VirtualPC 7 – not VirtualPC 2007) with a pre-built Windows XP VM (that you have a legal license for as long as you run it in this intended way).

That means that there is a virtualization engine running on your system – one that interacts with your processor, one that takes RAM, and one that owns (PWNs) your networking stack.

And it is actually some interesting side behavior that caused my installation of WireShark on Win7 to break the networking in my XP Mode VM.

When an XP Mode application is closed – it really isn’t.  When an XP Mode Desktop is closed – it really isn’t.  The XP Mode operating system is simply put into a paused / saved state by VPC – kind of like hibernation.

The side effect is that the OS in the XP Mode VM is rarely rebooted or cleanly shut down (as we all know that XP likes to be).  So when things change, we automatically think “reboot” – but here that takes a bit more.

Anyway, back to networking and desktop virtualization engines.

What happened is that in the act of installing WireShark I moved the networking stack away from VPC7 – thus my XP Mode VM could no longer connect through the virtual networking layer that I had configured it to use.

Yes, I installed WireShark (it added WinPCap) and I used it and went along my merry way.  All was fine until the next morning.  I maximized my XP Mode application (which was running along happily in the background the entire time, during the installation of WireShark and all) and boom, it cannot connect to its back-end server.

Hmm, I open the full XP Mode desktop – IE cannot get out either.

Well darn, I know I need to reboot my Win7 machine (I really know that is what I have to do).  But first I try the futile hope of shutting down and restarting the XP Mode VM.  Futile yes, and a waste of my time.  The result was as expected: no networking love for my XP Mode VM.

So, I try to log on to my XP Mode VM as the local administrator – hmm… no love.  I try a second time and get an error that the system is busy with an existing Terminal Server session.  Okay, I bet it is tearing down my previous logon attempt and resetting the listener in the VM.

But, I can’t log in to the silly VM to shut it down and I need it powered off.

Time to “Disable Integration Features” – Now, I can logon as the local administrator of the XP Mode VM.

…time passes…  That was NOT fun.  I had to attempt to log on to the XP Mode VM multiple (4) times; each time I did, the VM went straight into installing updates and shutting down.

Mind you, I have been working on PowerShell and WSMAN for the past few weeks.  I have all kinds of IE Tabs and windows open, documents, PowerShell Plus, etc.  I really don’t want to reboot – but I run Visual Studio in my XP Mode VM (I hate all the baggage that Visual Studio installs in my client – not good for testing).

In the end, yes the reboot of the entire system solved the problem.

The moral of the story – If you have a virtualization engine and you mess with the networking stack, plan to reboot, you will be broken until you do.

Tuesday, March 9, 2010

Just because you can - should you?

This is a question that all administrators must ask themselves at any point in time.

I have known quite a few very creative IT folks in my time, and we can all come up with very clever ideas, combinations, and adaptations of technologies.

Part of my role is to question why.  It is actually part of my job.

I frequently come across things written by folks and I just think to myself: why the heck would you do _that_?  Just because you can?

When I worked as an administrator I quickly learned the user dictum:  “Because they can, they will”

Yes, this is generally said in a demeaning way, referring to users, when administrators talk to each other, to their managers, or to folks that write software.

I still say this over and over to the developers I work with.  Generally framed with a statement like:  “If you don’t want them to do that..” or “Of course I entered 300 characters into that field” or “there was no error checking to stop me”.

Think about that as you apply technology or attempt to break technology.  It is all about the intent.

Are you intending to break it?  Do you just not know any better / or not understand it?

If you don’t understand it, then read and ask questions.

As a person who tests software – I absolutely say, yes do it.  But do it in a controlled and smart way.  Pay attention to the entire environment, not just the buttons you are clicking on.

It is usually the greater environment where the real bug exists.

Wednesday, February 10, 2010

The response that the WS-Management service computed exceed the internal limit for envelope size.

I have been hiding in PowerShell and WSMAN land for quite a few weeks now.  And each time I set up a new client machine or I scale out the number of VMs in my environment I run against this error:

Exception calling "Enumerate" with "1" argument(s): "The response that the WS-Management service computed exceed the internal limit
for envelope size. "

What am I doing?  Well, right now I am calling an enumerate class through WSMAN and listing all of the VMs that I have on a hypervisor.

Why do I get the error?  I know why, it is because the return string is so freaking large that the client side of WSMAN (the WinRM listener on my client PC where I am running the query from) says: “hey, my allocated buffer is full and you are trying to feed me ‘just one more wafer thin mint’”

Instead of exploding all over the place I get this nice, friendly warning.

How do I tweak my client to increase the receive envelope size?

Open an elevated command prompt (run as admin) and type:

winrm set winrm/config @{MaxEnvelopeSizekb="1024"}

Note – you do not need to set yours to 1024, I just set it higher than the default.

This was the last setting I had, but I need to go higher as I add more and more VMs.  On to 2 MB I go…
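
As a sketch of that next round (run from an elevated prompt; 2048 is just my next target, not a recommendation), you can read the current configuration before raising it:

```shell
:: Show the current WinRM configuration, including MaxEnvelopeSizekb
winrm get winrm/config

:: Raise the envelope size to 2048 KB (2 MB)
winrm set winrm/config @{MaxEnvelopeSizekb="2048"}
```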

Thursday, January 7, 2010

FreeNAS as a storage target for simple testing

Working with various hypervisors I frequently have a need for various types of storage volumes.

We can always use local storage for testing, but how do you handle a cluster of Hyper-V Server hosts or a pool of XenServer hosts?

I have used a variety of services.  Right now in the lab I am running Windows Storage Server 2008 to present NFS volumes and StarWind to present iSCSI volumes (I found StarWind iSCSI far easier to get working with XenServer than the iSCSI target add-on to Storage Server).

Mind you, I don’t need any of the enterprise bells and whistles that Storage Server gives me, I simply need storage.

Just the other day someone pointed me to FreeNAS.  Hey, a storage system that can present many different types of volumes.  FreeNAS is built upon FreeBSD and is bundled as a LiveCD ISO that can be installed to a hard disk or USB key, or simply run as a LiveCD.

FreeNAS can do NFS, SMB, iSCSI, iTunes, FTP, TFTP, SSH, UPnP, BitTorrent, and other sharing protocols – from that standpoint it is pretty slick.

I must say that other than dealing with having to learn the UI – I have been pleasantly surprised at how well it is working.

Mind you, I am running FreeNAS in a VM.  That VM is hosted by Hyper-V.  And I have a XenServer that is accessing the LUNs being presented by the NAS device.

At the moment I have an iSCSI LUN and an NFS volume presented.  The big limitation that I currently have is that I can only have 4 IDE devices on my Hyper-V VM, which limits me to one virtual disk for the install and three for serving as storage LUNs.

My VM simply runs with 512 MB of RAM and a Legacy NIC.  The rest involved about an hour of figuring out how to install and configure the LUNs.

Very little overhead and the storage actually ran pretty well when deploying a XenServer VM to the NFS share.  I really can’t argue.

When it is time to upgrade my current lab test storage server I think I will have to boot into the LiveCD and see if FreeNAS can identify the really old Adaptec SCSI card and Compaq storage array.  If it can, it will win moving forward, as Server 2008 R2 no longer has drivers for that really old Adaptec SCSI card – making the antique array useless.

But that array is not useless, I just wouldn’t trust it for anything important.

The next step might be trying to build it into a paravirtualized VM for use on XenServer.  Since it sees the XenServer virtual devices as QEMU devices, I know that the kernel is Xen aware…

Friday, January 1, 2010

IT needs a language

I constantly struggle with writing documentation and also communicating concepts to many IT folks in writing. 

This is one of the reasons that has driven me to producing videos and capture sessions over the past couple years (I can say far more, more accurately with combined speech and visuals than attempt to accurately convey in writing).

My work as a moderator of the Hyper-V TechNet forum also speaks to this.  I spend a great deal of time describing concepts in forum posts – that end up being incredibly long.  Each time I write it a bit differently, still trying to convey a concept.

The reason for this post?  Accuracy.  Accurate conveyance of information.

This all started with a recent trip to the doctor.  Twenty minutes of discussing things with my doctor was summarized into a single paragraph of Latin – the common language doctors use to describe symptoms and convey that description from doctor to doctor.

I am not proposing that Latin be revived in IT.  I am proposing that we use a commonly understood grammatical structure to describe concepts, installations, topology, or systems architecture.

This gets even more complex as we move into the virtual workload world.  Where we describe physical characteristics in a non-physical way.

A virtual appliance for example.  In its most common representation today, this is a virtual machine – installed within that virtual machine is an operating system and some application.  The application and OS have dependencies – DNS, networking, Active Directory authentication for example.  The machine has dependencies – two virtual NICs, one public facing, one that faces a private subnet that connects to the database server.  Oh, and now we have an external entity of a database server.  Where does it all end – I don’t know of a single workload that exists in an enterprise as a single entity.

The struggle is:  How do we describe this in a universal way so that no matter where you take that appliance the environment knows what to do with it?  How to configure it.  How to connect it.  Where to connect it.

This is the role of a Standards Body – in this case the DMTF is attempting to build this common term framework.  But this is a framework, not a set of universal terms – it simply defines items or entities.  So at least we begin with common items – however, each vendor then describes them in a different language.
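
The DMTF’s OVF (Open Virtualization Format) is one concrete output of that effort.  As a sketch of the idea, the appliance described above might be captured something like this – with simplified, illustrative element names, not the actual OVF schema:

```xml
<!-- Illustrative only: simplified, OVF-inspired element names -->
<VirtualAppliance name="OrderEntryAppliance">
  <VirtualMachine name="web01">
    <Nic network="Public" />
    <Nic network="PrivateDbSubnet" />
    <Dependency type="DNS" />
    <Dependency type="ActiveDirectoryAuth" />
    <Dependency type="Database" target="db01" />
  </VirtualMachine>
</VirtualAppliance>
```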

Now, how do I take that DMTF developed structure and turn it into words?  One set of words that can be used in association with a NIC to describe its network connection, and then the attachment and VPN attributes.

Or, throw that idea out the window and focus on describing the interactions between the workloads only.  Don’t describe the physical topology in any way – simply describe the dependencies between workloads or appliance entities.

Most IT folks would never see this description language – but integrators would find it useful, documentation folks would find it useful, and even developers would find it useful.