Wednesday, August 26, 2009

KMS Client Setup Keys

I always have the hardest time searching for this each time I need it again.
Vista and above (Vista, Server 2008 and higher) have this wonderful KMS licensing system.  That is all fine and dandy.
During installation and creation of virtual machines, I frequently run into situations where I must provide a product key either during installation or for the sysprep process to allow mini-setup to complete.
I don’t want the recipients of my virtual machines to have to input a personal product key as we run KMS – I want that to all happen silently in the background.
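For the sysprep case, that silent behavior comes from putting the key into the answer file so mini-setup never prompts. A trimmed, hypothetical unattend.xml fragment (the key shown is a placeholder, and amd64 is assumed) might look like:

```xml
<!-- Hypothetical, trimmed unattend.xml fragment: supplies the KMS client
     setup key during the specialize pass so mini-setup never prompts.
     The key below is a placeholder, not a real setup key. -->
<settings pass="specialize">
  <component name="Microsoft-Windows-Shell-Setup"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <ProductKey>XXXXX-XXXXX-XXXXX-XXXXX-XXXXX</ProductKey>
  </component>
</settings>
```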
Today, I had to go searching all over again – 30 minutes later I finally found the magic combination of search phrases that got me where I needed to go.
I actually ran a search that I knew would take me to a TechNet forum post I had made in the past – one I knew had the answer in it.  Basically, I was searching for my own answer, which I had not used in so long that I couldn’t even recall the ‘proper’ title for it.
Both Google and Bing failed me until I decided to go looking for my own forum post.
Here is the link to the Volume Activation 2.0 Deployment Guide – KMS Client Setup Keys.

The Windows 7 and Server 2008 R2 Setup keys:
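Once you have the right key from the guide, applying it is two slmgr commands – install the KMS client setup key, then activate against the KMS host the client discovers via DNS. (The key below is a placeholder, not a real setup key.)

```shell
REM Install the KMS client setup key (placeholder shown)
cscript //nologo %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
REM Activate against the KMS host discovered via DNS
cscript //nologo %windir%\system32\slmgr.vbs /ato
```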

Saturday, August 22, 2009

The basics of VDI

VDI is a term that is thrown around a lot lately and in many ways.
The acronym stands for Virtual Desktop Infrastructure.

In its most basic description, this simply means that the operating system runs in a location that is separate from the end user who is interacting with it (and using the applications that run within it).

This is only one form of virtualization – a term that is itself becoming pretty broad reaching in the computer industry, encompassing many technologies and ways of presenting workloads.

In my definition I stated that the operating system runs in a location separate from the end user. What does this mean?
The operating system can be installed on a PC, or a blade system, or in a virtual machine. Most commonly these will be in some type of data center, but they don't have to be.

I need to mention that MSFT has recently muddied the waters by using the term Remote Desktop Services to describe both VDI and Terminal Services (and possibly the application formerly known as SoftGrid) - a very generic marketing term to encompass the many ways to use various virtualization technologies to get an application to a user. When it gets down to implementation and design, it is important to separate each of these individual virtualization technologies.

Technologies that loosely enable VDI have been around for years and vary greatly. Back in the stone ages of IT we had PCAnywhere and a modem: we would dial directly into an awaiting PC and use its desktop from some other location. Today we have a similar technology called GoToMyPC. These were great for very simple one-to-one relationships of user to desktop.

Over time all of that has grown up into the enterprise level products that we call VDI today. In today's scenario the relationship and control is far different. It could be many users to a single source desktop (desktop pool), or the more traditional one to one (CEO to specific desktop).
This has evolved out of the need for flexibility, control, and security. You no longer have to worry about the financial broker losing his laptop, as there is no data on it - it becomes 'just a laptop'.
Today, most VDI infrastructures have some basic, common components.

1) the end user
2) a control portal or broker
3) a transport / remoting protocol
4) the resulting target operating system

I don't think that I need to describe the end user.

The broker is the portion of the infrastructure that provides access control - the user is required to authenticate, the broker checks that an assigned resource is available, and then it connects the two together. It also monitors the back-end resources and sessions, prepares additional resources, and so on.

The transport is how the devices at the end user remote back into the OS, as well as how the console of the OS (plus mouse and keyboard) gets to the user. Again, back in the stone age there was VNC - and it is still around today. However, that basic KVM-style remoting is giving way to RDP and ICA, from Microsoft and Citrix respectively. These are the protocols, not the client applications that actually run at the remote OS or the client device.

The target operating system is the operating system that resides in the data center or on-premise device. It is here that the applications actually execute.

There is also the more traditional Terminal Services which is strictly session remoting and uses one server to run many individual instances of an application and possibly a desktop.
These two technologies directly cross over each other, and in many cases Presentation Server or Terminal Server is a better fit than a full VDI infrastructure.

What is required in implementing a VDI infrastructure?
Physical resources.
Places to run the workloads - hypervisor or blade systems.
Storage - that operating system needs to write and remember, as do the applications. In the case of pooled desktops, don't forget user profiles.

This entire article was prompted by a former co-worker of mine, Jeff Koch ('cook' that is). And I am sure that he will ask questions that force me to continue to expand.

Friday, August 21, 2009

Importing the virtual machine succeeded but with the following warning.

When importing a virtual machine to Hyper-V R2 you might see the following error dialog:

Importing the virtual machine succeeded but with the following warning.

Import completed with warnings.

I have seen this error quite a bit, and I must say that it is no reason for panic.  Your VM is safe.

If you open the error and read the detail, you will see what really went wrong.  (Click on that See details down arrow).

Well, psych.  That details box is rarely helpful – it simply points you to the proper event log – then you begin digging.

Each time I have seen this error the repair has been the same.  Simply open the settings of the virtual machine and connect the virtual NIC to the proper virtual network switch.
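If you prefer scripting that fix, a rough equivalent with the Hyper-V PowerShell module (which shipped after R2 – the VM and switch names here are illustrative) would be:

```shell
# Assumes the later Hyper-V PowerShell module; "ImportedVM" and
# "External" are placeholder names. Reconnects every virtual NIC
# of the imported VM to the named virtual switch.
Get-VM -Name "ImportedVM" | Get-VMNetworkAdapter |
    Connect-VMNetworkAdapter -SwitchName "External"
```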

Also, each time I have seen this the leading events have been the same.  I created the Export from Hyper-V v1 and I am importing to Hyper-V R2.

Thursday, August 20, 2009

Project Kensho Demonstration Videos

Here are the instructional videos for Project Kensho.

1. Installing the OVF Tool

2. Installing the XenServer-CIM

3. Using the OVF Tool (the Basics)

4. Using the OVF Tool (Advanced)

5. Using the OVF Tool with Hyper-V

Wednesday, August 19, 2009

Thinking in Workloads with OVF

Many of you realize that I am pretty close to the Citrix Project Kensho OVF Tool.

Frankly, I find it a very useful tool with some very useful features.

First of all – let me mention a bit about OVF again.  OVF is NOT a method of converting virtual machines.  OVF is a way to describe a virtual appliance using standardized XML tags, so that various platforms can consume and use that virtual appliance as it was defined.

A virtual appliance has traditionally been thought of as a single virtual machine (thank you, VMware).  However, a virtual appliance is actually a “workload.”

Many of you might realize that an enterprise application is rarely a single .exe file that simply runs on a desktop.  A very simple reporting application might be an executable, a SQL database, and even a document archiving system.  All of these entities grouped together are the workload.

It takes all of these pieces working together for the application to be fully functional and feature rich.  The Application Workload would be a better way to describe this.

In the same light there is a component that might participate in multiple workloads – the SQL server can serve databases to multiple front-end and back-end applications.  It would have the most complex relationship in this example.

This brings me back to the virtual appliance – the OVF is a description of the workload.  This example has that defined as two servers and one client.
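A trimmed, hypothetical OVF 1.0 envelope for the two-server portion of such a workload could look like this – the names are illustrative, not from a real package, but the elements (Envelope, References, VirtualSystemCollection, VirtualSystem) come from the DMTF standard:

```xml
<!-- Hypothetical, trimmed OVF envelope: one package describing a
     two-server workload. All names are illustrative. -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="disk1" ovf:href="app-server.vhd"/>
    <File ovf:id="disk2" ovf:href="sql-server.vhd"/>
  </References>
  <VirtualSystemCollection ovf:id="ReportingWorkload">
    <Info>The application workload: front end plus database</Info>
    <VirtualSystem ovf:id="AppServer">
      <Info>Front-end application server</Info>
    </VirtualSystem>
    <VirtualSystem ovf:id="SqlServer">
      <Info>Back-end SQL database server</Info>
    </VirtualSystem>
  </VirtualSystemCollection>
</Envelope>
```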

If you are the person creating the package, you might leave the client out of the package, or deliver the client executable only as a component of the package – one that is not imported to a virtualization host as a virtual machine.

Some might call this creative thinking, but really it is just taking what the OVF allows and applying that to real situations.

The OVF standard (VMAN at the DMTF) is still evolving and changing.  And vendors are still working on compatibility and pushing those standards toward ever more complex designs.

It is because of this that not all vendors support each other – they have to choose to allow for consumption of other vendors’ implementations of the OVF standard.  Yes, this gets very complex and interwoven, and it creates a bummer for some folks who see OVF as the answer to virtual machine portability – when that portability has far more to do with the applications and operating systems within the virtual appliance than it does with the depths of an XML file.

Tuesday, August 18, 2009

Citrix Kensho releases 1.3

After what seems like months of work (pretty close), version 1.3 of Citrix Project Kensho releases with enhanced OVF capabilities.

Some of you are aware that Project Kensho is the Citrix set of tools developed at Citrix Labs in Redmond, WA.

The major features are support for creating and consuming OVF content with XenServer and Hyper-V (v1 and R2), and consuming VMware OVF packages.

There were a few technical hurdles along the way – not to mention adding OVF support into XenConvert with the 2.0 release.

You can find out more about it here:

All that I ask is that you download, use it, and report back to the forums.  Hopefully, no one finds an issue that I don’t already know about ;-)