Monday, November 4, 2013

Deploying XenDesktop using the SCVMM Service Template

The latest release of XenDesktop is now available as a Service Template for System Center Virtual Machine Manager.

I am assuming that my blog readers are already familiar with the concept of Service Templates, introduced with SCVMM 2012. 

A “Service” treats the applications and the OS as separate entities: they are layered on top of each other and deployed as a composed whole with intact references and dependencies.  The Template is the representation of those dependencies and relationships.  The Template is ‘deployed’ into a ‘Service’; the Service is the set of running machines.

For most of the past year we have been focused on simplifying the deployment of the XenDesktop infrastructure.  After all, there are enough decisions to make without having to spend one or two days installing operating systems, applications, and configuring them.  This is where the XenDesktop VMM Service Template comes in.

The whole idea is to take the monotonous tasks of deploying VMs, installing XenDesktop, and configuring those infrastructure machines, and reduce them to a few questions and some waiting time, freeing you up to do more important things.  At the end you have a distributed installation of XenDesktop – the license server, Director, StoreFront, and Controller – all connected and ready to deliver applications or desktops.

Why not give it a go?

If you just want to see what this is all about, take a look here:  http://www.citrix.com/tv/#videos/9611

If you head over to the XenDesktop download page, you will find a “Betas and Tech Previews” section.  In there you can download the XenDesktop Service Template zip package.  (The Service Template is the Tech Preview, not the version of XenDesktop.)

http://www.citrix.com/downloads/xendesktop/betas-and-tech-previews/system-center-service-template-tech-preview.html

By the way – there are four templates.  One template installs a scaled-out XenDesktop; another, an evaluation installation of XenDesktop.  You will also find Provisioning Server templates that likewise support a scaled-out or an evaluation installation.

After downloading the package, unzip it to a convenient location, then open up the SCVMM Console, Select the Library view, and click on the Import button in the ribbon. 

(You can always stop here and read the administration guide; it is short and has all the pretty screen shots that this post is missing.)

Browse to the XML file in the package you just unzipped. 

Then map your generalized Server 2012 (or Server 2012 R2) VHD / VHDX to the package by selecting the pencil icon (a red X appears when it is mapped – don’t ask me why a red X).

Just like the generalized virtual disk, if you want SCVMM integration enabled, then place the SCVMM installation ISO in your VMM Library and select that pencil icon to create the mapping.

The Custom Resource should be uploaded from the package and contains the Citrix parts.

There is a really short import video here if you don’t want to read all of that.

After you import, you can deploy the XenDesktop infrastructure by simply right clicking the template and selecting Configure Deployment.  Answer a few pertinent questions, select Refresh Preview for SCVMM to place the machines, and select Deploy Service.  The name you give your Service will also become the name of your XenDesktop Site.
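
If you prefer PowerShell over the console, the same Configure Deployment / Deploy Service flow can be sketched with the VMM cmdlets.  This is a sketch only – the template name, service name, and host group here are placeholders, not names from the package:

```powershell
Import-Module virtualmachinemanager

# Find the imported Service Template (the name is an example - use the one you imported)
$template = Get-SCServiceTemplate -Name "XenDesktop Scale-Out"

# Build a deployment configuration; the Service name also becomes the XenDesktop Site name
$config = New-SCServiceDeploymentConfiguration -ServiceTemplate $template `
            -Name "XDSite01" -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")

# Let VMM place the machines (the 'Refresh Preview' step), then deploy
Update-SCServiceConfiguration -ServiceConfiguration $config | Out-Null
New-SCService -ServiceConfiguration $config
```

The deployment itself still prompts for the template's required settings unless you also answer them in the configuration, so treat this as the skeleton of an automated deployment, not a complete one.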

Now, go to lunch.  When you return, connect to the console of the Controller VM, open Studio, and begin publishing desktops.

You can watch a shortened version of the deployment process.

The requirements are no different than for any version of XenDesktop.  There needs to be: a domain to join with DNS, a SQL Server, a VM Network from which the machines can reach those resources, a RunAs account for the (first) XenDesktop administrator, and a user account for XenDesktop to integrate with SCVMM.

As time passes you may decide that you need additional StoreFront Servers or Desktop Controller servers.  To do that, select your Service in the VMs and Services view, right click, and Scale Out.  Select the tier and go.  Additional Controller capacity is created for you and added to the Site; StoreFront requires some additional configuration so you can tailor the load balancing to your environment.

If you need to see that one, I have a video for that as well.

If you need support, you can get that in the XenDesktop forums, we will be there to help and respond to questions.

Please, give us feedback and let us know what you think.

Wednesday, October 30, 2013

Modifying Hyper-V Generation2 VM boot device order with PowerShell

Yes, I know, some of you are looking at this and thinking, that has to be simple.  Or, “just use the GUI”.

Well, I can tell you.  The new Generation 2 VM introduces some interesting thinking to the world of Hyper-V.

First of all, let me drop this idea:  resource references / resource definitions / resource paths – or as Hyper-V calls it “FirmwarePath”

Okay, let’s look at what we have.

In Hyper-V 2012 I used:

PS C:\Users\Foo> Get-VMBios gen2r2
Get-VMBios : A parameter is invalid. Generation 2 virtual machines do not support the VMBios cmdlets.  Use Get-VMFirmware and Set-VMFirmware instead.

Whoops.  Not going to set that in the VM BIOS.  At least there is some good guidance in the error message (I like that).

PS C:\Users\Foo> Get-VMFirmware gen2r2

VMName SecureBoot PreferredNetworkBootProtocol BootOrder
------ ---------- ---------------------------- ---------
Gen2R2 Off        IPv4                         {File, Drive, Drive, Network}

Okay, easy enough.  Before, we just fed in a string and changed the order.  But before I do that, let me just avoid that error altogether and dig deeper.

PS C:\Users\Foo> $gen2r2 = Get-VMFirmware gen2r2
PS C:\Users\Foo> $gen2r2.BootOrder

VMName BootType Device                                       Description          FirmwarePath
------ -------- ------                                       -----------          ------------
Gen2R2 File                                                  Windows Boot Manager \HD(2,GPT14FD3F49-A5D7-4B1E-97EF-C...
Gen2R2 Drive    Microsoft.HyperV.PowerShell.HardDiskDrive    EFI SCSI Device      \AcpiEx(VMBus,0,0)\VenHw(9B17E5A2-...
Gen2R2 Drive    Microsoft.HyperV.PowerShell.DvdDrive         EFI SCSI Device      \AcpiEx(VMBus,0,0)\VenHw(9B17E5A2-...
Gen2R2 Network  Microsoft.HyperV.PowerShell.VMNetworkAdapter EFI Network          \AcpiEx(VMBus,0,0)\VenHw(9B17E5A2-...

Wait.  Those are objects, device references.  In the CIM world they are Resource References.  Very interesting.

But, all I want is to set my VM to PXE boot. 

And I am going to do this the long-hand way just for example – because the order has to be changed by feeding the objects in.  I am assuming that bunches of you can sort that out in various ways and will gladly leave that in the comments.  :-)

Let’s capture the objects:

PS C:\Users\Foo> $genFile = $gen2r2.BootOrder[0]
PS C:\Users\Foo> $genNet = $gen2r2.BootOrder[3]
PS C:\Users\Foo> $genHD = $gen2r2.BootOrder[1]
PS C:\Users\Foo> $genDVD = $gen2r2.BootOrder[2]

Now, let’s set those back, in the order I want them:

PS C:\Users\Foo> Set-VMFirmware -VMName Gen2R2 -BootOrder $genNet,$genFile,$genHD,$genDVD
PS C:\Users\Foo> Get-VMFirmware gen2r2

VMName SecureBoot PreferredNetworkBootProtocol BootOrder
------ ---------- ---------------------------- ---------
Gen2R2 Off        IPv4                         {Network, File, Drive, Drive}

Let me see snazzy ways that you script this to change the boot order.
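
Here is one shortcut: 2012 R2 also gave Set-VMFirmware a -FirstBootDevice parameter, so you can promote the network adapter without rebuilding the whole order by hand.  A sketch, assuming the VM has a single network adapter:

```powershell
# Promote the synthetic network adapter to the first boot device;
# the remaining entries keep their relative order.
Set-VMFirmware -VMName gen2r2 -FirstBootDevice (Get-VMNetworkAdapter -VMName gen2r2)
```

Note that Get-VMNetworkAdapter must return exactly one adapter for this to work; filter it if the VM has several.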

(BTW - VMM 2012 R2 does not let you do this)

Friday, October 25, 2013

SCVMM Service deployment and NO_PARAM server: NO_PARAM: NO_PARAM

I have to say.  This particular error is my favorite of all time (so far).

Here is the scenario:

  • I deploy a Service from a Template.
  • I wait.
  • The Job fails.
  • I check the SCVMM Job log and see something resembling this:

Error (2912)

An internal error has occurred trying to contact the NO_PARAM server: NO_PARAM: NO_PARAM.

NO_PARAM

Recommended Action

Check that WS-Management service is installed and running on server NO_PARAM. For more information use the command "winrm helpmsg hresult". If NO_PARAM is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running.

Error (20400)

1 parallel subtasks failed during execution.

Error (2912)

An internal error has occurred trying to contact the NO_PARAM server: NO_PARAM: NO_PARAM.

NO_PARAM

Recommended Action

Check that WS-Management service is installed and running on server NO_PARAM. For more information use the command "winrm helpmsg hresult". If NO_PARAM is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running.

Error (20400)

1 parallel subtasks failed during execution.

 

I can tell you from experience that this error has absolutely nothing to do with WinRM.  In fact, if you spend time there, it is wasted.

So, what happened?

In a nutshell: your script / installer ran, and it did not throw a single error.  Not one.  But something – anything – made it run longer than the timeout setting allowed, and SCVMM gave up waiting for the Exit Code 0 that signals your script finished.

Recall that there was no error, so SCVMM did not have one to pass back up the chain to you and put in the job log.  That is where all of this NO_PARAM business comes from.  Literally, no error was passed to something as a parameter, and that particular piece of code is simply stating that it didn’t receive one.

SCVMM then reports this as an error, pattern matches it, and attempts to give you some guidance around it – which is where the WinRM part comes from.

 

I first caused this to happen because my script was stalled on an open dialog box, waiting for someone to respond.  Since everything you define in your Service Template runs headless, there is no way to even know the dialog appeared – other than to log on to your VM and see that the script process is still running.

I have also seen this happen again when there is high disk IO causing the various installers or configuration scripts to actually run slower.

Those are a few clues as to why you see this in the first place, as it is a real mystery until you figure it out.  It took me a couple of weeks to sort it all out.  Now I avoid it – I spread my VMs across my hosts by selection.
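
To reduce the chances of hitting this, here is a sketch of the headless-safe pattern I use in Service Template scripts: force installers into silent mode, wait on the process, and leave a transcript behind so a stall can be diagnosed from inside the VM.  The installer path and switches below are placeholders:

```powershell
# Log everything this script does; if the SCVMM job times out,
# the transcript inside the VM shows how far the script actually got.
Start-Transcript -Path "C:\Windows\Temp\appInstall.log"

# Always pass the installer's silent switches - a dialog box will
# never be seen (or dismissed) when the script runs headless.
$proc = Start-Process -FilePath "C:\Install\setup.exe" `
          -ArgumentList "/quiet /norestart" -Wait -NoNewWindow -PassThru

"Installer exit code: $($proc.ExitCode)"
Stop-Transcript
```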

Monday, October 7, 2013

Exporting the VHD of a running VM with Hyper-V 2012

A co-worker recently asked me about how to clone / export a running VM on Hyper-V 2012.

My first reply was: “upgrade to Hyper-V 2012 R2 and it is built-in”. 

Unfortunately that didn’t meet his needs; he is stuck in the Hyper-V 2012 world for a bit.

I came up with a process – not a pretty process – that stays within all the parameters of file locking, doing things the way that you ‘should’, etc.

The key thing about wanting to ‘clone’ or export a VM is that you really want the virtual disk.  That is the ‘state’ of the machine.  The settings are easily copied and relatively incidental; the most important part is the virtual disk. 

I say that because this entire convoluted process is all about getting a very clean virtual disk state.  In this entire process, the settings of the machine (CPU, RAM, dynamic memory, virtual switch attachment, etc.) don’t matter.  And in the real world (outside of my little perfect test world) they really don’t matter until you Import.

Enough rambling on.  So, what is this process anyway?  In a nutshell it is:

If you take a snapshot of a VM, you can then add a differencing disk to the parent disk of the snapshot, create a VM from that, export that VM, then destroy the VM, then destroy the differencing disk.

Because this is not a snapshot, with the export Hyper-V gives you the differencing disk plus the parent.
If you exported a snapshot you get a single virtual disk, since Hyper-V does special things with AVHDX files.
If you want a single file, then you merge the diff that is in the export.

I know that some of my blog readers dream in command line, so here comes the PowerShell.

Special note:  This is specific to Hyper-V 2012 and works because of live merging and the built-in PowerShell provider.  Hyper-V 2012 R2 does not need all this mess, just take a snapshot and Export.  Hyper-V 2008 or 2008 R2 does not have a built-in PowerShell provider, but you could do all this with WMI.
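
For contrast, here is a sketch of the 2012 R2 version, which really is just two lines (Export-VMSnapshot exports a snapshot of the running VM as a standalone copy; the VM name and path are from my example):

```powershell
# Hyper-V 2012 R2 only: snapshot the running VM, then export that snapshot directly
Checkpoint-VM -Name "datest" -SnapshotName "clone"
Export-VMSnapshot -VMName "datest" -Name "clone" -Path "D:\Test"
```

Everything below is the 2012 workaround for not having that second cmdlet behave this way.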

$vm = get-vm "datest"

# I always want 'now' so we take our own snapshot
$checkPoint = Checkpoint-VM -VM $vm -SnapshotName "clone" -Passthru

# Create a differencing disk and link it to the disk of the snapshot.
$diffVhd = New-VHD -Differencing -ParentPath $checkPoint.HardDrives[0].Path -Path ("D:\Test\" + $checkPoint.Name + ".vhdx")

# If you really care about the exact configuration of your VM and want to Import it on the other side, then do the configuration only export using WMI:  http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/03/24/performing-a-configuration-only-export-import-on-hyper-v.aspx
# on the Import side you would 'fix-up' the configuration and use the merged new disk from later on in this example.  http://itproctology.blogspot.com/2012/08/handling-import-vm-errors-in-server.html

$clone = New-VM -Name $checkPoint.Name -VHDPath $diffVhd.Path

Export-VM -VM $clone -Path D:\Test -Passthru

Remove-VM -VM $clone -Force
Remove-Item $diffVhd.Path

$vhds = Get-ChildItem -Path D:\Test -Recurse -File -Include "*.vhd*" | Get-VHD

foreach ($vhd in $vhds) {
    # Merge the exported differencing disk down into its (exported) parent
    if ($vhd.VhdType -eq "Differencing") {
        Merge-VHD -Path $vhd.Path -Force
    }
}

I am going to mention it again.  I am using the Export process to get a clean virtual disk, not to have a proper VM configuration.

Use Ben’s Configuration Only export to get the configuration XML.  Then on Import use the Fix-up methodology to point to the new VHD.

Sounds like I need a second blog to put this all together.

Friday, October 4, 2013

Scripted installation of the SCVMM Console

This actually seems like it would be pretty straightforward.  However, a documentation bug that leaves out a critical switch leaves you guessing.

The following runs on a VM where the SCVMM ISO is attached.  Nothing more needs to be done beyond attaching the ISO to the VM and executing the script.  This is essentially totally hands-off.

The thing you will probably want to pay attention to is the command line switches for the Console installer.

$logFile = "$env:TEMP\ConsoleInstall.log"  # used by the catch block below
# CD-ROM selects anything in the DVD drive.  The size ensures that something is mounted.
$dvdDrives = Get-Volume | where {$_.DriveType -eq "CD-ROM" -and $_.Size -gt 0}
# Since a VM could have more than one DVD drive, and SCVMM might be using one for its own purposes we need to find the correct one.
foreach ($dvd in $dvdDrives){
    #test for the sample INI file in the right location to ensure this is the VMM media.
    Switch ([System.IntPtr]::Size)
    {
        4 {
            If (Test-Path -Path ($dvd.DriveLetter + ":\i386\") -PathType Container){
                $vmmMedia = Get-ChildItem -Path ($dvd.DriveLetter + ":\i386\Setup\") -recurse -Filter "VMClient.ini"
            }
        }
        8 {
            If (Test-Path -Path ($dvd.DriveLetter + ":\amd64\") -PathType Container){
                $vmmMedia = Get-ChildItem -Path ($dvd.DriveLetter + ":\amd64\Setup\") -recurse -Filter "VMClient.ini"
            }    
        }
    }
    If ($vmmMedia -ne $null){
        If (Test-Path $vmmMedia.FullName){
            $FilePath = (Get-ChildItem -Path $vmmMedia.PSDrive.Root -Filter "Setup.exe").FullName
        }
    }
}
if ($FilePath) {
    try {
        "Starting SCVMM Console installation."
        Get-Date -Format HH:mm:ss
        Start-Process -FilePath $FilePath -ArgumentList "/client /i /IACCEPTSCEULA" -Wait -NoNewWindow
        "Done waiting for the installer"
        Get-Date -Format HH:mm:ss
        Start-sleep 30
        "SCVMM Console installed."
        Get-Date -Format HH:mm:ss
    }
    catch {
        $Error |  Out-File $logFile -Append
    }
}
else{ Write-Error -Category ObjectNotFound -Message "The SCVMM Installation media was not detected." -RecommendedAction "Please manually install the SCVMM Console" }

Thursday, September 12, 2013

PowerShell to enable Remote Desktop for Administration on the local machine

I had a teammate request that I enable Remote Desktop for Administration as a portion of my SCVMM Service Template.

You cannot script sconfig – although that is an easy manual way to do it.

If you try any of the Server 2012 cmdlets you will end up mucking with Remote Desktop Services and enabling user access.

Well, it turns out the key is a key.  And it is easiest to tweak it with WMI.

The following script runs on the server that is being modified (localhost is the default).  And it can run using administrator or local system security credentials.

try {
    $RDP = Get-WmiObject -Class Win32_TerminalServiceSetting `
                        -Namespace root\CIMV2\TerminalServices
                        # -Computer $Computer `
                        # -Authentication 6 `
                        # -ErrorAction Stop
} catch {
    "WMIQueryFailed"
    return
}
if($RDP.AllowTSConnections -eq 1) {
    "RDP Already Enabled"
    return
} else {
    try {
        $result = $RDP.SetAllowTsConnections(1,1)
        if($result.ReturnValue -eq 0) { "Enabled RDP Successfully" }
        if ($result.ReturnValue -eq 4096) {
                $Job = [WMI]$Result.Job
                while ($Job.JobState -eq 4) {
                    Write-Progress -Id 2 -ParentId 1 $Job.Caption -Status "Executing" -PercentComplete $Job.PercentComplete
                    Start-Sleep 1
                    $Job.PSBase.Get()
                }
        }
    } catch {
        "Failed to enable RDP"
    }
}

Wednesday, September 4, 2013

PowerShell to disable IE Enhanced Security

So, my employer has a number of web consoles for various applications.

This is fine, except for pesky IE Enhanced Security.

So, to automatically disable this for members of the local Administrators group just comment out the User section from the script below.

Now, before you reply that I should be adding the URL to the exclusion list and all that: this is so much simpler.  Why?  Because I don’t have to worry about a shortcut having localhost vs. the FQDN in it.

This one section of my script runs and Administrators are happy.  After all, these are servers.  And outside of hitting a local console once or twice or applying updates, they should not even be logged in locally, right?

# Disable IE Enhanced Security Configuration for Administrators and Users for web consoles
try {
    $AdminKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}"
    $UserKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}"
    Set-ItemProperty -Path $AdminKey -Name "IsInstalled" -Value 0
    Set-ItemProperty -Path $UserKey -Name "IsInstalled" -Value 0
    Stop-Process -Name Explorer
    "IE Enhanced Security Configuration (ESC) has been disabled on this machine."
}
catch { "Failed to disable IE ESC" }

Friday, August 30, 2013

Zip files and folders with PowerShell

One of the more frustrating things in this day of PowerShell v3 is not having cmdlets that can simply manipulate ZIP archives.

It is right there in the Windows GUI, but is it easy to automate?  Nope.

In fact if you search around you will find lots of different ways to handle this, one getting more complex than the next.  You will also find community projects that attempt to do the same thing.

One common reference that I ran across was this:

http://blogs.msdn.com/b/daiken/archive/2007/02/12/compress-files-with-windows-powershell-then-package-a-windows-vista-sidebar-gadget.aspx

From the spectacular David Aiken. I have to admit, it is not the first time he has saved my bacon.

His solution is built in, no funky community add-ins, nothing strange I can’t follow, and best of all it is compact – it is really small.

I have already had to do some things with Shell.Application with PowerShell, so I figured I would give it a shot.

Well, I immediately ran into some issues. 

One, his use of –Recurse.  Not necessary.  Use Get-ChildItem and pipe in the folder, and the entire folder is zipped.  Perfect.  So you can work the input just about any way you like and it will simply pass whatever you pipe to it.

Two, file locking.  This tripped me up for hours.  And lots of other folks that have found his solution, too.  His little 500 millisecond wait before advancing to the next file is simply not reliable, nor flexible based on varying file sizes.

I found all kinds of folks commenting on the same thing and developing all kinds of fancy solutions to handle it.  But, in the end I found something really simple, and it was buried right there in the shell.application all along.  Simply test for the existence of the item you are zipping in the zip archive.

So simple.  One little do loop.  With my crafty Until ( $zipPackage.Items() | Where-Object { $_.Name -eq $file.Name } )

And, I only modified the Add-Zip, since David already had a fail safe to create the .ZIP if it didn’t already exist.

I am going to update my use of his usage example as well.

In my case I have a number of XML files.  Each is in a unique Folder.  There are other files and folders with the XML files as well (this is an OVF, for you OVF fans).

I want to create the .ZIP one level up from the folder where the XML is.

$ovfFolder = Get-Item $xmlFile.PSParentPath 

$zipFullName = $ovfFolder.Parent.FullName + "\" + $xmlFile.BaseName + ".zip"

Get-ChildItem $ovfFolder | Add-Zip $zipFullName

Now, below is my modification to the original Add-Zip function.

function Add-Zip  # usage: Get-ChildItem $folder | Add-Zip $zipFullName
{
    param([string]$zipfilename)

    if(!(test-path($zipfilename)))
    {
        set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
        (dir $zipfilename).IsReadOnly = $false   
    }
    $shellApplication = new-object -com shell.application
    $zipPackage = $shellApplication.NameSpace($zipfilename)
    foreach($file in $input)
    {
        $zipPackage.CopyHere($file.FullName)
        do {
            Start-Sleep 2
            # Loop until the item actually shows up inside the zip archive
        } until ( $zipPackage.Items() | Where-Object { $_.Name -eq $file.Name } )
    }
}

Wednesday, August 28, 2013

PowerShell to test if a network connection is up and on the domain

If you have noticed I have been spending a lot of time working with deployments, deploying, and scripting configurations.
In fact, I have spent nearly two years, off and on, working on this in various ways and permutations from Windows Azure VMRole (the now dead non-persistent one) to SCVMM Service Templates.
The thing that makes this type of scripting unique is that the scripts are executed within the OS of the VM, not externally from some manager that uses a PowerShell remoting session or the like.
This means that each script has no knowledge of anything beyond the boundaries of the OS where the script is running.
Now, I assume that many of you are aware of the Hyper-V Synthetic Nic, and that the Synthetic NIC driver comes to life later in the boot process (not in 2012 R2 generation 2 VMs, but that is different).
The problem is one of timing.  Your script could be running prior to your network being awake and functional.
Here is a little script that I use to test my domain joined machines prior to continuing when I have a need for domain connectivity (such as executing a command using a domain credential).

Do {
    Start-Sleep 5  # don't spin; give the NIC time to come up
    $upTest = Get-NetConnectionProfile | where {$_.IPv4Connectivity -ne "NoTraffic"}
} until ($upTest.NetworkCategory -eq "DomainAuthenticated")
If you want to take this to the next level, you can identify the IP address and physical NIC (say you have multiple NICs and you need to bind to the IP of the domain NIC, or to the NIC itself, in some configuration):

$mgmtNetProfile = Get-NetConnectionProfile | where {$_.NetworkCategory -eq "DomainAuthenticated" }  # Assuming only one NIC is domain joined.
$mgmtNetIpAddress = Get-NetIPAddress -InterfaceIndex $mgmtNetProfile.InterfaceIndex -AddressFamily IPv4

Monday, August 19, 2013

Hyper-V WMI v2 porting guide

Ben Armstrong of the Hyper-V team just released (on the TechNet Wiki) a v2 namespace porting guide.

To all of my DevOps and developer friends out there, this is a highly useful guide.  Because if you have not noticed, moving from the v1 namespace to the v2 is not a simple change of the namespace.

And if you have not heard yet, the v1 namespace is GONE with the 2012 R2 release.

So, spend some time over at the TechNet Wiki page;

http://social.technet.microsoft.com/wiki/contents/articles/19192.hyper-v-wmi-v2-porting-guide.aspx

and then go over to Taylor Brown’s blog and check his updates of his v1 namespace examples to the v2 namespace:

http://blogs.msdn.com/b/taylorb/

Thursday, July 25, 2013

Disabling the BIN save state file on Hyper-V

This is a follow-up to my post: http://itproctology.blogspot.com/2013/07/the-forgotten-bin-element-in-hyper-v.html

Over time I have seen lots of posts from folks wanting to turn this off.

Well, that became possible with Hyper-V 2012.  And it is actually totally hidden; you might not even know you are turning it off.

First, in the previous post I mentioned why the BIN file (I refer to it as the save state file) exists in the first place.

So, what if you turn them off?

Well, you turn them off and you lose the protection they offer. The safety net if you like. Think about that, consider the risk.  I did my job and warned you.

Before we jump in and disable it let's look at the settings option related to it first.

Per this post by Ben Armstrong, this is rather simple: http://blogs.msdn.com/b/virtual_pc_guy/archive/2012/03/26/option-to-remove-bin-files-with-hyper-v-in-windows-8.aspx

Set-VM -AutomaticStopAction Save

This is the default and what you get without thinking. The safety net is on.

Set-VM -AutomaticStopAction ShutDown

This will cause Hyper-V to attempt a clean shut down of the VM, or click it off if it can’t.

Set-VM -AutomaticStopAction TurnOff

This will just click off your VMs, and give you that wonderful "why did I not shut down cleanly" dialog later on.

According to Ben’s post; if you set either ShutDown or TurnOff, Hyper-V no longer creates the BIN file as a placeholder.

Now, if you take a snapshot of a running VM, you get one as a part of the snapshot (it is the actual saved memory state, not a placeholder). But it won't be there from the moment you turn on your VM.

In case you didn’t know how to use Set-VM: Get-VM -Name "Foo" | Set-VM -AutomaticStopAction ShutDown

 

Now, what if you want something a bit more useful, or you have bunches of VMs?

If you are using SCVMM:

Import-Module virtualmachinemanager

$prefix = Read-Host "Enter prefix to search on"

$vms = get-scvirtualmachine | where {$_.name -match $Prefix}

foreach ($vm in $vms) {
    # check if its status is clean in VMM, and clear it if not
    if ($vm.Status -eq "UpdateFailed") {Repair-SCVirtualMachine -VM $vm -Dismiss}
    if ($vm.VirtualMachineState -eq "Running") {Stop-SCVirtualMachine -VM $vm -Shutdown}

    # wait for the darn thing to cleanly shut down
    # (re-read the object each pass; VMM objects do not refresh themselves)
    Do {Start-Sleep 30; $vm = Get-SCVirtualMachine -ID $vm.ID} until ($vm.VirtualMachineState -eq "PowerOff")
    if ($vm.StopAction -ne "ShutdownGuestOS"){Set-SCVirtualMachine -VM $vm -StopAction ShutdownGuestOS}

    if ($vm.StartAction -ne "TurnOnVMIfRunningWhenVSStopped"){Set-SCVirtualMachine -VM $vm -StartAction TurnOnVMIfRunningWhenVSStopped}

}

With the Hyper-V cmdlets the script would be changed very little.

If you are using Hyper-V cmdlets:

$prefix = Read-Host "Enter prefix to search on"

$vms = get-vm | where {$_.name -match $Prefix}

foreach ($vm in $vms) {
    if ($vm.State -eq "Running") {Stop-VM -VM $vm}

    # wait for the darn thing to cleanly shut down
    Do {Start-Sleep 30} until ($vm.State -eq "Off")
    if ($vm.AutomaticStopAction -ne "ShutDown"){$vm | Set-VM -AutomaticStopAction ShutDown}

    if ($vm.AutomaticStartAction -ne "StartIfRunning"){$vm | set-vm -AutomaticStartAction StartIfRunning}

}

SCVMM UR3 causes VMM Service to crash on Placement

I am sharing this because it burned me, and why should it burn anyone else.

This should have the sub-title of: “you need to read the release KB articles in deep detail” or “just don’t install Update Rollup 1 in the first place and you will be happier”.

I recently jumped straight into Update Rollup 3 for SCVMM 2012 SP1.  Because I had reported some bugs and the fixes were in there.

So, I downloaded the update rollup and I applied it to my VMM Console and to my VMM Server.

I then pushed out new agents to my remote Library Server and my Hyper-V Servers.  All was good.  Or so I thought.

I then deployed a Service Template.

While SCVMM was sorting out where to place the VMs in my Service, the VMM Service crashed.  Okay, so I tried again (not expecting a different result, but paying attention, to figure out what was going wrong).

In the event log of the SCVMM Server I found the following message:

Log Name:      Application
Source:        Windows Error Reporting
Date:          7/24/2013 12:15:47 PM
Event ID:      1001
Task Category: None
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      beSCVMM.brianeh.local
Description:
Fault bucket , type 0
Event Name: VMM20
Response: Not available
Cab Id: 0
Problem signature:
P1: vmmservice
P2: 3.1.6018.0
P3: Engine.Placement
P4: 3.1.6027.0
P5: M.V.E.P.C.VMDCConversionHelper.GetVMDCPrecheckResources
P6: System.MissingMethodException
P7: bf35
P8:
P9:
P10:
These files may be available here:
Analysis symbol:
Rechecking for solution: 0
Report Id: 711cf4e8-f495-11e2-9404-00155d289b00
Report Status: 262144
Hashed bucket:


So, I removed UR3 (the VMM Server KB…510).  I removed the hosts and remote library server from VMM management and added them back.  I fixed up all of my templates (Export them and Import them to fix them quickly).

I opened a case.

In the end, what was the root problem?   I didn’t uninstall Update Rollup 1.

Seriously?

Come to find out, buried in the details of this KB: http://support.microsoft.com/kb/2802159 is the installation information that Update Rollup 1 for SCVMM should be uninstalled manually prior to installing Update Rollup 2 (or any following Update Rollup).

Personally, I never read that particular article prior to installing UR2.  So I never knew to uninstall UR1.  And it is highly unusual to have to manually uninstall an Update Rollup prior to adding a second.

So, I then uninstalled UR1.  I re-applied UR2.  Then I installed UR3.  And all was happy.

Now, one easy way to check for this: 

If you select About from the ribbon of the SCVMM console you will see a version.  If you have UR3 installed it should be 3.1.6027.0 

If you have UR2 installed it should be 3.1.6020.0 

If you never manually uninstalled UR1 it will remain at 3.1.6018.0 after applying UR2 or UR3.
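
A quick way to check the same thing without opening the console is to ask the VMM server for its version.  A sketch; run it in the VMM PowerShell window:

```powershell
Import-Module virtualmachinemanager
# Per the versions above: 3.1.6018.0 = UR1 still present,
# 3.1.6020.0 = UR2, 3.1.6027.0 = UR3
(Get-SCVMMServer -ComputerName localhost).ProductVersion
```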

The forgotten BIN element in Hyper-V Storage planning

This is an issue that VDI deployments always run into.

The symptom is this:  I deployed bunches of VDI VMs.  Folks begin using them.  And a week goes by.  Suddenly I notice that almost all of the reserve storage that I had planned on is gone. 

I don’t have any snapshots happening.  I check that the backup software isn’t leaving weird VSS backups or temporary files lying around.  I even look for AVHDs (though I know there should not be any).

So what happened?

I go looking around and I see all these .BIN and .VSV files.

The VSV is a small placeholder.

However, the BIN is equal in size to the amount of RAM that the running VM is consuming.  If you have Dynamic Memory enabled, it equals the amount of Assigned Memory – so it changes.

Therefore, with Dynamic Memory on, you design for the worst-case scenario – calculate against the Maximum Memory setting of every VM stored on that particular LUN.

What this will cause you to do is:

  1. Realize the impact that this BIN file has on storage.
  2. Be smart about setting the Maximum Memory of your VMs.
  3. Or understand the risk, accept it, and push forward without making any changes.
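The worst-case math is simple enough to sketch (the VM memory settings here are hypothetical, just to show the shape of the calculation):

```powershell
# Hypothetical Maximum Memory settings (in GB) of the VMs stored on one LUN
$maxMemoryGB = @(4, 4, 8, 2)

# Worst case with Dynamic Memory: every BIN file can grow to its VM's Maximum Memory
$binReserveGB = ($maxMemoryGB | Measure-Object -Sum).Sum
"Plan for $binReserveGB GB of BIN files on this LUN"   # -> 18 GB
```

That reserve is on top of the VHD/VHDX storage you already planned for.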

If you have read this far you might be thinking: what is this BIN file for anyway?

It is a safety net for your VM(s).

It is a placeholder that reserves storage on the file system, so that if the hypervisor needs to reboot, it can quickly dump the memory of your VM(s) to disk.

This does not help you if the hypervisor crashes due to something like a bad driver; this is Hyper-V being able to respond to an event and save your VMs – such as when you run out of storage and Hyper-V puts your VMs into a saved state.  That is one use.

Wednesday, July 24, 2013

Turn off SCVMM console auto logon

I am sure that everyone has, at some time, checked that little box on the SCVMM console credential prompt: “Automatically connect with these settings”.


Only to find, at some time in the future, that you want to turn it off.

Well, it surely isn’t obvious how to uncheck that box. 

If you are super fast and you capture the window and attempt to click it, you will discover that the opportunity never exists.  It is grayed out, blocked.

If you go looking for .ini or .config files you will come up dry.  Anything in the user profile – again, you are dry.

So, what is left?  The registry.

Here it is:

Under HKEY_CURRENT_USER\Software\Microsoft\Microsoft System Center Virtual Machine Manager Administrator Console\Settings\Shared in the registry you will find “autoConnect”

Simply modify that from “True” to “False” and you will once again be prompted for credentials.
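You can make the same change in one line of PowerShell rather than regedit. This is a sketch; run it as the user whose console setting you want to change (the registry path is the one given above):

```powershell
# Flip the SCVMM console autoConnect flag back to False
$key = 'HKCU:\Software\Microsoft\Microsoft System Center Virtual Machine Manager Administrator Console\Settings\Shared'
Set-ItemProperty -Path $key -Name 'autoConnect' -Value 'False'
```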

Because who wants to log out of and back in to Windows as different users all the time just to act in different RBAC roles in SCVMM?

Not I.

And what if I want to have multiple consoles open, each as a different user?  I need to uncheck it for that case as well.

Wednesday, July 3, 2013

Importing a PFX file with PowerShell

The PFX format is great because it includes a certificate and the private key as a single package.

This lets you create a certificate on one machine and then replicate that around for a number of purposes.

Now, this is not the first PowerShell script that handles PFX files.  But one problem that I have found with many is that they are functions that can’t just run on their own – and they don’t actually import the private key!

Here is a simple script that you can execute and it checks its execution location for any PFX files and prompts the person running the script for the password to the PFX file.

The assumption is that the PFX file needs to be in the LocalMachine Personal ( or Root) store.

"Looking for included *.pfx.."
# take the first .pfx found next to the script
$certFile = Get-ChildItem | Where-Object {$_.Extension -eq ".pfx"} | Select-Object -First 1
if ($certFile -ne $null) {
    "Discovered a .pfx. Installing " + $certFile.Name + " in the LocalMachine\My certificate store.."
    $pfxPwd = Read-Host -Prompt "Please enter the password for your PFX file " -AsSecureString
    # PersistKeySet is what actually keeps the private key after import
    $pfxCert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($certFile.FullName, $pfxPwd, "Exportable,PersistKeySet")
    $store = Get-Item Cert:\LocalMachine\My
    $store.Open("MaxAllowed")
    $store.Add($pfxCert)
    $store.Close()
}

BTW – I have been sitting on this post for a really long time.  I just found it in my drafts.


Monday, June 24, 2013

Passing and parsing @ServiceVMComputerNames@

In my previous post I mentioned the undocumented Service Settings that SCVMM will automatically fill in for you and pass to your Application Scripts.

But, how can we pass these?

Some are easy, you just pass them like any other setting since they are relatively short strings.

For example, in testing things for this article I passed @ServiceVMComputerNames@, @ComputerName@, @ServiceName@, @VMID@, and @ServiceID@.  I had no idea how long these might be or what they might look like.

My Test Service had two tiers.  One tier with two VMs, one tier with one VM.  It looks like this:


When I deployed the Service I named it “blah”, I have a tier named “The Tier” and another named “The Other Tier”, and three VMs named “xVM01”, “xVM02”, and “yVM01” (SCVMM applied the numbers using the ## notation). 

Within VM xVM01 I sent and captured all of the settings I have mentioned.

What I got out was:

  • @ServiceVMComputerNames@ = “The Tier[xVM01,xVM02]The Other Tier[yVM01]”
  • @ComputerName@ = “xVM01”
  • @ServiceName@ = “blah”
  • @VMID@ = “26fd4a55-a707-4fba-89b5-c6955e4e05a2”
  • @ServiceID@ = “741fbf99-e676-4a8b-9df7-096c0be1fd3e”

These short service settings you can safely pass using:  myscript.ps1 –paramName @VMID@ or myscript.cmd @VMID@

There are lots of examples about that.

It is @ServiceVMComputerNames@ that could get really long and in turn make the command line too long to execute.  So, this one I pass in a bit differently.  To accommodate the length I pipe the setting to my PowerShell script (as I blogged about here).

In the Service Template designer this looks like:


My script receives the object as a pipeline and writes it out.

Within the script:

Param
(
    [parameter(Mandatory = $false, ValueFromPipeline = $true)]
    $serviceNames = ""
)

$startTime = Get-Date
$logPath = "$env:ProgramData\testPath\"
# write out the string for debugging
$serviceNames | Out-File ($logPath + $startTime.Second + $startTime.Millisecond + ".txt")

Then I can just keep reusing this snip to see what I end up with.

What I in turn do with this input is I parse it into an XML document that I later re-use with other SCVMM Application Scripts.

Param
(
    [parameter(Mandatory = $false, ValueFromPipeline = $true)]
    $serviceNames = ""
)

$logPath = "$env:ProgramData\MyService\"

# make the path so Out-File will be happy
If (!(Test-Path -Path $logPath -PathType Container)){
    New-Item -Path $logPath -ItemType Container -Force
    Start-Sleep 10  # so the OS can register the path
}

# Parse the Service information from SCVMM
$tiers = $serviceNames.Split(']')

$service = New-Object System.Xml.XmlDocument
$root = $service.CreateElement("ServiceTemplate")
$root.SetAttribute("version","1.0")
$root.SetAttribute("createon",(Get-Date))
$root.SetAttribute("createdby","brianeh")

foreach($tierString in $tiers) {

    if($tierString){  #ignore any empties
        $tier = $service.CreateElement("Tier")

        $e = $tierString.Split('[')

        $tier.SetAttribute("Name",$e[0])

        $VMs = $e[1].Split(",")

        foreach ($vmString in $VMs){
            if($vmString){
                $vm = $service.CreateElement("VM")
                $vm.SetAttribute("Name",$vmString)
                # append inside the if, so an empty VM name does not re-append the previous node
                $tier.AppendChild($vm)
            }
        }

        # append inside the if, so an empty tier string does not append a stale element
        $root.AppendChild($tier)
    }
}
$service.AppendChild($root)

$service.Save(($logPath + "ServiceNames.xml"))

There you have it, nice and neat XML.  And your scripts have a clue about themselves.
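A later Application Script can then read the saved document back and walk it. A minimal sketch of that follow-on step – the XML is inlined here so the snippet stands alone, but in a real script it would come from the ServiceNames.xml file written above:

```powershell
# In a real Application Script this would be:
#   [xml]$service = Get-Content "$env:ProgramData\MyService\ServiceNames.xml"
[xml]$service = '<ServiceTemplate version="1.0"><Tier Name="The Tier"><VM Name="xVM01" /><VM Name="xVM02" /></Tier><Tier Name="The Other Tier"><VM Name="yVM01" /></Tier></ServiceTemplate>'

# list the members of a specific tier, comma-separated
($service.SelectNodes("//Tier[@Name='The Tier']/VM") | ForEach-Object { $_.Name }) -join ','   # -> xVM01,xVM02
```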

Friday, June 21, 2013

SCVMM hidden system Service Settings

Lately I have been spending a great deal of time with the System Center Virtual Machine Manager Service model and building Service Templates.

Just understanding what the Service model is, and how it takes the SCVMM VM composition model and brings it up to the enterprise / distributed application level, helps you understand where MSFT is headed.  To continue this evolution, take a look at v1 of Desired State Configuration, announced with Server 2012 R2.  But that was a bit off topic.

In the title I mention “system” Service Settings.

If you have spent any time building SCVMM Service Templates you know that you can define many settings by using the “@MySetting@” notation.  When you deploy your Template and it becomes a running Service, whoever is deploying it is prompted to fill in the Settings you have defined.

This is excellent: now I have the ability to build a template, give it to you, and you can personalize it for your environment.  I have one example back here where I built a template for the first domain controller in a domain.

Now, that is all fine and dandy.  But what if you need to know ‘something’ about all the other Tiers in the Service?  Such as: “Who are the machines in Tier Q?”  Why?  “Because I need to configure an application on this machine to talk to them.”

Before going farther, understand that when SCVMM executes the application scripts in a Service Template as it is being deployed, the scripts within a single machine all run inside the bubble of that machine.  They only know about themselves.  But most enterprise applications need to know about something other than themselves.  These are big, scaled-out, distributed applications with multiple server roles that all need to talk to each other.

Well, this is where (courtesy of a MSFT PM, who gave approval for me to write this) I ran across some undocumented Service Settings.  These are Service Settings that SCVMM fills in during the “Refresh Preview” instead of prompting your customer, and processes during the deployment.

In my last post I wrote about turning a string into XML.  My sample string was from the Service Setting; @ServiceVMComputerNames@.

  • @ServiceVMComputerNames@ – Each Tier in the Service and the ComputerName within them.  In my mind, this is the handiest.  And you can get very similar information from the Azure Service.Runtime within a PaaS VM.

This gives you a string with the name of each Tier in your Service and the ComputerNames within it.  It is handy information.  In the follow-up post I will give my entire solution to passing and parsing this.

Other system Service settings that SCVMM can directly pass include: 

  • @ComputerName@ – the computer name assigned to the VM.
  • @ServiceName@ – the name of the deployed Service the VM is a member of
  • @VMID@ – the GUID of the VM in SCVMM
  • @ServiceID@ – the GUID of the Service in SCVMM

There may be more.  As far as I can tell, these are not documented at this time.  And it takes a bit of work to discover them and the formatting.

Thursday, June 13, 2013

Turning a string into an XML document with PowerShell

This is one of those things that always seems like it should be really easy, and straightforward.

Well, if you are deeply familiar with XML it might be.  And if you can pry your string apart into an array using split or regex, you have half of the problem tackled.

Let’s begin by looking at my string.

$s = "Machine Tier - ScaleOut[Machine03,Machine04,Machine05]Machine Tier[Machine02]"

Ugly thing.  That actually contains three elements of data.

The entire string represents a Service.  The data outside the brackets represents a Tier.  The data inside the brackets is a list of VMs in that Tier.

First, I need to break it all apart.

$a = $s.Split(']');
foreach($b in $a) {
    $c = $b.Split('[');
    "Tier = " + $c[0];
    "VMs = " + $c[1];
}

Tier = Machine Tier - ScaleOut
VMs = Machine03,Machine04,Machine05
Tier = Machine Tier
VMs = Machine02
Tier =
VMs =

Okay, that is close, but not quite there. And still not usable.

First, let’s do something about that empty last item in the array.

$a = $s.Split(']');
foreach($b in $a) {
    if($b){
        $c = $b.Split('[');
        "Tier = " + $c[0];
        "VMs = " + $c[1];
    }
}

And now, break apart the VM name string

$a = $s.Split(']');
foreach($b in $a) {
    if($b){
        $c = $b.Split('[');
        "Tier = " + $c[0];
        $VMs = $c[1].Split(",")
        foreach ($vm in $VMs){
            "VM = " + $vm
        }
    }
}

Tier = Machine Tier - ScaleOut
VM = Machine03
VM = Machine04
VM = Machine05
Tier = Machine Tier
VM = Machine02

Okay, now I have something that is usable.  And now I want to turn that into XML.

I have spent a great deal of time searching on PowerShell and XML.  Trying to figure out how to build an XML using PowerShell on the fly in my script.  All the examples always start with a TXT based framework of some type that is in turn manipulated by the author.  Or a file is read, or objects are queried.

I am sorry, but that is not an example of generating XML using PowerShell, as the titles suggest.  It is a huge formatted text object that is manipulated.  Such a frustrating example to hit over and over again.

Well, I just have this silly string I parsed apart.  And I want that to be XML.  It already has meaning to the pieces, I just need to make them tags and whatnot.

I mentioned early on that I don’t know much about XML.  I read it, transpose it, consume the XML of others – I never made my own.  So I had to visit the wisdom of a developer friend and make sure that I was doing it ‘correctly’ and it was ‘proper’.

In its simplest sense, to create an empty XML document in PowerShell do this: “$service = New-Object System.Xml.XmlDocument” and you have an XML document.  But how do you put things into it?

Okay, this is all about objects, and object manipulation.  You don’t just add tags in.  You create element objects through $service and you add them back to $service in the correct order.

I began with this:

$service = New-Object System.Xml.XmlDocument
$tier = $service.CreateElement("Tier")
$tier.SetAttribute("Name","My Test Tier")

$vm = $service.CreateElement("VM")
$vm.SetAttribute("Name","My VM Name")

$tier.AppendChild($vm)
$service.AppendChild($tier)

I create the XML Document $service.  Then I use $service to create an Element, and define a Name and value as an Attribute of that Element.  I repeat this and create a $vm element as well. 

If you query $service, you find that these things aren’t there.  They are three separate objects at this point.  They all share the same owning document, $service.  But nothing more.  Now I assemble them together.

I take the $tier object and I add the $vm object to it as a child.  This nests <VM> under <Tier> in the XML.  I then repeat this, adding the updated $tier object to $service as a child.

The above is fine enough.  However, I was informed that I was missing a root element to define the Document.

$service = New-Object System.Xml.XmlDocument
$root = $service.CreateElement("RootElement")

$tier = $service.CreateElement("Tier")
$tier.SetAttribute("Name","My Test Tier")

$vm = $service.CreateElement("VM")
$vm.SetAttribute("Name","My VM Name")

$tier.AppendChild($vm)
$root.AppendChild($tier)
$service.AppendChild($root)

I have been informed that simply doing what I just showed above is still not quite good enough.  It meets the requirement of a root element but totally misses the intent or spirit.  We will get back to that.

So, what does this XML document look like?  Well, you can step through $service and try to imagine it in your head, or you can send it out to a file and open it in Notepad.

$service.Save(".\service.xml")

Open that up and you have:

<RootElement>
  <Tier Name="My Test Tier">
    <VM Name="My VM Name" />
  </Tier>
</RootElement>

Now.  I have some XML.  And I am feeling pretty proud of myself.

Why did I ever do this in the first place?  So I could do this:

PS C:\Users\Public\Documents> $service.GetElementsByTagName("Tier")

Name                                                     VM
----                                                     --
My Test Tier                                             VM

PS C:\Users\Public\Documents> $service.GetElementsByTagName("VM")

Name
----
My VM Name

Now I have associations that I can look for and query against.

Now, the fun part, meshing those two different activities together as one.  And I have the following:

$tiers = $s.Split(']')

$service = New-Object System.Xml.XmlDocument
$root = $service.CreateElement("RootElement")

foreach($tierString in $tiers) {

    if($tierString){  #ignore any empties
        $tier = $service.CreateElement("Tier")

        $e = $tierString.Split('[')

        $tier.SetAttribute("Name",$e[0])

        $VMs = $e[1].Split(",")

        foreach ($vmString in $VMs){
            if($vmString){
                $vm = $service.CreateElement("VM")
                $vm.SetAttribute("Name",$vmString)
                # append inside the if, so an empty VM name does not re-append the previous node
                $tier.AppendChild($vm)
            }
        }

        # append inside the if, so an empty tier string does not append a stale element
        $root.AppendChild($tier)
    }
}
$service.AppendChild($root)

Now, back to that really lazy root element I created.  In practice, that should be some meta information about the XML document itself.  If you look at lots of XML you will see things like creation dates, versions, authors, and a name that is somehow descriptive.

After I create the $root object (with a better name) I just update it with a few attributes and I am good to go.

$root.SetAttribute("version","1.0")
$root.SetAttribute("createon",(Get-Date))
$root.SetAttribute("createdby","brianeh")

Now, a really short example of what I can now do with this information.

# query for a specific VM element with a Name equal to "Machine02"
# SelectNodes returns a node list; Item(0) pulls the matching node itself out of that list

$me = ($service.SelectNodes("//VM[@Name='Machine02']")).Item(0)

# What Tier do I belong to?
$me.ParentNode.Name

# Do I have Siblings or am I an only Child?
$me.NextSibling
$me.PreviousSibling
$me.ParentNode.ChildNodes.Count

Note:  be careful.  These queries are in XPath format, and they are case sensitive.

You can also simply walk the XML as PowerShell supports that as a ‘.’ notation path.
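For example, using the small document built above, the ‘.’ notation walk looks like this (the [xml] cast is just a shortcut to rebuild the same document from a string):

```powershell
# Rebuild the example document and walk it with dot notation
[xml]$service = '<RootElement><Tier Name="My Test Tier"><VM Name="My VM Name" /></Tier></RootElement>'

$service.RootElement.Tier.Name      # -> My Test Tier
$service.RootElement.Tier.VM.Name   # -> My VM Name
```

Note that the adapted Name property returns the Name attribute here, not the element’s tag name.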

Friday, June 7, 2013

Handling freakishly long strings from CMD to PowerShell

This is one of those “quick, blog it before you forget it all” type of posts.  As well as “hey, I think I get it” and “the things you learn”.

I can thank a particular Microsoft PM for even causing me to look into this.  Never would I have expected that I would need to be prepared to handle an “extremely long” string.  What do I mean by extremely long?  Over 8200 characters, that is what I mean.

To quote the individual that caused me to spend time figuring this out: “I don't recommend this .. because <string in question> can expand to a very large string that will overflow the max # of chars you can pass from cmd.”

That is my entire point here.  I am not just in the PowerShell state of euphoria.  I have some process that is invoking PowerShell as a command calling my script and feeding it parameters.  Open a command prompt and type “PowerShell.exe /?” for a flavor of this world.

Well, what is that maximum number?  How big could this be?

Open a command window and start typing.  When you think you are done, keep going.  You will hit a limit just below 8,200 characters (the cmd.exe command-line limit is 8,191 characters).  If you try to pass a string to a script and output it, you will be there.  I did this:

.\SampleScript.ps1 my incredibly long string where I kept typing and copy and pasting until I ran out of characters.

Within the script I captured the input in a special way and output the length and the string back to a file.  Note the param property “ValueFromRemainingArguments” (below); if this is not there, each space in that string gets treated like a new argument and within the script you end up with $args as an array.

Param (
   [parameter(Mandatory = $false, ValueFromRemainingArguments = $true)]
   $ComputerNames = ""
)
$ComputerNames.getType() | Out-File C:\users\Public\Documents\ComputerNames.txt
$ComputerNames.length | Out-File C:\users\Public\Documents\ComputerNames.txt -Append
$ComputerNames | Out-File C:\users\Public\Documents\ComputerNames.txt -Append

Also buried in this correspondence with this individual was a suggestion to write an executable and capture the string using stdIn, parse it, and then invoke PowerShell from there. 

Well, I am becoming a bit of a PowerShell junkie, and I am a lot more comfortable there than with C#.  And I have to think about the poor soul that has to take care of this project when I finish.  Why give them some hack of an exe to take care of?

Let’s back up.  StdIn.  What in the world is that?  I have never run across that with PowerShell.  I am definitely not a developer either.

The best way that I can describe StdIn is as a read stream.  Instead of passing in a string, a stream is passed in and parsed when the end of the stream is received.  After talking to a developer cohort, I learned that I actually use StdIn in PowerShell quite frequently.  Pipelining “in” uses StdIn.

So doing this:

$someString | .\ComputerNames.ps1

Uses the StdIn method for inputting the data.

But wait.  Okay, that was a bit of noise about StdIn and cmd limits.  What about the long strings?

Okay, pipeline. 

My test was a 24,000+ character string.

$someString = "However you want to make this really, incredibly long.  Keep going."
$someString.Length
$someString | .\SampleScript.ps1

But you need to change that param line so that it takes the pipeline.

Param
(
[parameter(Mandatory = $false, ValueFromPipeline = $true)]
$ComputerNames = ""
)

If you want to see what happens the other way, go back and try.  If you keep it all in PowerShell it works.  But if you call the PowerShell script from cmd it truncates due to the original issue I was warned about.

So, in the end, I did not have to write some exe that only parses this input, I actually used PowerShell instead.

Here is how I invoke:

%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command "'%ComputerNames%' | .\SampleScript.ps1 -otherArg 'gibberish'"

Here is my script:

Param
(
[parameter(Mandatory = $false, ValueFromPipeline = $true, ValueFromRemainingArguments = $true)]
$ComputerNames = "",
[parameter(Mandatory = $true)]
[string]$otherArg = ""
)

$otherArg | Out-File C:\users\Public\Documents\ComputerNames.txt

$ComputerNames.getType() | Out-File C:\users\Public\Documents\ComputerNames.txt -Append
$ComputerNames.length | Out-File C:\users\Public\Documents\ComputerNames.txt -Append
$ComputerNames | Out-File C:\users\Public\Documents\ComputerNames.txt -Append

Thanks for reading.  If you ask why, my only answer is: because I have to, I have no other option.  We don't always have interactive sessions at our disposal.  Sometimes we are headless scripts running under some workflow engine.

==

My developer friend just uncovered the following.  It does not support additional parameters to the script, but it may be able to handle strings that are longer yet:

File  Blah.ps1
$c = [Console]::In.ReadToEnd()
"Here"
$c

In use you still take your long input and pipe it to the script, but the act of ‘reading in’ the input is more literal (for lack of a better description).

Wednesday, June 5, 2013

Discovering and initializing a data volume at VM provision

A few posts back I wrote about using PowerShell to find the DVD drive that a particular installer was attached to and then running that command line installer.

To take that a bit further I have a VM that I am deploying, and that VM has an empty VHDX attached to it.

This VHDX is on the second IDE controller (it needs to be available early in the boot process of the OS).  When I provision this VM the first time, I want to find, online, initialize, and format that virtual disk.

$disk = Get-Disk | where {($_.BusType -eq "ATA") -and ($_.IsBoot -eq $false)}

(You could also find the disk if it was on the SCSI controller)

$disk = Get-Disk | where {$_.BusType -eq "SCSI"}

And now for mounting, and formatting the disk.

Set-Disk -InputObject $disk -IsOffline $false

Set-Disk -InputObject $disk -IsReadOnly $false

Initialize-Disk -InputObject $disk -ErrorAction SilentlyContinue -WarningAction SilentlyContinue

$partition = New-Partition -InputObject $disk -UseMaximumSize –AssignDriveLetter

$partition | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Cache" -Force:$true -Confirm:$false

And now you simply continue along with your scripting.

The reason that I capture the new partition to $partition is that there is lots of useful stuff in there for configuring things moving on.  Little things like $partition.DriveLetter are highly useful.
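For example, the drive letter lets you build paths on the new volume immediately. A trivial sketch – $partition is stubbed here with a hypothetical drive letter so the snippet stands alone, but in the script above it comes straight from New-Partition:

```powershell
# $partition would come from New-Partition above; stubbed here so the snippet stands alone
$partition = [pscustomobject]@{ DriveLetter = 'E' }

# build a working path on the freshly formatted volume (folder name is hypothetical)
$cachePath = $partition.DriveLetter + ':\CacheData'
$cachePath   # -> E:\CacheData
```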

Tuesday, May 21, 2013

Enabling Network Virtualization using SCVMM Run Script Command

So, if you are in the Fabric node of SCVMM 2012 SP1 you might happen to right click on a Hyper-V Server and notice the option “Run Script Command”.

This is great, it lets me push commands out to my Hyper-V Servers and run them, just like I do with Application Scripts in Service Templates.

I recently ran into a situation where I needed to enable the Network Virtualization binding on all of my Hyper-V Servers (seven of them).  And I had no desire to open a PSSession to each and run my script block that enables it.

Since I discovered this Script Command option in SCVMM, why not use it?  I am here already.

Like any good SCVMM admin I defined the command and then looked at the PowerShell script behind it.

(The big problem here is that I don’t see a way to save these Script Commands and run them again.  Big fail there.)

I then took that script and made it a bit more useful to me by getting all of my Hyper-V hosts and running my command on all of them at the same time.

And I figured, ‘why not’ – this is the one step that all the SCVMM Network Virtualization setup is missing.  Frankly, I would expect SCVMM to just do this for me, but alas SP1 does not.

Here is my execution:

import-module virtualmachinemanager

$scriptSetting = New-SCScriptCommandSetting

Set-SCScriptCommandSetting -ScriptCommandSetting $scriptSetting -WorkingDirectory "" -PersistStandardOutputPath "" -PersistStandardErrorPath "" -MatchStandardOutput "" -MatchStandardError ".+" -MatchExitCode "[1-9][0-9]*" -FailOnMatch -RestartOnRetry $false -AlwaysReboot $false -MatchRebootExitCode "" -RestartScriptOnExitCodeReboot $false

$RunAsAccount = Get-SCRunAsAccount -Name "DomainAdmin"

$VMHosts = Get-SCVMHost | where { $_.VirtualizationPlatform -eq "HyperV" }

$scriptBlock = {

   $vSwitch = Get-VMSwitch -SwitchType External

   # pipe the switches so each one is processed individually
   # (ForEach-Object -InputObject would hand the whole collection over as a single item)
   $vSwitch | ForEach-Object {

      if ((Get-NetAdapterBinding -ComponentID "ms_netwnv" -InterfaceDescription $_.NetAdapterInterfaceDescription).Enabled -eq $false){

         Enable-NetAdapterBinding -InterfaceDescription $_.NetAdapterInterfaceDescription -ComponentID "ms_netwnv"

      }

   }

}

ForEach ($VMHost in $VMHosts) {

   Invoke-SCScriptCommand -Executable "%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe" -TimeoutSeconds 120 -CommandParameters "-ExecutionPolicy RemoteSigned -command $scriptBlock" -VMHost $VMHost -ScriptCommandSetting $scriptSetting -RunAsAccount $RunAsAccount

}

Be sure to have the SCVMM Console installed, and modify your RunAs account name.

That Set-SCScriptCommandSetting line is all about the error handling for the script.  The only non-default setting I have in here is making sure that my hypervisors were not rebooted, no matter what happened.

Tuesday, May 7, 2013

The IP of the NIC on a particular domain or address space

I have this VM.  This VM has multiple interfaces.  One interface is _the_ management interface and it is the one that the VM used when it joined my domain.

I want to discover this NIC.  I then want its IP address so I can embed that into a configuration file.

The initial list of questions that I began with were:

  • Am I domain joined?  If so, what domain?
  • What network connection reports that domain?
  • What NIC is that?
  • What is the IPv4 address on that NIC?

I have discovered that there are multiple ways to handle this.

First of all, asking the questions; Am I joined and what domain am I joined to.

$env:USERDNSDOMAIN or Get-WmiObject -Class Win32_ComputerSystem | select domain

Both give you results, but different objects back.

BRIANEH.LOCAL (a string) vs.

domain                (a portion of a WMI object)                                                                                                
------

brianeh.local

Now, let’s look at the question of what NIC is on what domain, or connected to any domain.  If you only have one NIC connected to any domain (or that can resolve any domain) you can probably simplify this to:

$netProfile = Get-NetConnectionProfile | where {$_.NetworkCategory -eq "DomainAuthenticated" }

The alternate to that is to be more verbose (or precise if you like).

$netProfile = Get-NetConnectionProfile | where { $_.Name -eq $env:USERDNSDOMAIN }

What you get back is a Connection Profile object.  And to some this looks familiar, you know that status that you see on the network icon in your system tray?  The one that says if you have internet connectivity, or what DNS domain is discovered?  This is the information behind that.

I have two and they look like this:

Name             : Unidentified network
InterfaceAlias   : Ethernet 2
InterfaceIndex   : 13
NetworkCategory  : Public
IPv4Connectivity : LocalNetwork
IPv6Connectivity : LocalNetwork

Name             : brianeh.local
InterfaceAlias   : Ethernet
InterfaceIndex   : 12
NetworkCategory  : DomainAuthenticated
IPv4Connectivity : Internet
IPv6Connectivity : LocalNetwork

In the example I went straight to the selection of the desired one.

Now, how do I get to the IP you might wonder.  One more step.

$netIpAddress = Get-NetIPAddress -InterfaceIndex $netProfile.InterfaceIndex -AddressFamily IPv4

Again, I went straight to the IPv4 filter.  I used the Network Profile object captured as $netProfile and its InterfaceIndex property.  But, as you play with the Network IP Address object you will see that it can be selected multiple ways.

And if you only want the IP address itself, take that Network IP Address object and select only the IPv4Address property.

$netIpAddress.IPv4Address

And there you have it.  Now, off to building my configuration file…

Wednesday, May 1, 2013

Discovering programs on a disk to run silent installers

I recently worked through my version of a PowerShell snip that has to discover an installer and run it.

In my case that installer is on an ISO that is attached to a VM.  And my script runs within the VM.

Now, this should be easy, just find the DVD drive, and run the installer.

Well, as always happens, it is not that easy in my case.  Come to find out, I could have multiple ISOs attached at the moment my script is running, and I need to detect the correct one.  I also have to make sure that the installer exists before I try to run it and cause an error.  (There is always a need for error handling.)

If I didn’t have this mess I could just assume that my ISO is D:\ and put in the path and move on.

First, ask yourself if the ISO will ALWAYS be on D:\?  If not, then you need to find the DVD drives and specifically those that have something mounted.  From within the running OS, not at some VM management layer.

I do that with the following:

# CD-ROM selects anything in the DVD drive.  The size ensures that something is mounted.
$dvdDrives = Get-Volume | where {$_.DriveType -eq "CD-ROM" -and $_.Size -gt 0}

I also have different installers for x86 and x64, so I have to detect the ‘bit-ness’ of the OS and smartly choose the correct installer.  A bit of searching turned me to this reliable way:

Switch ([System.IntPtr]::Size)
{
    4 {
        # x86
    }
    8 {
        # x64
    }
}
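One caveat worth noting (my addition): [System.IntPtr]::Size reports the bitness of the running PowerShell process, not necessarily of the OS — a 32-bit PowerShell host on a 64-bit OS returns 4.  Where that distinction matters, .NET 4 exposes the OS bitness directly:

```powershell
# Queries the OS itself, regardless of whether this process is 32- or 64-bit.
If ([Environment]::Is64BitOperatingSystem) {
    # x64
} Else {
    # x86
}
```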

Now, I have to build the path so that later on in my script I can run it.

If (Test-Path -Path ($dvd.DriveLetter + ":\Installer\") -PathType Container){
    $InstallMedia = Get-ChildItem -Path ($dvd.DriveLetter + ":\Installer\") -recurse -Filter "ServerInstaller.exe"
}

And, before attempting to run it, I need to test that the file really exists, and capture the literal path to the installer.  Because how I got here was by testing for the folder structure, not by searching for the individual files (which would take a lot longer).

If ($InstallMedia -ne $null){
    If (Test-Path $InstallMedia.FullName){
        $InstallPath = $InstallMedia.FullName
    }
}

The entire script:

# CD-ROM selects anything in the DVD drive.  The size ensures that something is mounted.
$dvdDrives = Get-Volume | where {$_.DriveType -eq "CD-ROM" -and $_.Size -gt 0}

# Since a VM could have more than one DVD drive, we need to find the correct one.
foreach ($dvd in $dvdDrives){
    Switch ([System.IntPtr]::Size)
    {
        4 {
            If (Test-Path -Path ($dvd.DriveLetter + ":\x86\") -PathType Container){
                $Media = Get-ChildItem -Path ($dvd.DriveLetter + ":\x86\") -recurse -Filter "ServerSetup.exe"
            }
        }
        8 {
            If (Test-Path -Path ($dvd.DriveLetter + ":\x64\") -PathType Container){
                $Media = Get-ChildItem -Path ($dvd.DriveLetter + ":\x64\") -recurse -Filter "ServerSetup.exe"
            }    
        }
    }

    If ($Media -ne $null){
        If (Test-Path $Media.FullName){
            $FilePath = $Media.FullName
        }
    }
}

Start-Process -FilePath $FilePath -ArgumentList "/quiet" -Wait -NoNewWindow
"Done waiting for the installer"
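One addition worth considering (my sketch, not part of the original script): have Start-Process hand back the process object so the installer's exit code can be checked, rather than assuming success.

```powershell
# -PassThru returns the process object; -Wait blocks until the installer exits.
$proc = Start-Process -FilePath $FilePath -ArgumentList "/quiet" -Wait -NoNewWindow -PassThru
If ($proc.ExitCode -ne 0){
    Write-Error ("Installer failed with exit code " + $proc.ExitCode)
}
```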

Tuesday, March 26, 2013

Converting VHD to VHDX and back without Hyper-V

So.  I am really rehashing another script that I already did a few posts ago.

If you look back here: http://itproctology.blogspot.com/2013/01/powershell-for-reducing-size-of-vhd.html

If you begin with a virtual disk that is a VHD.

And you alter this one line:

$newVhdPath = $orgvhd.ImagePath.Split(".")[0] + $partNum +"New." + "vhdx"

You end up with a VHDX instead of a VHD.  Go backward, flip it around. 

Server 2012 / Windows 8 make some built-in assumptions based on the file extension: you get the VHD format that the extension defines.  That is why you cannot simply rename the file extension.

What got me started on this?  Well, my original post was because I was annoyed with certain cmdlets being only bound to the existence of the Hyper-V virtual machine management service (having Hyper-V installed).  Since the OS storage layer knows about these virtual disks, why can’t I natively manipulate them?
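For reference, the cmdlet I am working around here is Convert-VHD, which requires the Hyper-V role and its virtual machine management service to be present:

```powershell
# Only works where Hyper-V is installed; the target format is inferred from the extension.
Convert-VHD -Path "D:\VMs\MyVhd.vhd" -DestinationPath "D:\VMs\MyVhd.vhdx"
```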

And then this post came through the Hyper-V forums: http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/thread/51984c52-cc69-459f-8999-6ca186d48931

Again, folks with my same original complaint.  It requires Hyper-V.  Ugh.

So. Let’s make this script a one trick pony.  And ONLY let it convert virtual disk formats.

Now, that said.  This is a hack folks.  I did it because I can, and the tools are built in to the OS.  DISM does not convert disks or volumes, it converts partitions.  So you get one virtual disk per partition. (Frankly, who partitions disks anymore anyway?)

# Server 2012 / Windows 8 VHD / VHDX converter.  Using DISM.
# Prototyped on Server 2012 not running Hyper-V
# This utilizes PowerShell v3 and cmdlets from Server 2012 / Windows 8 - it will not run on any older OS.
# Copyright – Brian Ehlert

# Ask for the path of the VHD and test it
Do {
    $imagePath = Read-Host "Please enter the full path to your VHD.  i.e. D:\VMs\MyVhd.vhd "
} until ((Test-Path -Path $imagePath) -eq $true)

# Mount the VHD that will be converted
$orgVhd = Mount-DiskImage -ImagePath $imagePath -PassThru
$orgVhd = Get-DiskImage -ImagePath $orgVhd.ImagePath

# Get the partitions
$orgParts = Get-Partition -DiskNumber $orgVhd.Number

# Use the DISM command line to capture the VHD to one WIM file, each partition with a different image name.
$wimName = ($orgvhd.ImagePath.Split(".")[0] + ".wim") 
foreach ($part in $orgParts) {
    if ($part.Size -gt 524288000){ # skip the partition if it is less than 500MB, most likely there is no OS or it is the System Reserved partition.
        $capDir = $part.DriveLetter + ":\"
        $partNum = $part.PartitionNumber
        "Be patient, this could take a long time"
        & dism /capture-image /ImageFile:$wimName /CaptureDir:$capDir /Name:$partNum
    }
}
# dismount the VHD that was just captured
Dismount-DiskImage -ImagePath $orgvhd.ImagePath

# Change the extension
Switch ($orgvhd.ImagePath.Split(".")[1]){
    vhd {$diskFormat = "vhdx"}
    vhdx {$diskFormat = "vhd"}
}

foreach ($part in $orgParts)
    {
    if ($part.Size -gt 367001600){ # skip the partition if it is less than 350MB.
        $capDir = $part.DriveLetter + ":\"
        $partNum = $part.PartitionNumber
        $newVhdPath = $orgvhd.ImagePath.Split(".")[0] + $partNum + "." + $diskFormat           
        $newSize = (([uint64]$orgVhd.Size) /1024 /1024)
        $diskPart = @"
        create vdisk file="$newVhdPath" type=expandable maximum=$newSize
        select vdisk file="$newVhdPath"
        attach vdisk
        create partition primary
        active
        format fs=ntfs quick
        assign
"@
        $diskPart | diskpart
        $newVhd = Get-DiskImage -ImagePath $newvhdPath
        $newVhdDrive = (Get-Partition -DiskNumber $newVhd.Number)
        $newVhdLetter = (Get-Partition -DiskNumber $newVhd.Number).DriveLetter + ":"         
        "Be patient, this could take a long time"
        & dism /apply-image /ImageFile:$wimName /ApplyDir:$newVhdLetter /Name:$partNum
        New-PSDrive -PSProvider FileSystem -Name $newVhdDrive.DriveLetter -root ($newVhdDrive.DriveLetter + ":\") # Make the new volume known to your PowerShell session
        # if \Windows then assume a boot volume and create the BCD
        if ((Test-Path -Path ($newVhdLetter + "\Windows") -PathType Container) -eq $true)
        {
            bcdboot $newVhdLetter\Windows /s $newVhdLetter
        }
        Remove-PSDrive -PSProvider FileSystem -Name $newVhdDrive.DriveLetter
        Start-Sleep 10  # Settling time
        Dismount-DiskImage -ImagePath $newVhdPath
    }
}

Wednesday, February 13, 2013

SCVMM Service Template for the first DC in a Forest – the bits

So here goes nothing.  Why not hand it over just to see how this Service Template Export and Import process really works.

In theory, you simply download my Service Template and then Import it.  It will give you the custom application resource folder and the scripts.

You need to bring the Server 2012 evaluation VHD image and place it in your Library.

Be sure to hook your logical network settings, connect to the VHD image.

When you configure your deployment you should only need to fill in the domain FQDN, the NetBIOS name, and the recovery password.  Then click go. 

That is the theory.

Why not give it a go?

I ask one thing though: if you download and try it, please leave comments and feedback about the experience.

Thanks!

BATCH Script Domain Controller Service Template

PowerShell Script Domain Controller Service Template

Tuesday, February 12, 2013

SCVMM Service Template for the first DC in a Forest – part 2

Okay, so I posted the traditional way of handling this, with a BATCH file.

But, in reality all I did was use a BATCH file to in turn process a PowerShell script.  I considered this silly.  There must be a way to process the PowerShell script without having to use the BATCH script.

I mean, come on.  This is Server 2012 I am using, and PowerShell v3.  Yes, I know there are some advanced things that can be done with BATCH scripting (I have done some in my history), but think out of the box here.

So, I spent bunches of time playing around with this (so you wouldn’t have to (if you stumbled on my post)).

In the end, it wasn’t that difficult, just had to think about things a bit differently.

Oh, and one important thing I left out of my previous post.  Use a local administrator Run As account to supply the local administrator credentials to the OS, and the same Run As account a second time to process the scripts.

So, here is the script the PowerShell way:

param (
[string]$domainName,
[string]$netbiosName,
[string]$safeModePass
)

# Build a domain controller and the test domain.

# Add the RSAT tools
Add-WindowsFeature RSAT-AD-Tools

# Add the features
Add-WindowsFeature AD-Domain-Services -IncludeAllSubFeature -IncludeManagementTools
Add-WindowsFeature DNS -IncludeAllSubFeature -IncludeManagementTools
Add-WindowsFeature GPMC -IncludeAllSubFeature -IncludeManagementTools

# convert the password to a secure string as required
$secPass = ConvertTo-SecureString -String $safeModePass -AsPlainText -Force

# Create the Forest and Domain
Install-ADDSForest -CreateDnsDelegation:$false -DomainMode Win2012 -DomainName $domainName -DomainNetbiosName $netbiosName -ForestMode Win2012 -InstallDns -Force -SafeModeAdministratorPassword $secPass

I know what you are thinking: that can be shortened.  And my reply: yes, it can.  And you advanced folks, go right ahead.
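For those who want to try, here is one possible shortening (my sketch, untested in a Service Template deployment): combine the feature installs into a single call and splat the forest parameters.

```powershell
param (
    [string]$domainName,
    [string]$netbiosName,
    [string]$safeModePass
)

# One call installs all the features at once.
Add-WindowsFeature RSAT-AD-Tools, AD-Domain-Services, DNS, GPMC -IncludeAllSubFeature -IncludeManagementTools

# Splat the forest parameters for readability.
$forestParams = @{
    DomainName                    = $domainName
    DomainNetbiosName             = $netbiosName
    DomainMode                    = 'Win2012'
    ForestMode                    = 'Win2012'
    InstallDns                    = $true
    CreateDnsDelegation           = $false
    SafeModeAdministratorPassword = (ConvertTo-SecureString -String $safeModePass -AsPlainText -Force)
    Force                         = $true
}
Install-ADDSForest @forestParams
```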

Now, in the Application Configuration of the Tier in the Service.  Two pre-install scripts.

The first pre-install script is to set script execution to RemoteSigned:

The executable program is: %WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe

And the Parameters are: -command set-executionpolicy remotesigned -force 

(I don’t have a Run As account defined BTW).

The second pre-install script is everything above.  But those are included in the Custom Resource Package as a .ps1 file.

The executable program is the same.  The Parameters are different: -file .\DomainController.ps1 @DomainName@ @DomainNetbiosName@ @SafeModeAdministratorPassword@

And the Run As account is my local admin run as account profile.  And the timeout needs to be turned up to about 600 seconds.

That is it.  I tried it a few times.  It works.