Thursday, August 20, 2015

Enabling Hyper-V causes continuous reboot

This is a quick post to pull together a symptom that I am seeing in the TechNet forums, spread across multiple threads.

I will update this post as things unfold.  If you have this issue, please follow this post or the thread I mention below.

A bit of background: Hyper-V takes advantage of hardware virtualization features.
As new releases come out, it is not unusual for the platform to take advantage of some hardware feature that is not properly or fully implemented in hardware.  This pattern has played out in past releases.

Now, I am not being critical by pointing that out.  What I am saying is this:
Step 1: check for BIOS updates from your system / motherboard manufacturer and update the BIOS.

As revealed in this thread, Windows 10 x64 Pro infinite reboot loop with Hyper-V enabled, this pattern has played out again.

There is a hardware feature that Windows 10 newly takes advantage of: the IOMMU.
According to Wikipedia, an IOMMU is a memory management unit that connects a DMA-capable I/O bus to main memory; on Intel platforms it is implemented as VT-d.
Quite honestly, I see folks with i7s reporting this problem.

That said, I have long mentioned that manufacturers release processors in families.  And within a family a feature may exist, but not across all processors in the family.  So you must always check your particular chipset with the manufacturer to ensure that your chipset actually implemented the feature that you think you have.

Manufacturers do this so that they can offer a range of prices for end products.  Be mindful of that.

I bring that up because I cannot tell you how many times I have helped folks with an issue that turned out to be related to them thinking they had a feature (the motherboard implemented it) when the chipset actually lacked it (the particular processor they had didn't implement it, even though others in the family did).

Now, here is the latest advice from MSFT folks:

There is a known issue where the machine will fail to boot when Hyper-V is installed but DEP/NX/XD is disabled in BIOS. You mentioned that you have enabled these options, but you are continuing to see the same problem.
One other thing we can try is disabling the IOMMU policy and see if that helps. (The hypervisor's usage of the IOMMU device by default is new in Windows 10, and might explain why you are seeing this only on Windows 10).
You can disable IOMMU usage by the hypervisor by running the following command from an elevated cmd window & rebooting your machine:
bcdedit /set {default} hypervisoriommupolicy disable
Can you try this and let me know if it helps?
If you can also share msinfo32 information with us, that will be helpful with the investigation.

  • One poster reported resolution when they simply disabled Data Execution Prevention and enabled it again (this requires a cold boot).

That said, the IOMMU has actually been around for a long time.  And it most likely has not been taken advantage of until now, so I can understand the recommendation of disabling its use in the boot configuration.
So, when do you set it?  Before or after enabling Hyper-V?
Did you update your BIOS?

As MSFT reports this issue to manufacturers, the BIOS update will become more relevant.
If you want to understand how important this is, search the Hyper-V forum for "Sony Vaio" - Sony chose to release a system that reported chipset virtualization as enabled when in fact it was not.

More to come.
If something works for you, please post your details in the comments.


Wednesday, July 22, 2015

Docker Containers to Images

I am still learning containers and what Docker provides to aid in managing them.
It seems that each time I revisit doing things with containers, I discover something new and curiously wonderful.

I totally understand why folks get excited about containers, why developers love them so much, why operations folks should totally love Hyper-V containers, and, and...  There I go getting excited.

If you have read my previous posts on containers, I tried to relay some conceptual ideas of what a 'container' actually is.  It is not a process, it is not a VM, it is not a session.  It is a little bit of all of them, which is what makes describing a container not a straightforward thing.

And, you will recall that a container is the running state of an image.  And an image is a flat file structure that represents the application and everything it needs to run.

A container is more than a running copy of an image.  It is that image, plus all the settings you gave it when you told Docker to run it - create container 'foo' from image 'bar' with all of these settings.

The Docker tutorials really don't cover this well.  They just toss you out there and say, pull this image, run a container from this image, look - you did it.  Conceptually, there is a lot happening that Docker abstracts away, saves for you, manages for you (which is why folks have caught onto Docker).

All of those settings that you give are meta information that define that container.

After you run that container (with all of those settings defined) you can simply stop it.  Then when you start it later, all of those run parameters that you defined are magically applied out of the configuration - you never have to define all of those parameters again.

If you then stop your container and then commit that container to a new image, all of that meta information is saved.

If you inspect a container or an image you can see all of this meta information that defines what happens when that container is started or that image is run.

Then, if you share this image and someone else runs it, they get all of your defined configuration applied.
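To make the image/container relationship concrete, here is a minimal Python sketch of the data model as I understand it. The class and field names are my own illustration, not Docker's actual internals: an image is a file structure plus saved metadata; a container is an image plus the run settings you supplied; committing bakes both back into a new image.

```python
# Hypothetical sketch of the Docker image/container relationship.
# These classes are illustrative only -- not Docker's real internals.

class Image:
    def __init__(self, name, filesystem, config=None):
        self.name = name
        self.filesystem = dict(filesystem)   # the flat file structure
        self.config = dict(config or {})     # saved run settings (metadata)

class Container:
    def __init__(self, image, **run_settings):
        # A container = the image, plus the settings given at `run` time.
        self.filesystem = dict(image.filesystem)          # writable copy
        self.config = {**image.config, **run_settings}    # persisted metadata
        self.running = True

    def stop(self):
        self.running = False

    def start(self):
        # No settings needed -- they were persisted when the container was created.
        self.running = True

    def commit(self, new_name):
        # Committing bakes the files AND the metadata into a new image.
        return Image(new_name, self.filesystem, self.config)

ubuntu = Image("ubuntu:latest", {"/bin/bash": "..."})
c = Container(ubuntu, command="/bin/bash", interactive=True)
c.filesystem["/root/notes.txt"] = "changes persist in the container"
c.stop()
c.start()                       # same command, same settings, reapplied
snapshot = c.commit("myapp:v1")
print(snapshot.config["command"])                 # -> /bin/bash
print("/root/notes.txt" in snapshot.filesystem)   # -> True
```

The point of the sketch is the last two lines: whoever runs a container from the committed image inherits both your file changes and your run configuration.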

Let me put all of this together with a simple walkthrough.

First: run a container.
sudo docker run -it ubuntu:latest /bin/bash

Breaking that command apart:
run an instance of a container (with a random name and id), interactively, using the Ubuntu image (from the Docker Hub) at the latest version, then run the bash shell application.
The prompt that you get back is root@<container id>:/#

Second: stop that container
sudo docker stop <container-id>

While that container ran, anything you did was persisted within its file system.

Third: list all containers
sudo docker ps -a

The container is the runtime process.  To see ones that are not running, you add the all switch (-a).

Fourth: start that container
sudo docker start <container-id>

Notice that the image and the command to run did not have to be defined again.  But I did not define how to connect to the process - that is what -it did on the run command.  So the container is now running in the background.  Stop it again, then start it attached and interactive (sudo docker start -ai <container-id>) and you are back in.

Then stop it again before the next step.

If you want to see that your settings are in there, just inspect the container.
sudo docker inspect <container-id>

Fifth: commit that container to an image
sudo docker commit <container-id> name:version

Now, you can create duplicates of your container by running container instances of your image.

And, you can use inspect against images as well.

And there you have it.

In the next container post, I am going to use a real world application in a container and discuss configurations and variables.


Tuesday, July 21, 2015

Disabling that XfinityWifi home hot spot wireless interference

I got into wireless networking troubleshooting a few years back as a Field Technical Advisor for the FIRST Tech Challenge (FTC) with the FIRST organization.

The system that they had at the time (recently replaced) was based on 802.11b, using a relatively low-powered device on the robot named Samantha.  John Toebes was the brain behind Samantha, and at the time she was a revolutionary step up from using Bluetooth to control the FTC robots.

Being 802.11b based, she would work with most any 2.4 GHz router on the market (in theory - there are interesting reasons why she didn't work with them all).  The other thing about 802.11b is that it came early in the wireless standards, before Wi-Fi got really popular and the ways of dealing with interference got much smarter.

As the spectrum becomes more crowded, signals get pushed out.  If someone is streaming something (like video), that signal actually gets precedence on the local airwaves.  In other words, activity in the local airspace interferes with other activity in the local airspace.

Why am I digressing so far from the title of the post?  I recently read through an article by Xfinity: "10 ways you could be killing your home Wi-Fi signal"

It has a number of harmless suggestions, such as: get your router off the floor, put it on the floor (story) of the house with the most use, don't put it in windows or behind your TV, etc.

All of the suggestions are about maintaining line of sight with your router.  Frankly, advice that we gave long before home Wi-Fi routers got to the great performance levels that they are at today.

Not once do they mention interference from other wireless signals.  Maybe because they (Xfinity) create one of the biggest problems with their own xfinitywifi open access point.

I have had all kinds of trouble with Xfinity wireless throughput since they started this open Wi-Fi program.  I have changed routers, purchased my own cable modems, moved up to low-end professional equipment, replaced the splitters on the cable, used dielectric grease on the outside cable junctions, etc.

I got the performance to the point where when I was wired, I got the full throughput that we paid for.  But as soon as I went wireless I got 1/4 of the throughput.  It made no sense.  Especially since we used to have far better throughput on wireless.

Since I run my own router, I don't use the open Wi-Fi connection that Xfinity forces on you.  Needless to say, I just don't trust them.

Believe it or not, they let you turn that off yourself.  So you can be sure that your neighbors are not sponging off the bandwidth that you pay good money for (they can be beholden to the great Comcast too if they really want broadband).

Anyway, thanks for reading all this.  But I know what you really want is this link: http://customer.xfinity.com/help-and-support/internet/disable-xfinity-wifi-home-hotspot

And just in case they move it or something else, I am going to copy it as well:

  1. Navigate to https://customer.xfinity.com/WifiHotspot. This site can also be reached by following these steps: 
    • Navigate to the My Services section of My Account.
    • Under the XFINITY Internet tab, click the Manage your home hotspot link.
  2. A new window appears indicating, "If you choose to enable your XFINITY WiFi Hotspot feature, a separate network called ‘xfinity wifi’ will be created for your guests - at no additional charge. Never give out your home network password again, so your private WiFi network will always remain secure. Learn more".
  3. Under the Manage XFINITY WiFi Home Hotspot option, if your wireless gateway is enabled with the Home Hotspot feature, the Enable my XFINITY WiFi Home Hotspot feature radio button will be pre-selected. If your Home Hotspot feature is disabled, the Disable my XFINITY WiFi Home Hotspot feature radio button will be pre-selected.
  4. To enable or disable the feature, choose the Enable my XFINITY WiFi Home Hotspot feature radio button or the Disable my XFINITY WiFi Home Hotspot feature radio button.
  5. Click Save.
    • Disabling the feature takes effect within a few minutes.
    • However, enabling the feature can take up to 24 hours.
  6. You will be presented with a confirmation message at the top of the My Services page that says, "Thank you! Your hotspot has now been disabled."

Monday, July 13, 2015

Identifying and running workflows with the Octoblu API and PowerShell

If you are not familiar with Octoblu: it is an IoT messaging system, a protocol translation system, and a message transformer, all rolled into one product.

Since last year I have been spending quite a bit of my time with their systems and platform.
Everything in their system is a device: your user account, the software agent that we demonstrated in the Synergy 2015 day 2 keynote, each node that you see in the designer, even a running workflow.  They are all devices, and they all have messages bouncing around between them.

One thing that I have come to rely on are their workflows.  I use the flows as a message router / message translator.
By that I mean that I formulate a JSON message and send that to some endpoint (I frequently send a REST message to a trigger using PowerShell).  And the flow will do something with that message - it might change it, filter it, route it to one or many other devices (seen in the flow designer as 'nodes').

All that said, I will probably post about sending and transforming messages later.  It is actually one of the fundamental things that any device does in the IoT world.
I am pretty loose with the concept of what a 'device' is: it can be the Arduino that I have on my desk that runs Microblu, it can be a Node.js application that I run on my workstation, it can be a personal interface to Github (the Github node in Octoblu).  A device is anything that can either send or receive a message.

Back to the title of this post.

I just finished running a long duration test against a 'device' and during this test I wanted to ensure that my workflow remained running.

When you begin to rely on workflows, you realize that it is a cloud service and things happen.  Sometimes flows get knocked offline.
Over time I have dreamed up a couple of approaches to evaluating flows from a 'health' perspective.  One of them (my v1) I am using as the base for this post.

This is a really simple approach: make an API call that determines if a flow is running.
If it isn't running, I start it.  Simple as that.

The complexity comes from two different APIs being involved, as there are two different systems of the service at play.
There is the Octoblu API - this is the Octoblu designer and the GUI and those pretty things that you visually interact with.
And there is the Meshblu API - this guy is the messaging meat of the infrastructure.  He handles routing, security, and devices.  When a flow is run for the first time it becomes instantiated over on Meshblu and becomes a device of the ecosystem.

The code is in my Github Octoblu PowerShell repo here: https://github.com/brianehlert/OctoPosh
The particular script behind this post is: "FlowWatcher.ps1"

Though I have comments in the script, allow me to describe a bit more of what is happening.

Invoke-RestMethod -URI ("http://meshblu.octoblu.com/devices/" + $flowId) -Headers $meAuthHeader -Method Get

This is a Meshblu API call to fetch the properties of an individual device.  Note the $flowId GUID string in the URI path.  Leave that GUID out and you get back an array of all of the devices that you 'own'.

Invoke-RestMethod -URI ("https://app.octoblu.com/api/flows/" + $flowId) -Headers $meAuthHeader -Method Get

This is an Octoblu API call to fetch an individual flow / workflow.  Just as happens if you open one in the designer, you get all of its properties.

Invoke-RestMethod -URI ("https://app.octoblu.com/api/flows/" + $flowId + "/instance") -Headers $meAuthHeader -Method Post

This is another Octoblu API call to start a flow.  What happens is that a flow device instance gets instantiated in Meshblu (this way it can receive messages).  This is why I call the Meshblu API to see if it is 'running'.

Invoke-RestMethod -URI ("https://app.octoblu.com/api/flows/" + $flowId + "/instance") -Headers $meAuthHeader -Method Delete

This is another Octoblu API call, this time to stop a flow.  What it does is delete the running instance of the device.  If you query this particular device in Meshblu (after you have run it once) you will find it defined there, but it may not be running.  When it is running, it is a little process within the infrastructure; when not running, it is still defined as a device.
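The whole watcher idea can be sketched in a few lines of Python. This is not the actual FlowWatcher.ps1: the endpoint paths match the calls above, but the helper names are mine, the HTTP functions are injected so the logic can be tested offline, and I am assuming the Meshblu device record's online property reflects a running instance.

```python
# Sketch of the flow-watcher logic: if the flow device is not reported
# running by Meshblu, start it through the Octoblu API.
# http_get / http_post are injected (e.g. thin wrappers around urllib
# that carry the meshblu auth headers), so this runs without a network.

MESHBLU = "http://meshblu.octoblu.com"
OCTOBLU = "https://app.octoblu.com"

def flow_is_running(flow_id, http_get):
    # Meshblu call: fetch the flow's device record and check its state.
    # (Assumption: the record exposes an 'online' property when running.)
    device = http_get(f"{MESHBLU}/devices/{flow_id}")
    return bool(device.get("online"))

def ensure_running(flow_id, http_get, http_post):
    # Octoblu call: POST .../instance instantiates the flow in Meshblu.
    if flow_is_running(flow_id, http_get):
        return "already running"
    http_post(f"{OCTOBLU}/api/flows/{flow_id}/instance")
    return "started"
```

In the real script those injected functions are Invoke-RestMethod calls with the $meAuthHeader headers; the control flow is the same.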

I hope you find the script and this little API tutorial to be useful.

Thursday, July 9, 2015

Indexing Format-Table output

Today, I had the crazy idea of outputting an array in PowerShell as a table, and I wanted to show the index of each array value.

In layman's terms: I wanted my table output to be line numbered.  And I wanted the line numbers to correspond to the position in the array.

Why?  Because I didn't want the user to type in a name string or a GUID string that they might typo; they could simply enter the index of the item(s).
Trying to solve potential problems up front, without a bunch of error handling code.

I started out with a PowerShell array that looked something like this:

PS\> $allFlows | ft name, flowid -AutoSize

name                     flowId
----                     ------
bjeDemoFlow_working      70bd3881-8224-11e4-8019-f97967ce66a8
bje_cmdblu               3e155fe0-dc9a-11e4-9dfc-f7587e2f6b74
Pulser_WorkerFlow_Sample f945f94f-fb33-4181-864d-042548497270
Flow d59ae1e8            d59ae1e8-0220-4fd2-b40f-fba971c9cf42
bjeConnectTheDots.io     204b5897-2182-4aef-84fe-1251f1d4943b
StageFlow_1              796d0ff4-94d6-4d1a-b580-f83ab98c7e15
Flow f26aab2f            f26aab2f-783b-4c09-b1fc-9e6433e8ab37
Flow c983c204            c983c204-5a87-4947-9bd2-435ac727908a
v2VDA Test               ba5f77af-98d1-4651-8c35-c502a72ccea8
Demo_WorkerFlow          e7efdac4-663d-4fb6-9b29-3a13aac5fb97


Now for the strange part.  How do I number the lines so that they correspond to each item's position in the array?

Search did not fail me today, but it took a bit of effort to discover an answer on Stack Overflow from PowerShell MVP Keith Hill.
And, also looking at Get-Help Format-Table -Examples and realizing that there is an 'expression' option to calculate the value of a field in the table output.

PS\> $allFlows | ft @{Label="number"; Expression={ [array]::IndexOf($allFlows, $_) }}, name, flowid -AutoSize

number name                     flowId
------ ----                     ------
0      bjeDemoFlow_working      70bd3881-8224-11e4-8019-f97967ce66a8
1      bje_cmdblu               3e155fe0-dc9a-11e4-9dfc-f7587e2f6b74
2      Pulser_WorkerFlow_Sample f945f94f-fb33-4181-864d-042548497270
3      Flow d59ae1e8            d59ae1e8-0220-4fd2-b40f-fba971c9cf42
4      bjeConnectTheDots.io     204b5897-2182-4aef-84fe-1251f1d4943b
5      StageFlow_1              796d0ff4-94d6-4d1a-b580-f83ab98c7e15
6      Flow f26aab2f            f26aab2f-783b-4c09-b1fc-9e6433e8ab37
7      Flow c983c204            c983c204-5a87-4947-9bd2-435ac727908a
8      v2VDA Test               ba5f77af-98d1-4651-8c35-c502a72ccea8
9      Demo_WorkerFlow          e7efdac4-663d-4fb6-9b29-3a13aac5fb97


The values for the column are defined as a hashtable @{}
With the Label of the column and the Expression that defines the value.
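For what it's worth, the same zero-based numbering trick is a one-liner in other languages too. Here is a Python sketch using enumerate; the flow names and GUIDs are just sample data copied from the table above:

```python
# Number each row with its array index, like the calculated
# Format-Table column does with [array]::IndexOf.
flows = [
    ("bjeDemoFlow_working", "70bd3881-8224-11e4-8019-f97967ce66a8"),
    ("bje_cmdblu", "3e155fe0-dc9a-11e4-9dfc-f7587e2f6b74"),
]

lines = [f"{i} {name} {flow_id}" for i, (name, flow_id) in enumerate(flows)]
print("\n".join(lines))
# 0 bjeDemoFlow_working 70bd3881-8224-11e4-8019-f97967ce66a8
# 1 bje_cmdblu 3e155fe0-dc9a-11e4-9dfc-f7587e2f6b74
```

Either way, the index is computed at display time, so it always matches the item's position in the underlying array.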

Pretty nifty new trick to add to my repertoire.