Monday, April 20, 2015

Ports and Docker containers

This is the feature of containers that the system administrator in me gets all excited about.

Ports.

This is more than security through obscurity; this is actually about isolated networking.
Docker has an internal network stack all to itself.

You can see it if you type ifconfig.  You see a Docker bridge interface, docker0 (not unlike a Hyper-V Internal network vNIC).
If you have a container running you can use sudo docker ps and one of the columns is PORTS.
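For example, on my Ubuntu host the bridge looks something like this (illustrative output; the addresses on your host will differ):

ifconfig docker0
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0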

Let's begin with the basics of exposing ports.

For my examples I have been using my meshblu image and opening a bash shell and even mapping local file paths into it.  I am going to leave those out at the moment.

Now that I have a working model, I want to test it.  So I want to expose the listener port of the service running in my container.

sudo docker run -i -t -p 3000:3000 --name meshblu meshblu_appliance

The -p option allows the mapping of ports.
If I had only defined -p 3000, port 3000 of the container would be mapped to some random high port of the container host.  Instead I defined 3000:3000 - so as not to confuse myself.
What this does is map port 3000 of my container to port 3000 of my container host.

If I open any application on my container host or from a remote machine I can now access the application in my container on port 3000.  Just like opening a single port in a firewall.
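A quick way to check this from the container host (assuming the service in the container answers HTTP on port 3000 - adjust the URL and path for whatever your service actually speaks):

curl http://localhost:3000/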

Now.  My container has a number of supporting services, such as Redis and MongoDB, and other ports that the application will be listening on.  I would like to expose these as well.  They are there, in the container, running and responding and entirely hidden at the moment.
This is one that I did not consider intuitive.

sudo docker run -i -t -p 3000:3000 -p 1883:1883 -p 5683:5683 --name meshblu meshblu_appliance

Now I have mapped two additional ports.  Initially I tried using a single long string or an array (it just made sense) but you need to use an individual -p flag for each port mapping.
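If you run sudo docker ps afterward, the PORTS column should show the mappings, something like this (illustrative; your container ID and names will differ):

0.0.0.0:3000->3000/tcp, 0.0.0.0:1883->1883/tcp, 0.0.0.0:5683->5683/tcp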

Just some other nifty stuff.  And I have not gotten beyond a single container yet.

Tuesday, April 14, 2015

Local file systems and Docker containers

Now I am going to move into the realm of areas that I consider fun.
What sort of trickery can I use containers for so that I can avoid installing 'stuff' on my development workstation?

Here is my scenario:  I am pulling some source from GitHub.  I might modify some scripts and need to test those.  I want to quickly test my changes within an installed and running instance of the application.  I don't want to 'install' the application on my workstation.

So, let's work through my real-life example.  No modifications, just pulling the source and getting that into a container, without any need to build a custom container or image.

Docker allows you to expose paths of the container host and map those into specific paths of the container.  And this is where you can do some nifty things.

Say that you have some Node.js application.  And you want to run multiple instances of it, or you want to run an instance and have the logs write back to your development workstation. 
(This could be any number of combinations).

Let's run through an example:

Previously I ended with the example:
sudo docker run -i -t --name meshblu node:latest /bin/bash

If I extend that with this scenario we will see some new options.

sudo docker run -i -t -v "/home/brianeh/GitHub":"/home":ro --name meshblu node:latest /bin/bash

What I have added is the "-v" option.  This defines a path mapping.
"/home/brianeh/GitHub" is the GitHub folder of my user home path.  After the colon is the path in the container that this is mapped to.  "ro" means Read Only.  Or I could define that as "rw" - Read Write.

The neat thing is that once I run my container and enter its console, I can type ls -l /home and see all of the files I have downloaded to the GitHub folder of my development machine.

This gives me a runspace within the container that is separate from my workstation where I can install applications but run the latest version of my code straight out of the development path.
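As a sketch of where this goes (the repository folder, the rw mount, and the server.js entry point are placeholders for whatever your project actually uses):

sudo docker run -i -t -v "/home/brianeh/GitHub/meshblu":"/home/meshblu":rw --name meshblu-test node:latest /bin/bash
# then, inside the container:
cd /home/meshblu
npm install
node server.js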

(One reason why developers love containers)

Friday, April 10, 2015

Giving of your Octoblu devices

I realize that the title is not very searchable, and I am okay with that.

After all, this post is all about giving.  Actually moving ( handing over, granting, getting rid of ) a device on the Octoblu IoT platform.

To stay true to my Windows roots, I did this in PowerShell  ;-)

Here is my scenario: I have been working with a 'device' in Octoblu.  I have it all set up, it is working well, there is a flow tied to it, etc.  The world is a happy place.

Now, I am done with this.  But Joe wants to carry it on and take it to a show and demonstrate it, possibly even enhance the flow.
I developed the entire thing under my account, and I can't give Joe my password.

I could use the template feature and give Joe a device scrubbed version of my flow.  But that still requires that the device be deleted from my account and re-created under Joe's.  Or Joe has to take the device, reconfigure it and then personalize the flow.

Why not just give Joe my device - and I mean really give it.  Not just the physical device, but its representation on the platform and the flow that goes along with it.

To accomplish this I only have to do a few things:  I have to add Joe's user account to the device's discover whitelist, then Joe needs to claim (take ownership of) that device.

In Octoblu everything is a 'device'; users, things, flows, running flows, etc.  A user is just a special type of device.

Octoblu also has a whitelist / blacklist security model that grants or denies access at the very granular device level.  You can literally have a device that can only talk to one other device (such as the flow it interacts with) and nothing else. 

By default, this is all locked down and your devices are only available to you and to the flows you link them to in the designer.

At the same time, there is no way to 'give this to Joe'.

Let me get to the meat and walk through a few things.

First of all, we will do all of this through the REST API.  And the REST API uses your user UUID and secret (they call it a token) to authenticate you.

Using this you can authenticate and list your devices.
And get the UUID of the device (or UUIDs of devices) that you want to give.
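For example, listing your devices looks something like this (a minimal sketch; the header names and the /mydevices endpoint are the same ones the full script below uses):

$authHeader = @{
    meshblu_auth_uuid  = "<your user uuid>"
    meshblu_auth_token = "<your token>"
}
# /mydevices returns every device owned by this user, including each device uuid
$myDevices = Invoke-RestMethod -Uri "http://meshblu.octoblu.com/mydevices" -Headers $authHeader -Method Get
$myDevices.devices | Format-Table -AutoSize name, uuid, type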

The next piece of information you need is the UUID of the user you want to give the device to.

The rest is in the script (with some error handling of course).
You can find the source over here: https://github.com/brianehlert/moveOctobluDevice

And you can find the first version below:

# What is the UUID and token of the current device owner?
# and test that it is actually good
Do {
    Do {
        if (!$deviceOwner) { $deviceOwner = Read-Host -Prompt "What is the uuid of the current device owner? (your account)" }
        if (!$deviceOwnerSecret) { $deviceOwnerSecret = Read-Host -Prompt "What is the secret token of the current device owner? (your account)" }
        $meAuthHeader = @{
            meshblu_auth_uuid = $deviceOwner  
            meshblu_auth_token = $deviceOwnerSecret
        }
        $me = Invoke-RestMethod -URI ("http://meshblu.octoblu.com/devices/" + $deviceOwner) -Headers $meAuthHeader -Method Get -ErrorAction SilentlyContinue
    } Until ( $me )
    # echo back what you find
    "The current device owner is: " + $me.devices[0].octoblu.email
} until ( (Read-Host -Prompt "Is this correct? Type 'YES'") -eq 'YES' )

# What is the UUID of the user you will be moving your devices to?
# and test that it is actually good.
Do {
    Do {
        if (!$deviceNewOwner) { $deviceNewOwner = Read-Host -Prompt "What is the uuid of the new device owner? " }
        if (!$deviceNewOwnerSecret) { $deviceNewOwnerSecret = Read-Host -Prompt "What is the secret token of the new device owner? " }
        $youAuthHeader = @{
            meshblu_auth_uuid = $deviceNewOwner  
            meshblu_auth_token = $deviceNewOwnerSecret
        }
        If ($deviceNewOwnerSecret) {
            $you = Invoke-RestMethod -URI ("http://meshblu.octoblu.com/devices/" + $deviceNewOwner) -Headers $youAuthHeader -Method Get -ErrorAction SilentlyContinue
        } else {
            # no token provided - build a placeholder so the confirmation prompt below still reads sensibly
            $you = @{ devices = @( @{ octoblu = @{ email = "No token provided. Unable to validate" } } ) }
        }
    } until ($you)
   
    # echo back what you find
    "The new device owner will be: " + $you.devices[0].octoblu.email
} until ( (Read-Host -Prompt "Is this the correct new device owner? Type 'YES'") -eq 'YES' )

# List all of 'my devices' in a nice, neat way with the important bits - name, uuid, device type
$devices = Invoke-RestMethod -URI http://meshblu.octoblu.com/mydevices -Headers $meAuthHeader -Method Get
# Which device will you be moving to another user?
# based on device name, as everything associated with it (in the case of Gateblu) needs to go.
Do {
    $devices.devices | Sort-Object Name | Format-Table -AutoSize name, uuid, type, subtype, online

    Do {
        if (!$nameToMove) { $nameToMove = Read-Host -Prompt "What is the name of the device you will be moving to the other user? (this is a match)" }
        $deviceToMove = $devices.devices -match $nameToMove
    } Until ( $deviceToMove )
    "The following device(s) matched: "
    $deviceToMove | Format-Table -AutoSize Name, UUID
} until ( (Read-Host -Prompt "proceed to move your device(s)? Type 'YES'") -eq 'YES' )
# The device only needs to be discoverable to take ownership.
foreach ( $device in $deviceToMove ) {
   
    If ( $device.discoverWhitelist ) {
        $device.discoverWhitelist += $deviceNewOwner
        $json = @{
            "discoverWhitelist" = $device.discoverWhitelist
        }
    } else {
        $json = @{
            "discoverWhitelist" = $deviceNewOwner
        }
    }
    $json = $json | ConvertTo-Json
    # make the device discoverable by the new owner
    Invoke-RestMethod -URI ( "http://meshblu.octoblu.com/devices/" + $device.uuid ) -ContentType "application/json" -Body $json -Headers $meAuthHeader -Method Put
    If ( $youAuthHeader.meshblu_auth_token ) {
        # claim the device as the new owner
        # only if you know the token - otherwise the other user will need to do that
        Invoke-RestMethod -URI ("http://meshblu.octoblu.com/claimdevice/" + $device.uuid ) -ContentType "application/json" -Headers $youAuthHeader -Method Put
    }
}

Monday, March 9, 2015

Running a container from a Docker image

In the previous post I pulled three Docker images. 

If you are familiar with machine virtualization think of these as templates.  An image is used as the base of a container.

So, when a container is 'run' it is essentially a differencing file system that is linked to an image file system.  This allows something to happen within the container and for that to be written back to disk in a unique place.

This is really no different than the concept of using a single virtual disk and creating multiple virtual machines from it using differencing disks.  The differencing disk contains the unique character of each machine.  In this case the container contains any uniqueness.

Let me give a bit of background here.  If you were not aware, I am employed as a software tester.  I also do some development.  But, I like to keep my workstations clean.  And containers give me that in a tidy way.
Lately, I am working with a NodeJS application.  And I don't want to have to install Node and all of its dependencies on my workstation.  This is how I see many developers get themselves into the classic 'works on my machine' trap. 
At some point in time they installed some strange DLL or package ( or security settings or anything ) and then later took that as a dependency and never realized it.  Then they distribute the package, I put it on a different system, and wham, no workie.
So I am actually going to do a few tricks that containers easily enable.  And I will also use a real example that I am working with and you could as well.

Enough of that.  Let's take a moment to look at the syntax of the container run command
( the entire run reference is here:  http://docs.docker.com/reference/run/ )

I am going to begin with the Node image that I pulled down.  This image is based on Ubuntu and already has NodeJS and the NPM package manager installed.  This saves me a bit of time installing Node and its dependent libraries.

sudo docker run -i -t --name meshblu node:latest /bin/bash
Let's break this apart:
  1. sudo - this is Ubuntu and Docker runs as root (or local system) so in order to interact with it, you must elevate yourself.  Windows admins; think RunAs.
  2. docker - call the docker service then pass it a command
  3. run - self explanatory, but I want to run a container
  4. -i  - interactive.  This will put the container console into my current console.
  5. -t  - virtual tty.  This gives you a tty console session for STDIN.  Important if you want to interact with the container in a console way.
  6. --name  - this will be the name of the container, and simply helps you to keep them straight.  Without it a random (and silly) name is given.  And you will have to keep them straight.  The key here is that this is just a quick way to locate the container ID, which is the really important thing.
  7. node:latest - this is the name of the image I pulled / want to use.  Docker checks if you have this image locally; if not, it will look to the configured hub, try to find it, and pull it.
  8. /bin/bash - this is the command plus any arguments to run.  Everything after the image name will be executed within the container.  So you can have any command plus parameters at the end.  /bin/bash is simply a bash shell - pretty much where you are at any Linux console.
Go ahead, execute the command.
Notice that your prompt changed.  Because now your command window is connected to the container process and you are essentially "within" the container.  For me this is:  root@06b874873e86:/#
I am in the container as root, and the container ID just happens to be '06b874873e86'.

Now.  Exiting.
If you type 'exit' the container stops running and you drop back to your command prompt.
If you type 'ctrl+p' then 'ctrl+q' you drop back to your command prompt, but the container stays running.

To see any running containers type (at the Docker host):  sudo docker ps
To see all containers (running and stopped / paused): sudo docker ps -a
If you want back into a running container use:  sudo docker attach 06b87
(notice that I did not type the entire container id,  just enough to uniquely identify it.  A nifty usability feature)
Lastly, to start a container that has been stopped:  sudo docker start -i 06b87
(the -i connects you to it interactively)

Tuesday, March 3, 2015

Pulling Docker images

We have covered some Docker basics and run a bash command inside of a container.  And you possibly poked around a bit more.

After downloading that Ubuntu image on demand, you may have noticed that it looks like you have multiple images locally.  Really you have a single image, shown by the IMAGE ID, with multiple tags.
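Something like this (illustrative output; the ID, age, and size on your host will differ):

sudo docker images
REPOSITORY   TAG       IMAGE ID       CREATED       VIRTUAL SIZE
ubuntu       latest    d0955f21bf24   2 weeks ago   188.3 MB
ubuntu       trusty    d0955f21bf24   2 weeks ago   188.3 MB
ubuntu       14.04     d0955f21bf24   2 weeks ago   188.3 MB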

If you wonder where these images magically come from, it is a place called the Docker Hub ( hub.docker.com ).  Go to the hub and look around.  Notice that there are 'official' images and community images.  I, personally, stick with the official images as I know who is behind that image creation - Canonical is the source of the 'official' Ubuntu image.
Accountability, I like that.

Now I want a few images.  I don't want to run them straight off; I want to download some official images, have them locally, and then do some other things with them.  Also, this way I have them for offline use.

If you look at Ubuntu in the Docker Library ( https://registry.hub.docker.com/_/ubuntu/ ) you will notice the supported tags section.  In the previous post I referenced ubuntu:latest - looking at the tags you can see that this translates to trusty, and trusty ( I just happen to know ) is 14.04 LTS.

I could also pull Ubuntu 14.10 by defining ubuntu:utopic or get really experimental and use ubuntu:vivid
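For example (assuming those tags are still published on the hub):

sudo docker pull ubuntu:utopic
sudo docker pull ubuntu:vivid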

This is handy for many developers, as they can define a version dependency, no different than a specific version of a DLL or a module.  Testing can stabilize on a specific OS release, and so on.

So, let's pull the Mongo, Redis, and Node images, since I need a base MongoDB server, a Redis server, and a place to run my NodeJS application.  This way I can work with these offline from the Docker Hub.

First, Node:  sudo docker pull node:latest
Notice that multiple image layers were downloaded.  At the time I wrote this there were 11.
All of these layers together form the resulting image.  Kind of like using differencing disks, making changes, and linking them together - one builds upon the previous.

After the download is complete, take a look:  sudo docker images
And you see a single image ID.

If you want to know what is happening under the hood in Docker itself, I found an excellent explanation that saves me a bunch of typing: http://blog.thoward37.me/articles/where-are-docker-images-stored/
The file locations described there are close, but no longer exact due to updates to Docker.

But, as you can see from that post, this is Linux, so everything is simply right there on the file system of the Ubuntu Docker host.  Like a folder of files.  Not contained within some virtual disk ( which could be secured with permissions or BitLocker ). 
This is why we consider the host that runs Docker to be a physical security boundary and the running containers more of a process / network boundary.

Virtual Machines in themselves are considered physical security boundaries.  And the hypervisor system is designed to support and enforce that.

I will get deeper into that in a post or two just to show what you can do with this.  Basically, play a few virtualization tricks.

I had mentioned also pulling MongoDB and Redis, so let's go ahead and do that:
sudo docker pull redis:latest
sudo docker pull mongo:latest

At this point in time we should have pulled all of the images.  And next time we will do something more interesting.