Monday, March 9, 2015

Running a container from a Docker image

In the previous post I pulled three Docker images. 

If you are familiar with machine virtualization, think of these as templates.  An image is used as the base of a container.

So, when a container is 'run', it is essentially a differencing file system linked to the image's file system.  Anything that happens within the container is written back to disk in its own unique place.

This is really no different from the concept of using a single virtual disk and creating multiple virtual machines from it using differencing disks.  The differencing disk contains the unique character of each machine.  In this case the container contains any uniqueness.
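If you want to see that differencing layer for yourself later on, Docker can show you exactly what a container has written on top of its image.  A quick sketch, using the 'meshblu' container we create further down in this post:
sudo docker diff meshblu
Each line of output is prefixed with A (added), C (changed), or D (deleted), relative to the base image.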

Let me give a bit of background here.  If you were not aware, I am employed as a software tester.  I also do some development.  But, I like to keep my workstations clean.  And containers give me that in a tidy way.
Lately, I am working with a NodeJS application.  And I don't want to have to install Node and all of its dependencies on my workstation.  This is how I see many developers get themselves into the classic 'works on my machine' trap. 
At some point in time they installed some strange DLL or package ( or security settings or anything ) and then later took that as a dependency and never realized it.  Then they distribute the package, I put it on a different system, and wham, no workie.
So I am actually going to do a few tricks that containers easily enable.  And I will also use a real example that I am working with and you could as well.

Enough of that.  Let's take a moment to look at the syntax of the container run command
( the entire run reference is here:  http://docs.docker.com/reference/run/ )

I am going to begin with the Node image that I pulled down.  This image is based on Ubuntu and already has NodeJS and the NPM package manager installed.  This saves me a bit of time installing Node and its dependent libraries.

sudo docker run -i -t --name meshblu node:latest /bin/bash
Let's break this apart:
  1. sudo - this is Ubuntu and Docker runs as root (or local system) so in order to interact with it, you must elevate yourself.  Windows admins; think RunAs.
  2. docker - call the docker service then pass it a command
  3. run - self explanatory, but I want to run a container
  4. -i  - interactive.  This will put the container console into my current console.
  5. -t  - virtual tty.  This gives you a tty console session for STDIN.  Important if you want to interact with the container in a console way.
  6. --name  - this will be the name of the container, and simply helps you to keep them straight.  Without it a random (and silly) name is given.  And you will have to keep them straight.  The key here is that this is just a quick way to locate the container ID, which is the really important thing.
  7. node:latest - this is the name of the image I pulled / want to use.  Docker checks whether you have this image locally; if not, it will look to the configured hub, try to find it, and pull it.
  8. /bin/bash - this is the command plus any arguments to run.  Everything after the image name will be executed within the container.  So you can have any command plus parameters at the end.  /bin/bash is simply a bash shell - pretty much where you are at any Linux console.
Go ahead, execute the command.
Notice that your prompt changed, because your command window is now connected to the container process and you are essentially "within" the container.  For me this is:  root@06b874873e86:/#
I am in the container as root, and the container ID just happens to be '06b874873e86'.

Now.  Exiting.
If you type 'exit' the container stops running and you drop back to your command prompt.
If you type 'ctrl+p' then 'ctrl+q' you drop back to your command prompt, but the container stays running.

To see any running containers type (at the Docker host):  sudo docker ps
To see all containers (running and stopped / paused): sudo docker ps -a
If you want back into a running container use:  sudo docker attach 06b87
(notice that I did not type the entire container id,  just enough to uniquely identify it.  A nifty usability feature)
Lastly, to start a container that has been stopped:  sudo docker start -i 06b87
(the -i connects you to it interactively)
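One last note on --name: anywhere Docker expects a container ID, you should be able to hand it the name instead.  For example:
sudo docker attach meshblu
sudo docker inspect meshblu
The inspect command dumps the container's full configuration (including its long ID) as JSON, which is handy when the short ID is not enough.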

Tuesday, March 3, 2015

Pulling Docker images

We have some Docker basics down, and we ran a bash command inside of a container.  And you possibly poked around a bit more.

After downloading that Ubuntu image on demand, you may have noticed that it looks like you have multiple images locally, when you really have a single image (one IMAGE ID) with multiple tags.

If you wonder where these images magically come from, it is a place called the Docker Hub ( hub.docker.com ).  Go to the hub and look around.  Notice that there are 'official' images and community images.  I, personally, stick with the official images as I know who is behind that image creation - Canonical is the source of the 'official' Ubuntu image.
Accountability, I like that.

Now I want a few images.  I don't want to run them straight off; I want to download some official images, have them locally, and then do some other things with them.  Also, this way I have them for offline use.

If you look at Ubuntu in the Docker Library ( https://registry.hub.docker.com/_/ubuntu/ ) you will notice the supported tags section.  In the previous post I referenced ubuntu:latest - looking at the tags you can see that this translates to trusty, and trusty ( I just happen to know ) is 14.04 LTS.

I could also pull Ubuntu 14.10 by defining ubuntu:utopic, or get really experimental and use ubuntu:vivid.

This is handy for many developers as they can define a version dependency, no different than a specific version of a DLL or a module.  Testing can stabilize on a specific OS release, and so on.
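For example, if I wanted to pin to a release rather than chase whatever 'latest' points at today, I could pull by tag:
sudo docker pull ubuntu:14.04
That tag should keep resolving to 14.04 LTS even after 'latest' moves on to a newer release.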

So, let's pull the Mongo, Redis, and Node images, since I need a base MongoDB server, a Redis server, and a place to run my NodeJS application.  This way I can work with these offline from the Docker Hub.

First, Node:  sudo docker pull node:latest
Notice that multiple images were downloaded.  At the time I wrote this there were 11.
All of these together will form the resulting image.  Kind of like using differencing disks and making changes and linking them together - one builds upon the previous.

After the download is complete, take a look:  sudo docker images
And you see one image id.
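If you are curious about the individual layers behind that one image id, docker history will list them:
sudo docker history node:latest
Each row is one layer, newest at the top, along with the size that layer adds.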

If you want to know what is happening under the hood in Docker itself, I found an excellent explanation that saves me a bunch of typing: http://blog.thoward37.me/articles/where-are-docker-images-stored/
The file locations it describes are roughly right, but no longer exact due to updates to Docker.

But, as you can see from that post, this is Linux, so everything is simply right there on the file system of the Ubuntu Docker host.  Like a folder of files.  Not contained within some virtual disk ( which could be secured with permissions or BitLocker ). 
This is why we consider the host that runs Docker to be a physical security boundary and the running containers more of a process / network boundary.

Virtual Machines in themselves are considered physical security boundaries.  And the hypervisor system is designed to support and enforce that.

I will get deeper into that in a post or two just to show what you can do with this.  Basically, play a few virtualization tricks.

I had mentioned also pulling MongoDB and Redis; so let's go ahead and do that:
sudo docker pull redis:latest
sudo docker pull mongo:latest
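A quick sanity check that everything landed locally:
sudo docker images
You should now see node, redis, and mongo (plus ubuntu, if you ran the container from the earlier post), each with its own IMAGE ID.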

At this point in time we should have pulled all of the images.  And next time we will do something more interesting.

Thursday, February 26, 2015

Doing something simple with Docker

A couple posts back I walked you through setting up an Ubuntu VM, and installing the latest version of Docker.  Then, I left you hanging.

Docker is interesting.  These container things are a cross between a VM and a user process.  There is still a base OS there (of some type) to bootstrap the application.  What I thought was interesting when I first poked Docker is that each container is a network isolation zone.

Docker has a great tutorial: https://www.docker.com/tryit/

And that was fine.  I did something.  But I really didn't understand it until I tried to really use it.

What does it take to get an application into an image and run it?  And this Docker Hub that is chock full of images, and Dockerfile - what is that?

Let's begin with an easy example as we get the language down.

I want to run a Docker container.  I want the OS in this Docker container to be Ubuntu (yes, Ubuntu within Ubuntu).

Returning to my Ubuntu VM from before, I log on as my user and try a couple Docker commands:
sudo docker images  - this lists the Docker images that have been built / downloaded to this machine and these images are used to run containers.

Notice that language - a container is a running instance of an image.  The image is analogous to a virtual disk with something in it.  The image consumes space on the disk of my Ubuntu VM.

sudo docker ps - If you have been around Linux before you have run across ps - processes.  The ps command lists the containers, and containers being processes only exist when they run.

Enough of that, let's get confusing and run an instance of Ubuntu on Ubuntu in the same command window where I ran my Docker command (the key here: watch the bouncing command prompt).

sudo docker run -i -t ubuntu:latest /bin/bash

Like most commands, let's read this from right to left (not left to right).
Run a BASH shell, in the image 'ubuntu:latest', run a tty session, keep STDIN open (send input).
What this accomplishes is: the image is checked to see if it is local, and if not it is pulled from the hub.  Then the tty is opened in the console session (where I ran the command) and bash is run.

Notice when you do this that your prompt changed at your console.  That console window is now the container process.  What you do now is inside the container process and image.

If you really want to realize that you are somewhere else, type ifconfig at the prompt.  By default you should get a class B private address in the 172 range.  There will be more on this later, but right now that container can get out, while there are no incoming ports open to it.
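As a small preview of that 'more later': the -p flag on docker run publishes a container port to the Docker host.  A sketch (the port numbers are just an example):
sudo docker run -i -t -p 8080:80 ubuntu:latest /bin/bash
That would map port 8080 on the Docker host to port 80 inside the container.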

When you are ready to get out of the container, type exit
This actually stops the container in this case, since it closes the tty.
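One aside before moving on: you don't have to run an interactive shell at all.  Everything after the image name is the command the container runs, so a one-shot example looks like this:
sudo docker run ubuntu:latest /bin/echo 'hello from a container'
The container starts, prints the string, and then stops, because the process it was running has exited.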

Monday, February 23, 2015

Migrating VMs from Hyper-V 2008 or 2008 R2 to Hyper-V 2012 R2

There has been a recent explosion of questions around this in the Hyper-V TechNet forum over the past two weeks.  So I decided that I would blog about this a bit.

The primary question is: How can I migrate from Hyper-V 2008 (or Hyper-V 2008 R2) to Hyper-V 2012 R2.

There are lots of very well-meaning folks who provide the advice of: "Export your VMs from your 2008 era Hyper-V Server and then Import those VMs to your 2012 R2 Hyper-V Server."

Obviously, they never tested this.  Because, IT DOES NOT WORK.

First of all, let's test this common advice:  Export a VM from Hyper-V 2008 / 2008 R2 and import direct to Hyper-V 2012 R2.

  1. Create your VM Export
  2. Copy the export folder to a Hyper-V 2012 R2 system
  3. Attempt to import
You will instantly get this:  "Hyper-V did not find virtual machines to import from location"
And you look, and everything is right there in that folder.  What gives!

The next piece of well-meaning advice is to create a new VM configuration using the existing VHD in that export folder. 
(this will work, but if you have snapshots you are screwed - all of that snapshot history is lost, and lots of folks connect to the incorrect virtual disk and freak out that years of history was lost.)
 
If you were going to do this in the first place, why not just copy out the VHDs, save yourself some effort, and be done with it?  This is viable option 1.

Here is the option that many folks overlook / are not aware of (as it was a new feature of Hyper-V 2012 R2):

Copy the VM folder direct from the Hyper-V 2008 R2 system to the Hyper-V 2012 R2 system and Import. 

Hyper-V 2012 R2 reads the XML configuration and imports the VM asking you a couple questions to fix things up. 
This is viable option 2 (actually the easiest if you have additional hardware with Hyper-V 2012 R2 already built).

We could stop there, but not to be left without choices: you can in-place upgrade from your Hyper-V 2008 / 2008 R2 era system to Hyper-V 2012 and then again to Hyper-V 2012 R2.  This will update the VM configurations as you go, and you will be all good.  Now we have viable option 3.

Suppose that all you have is a VM Export.  Then what? 
Remember that error message at the beginning; Hyper-V 2012 R2 cannot read the VM export from Hyper-V 2008 / 2008 R2.  Now, we have other options.

Take your VM folder that you exported from your Hyper-V 2008 R2 system and copy it to a Hyper-V 2012 system.  Then import.  Success!

Now what?  You want Hyper-V 2012 R2.  You have a few viable options to take this from Hyper-V 2012 to Hyper-V 2012 R2: 

In-place upgrade the Hyper-V 2012 system to Hyper-V 2012 R2.  This is viable option 4.
Export the VMs, then import them to your Hyper-V 2012 R2 system.  This is viable option 5.

Thinking out of the box, are there other options?

I am always assuming that you have backups of your systems.  And you have tested restoring those backups, and you know those backups are indeed good and useful.  This gives another option. 
Restore your VMs to the Hyper-V 2012 R2 system as new VMs.  This becomes viable option 6.

There you have it.  Six options to test and choose from, all of which are considered supported, and all of which will save you the panic of realizing that going straight from a Hyper-V 2008 / R2 VM Export to 2012 R2 will not work.

Thursday, February 12, 2015

Docker on Ubuntu on Hyper-V 2012 R2

I recently read through an MSDN article that described running Docker in a VM on Hyper-V.

Frankly, I was less than impressed at the complexity of the solution.  Especially since the concept here is not a huge leap.

The basic steps are:
  1. Build a VM on Hyper-V
  2. Install Docker into that VM
  3. Run containers in that VM
This achieves a couple things.
  • Your Docker containers are isolated within a VM. 
This is actually an important thing.  Docker has its own networking stack, but it also allows exposing the VM's underlying storage to the containers to support things like databases and configurations or even updating source easily (I sketch this just after this list). 
The model here is one VM per tenant.  Thus forming that boundary and still getting the flexibility of both containers and VMs.
  • You can run the OS of your choice.
In my experimentation I have been using Ubuntu, partly because it has good support, but primarily because they stay right up to date with the kernel.  This gives me the latest Hyper-V support within that VM.
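About that storage point above: the usual way a container sees the VM's file system is a bind mount with the -v flag on docker run.  A minimal sketch (the paths here are just placeholders):
sudo docker run -i -t -v /srv/appdata:/data ubuntu:latest /bin/bash
Inside the container, /data is the VM's /srv/appdata directory, so databases, configuration, or source can live outside the container itself.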

So, you want to set up Docker in a VM.  There are a few steps as I am outlining this in gory detail.  Here it goes:

  1. Install Ubuntu in the VM (14.04 LTS Server) or 14.10
  2. Add OpenSSH Server
  3. Determine IP
  4. Connect over SSH
  5. Update
    1. sudo apt-get update
  6. Upgrade the components (aka patch the OS)
    1. sudo apt-get upgrade -y
  7. Add Docker gpg key (that is 'qO' not 'qZero')
    1. sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -" 
  8. Add the Docker repository to the apt sources list
    1. sudo sh -c "echo deb http://get.docker.io/ubuntu docker main >> /etc/apt/sources.list.d/docker.list"
  9. Update the local apt repository after adding the docker reference
    1. sudo apt-get update
  10. Install (latest) Docker (on 12/15/14 this is 1.4.0)
    1. sudo apt-get install lxc-docker -y
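Once that finishes, a quick check that the daemon is alive:
sudo docker version
This should print both the client and server versions (1.4.0 at the time of writing).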
Now you are ready to play with the magic of Containers.