Wednesday, July 1, 2015

A tale of containers

Containers.  It is a word we keep hearing a lot lately.
And in the popular vernacular, a "container" refers to a Docker-style container.

You say: "But Docker doesn't do containers."  And, you are right.
These containers were originally known as (and still are) LXC containers, though everyone now associates them with Docker.
Docker is not the container technology; Docker is container management and a container ecosystem.  They only made containers easy.

Now, in the virtualization world folks have used this container word for a long time.  It has been used to describe the isolation models themselves.
I really wish we had a better word for this type of container, other than 'container'.

With the modern Windows OS we have:
  • Process containers: this is a process.  It runs in its own memory space, it inherits a security context from either a user or the system, and it shares all aspects of the OS resources.  If it has a TCP listener, the port must be unique so it does not conflict with other processes, it has to use RAM nicely or it overruns other processes, and so on.
  • Session containers: this is a user session, enabled by the multi-user kernel.  A session is a user security context container, and within it run processes.  The user is the security boundary.
  • Machine containers: this is a virtual machine.  It can be likened to a bare metal installation.  It is very heavyweight in that it is an entire OS installation.  Within it run session containers and process containers.  It is a very hard security boundary.  It has its own networking stack and it does not share resources (file system, RAM, CPU), though it can consume shared resources when running on a hypervisor.

Now, 'container' containers.

A container is a bounded process that can contain processes. 
A container is a file system boundary. 
And, a container has its own networking stack.
A container shares the kernel and other processes with the machine on which it runs.

The processes in one container cannot see the processes in another container.
Container processes interact with each other through the networking stack, just as applications on different machines must.

But, to be technical with the language: only the running process is a 'container'.  When it is not running, it is a container image.
A container image is similar to an OS image in that it holds files, settings, and applications; unlike an OS image, though, it typically carries no kernel or bootloader of its own, since the kernel is shared with the machine it runs on (the Hyper-V container being the exception).

Now, let's complex-ify all of this.

Linux currently has one type of container, LXC.

Windows is actually introducing two types of containers.
  • Windows containers - this type of container runs like a process on your workstation.  It consumes your available RAM and CPU and a folder full of files.  It smells like any application process, except: it has its own network stack, it cannot directly interact with other processes, and it can only see its own folder on the file system.  It is a process in a can.  Hence, container.
  • Hyper-V containers - this type of container is just like the one above, but with a more solid isolation boundary.  It gets the benefit of hypervisor CPU and RAM management (fair share), so it is forced to play well as a process.  And it meets isolation compliance standards just like a VM does: no shared OS, because the image contains the kernel.
The difference between the two is only in how the container runs (remember, a 'container' exists only while a container image is running).  You could think of the difference as two runtime options for the same container image.  You can run it on your Windows workstation (Ms. Developer) or you can deploy it to a Hyper-V server (Mr. Operations).  Between the two, there is a fit for your deployment requirements.

Images are another interesting aspect of containers.

If you have played with installing an application with Docker (such as by creating a Docker build file), you begin with a base OS image (preferably from a trusted source, such as Canonical for Ubuntu).  Then you layer on OS settings, application downloads, and installations.
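As a concrete sketch, a minimal Docker build file in this spirit might look like the following (the base image tag and the nginx install are illustrative choices, not a recommendation):

```dockerfile
# Start from a trusted base OS image (Canonical's Ubuntu, in this case).
FROM ubuntu:14.04

# Layer on OS settings and application installs; each instruction
# becomes another layer chained on top of the base.
RUN apt-get update && apt-get install -y nginx

# Describe the network endpoint the container will expose.
EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
```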

In the end, you have this image.  And this image is made up of chained folders, similar to the idea of checkpoints (VM snapshots or differencing disks).
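The chained-folder idea can be modeled as a union lookup: each layer holds some files, and a read falls through from the topmost layer down, much as a checkpoint chain resolves to the most recent differencing disk.  A toy sketch (layers modeled as Python dicts, purely illustrative):

```python
# Each layer maps file paths to contents; later layers sit on top.
base      = {"/etc/motd": "base OS", "/bin/app": "v1"}
app_layer = {"/bin/app": "v2"}           # overrides the base's copy
config    = {"/etc/app.conf": "port=80"}

image = [base, app_layer, config]        # the chained layers, bottom to top

def read(image, path):
    """Resolve a path by falling through the layers, topmost first."""
    for layer in reversed(image):
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

print(read(image, "/bin/app"))    # the top layer's copy wins
print(read(image, "/etc/motd"))   # falls through to the base layer
```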

However, in the container world, it is all files and a file system; there are no virtual block devices of the kind used in virtualization circles.  A virtual block device is a representation of a hard drive's block layout.  It is literally raw blocks, just like a hard drive.

Now, does this mean that since Canonical produces a 'docker' image for Ubuntu, that Microsoft will produce a 'docker' image for Windows Server?  Most likely in some form.

Nano Server would make a neat base container image; Server Core as well.
Shell-based applications would be a bit hairier, and would mean a considerably larger base image, since all of that Windows Shell has to be in there.

But remember, a container image is a file-based system.  Now, just think about maintaining that image: the potential of swapping out one of the layers of the image to add an OS patch or an application update, instead of having to destroy, update, and redeploy.
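In the same toy model of an image as a chain of layers (Python dicts, purely illustrative), patching becomes swapping one link in the chain while the other layers stay untouched:

```python
# A toy model: an image is a chain of layers, each a dict of files.
image = [
    {"/etc/motd": "base OS files"},   # base OS layer
    {"/bin/app": "app v1.0.1"},       # application layer
]

def read(image, path):
    # Reads fall through the layers, topmost first.
    for layer in reversed(image):
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

# An application update: swap out the top layer; the base OS layer
# is never rebuilt or redeployed.
image[1] = {"/bin/app": "app v1.0.2"}
print(read(image, "/bin/app"))
```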

Oh, so exciting!


Debko82 said...

Good article...but doesn't go far enough. It would have been more complete incorporating Glassware 2.0, since it is not only being embraced as the container solution for MS and is being integrated into Azure and Win10, but also leading the NextCloud for MS. It is all about Glassware Containers:

BrianEh said...

I received all three comments that you posted regarding this article. They were amazingly similar to each other, however I have allowed one through.

You ask about Sphere3D.  I can't find any technical details to substantiate any of the market-tecture on Sphere3D's website.

Without knowing any technical details, I put them in the bucket of an isolated process.  Not fundamentally different from App-V or similar application virtualization technologies.

These are processes that are isolated with their own registry space.  They don't touch the OS the way an 'installed' application does, but they still run within a user session container.  Thus their isolation boundary is at the session level.

Again, Sphere3D has a website full of market-tecture and no technical details. It is obvious they believe they have something special, but it is not evident.

Yonderbox said...

Brian, thanks for your thoughts. I'm trying to get my head around containers and nanoserver, and best strategies for what combinations make the most sense for given scenarios. The biggest mind-stretch for me is the concept of the Hyper-V container. It's like looking at a tesseract, as opposed to 3d space.
It's kind of interesting seeing Microsoft doing containers - from Windows NT on, I've seen MS as object-based, where *NIX has been file-based. From your explanation, I'm seeing containers as file-defined.
Pretty exciting times!