And in the popular vernacular, a container refers to a "Docker-style container".
You say: "But Docker doesn't do containers." And you are right.
These containers are what were originally known as (and still are) LXC containers, though everyone associates them with Docker.
Docker is not the container technology; Docker is container management and a container ecosystem. Docker simply made containers easy.
Now, in the virtualization world folks have used this container word for a long time. It has been used to describe the isolation models themselves.
I really wish we had a better word for this type of container, other than 'container'.
With the modern Windows OS we have:
- Process containers: This is a process. It runs in its own memory space, it inherits a security context from either a user or the system, and it shares all aspects of the OS resources. If it has a TCP listener, the port must be unique so it does not conflict with others, it has to use RAM nicely or it overruns other processes, and so on.
- Session containers: This is a user session. Enabled by the multi-user kernel. A session is a user security context container and within it are processes. The user is the security boundary.
- Machine containers: This is a virtual machine. It can be likened to a bare-metal installation. It is very heavyweight in that it is an entire OS installation. Within it run session containers and process containers. It is a very hard security boundary. It has its own networking stack, and it does not share resources (file system, RAM, CPU), though it can consume shared resources when running on a hypervisor.
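You can see the first two container types side by side on any Windows box. From a PowerShell prompt (session numbers will vary on your machine):

```shell
# List the session containers on this machine - each row is one user session
query session

# List the process containers running inside session 1
# (SESSION is a built-in tasklist filter; take the number from the output above)
tasklist /fi "SESSION eq 1"
```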
Now, 'container' containers.
A container is a bounded process that can contain processes.
A container is a file system boundary.
And, a container has its own networking stack.
A container shares the kernel and other processes with the machine on which it runs.
The processes in one container cannot see the processes in another container.
Container processes interact with each other through the networking stack, just like applications on different machines are required to.
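That networking-only interaction is easy to sketch with the Docker CLI, assuming a Linux Docker host; the `nginx` and `curlimages/curl` image names are just examples of a server and a client:

```shell
# Create a user-defined network so containers can reach each other by name
docker network create demo-net

# Start a web server in one container
docker run -d --name web --network demo-net nginx

# A second container cannot see web's processes, but it can reach web's
# network stack - exactly as a separate machine would have to
docker run --rm --network demo-net curlimages/curl http://web/
```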
But, to be technical with the language: only the running process is a 'container'. When it is not running, it is a container image.
And a container image is similar to an OS image, minus the kernel and bootloader: it is the file system, settings, and applications, while the kernel comes from the host it shares.
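The image/container distinction shows up directly in the Docker tooling (the `nginx` image name below is just an example):

```shell
# Images at rest: layered file systems plus metadata, nothing running
docker images

# Start a container: the image becomes a bounded, running process
docker run -d --name hello nginx

# Running containers only; stop the process and it drops off this list
docker ps

# The stopped container still exists on disk until you remove it
docker ps -a
```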
Now let's complexify all of this.
Linux currently has one type of container, LXC.
Windows is actually introducing two types of containers.
- Windows containers - this type of container runs like a process on your workstation. It consumes your available RAM and CPU and a folder full of files. It smells like any other application process, except: it has its own network stack, it cannot directly interact with other processes, and it can only see its own folder on the file system. It is a process in a can. Hence, container.
- Hyper-V containers - this type of container is just like the one above but with a more solid isolation boundary. It gets the benefit of hypervisor CPU and RAM management (fair share), so it is forced to play well as a process. And it meets isolation compliance standards just like a VM does. No shared OS; the image contains the kernel.
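On a Windows Docker host, this choice surfaces as the `--isolation` flag on `docker run`; the image name and tag below are examples (Microsoft publishes Server Core and Nano Server base images):

```shell
# Windows container: shares the host kernel, a process in a can
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c ver

# Hyper-V container: same image, but run inside a lightweight utility VM
# with its own kernel, giving a VM-grade isolation boundary
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c ver
```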
Images are another interesting aspect of containers.
If you have played with installing an application with Docker (such as writing a Dockerfile for a build), you begin with a base OS image (preferably from a trusted source, such as Canonical for Ubuntu). Then you layer on OS settings, application downloads, and installations.
In the end, you have this image. And this image is made up of chained folders, similar to the idea of checkpoints (VM snapshots or differencing disks).
However, in the container world, it is all files and a file system - no virtual block devices, as you would have in virtualization circles. A virtual block device is a representation of a hard drive's block layout; it is literally raw blocks, just like a physical disk.
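You can see those chained layers for yourself: each Dockerfile instruction becomes one layer. A tiny example build, assuming a Docker host and the `ubuntu:22.04` base image:

```shell
# A small Dockerfile: base image, then two layers on top (contents are illustrative)
echo 'echo hello' > app.sh
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl
COPY app.sh /usr/local/bin/app.sh
EOF

docker build -t layer-demo .

# Each row is one chained layer, newest on top - the "checkpoint" stack
docker history layer-demo
```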
Now, does this mean that since Canonical produces a 'docker' image for Ubuntu, that Microsoft will produce a 'docker' image for Windows Server? Most likely in some form.
Nano Server would make a neat base container image, Server Core as well.
Shell-based applications would be a bit hairier, and a considerably larger base image, since you have all of that Windows shell in there.
But remember, a container image is a file-based system. Now just think about maintaining that image: the potential of swapping out one of the layers of the image to add an OS patch or an application update, without having to destroy, update, and redeploy.
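That layer-swap idea is visible in Docker's build cache today: change one instruction and only the layers from that point on are rebuilt, while the earlier layers are reused untouched. A sketch (image names and contents are illustrative):

```shell
# Two-layer image on top of the base
printf 'FROM ubuntu:22.04\nRUN echo base-setup\nRUN echo app-v1\n' > Dockerfile
docker build -t myapp:v1 .

# "Swap" the top layer: change only the last instruction and rebuild.
# Docker reports "Using cache" for the unchanged layers and
# regenerates only the final one.
printf 'FROM ubuntu:22.04\nRUN echo base-setup\nRUN echo app-v2\n' > Dockerfile
docker build -t myapp:v2 .
```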
Oh, so exciting!