Logging Into Windows Server Containers

How do you log into a container to install software? Ah … you don’t actually log into a container because a container is not a virtual machine. Confusing? Slightly!

What you actually do is remotely execute commands inside of a container; this works something like PowerShell Direct, a new feature in Windows Server 2016 (WS2016).

There are two ways to run commands inside of a container.

Which Container?

In Technical Preview 3 (TPv3), the methods we will use to execute commands inside of a container don’t use the name of the container; instead they use a unique container ID. This is because containers can have duplicate names – I really don’t like that!

So, if you want to know which container you’re targeting, then do something along the lines of the following to store the container ID. The first example creates a new container and stores the resulting container’s metadata in a variable called $Container.

$Container = New-Container -Name TestContainer -ContainerImageName WindowsServerCore

Note that I didn’t connect this container to a virtual switch!

The following example retrieves a container, assuming that it has a unique name.

$Container = Get-Container TestContainer


If you want to fire a single command into a container then Invoke-Command is the cmdlet to use. This method sends a single instruction into the container. This can be a command or a script block. Here’s a script block example:

Invoke-Command -ContainerID $Container.ContainerId -RunAsAdministrator -ScriptBlock { New-Item -Path C:\RemoteTest -ItemType Directory }

Note how I’m using the ContainerId property of $Container to identify the container.

The nice thing about Invoke-Command is that it is not interactive; the command remotely runs the script block without an interactive login. That makes Invoke-Command perfect for scripting; you write a script that deploys a container, starts it, does some stuff inside of the container, and then configures networking in the VM host. Lots of nice automation, there!
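Putting that together, a scripted end-to-end flow might look like this (a sketch using the TPv3 cmdlets shown in this post; the container name is an example):

```
# Deploy a container, start it, and run a command inside it - no interactive login
$Container = New-Container -Name WebTier -ContainerImageName WindowsServerCore
Start-Container $Container
Start-Sleep -Seconds 30   # give the container time to boot before connecting
Invoke-Command -ContainerId $Container.ContainerId -RunAsAdministrator -ScriptBlock {
    New-Item -Path C:\RemoteTest -ItemType Directory
}
```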


If you want an interactive session with a container then Enter-PSSession is the way to go. Using this cmdlet you get a PowerShell session in the container where you can run commands and see the results. This is great for once-off stuff and troubleshooting, but it’s no good for automation/scripting.

Enter-PSSession -ContainerID $Container.ContainerId -RunAsAdministrator

Warning – In TPv3 we’ve seen that rushing into running this cmdlet after creating your new container can lead to an error. Wait a few seconds before trying to connect to the container.

No Network Required!

These methods use something called PowerShell Direct, a new feature in WS2016 – it’s actually PowerShell over a named pipe. The above example deliberately created a container that has no networking. I can still run commands inside of the container, or get an interactive PowerShell session inside of the container, without connectivity – I just need to be able to get onto the VM host.

Creating & Deploying Windows Server Containers Using NAT and PowerShell

This post will show you how to use PowerShell to deploy Windows Server Containers using Windows Server 2016 (WS2016) Technical Preview 3 (TPv3).

Note: I wanted to show you how to deploy IIS, but I found that IIS would only work on my first container, and fail on the others.

This example will deploy multiple containers running the nginx web server on the same VM host. NAT will be used to network the containers using a private IP range on the VM host’s internal virtual switch.

Note: The VM host is already created at this point, with a working NATing virtual switch that uses a private IP range, and an IP from that range assigned to the VM host.

Create the nginx Container Image

The beauty of containers is that you create a set of reusable container images that have a parent-child relationship. The images are stored in a flat-file repository.

Note: In TPv3, the repository is local on the VM host. Microsoft will add a shared repository feature in later releases of WS2016.

Log into the VM host (which runs Server Core) and launch PowerShell.


In this example I will create a new container using the default WindowsServerCore container OS image. Note that I capture the instance of the new container in $Container; this allows me to easily reference the container and its attributes in later cmdlets:

$Container = New-Container -Name nginx -ContainerImageName WindowsServerCore -SwitchName "Virtual Switch"

The container is linked to the virtual switch in the VM host called “Virtual Switch”. This virtual switch is associated with the VM’s sole virtual NIC, and sharing is enabled to allow the VM to also have network connectivity. The switch is enabled for NATing, meaning that containers that connect to the switch will have an IP of 192.168.250.x (in my setup). More on this stuff later.

Start the new container:

Start-Container $Container

Wait 30 seconds for the container to boot up and then remote into it:

Enter-PSSession -ContainerId $Container.ContainerId -RunAsAdministrator

I would normally use IIS here, but I had trouble with IIS in Windows Server Containers (TPv3). So instead, I’m going to deploy nginx web server. Run the following to download the installer (zip file):

WGet -Uri 'http://nginx.org/download/nginx-1.9.3.zip' -OutFile "c:\nginx-1.9.3.zip"

The next command will expand the zip file to c:\nginx-1.9.3\:

Expand-Archive -Path C:\nginx-1.9.3.zip -DestinationPath c:\ -Force

There isn’t really an installer. nginx exists as an executable that can be run, which you’ll see later. The service “install” is done, so now we’ll exit from the remote session:
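Leaving the interactive session is a single command, which drops you back to the VM host’s PowerShell prompt:

```
exit
```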


We now have a golden container that we want to capture. To do this, we must first shut down the container:

Stop-Container $Container

Now we create a new reusable container image called nginx:

New-ContainerImage -Container $Container -Publisher AFinn -Name nginx -Version 1.0

The process only captures the differences between the original container (created from the WindowsServerCore container OS image) and where the machine is now. The new container image will be linked to the image that created the container. So, if I create a container image called nginx, it will have a parent of WindowsServerCore.
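If you want to verify the result, the repository can be queried (assuming the TPv3 Get-ContainerImage cmdlet, which lists the images in the local repository):

```
# List all images in the local repository; nginx should now appear alongside WindowsServerCore
Get-ContainerImage
```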


I’m done with the nginx container so I’ll remove it:

Remove-Container $Container -Force

Deploying A Service Using A Container

The beauty of containers is how quick it is to deploy a new service. We can deploy a new nginx web server by simply deploying a new container from the nginx container image. All dependencies, WindowsServerCore in this case, will also be automatically deployed in the container.

Actually, “deploy” is the wrong word. In fact, a link is created to the images in the repository, and only changes are saved with the container. So, if I add web content to a new nginx container, the container itself stores just that content; the nginx service data comes from the nginx container image in the repository, and the OS comes from the container OS image in the repository and the VM host.

Let’s deploy a new container with nginx. Once again I will store the resulting object in a variable for later use:

$Web2 = New-Container -Name Web2 -ContainerImageName nginx -SwitchName "Virtual Switch"

Then we start the container:

Start-Container $Web2

Wait 30 seconds before you attempt to remote into the container:

Enter-PSSession -ContainerId $Web2.ContainerId -RunAsAdministrator

Now I browse into the extracted nginx folder:

cd c:\nginx-1.9.3\

And then I start up the web service:

start nginx

Yes, I could have figured out how to autostart nginx in the original template container. Let’s move on …

I want to confirm that nginx is running, so I check what ports are listening using:

netstat -an

I then retrieve the IP of the container:
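Inside the container, the simplest way to do that is the venerable ipconfig:

```
ipconfig
```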


Remember that the container lives in the NAT network of the virtual switch, not on my lab’s LAN. My VM host’s setup script (Install-ContainerHost.ps1) configured the NAT range, and the new container, Web2, has an IP in that range.

I then exit the remote session:


There are two steps left to allow HTTP traffic to the web service in the container. First, we need to create a NAT rule. The container will communicate on the LAN via the IP of the VM host. We need to create a rule that says that any TCP traffic on a select port (TCP 82 here) will be forwarded to TCP 80 of the container. Run this on the VM host:

Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress <VMHostIP> -InternalIPAddress <ContainerIP> -InternalPort 80 -ExternalPort 82

Finally, I need to create a firewall rule in the VM host to allow inbound TCP 82 traffic:

New-NetFirewallRule -Name "TCP82" -DisplayName "HTTP on TCP/82" -Protocol tcp -LocalPort 82 -Action Allow -Enabled True

Now if I open up a browser on the LAN, I should be able to browse to the web service in the container: I browse to the VM host’s IP on port 82 and the default nginx page appears.

Deploy More of the Service

OK, we got one web server up. The beauty of containers is that you can quickly deploy lots of identical services. Let’s do that again. The next snippet of code will deploy an additional nginx container, start it, wait 30 seconds, and then log into it via session remoting:

$Web3 = New-Container -Name Web3 -ContainerImageName nginx -SwitchName "Virtual Switch"

Start-Container $Web3

Sleep 30

Enter-PSSession -ContainerId $Web3.ContainerId -RunAsAdministrator

I then start nginx, verify that it’s running, and get the NAT IP of the container:

cd c:\nginx-1.9.3\

start nginx

netstat -an



Now I can create a NAT mapping for the container in the networking of the VM host. In this case we will forward traffic arriving on TCP 83 to TCP 80 of the container:

Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress <VMHostIP> -InternalIPAddress <ContainerIP> -InternalPort 80 -ExternalPort 83

And then we open up a firewall rule on the VM host to allow inbound traffic on TCP 83:

New-NetFirewallRule -Name "TCP83" -DisplayName "HTTP on TCP/83" -Protocol tcp -LocalPort 83 -Action Allow -Enabled True

Now I can browse to two identical but independent nginx web services, one on port 82 and one on port 83, all accomplished with very little work and a tiny footprint. One might be production. One might be test. I could fire up another for development. And there’s nothing stopping me firing up more to troubleshoot, branch code, test upgrades, and more, getting a quick and identical deployment every time that I can dump in seconds:

Remove-Container $Web2, $Web3

If you have apps that are suitable (stateless and no AD requirement) then containers could be very cool.

“Install-WindowsFeature : An unexpected error has occurred” Error When You Run Install-WindowsFeature In A Windows Server Container

This is one of those issues that makes me question a lot of the step-by-step blog posts on Windows Server Containers that are out there. Plenty of people were quick to publish guides on containers without mentioning this issue, which I always encounter; I suspect there’s a lot of copy/pasting from Microsoft sites, with little actual testing, in a rush to be first to publish. It’s clear that many bloggers didn’t try to install things in a container that required administrator rights, because UAC was blocking those actions. In my case, it was installing IIS in a Windows Server 2016 (WS2016) Technical Preview 3 (TPv3) container.

In my lab, I created a new container and then logged in using the following (I had already populated $Container by returning the container object into the variable):

Enter-PSSession -ContainerId $Container.ContainerId -RunAsAdministrator

And then I tried to install some role/feature, such as IIS using Install-WindowsFeature:

Install-WindowsFeature -Name Web-Server

I logged in using -RunAsAdministrator so I should have no issues with UAC, right? Wrong! The installation fails as follows:

Install-WindowsFeature : An unexpected error has occurred. The system cannot find the file specified.  Error: 0x80070002
+ CategoryInfo          : InvalidResult: (@{Vhd=; Credent…Name=localhost}:PSObject) [Install-WindowsFeature], Exception
+ FullyQualifiedErrorId : RegistryKey_OpenSubKey_Failed,Microsoft.Windows.ServerManager.Commands.AddWindowsFeatureCommand


What’s the solution? When you are remoted into the container you need to raise your administrator privileges to counter UAC. You can do this as follows, after you log into the container:

Start-Process Powershell.exe -Verb runAs

Run Install-WindowsFeature in the new elevated window and it will complete.



Note: I have found in my testing that IIS behaves poorly in TPv3. This might be why Microsoft’s getting started guides on MSDN use nginx web server instead of IIS! I’ve confirmed that nginx works perfectly well.

Why Are My Windows Server Containers Not On The Network?

I find containers easy to create, and it’s pretty simple to build a library of container images. At least, that’s what I found when I got to play with containers for the first time on a pre-built Windows Server 2016 (WS2016) Technical Preview 3 (TPv3) lab. But when I started playing with containers in my own lab over the last few days, I had some issues; the one thing I had never done was create a host by myself – a VM host to be precise (a Hyper-V VM that will host many containers). In this post I’ll explain how, by default, my containers were not networked, and how I fixed it. This post was written for the TPv3 release, and Microsoft might fix things in later releases, but you might find some useful troubleshooting info here.

Some Theory

I guess that most people will deploy Windows Server containers in virtual machines. If you work in the Hyper-V world then you’ll use Hyper-V VMs. In this timeframe the documented process for creating a VM host is to download and run a script called New-ContainerHost.PS1. You can get that by running:

wget -uri https://aka.ms/newcontainerhost -OutFile New-ContainerHost.ps1

You’ll get a script that you download, and then you’re told by Microsoft – and every other blog that copied and pasted without testing – to run:

.\New-ContainerHost.ps1 -VmName <NewContainerHostVMName> -Password <NewContainerHostVMPassword>

What happens then?

  • A bunch of stuff is downloaded in a compressed file, including a 12 GB VHD called WindowsServer_en-us_TP3_Container_VHD.vhd.
  • The VHD is mounted and some files are dropped into it, including Install-ContainerHost.ps1
  • A new VM is created. The C: drive is a differencing VHD that uses the downloaded VHD as the parent
  • The VM is booted.
  • When the VM is running, Install-ContainerHost is run, and the environment is created in the VM.
  • Part of this is the creation of a virtual switch inside the VM. Here’s where things can go wrong by default.
  • The script completes and it’s time to get going.

What’s the VM switch inside a VM all about? It’s not just your regular old VM switch. It’s a NATing switch. The idea here is that containers that will run inside of the VM will operate on a private address space. The containers connect to the VM switch which provides the NATing functionality. The VM switch is connected to the vNIC in the VM. The guest OS of the VM is connected to the network via a regular old switch sharing process (a management OS vNIC in the guest OS).
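You can inspect this NAT setup from inside the VM’s guest OS with the standard NetNat cmdlets (“ContainerNat” is the NAT object name used elsewhere in this post):

```
# Show the NAT object and its internal prefix
Get-NetNat

# Show any port-forwarding rules that have been created
Get-NetNatStaticMapping
```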

What Goes Wrong?

Let’s assume that you have read some blogs that were published very quickly on the topic of containers and you’ve copied and pasted the setup of a new VM host. I tried that. Let’s see what happened … there were two issues that left me with network-disconnected containers:

Disconnected VM NIC

Almost every example I saw of New-ContainerHost fails to include one necessary step: specifying the name of a virtual switch on the physical host to connect the VM to. You can do this after the fact, but I prefer to connect the VM straight away. The example below adds a flag to specify which virtual switch to connect the VM to. I’ve also added a flag to skip the installation of Docker.

.\New-ContainerHost.ps1 -VmName <newVMName> -Password <NewVMPassword> -SkipDocker -SwitchName <PhysicalHostSwitch>

This issue is easy enough to diagnose – your VM’s guest OS can’t get a DHCP address so you connect the VM’s vNIC to the host’s virtual switch.
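If you did run New-ContainerHost without a switch, you can connect the vNIC after the fact from the physical host (a standard Hyper-V cmdlet; the VM and switch names here are examples):

```
# Attach the container VM host's vNIC to the physical host's virtual switch
Connect-VMNetworkAdapter -VMName NewContainerVM -SwitchName "PhysicalHostSwitch"
```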

New-NetNat Fails

This is the sticky issue because it deals with new stuff. New-NetNat will create:

… a Network Address Translation (NAT) object that translates an internal network address to an external network address. NAT modifies IP address and port information in packet headers.

Fab! Except it kept failing in my lab with this error:

New-NetNat : No Matching interface was found for prefix (null).


This wrecked my head. I was about to give up on containers when it hit me. I’d already tried building my own VM, and I had downloaded and run a script called Install-ContainerHost in a VM to enable containers. I logged into my VM and there I found Install-ContainerHost on the root of C:. I copied it from the VM (running Server Core) to another machine with a UI and edited it using ISE. I searched through it and found a bunch of parameters, including a variable called $NATSubnetPrefix that was set to a default subnet prefix.

There was the issue: the default NAT prefix didn’t suit my lab’s network address, so this wasn’t going to work. I needed a different range to use behind the NATing virtual switch in the container VM host, so I edited the variable to define a new network address for NATing:
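As an alternative to re-running the whole script, it should also be possible to fix the NAT object directly from the VM’s guest OS; this is a sketch, using the 192.168.250.x range from my setup as an example prefix:

```
# Remove the failed/overlapping NAT object, then recreate it with a usable prefix
Get-NetNat | Remove-NetNat -Confirm:$false
New-NetNat -Name ContainerNat -InternalIPInterfaceAddressPrefix "192.168.250.0/24"
```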



I removed the VM switch and then re-ran Install-ContainerHost in the VM. The script ran perfectly. I logged in and created a new container (the container OS image is on the C: drive). I used Enter-PSSession to log into the container and saw that the container had an IP in the new NAT range. This was NATed via the virtual switch in the VM, which in turn is connected to the top-of-rack switch via the physical host’s virtual switch.

Sorted. At least, that was the fix for a broken new container VM host. How do I solve this long term?

I can tell you that mounting the downloaded WindowsServer_en-us_TP3_Container_VHD.vhd and editing Install-ContainerHost there won’t work; Microsoft appears to download it afresh every time into the differencing disk.

The solution is to get a copy of Install-ContainerHost.PS1 (from the VHD) and save it onto your host or an accessible network location. Then you run New-ContainerHost with the -ScriptPath parameter to specify your own copy of Install-ContainerHost. Here’s an example where I saved my edited copy of Install-ContainerHost (with the new NAT network address) on CSV1:

.\New-ContainerHost.ps1 -VmName NewContainerVM -Password P@ssw0rd -SkipDocker -SwitchName SetSwitch -ScriptPath "C:\ClusterStorage\CSV1\Install-ContainerHost.ps1"

That runs perfectly, and no hacks are required to get containers to talk on the network. I then successfully deployed IIS in a container, enabled a NATing rule, and verified that the new site was accessible on the LAN.

Introducing Windows Server Containers

Technical Preview 3 of Windows Server 2016 is out and one of the headline feature additions to this build is Windows Server Containers. What are they? And how do they work? Why would you use them?


Windows Server Containers is Microsoft’s implementation of an open source world technology that has been made famous by a company called Docker. In fact:

  • Microsoft’s work is a result of a partnership with Docker, one which was described to me as being “one of the fastest negotiated partnerships” and one that has had encouragement from CEO Satya Nadella.
  • Windows Server Containers will be compatible with Linux containers.
  • You can manage Windows Server Containers using Docker, which has a Windows command line client. Don’t worry – you won’t have to go down this route if you don’t want to install horrid prerequisites such as Oracle VirtualBox (!!!).

What are Containers?

Containers have been around a while, but most of us who live outside of the Linux DevOps world won’t have had any interaction with them. The technology is a new kind of virtualisation that enables rapid (near-instant) deployment of applications.

Like most virtualisation, containers take advantage of the fact that most machines are over-resourced; we over-spec a machine, install software, and then the machine is under-utilised. Fifteen years ago, lots of people attempted to install more than one application per server. That bad idea usually ended up with P45s (“pink slips”) being handed out – otherwise known as a “career-ending event”. That’s because complex applications make poor neighbours on a single operating system with no inter-app isolation.

Machine virtualisation (vSphere, Hyper-V, etc.) takes these big machines and uses software to carve the physical hosts into lots of virtual machines; each virtual machine has its own guest OS, and this isolation provides a great place to install applications. The positive is that we have rock-solid boundaries, including security, between the VMs, but we have more OSs to manage. We can quickly provision a VM from a template, but then we have to install lots of pre-reqs and install the app afterwards. OK – we can have VM templates of various configs, but a hundred templates later, we have a very full library with lots of guest OSs that need to be managed, updated, etc.

Containers are a kind of virtualisation that resides one layer higher; it’s referred to as OS virtualisation. The idea is that we provision a container on a machine (physical or virtual). The container is given a share of CPU, RAM, and a network connection. Into this container we can deploy a container OS image. And then onto that OS image we can install prerequisites and an application. Here’s the cool bit: everything is really quick (typing the command takes longer than the deployment) and you can easily capture images to a repository.

How easy is it? It’s very easy – I recently got hands-on access to Windows Server Containers in a supervised lab, and I was able to deploy and image stuff using a PowerShell module without any documentation and with very little assistance. It helped that I’d watched a session on Containers from Microsoft Ignite.

How Do Containers Work?

There are a few terms you should get to know:

  • Windows Server Container: The Windows Server implementation of containers. It provides application isolation via OS virtualisation, but it does not create a security boundary between applications on the same host. Containers are stateless, so stateful data is stored elsewhere, e.g. SMB 3.0.
  • Hyper-V Container: This is a variation of the technology that uses Hyper-V virtualization to securely isolate containers from each other – this is why nested virtualisation was added to WS2016 Hyper-V.
  • Container OS Image: This is the OS that runs in the container.
  • Container Image: Customisations of a container (installing runtimes, services, etc) can be saved off for later reuse. This is the mechanism that makes containers so powerful.
  • Repository: This is a flat file structure that contains container OS images and container images.

Note: This is a high level concept post and is not a step-by-step instructional guide.

We start off with:

  • A container host: This machine will run containers. Note that a Hyper-V virtual switch is created to share the host’s network connection with containers, thus network-enabling those containers when they run.
  • A repository: Here we store container OS images and container images. This repository can be local (in TPv3) or can be an SMB 3.0 file share (not in TPv3, but hopefully in a later release).


The first step is to create a container. This is accomplished, natively, using a Containers PowerShell module, which, from experience, is pretty logically laid out and easy to use. Alternatively you can use Docker. I guess System Center will add support too.

When you create the container you specify the name and can offer a few more details such as network connection to the host’s virtual switch (you can add this retrospectively), RAM and CPU.

You then have a blank and useless container. To make it useful you need to add a container OS image. This is retrieved from the Repository, which can be local (in a lab) or on an SMB 3.0 file share (real world). Note that an OS is not installed in the container. The container points at the repository and only differences are saved locally.

How long does it take to deploy the container OS image? You type the command, press return, and the OS is sitting there, waiting for you to start the container. Folks, Windows Server Containers are FAST – they are Vin Diesel parachuting a car from a plane fast.


Now you can use Enter-PSSession to log into a container using PowerShell and start installing and configuring stuff.

Let’s say you want to install PHP. You need to:

  1. Get the installer available to the container, maybe via the network
  2. Ensure that the installer either works silently (unattended) or works from command line

  3. Install the program, e.g. PHP, and then configure it the way you want it (from the command line)


Great, we now have PHP in the container. But there’s a good chance that I’ll need PHP in lots of future containers. We can create a container image from that PHP install. This process will capture the changes from the container as it was last deployed (the PHP install) and save those changes to the repository as a container image. The very quick process is:

  1. Stop the container
  2. Capture the container image

Note that the container image now has a link to the container OS image that it was installed on, i.e. there is a dependency link, and I’ll come back to this.

Let’s deploy another container, Container2, from the container OS image.


For some insane reason, I want to install the malware gateway known as Java into this container.


Once again, I can shut down this new container and create a container image from this Java installation. This new container image also has a link to the required container OS image.


Right, let’s remove Container1 and Container2 – something that takes seconds. I now have a container OS image for Windows Server 2012 R2 and container images for PHP and Java. Let’s imagine that a developer needs to deploy an application that requires PHP. What do they need to do? It’s quite easy – they create a container from the PHP container image. Windows Server Containers knows that PHP requires the Windows Server container OS image, and that is deployed too.

The entire deployment is near instant because nothing is deployed; the container links to the images in the repository and saves changes locally.


Think about this for a second – we’ve just deployed a configured OS in little more time than it takes to type a command. We’ve also modelled a fairly simple application dependency. Let’s complicate things.

The developer installs WordPress into the new container.


The dev plans on creating multiple copies of their application (dev, test, and production) and, like many test/dev environments, they need an easy way to reset, rebuild, and spin up variations; there’s nothing like containers for this sort of work. The dev shuts down Container3 and then creates a new container image. This process captures the changes since the last deployment – the WordPress installation – and saves a container image in the repository. Note that this container image doesn’t include the contents of PHP or Windows Server, but it does link to PHP, and PHP links to Windows Server.


The dev is done and resets the environment. Now she wants to deploy 1 container for dev, 1 for test, and 1 for production. Simple! This requires 3 commands, each of which creates a new container from the WordPress container image, which logically uses the required PHP image and PHP’s required Windows Server image.

Nothing is actually deployed to the containers; each container links to the images in the repository and saves changes locally. Each container is isolated from the other to provide application stability (but not security – this is where Hyper-V Containers comes into play). And best of all – the dev has had the experience of:

  • Saying “I want three copies of WordPress”
  • Getting the OS and all WordPress pre-requisites
  • Getting them instantly
  • Getting 3 identical deployments


From the administrator’s perspective, they’ve not had to be involved in the deployment, and the repository is pretty simple. There’s no need for a VM template with Windows Server, another with Windows Server & PHP, and another with Windows Server, PHP & WordPress. Instead, there is an image for Windows Server, an image for PHP, and an image for WordPress, with links providing the dependencies.

And yes, the repository is a flat file structure so there’s no accidental DBA stuff to see here.

Why Would You Use Containers?

If you operate in the SME space then keep moving, and don’t bother with Containers unless they’re in an exam you need to pass to satisfy the HR drones. Containers are aimed at larger environments where there is application sprawl and repetitive installations.

Is this similar to what SCVMM 2012 introduced with Server App-V and service templates? At a very high level, yes, but Windows Server Containers is easy to use and probably a heck of a lot more stable.

Note that Containers are best suited for stateless workloads. If you want to save data then save it elsewhere, e.g. on SMB 3.0 storage. What about MySQL and SQL Server? Based on what was stated at Ignite, there’s a solution (or one in the works); they are probably using SMB 3.0 to save the databases outside of the container. This might require more digging, but I wonder if databases would really be a good fit for containers. And I wonder, much like with Azure VMs, if there will be a later revision that brings us stateful containers.

I don’t imagine that my market at work (SMEs) will use Windows Server Containers, but if I was back working as an admin in a large enterprise then I would definitely start checking out this technology. If I worked in a software development environment then I would also check out containers for a way to rapidly provision new test and dev labs that are easy to deploy and space efficient.


Here is a link to the Windows Server containers page on the TechNet Library.

We won’t see Hyper-V containers in TPv3 – that will come in a later release, I believe later in 2015.

Ignite 2015 – Windows Server Containers

Here are my notes from the recording of Microsoft’s New Windows Server Containers, presented by Taylor Brown and Arno Mihm. IMO, this is an unusual tech because it is focused on DevOps – it spans both IT pro and dev worlds. FYI, it took me twice as long as normal to get through this video. This is new stuff and it is heavy going.


  • You will now know enough about containers to be dangerous 🙂
  • Learn where containers are the right fit
  • Understand what Microsoft is doing with containers in Windows Server 2016.

Purpose of Containers

  • We used to deploy 1 application per OS per physical server. VERY slow to deploy.
  • Then we got more agility and cost efficiencies by running 1 application per VM, with many VMs per physical server. This is faster than physical deployment, but developers still wait on VMs to deploy.

Containers move towards a “many applications per server” model, where that server is either physical or virtual. This is the fastest way to deploy applications.

Container Ecosystem

An operating system virtualization layer is placed onto the OS (physical or virtual) of the machine that will run the containers. This lives between the user and kernel modes, creating boundaries in which you can run an application. Many of these applications can run side by side without impacting each other. Images, containing functionality, are run on top of the OS and create aggregations of functionality. An image repository enables image sharing and reuse.


When you create a container, a sandbox area is created to capture writes; the original image is read only. The Windows container sees Windows and thinks it’s regular Windows. A framework is installed into the container, and this write is only stored in the sandbox, not the original image. The sandbox contents can be preserved, turning the sandbox into a new read-only image, which can be shared in the repository. When you deploy this new image as a new container, it contains the framework and has the same view of Windows beneath, and the container has a new empty sandbox to redirect writes to.

You might install an application into this new container; the sandbox captures the associated writes. Once again, you can preserve the modified sandbox as an image in the repository.

What you get is layered images in a repository, which can be deployed independently of each other, subject to the obvious pre-requisites. This creates very granular reuse of the individual layers, e.g. the framework image can be deployed over and over into new containers.


In the demo, a VM is running Docker, the tool for managing containers, and a Windows machine has the Docker management utility installed. There is a command-line UI.

docker images < lists the images in the repository.

There is an image called windowsservercore. He runs:

docker run --rm -it windowsservercore cmd


  • --rm (two hyphens): Remove the sandbox afterwards
  • -it: give me an interactive console
  • cmd: the program he wants the container to run

A container with a new view of Windows starts up a few seconds later and a command prompt (the desired program) appears. This is much faster than deploying a Windows guest OS VM on any hypervisor. He starts a second one. On the first, he deletes files from C: and deletes HKLM from the registry, and the host machine and second container are unaffected – all changes are written to the sandbox of the first container. Closing the command prompt of the first container erases all traces of it (--rm).
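To make that concrete, the demo boils down to something like this (illustrative only – the exact files and registry keys he deleted weren't listed):

docker run --rm -it windowsservercore cmd

Run the same command in a second console to start a second container, then do something destructive inside the first; the writes land in the first container's sandbox, so the host and the second container never see them. Typing exit ends the container, and --rm discards the sandbox with it.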

Development Process Using Containers

The image repository can be local to a machine (local repository) or shared to the company (central repository).

First step: what application framework is required for the project … .Net, node.js, PHP, etc? Go to the repository and pull that image over; any dependencies are described in the image and are deployed automatically to the new container. So, if I deploy .NET, a Windows Server image will be deployed automatically as a dependency.
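For example, pulling the framework image used elsewhere in the session (assuming a configured repository) would look like:

docker pull windowsservercore

Any parent images it depends on come down automatically with it.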

The coding process is the same as usual for the devs, with the same tools as before. The finished program is installed into a container and preserved as a new “immutable image”. You can allow selected people or anyone to use this image in their containers, and the application is now very easy and quick to deploy; deploying the application image to a container automatically deploys the dependencies, e.g. the runtime and the OS image. Remember – future containers can be deployed with --rm, making it easy to remove and reset – great for stateless deployments such as unit testing. Every deployment of this application will be identical – great for distributed testing or operations deployment.

You can run versions of images, meaning that it’s easy to rollback a service to a previous version if there’s an issue.


There is a simple “hello world” program installed in a container. There is a docker file, and this is a text file with a set of directions for building a new container image.

The prereqs are listed with FROM; here you see the previously mentioned windowsservercore image.

WORKDIR sets the baseline path in the OS for installing the program, in this case, the root of C:.

Then commands are run to install the software, and a default command is specified – what will run when the resulting container starts. As you can see, this is a pretty simple example.
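Pieced together from that description, the docker file probably looked something like the below – the file and executable names are my guesses, not shown in the session:

FROM windowsservercore
WORKDIR /
ADD demoapp.exe /demoapp.exe
CMD demoapp.exe

FROM names the prerequisite image, WORKDIR sets the baseline path (the root of C:), ADD copies the program in, and CMD sets what runs by default when a container starts from the image.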


He then runs:

docker build -t demoapp:1 . < which creates an image called demoapp with a version of 1. -t tags the image; the trailing . is the build context (the folder containing the docker file).

Running docker images shows the new image in the repository. Executing the below will deploy the required windowsservercore image and the version 1 demoapp image, and execute demoapp.exe – no need to specify the command because the docker file specified a default executable.

docker run --rm -it demoapp:1

He goes back to the demoapp source code, compiles it and installs it into a container. He rebuilds it as version 2:

docker build -t demoapp:2 .

And then he runs version 2 of the app:

docker run --rm -it demoapp:2

And it fails – that’s because he deliberately put a bug in the code – a missing dependent DLL from Visual Studio. It’s easy to blow the version 2 container away (--rm) and deploy version 1 in a few seconds.

What Containers Offer

  • Very fast code iteration: You’re using the same code in dev/test, unit test, pilot and production.
  • There are container resource controls that we are used to: CPU, bandwidth, IOPS, etc. This enables co-hosting of applications in a single OS with predictable levels of performance (SLAs).
  • Rapid deployment: layering of containers for automated dependency deployment, and the sheer speed of containers means applications will go from dev to production very quickly, and rollback is also near instant. Infrastructure no longer slows down deployment or change.
  • Defined state separation: Each layer is immutable and isolated from the layers above and below it in the container. Each layer is just differences.
  • Immutability: You get predictable functionality and behaviour from each layer for every deployment.

Things that Containers are Ideal For

  • Distributed compute
  • Databases: The database service can be in a container, with the data outside the container.
  • Web
  • Scale-out
  • Tasks

Note that you’ll have to store data in and access it from somewhere that is persistent.

Container Operating System Environments

  • Nano-Server: Highly optimized, and for born-in-the-cloud applications.
  • Server Core: Highly compatible, and for traditional applications.

Microsoft-Provided Runtimes

Two will be provided by Microsoft:

  • Windows Server Container: Hosting, highly automated, secure, scalable & elastic, efficient, trusted multi-tenancy. This uses a shared-kernel model – the containers run on the same machine OS.
  • Hyper-V Container: Shared hosting, regulate workloads, highly automated, secure, scalable and elastic, efficient, public multi-tenancy. Containers are placed into a “Hyper-V partition wrap”, meaning that there is no sharing of the machine OS.

Both runtimes use the same image formats. Choosing one or the other is a deployment-time decision, with one flag making the difference.

Here’s how you can run both kinds of containers on a physical machine:


And you can run both kinds of containers in a virtual machine. Hyper-V containers can be run in a virtual machine that is running the Hyper-V role. The physical host must expose virtualization of the VT instruction sets to that virtual machine (ah, now things get interesting, eh?). The virtual machine is a Hyper-V host … hmm …


Choosing the Right Tools

You can run containers in:

  • Azure
  • On-premises
  • With a service provider

The container technologies can be:

  • Windows Server Containers
  • Linux: You can do this right now in Azure

Management tools:

  • PowerShell support will be coming
  • Docker
  • Others

I think I read previously that System Center would add support. Visual Studio was demonstrated at Build recently. And lots of dev languages and runtimes are supported. Coders don’t have to write with new SDKs; what’s more important is that Azure Service Fabric will allow you to upload your code and it will handle the containers.

Virtual machines are going nowhere. They will be one deployment option. Sometimes containers are the right choice, and sometimes VMs are. Note: you don’t join containers to AD. It’s a bit of a weird thing to do, because the containers are exact clones with duplicate SIDs. So you need to use a different form of authentication for services.

When can You Play With Containers?

  • Preview of Windows Server Containers: coming this summer
  • Preview of Hyper-V Containers: planned for this year

Containers will be in the final RTM of WS2016. You will be able to learn more on the Windows Server Containers site when content is added.


Taylor Brown, who ran the earlier demos, finished up the session with a series of additional demos.

docker history <name of image> < shows how the image was built – looks like the dockerfile contents in reverse order. Note that passwords used in this file to install software appear to be legible in the image.

He tries to run a GUI tool from a container console – no joy. Instead, you can remote desktop into the container (get the IP of the container instance) and then run the tool in the Remote Desktop session. The tool run is Process Explorer.

If you run a system tool in the container, e.g. Process Explorer, then you only see things within the container. If you run a tool on the machine, then you have a global view of all processes.

If you run Task Manager, go to Details and add the session column, you can see which processes are owned by the host machine and which are owned by containers. Session 0 is the machine.

Runs docker run -it windowsservercore cmd < does not include --rm, which means we want to keep the sandbox when the container is closed. Typing exit in the container’s CMD will end the container but the sandbox is kept.

Running docker ps -a shows the container ID and when the container was created/exited.

Running docker commit with the container ID and a name converts the sandbox into an image … all changes to the container are stored in the new image.
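Putting those last few commands together, the keep-and-commit flow is something like the below (newimage is a name I’ve made up for illustration):

docker run -it windowsservercore cmd
(make changes inside the container, then type exit)
docker ps -a
docker commit <container ID> newimage
docker images

The new image then shows up in the repository alongside the others and can be deployed like any other image.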

Other notes:

The IP of the container is injected in, and is not the result of a setup. A directory can be mapped into a container. This is how things like databases are split into stateless and stateful; the container runs the services and the database/config files are injected into the container. Maybe SMB 3.0 databases would be good here?
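In Docker terms that directory mapping is a volume; illustratively, it would look something like the below (the exact TPv3 syntax wasn’t shown in the session, so treat this as a sketch):

docker run -it -v c:\data:c:\data windowsservercore cmd

The service in the container reads and writes c:\data, which actually lives on the host, keeping the container itself stateless.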


  • How big are containers on the disk? The images are in the repository. There is no local copy – they are referred to over the network. The footprint of the container on the machine is the running state (memory, CPU, network, and sandbox), the size of which is dictated by your application.
  • There is no plan to build HA tech into containers. Build HA into the application. Containers are stateless. Or you can deploy containers in HA VMs via Hyper-V.
  • Is a full OS running in the container? They have a view of a full OS. The image of Core that Microsoft will ship is almost a full image of Windows … but remember that the image is referenced from the repository, not copied.
  • Is this Server App-V? No. Conceptually at a really really high level they are similar, but Containers offer a much greater level of isolation and the cross-platform/cloud/runtime support is much greater too.
  • Each container can have its own IP and MAC address. It can use the Hyper-V virtual switch. NATing will also be possible as an alternative at the virtual switch. Lots of other virtualization features are available too.
  • Behind the scenes, the image is an exploded set of files in the repository. No container can peek into the directory of another container.
  • Microsoft is still looking at which of its own products it will support in Containers. High priority examples are SQL and IIS.
  • Memory scale: It depends on the services/applications running in the containers. There is some kind of memory de-duplication technology for the common memory set, and further optimizations will be introduced over time.
  • There is work being done to make sure you pull down the right OS image for the OS on your machine.
  • If you reboot a container host what happens? Container orchestration tools stop the containers on the host, and create new instances on other hosts. The application layer needs to deal with this. The containers on the patched host stop/disappear from the original host during the patching/reboot – remember: they are stateless.
  • SMB 3.0 is mentioned as a way to present stateful data to stateless containers.
  • Microsoft is working with Docker and 3 containerization orchestration vendors: Docker Swarm, Kubernetes and Mesosphere.
  • Coding: The bottom edge of Docker Engine has Linux drivers for compute, storage, and network. Microsoft is contributing Windows drivers. The upper levels of Docker Engine are common. The goal is to have common tooling to manage Windows Containers and Linux containers.
  • Can you do some kind of IPC between containers? Networking is the main way to share data, instead of IPC.

Lesson: run your applications in normal VMs if:

  • They are stateful and that state cannot be separated
  • You cannot handle HA at the application layer

Personal Opinion

Containers are quite interesting, especially for a nerd like me that likes to understand how new techs like this work under the covers. Containers fit perfectly into the “treat them like cattle” model and therefore, in my opinion, have a small market of very large deployments of stateless applications. I could be wrong, but I don’t see Containers fitting into more normal situations. I expect Containers to power lots of public cloud task-based stuff. I can see large customers using it in the cloud, public or private. But it’s not a tech for SMEs or legacy apps. That’s why Hyper-V is important.

But … nested virtualization, not that it was specifically mentioned, oh that would be very interesting 🙂

I wonder how containers will be licensed and revealed via SKUs?