Russinovich on Hyper-V Containers

We’ve known since Ignite 2015 that Microsoft was going to have two kinds of containers in Windows Server 2016 (WS2016):

  • Windows Server Containers: Provide OS-level and resource virtualization and isolation.
  • Hyper-V Containers: The hypervisor adds security isolation on top of the OS and resource isolation of Windows Server Containers.

Beyond that general description, we knew almost nothing about Hyper-V Containers, other than to expect them in preview during Q4 of 2015 in Technical Preview 4 (TPv4), and that they are the primary motivation for Microsoft to give us nested virtualization.

That also means that nested virtualization will come to Windows Server 2016 Hyper-V in TPv4.

We have remained in the dark since then, but Mark Russinovich appeared on Microsoft Mechanics (a YouTube webcast by Microsoft), where he explained a little more about Hyper-V Containers and did a short demo.

Some background first. Normally, a machine has a single user mode running on top of kernel mode. This is what restricts us to the “one app per OS” best practice (or requirement, depending on the app). When you enable Containers on WS2016, an enlightenment in the kernel allows multiple user modes. This gives us isolation:

  • Namespace isolation: Each container sees its own file system and registry (the hives are in the container’s hosted files).
  • Resource isolation: Caps on how much processor, memory, and other resources a container can use.
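The create-and-start flow that this enables maps onto the container PowerShell module in the technical previews. A rough sketch, assuming the TPv3/TPv4-era cmdlet names and a “WindowsServerCore” base image (names and parameters may well change before RTM):

```powershell
# Assumes the Containers feature and PowerShell module from the WS2016
# technical previews; cmdlet names and parameters are preview-era.

# List the container images available on this container host
Get-ContainerImage

# Create a new container from a base OS image (name and switch are examples)
New-Container -Name "Demo01" `
    -ContainerImageName "WindowsServerCore" `
    -SwitchName "Virtual Switch"

# Start it - kernel mode is already running, so this takes seconds
Start-Container -Name "Demo01"

# Confirm it is running
Get-Container -Name "Demo01" | fl Name, State
```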

Kernel mode is already running when you start a new container, which improves the time to start the container, and thus its service(s). This is great for deploying and scaling out apps, because a containerised app can be deployed and started in seconds from a container image with no long-term commitment, versus minutes for an app in a virtual machine with a longer-term commitment.


But Russinovich goes on to say that while containers are great for some things that Microsoft wants to do in Azure, they also have to host “hostile multi-tenant code” – code uploaded by Microsoft customers that Microsoft cannot trust and that could be harmful or risky to other tenants. Windows Server Containers, like their Linux container cousins, do not provide security isolation.

In the past, Microsoft has placed such code into Hyper-V (Azure) virtual machines, but that comes with a management and direct cost overhead. Ideally, Microsoft wants to use lightweight containers with the security isolation of machine virtualization. And this is why Microsoft created Hyper-V Containers.

Hyper-V provides excellent security isolation (far fewer vulnerabilities have been found in it than in vSphere) that leverages hardware isolation; DEP is a requirement. WS2016 is introducing IOMMU support, Virtual Secure Mode (VSM), and Shielded Virtual Machines, with a newly hardened hypervisor architecture.

Hyper-V Containers use the exact same code and container images as Windows Server Containers. That makes your code interchangeable – Russinovich shows a Windows Server Container being switched into a Hyper-V Container by using PowerShell to change the run type (the container’s RuntimeType attribute).

The big difference between the two types, other than the presence of Hyper-V, is that Hyper-V Containers get their own optimized instance of Windows running inside them, acting as the host for the single container that they run.


The Hyper-V Container is not a virtual machine – Russinovich demonstrates this by searching for VMs with Get-VM. It is a container, and is manageable by the same commands as a Windows Server Container.

In his demos he switches a Windows Server Container to a Hyper-V Container by running:

Set-Container -Name <Container Name> -RuntimeType HyperV

And then he queries the container with:

Get-Container -Name <Container Name> | fl Name, State, RuntimeType

So the images and the commands are common across Hyper-V Containers and Windows Server Containers. Excellent.
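Putting the demo’s commands together, the round trip between the two runtime types might look like this. A sketch, assuming the preview-era cmdlets; the need to stop the container first, and the “Default” value for the Windows Server Container runtime, are my assumptions:

```powershell
# Sketch based on the TPv4-era demo; cmdlet names may change before RTM.

# Stop the container before changing how it runs (assumption)
Stop-Container -Name "Demo01"

# Promote it to a Hyper-V Container for security isolation
Set-Container -Name "Demo01" -RuntimeType HyperV

# ...and the same command switches it back again (value is an assumption)
# Set-Container -Name "Demo01" -RuntimeType Default

# Restart and verify - same image, same management commands
Start-Container -Name "Demo01"
Get-Container -Name "Demo01" | fl Name, State, RuntimeType
```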

It looked to me like starting this Hyper-V Container is a slower operation than starting a Windows Server Container. That would make sense, because the Hyper-V Container requires its own operating system.

I’m guessing that Hyper-V Containers either require or work best with Nano Server. And you can see why nested virtualization is required: a physical host will run many VM hosts; a VM host might need to run Hyper-V Containers, and therefore the VM host needs to run Hyper-V and must have virtualized VT-x instructions.
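If nested virtualization arrives as expected in TPv4, a VM host would presumably need the virtualization extensions exposed to it before Hyper-V – and therefore Hyper-V Containers – can run inside it. A sketch, assuming a per-VM processor setting along these lines:

```powershell
# Run on the physical Hyper-V host; assumes TPv4+ nested virtualization
# and that the guest VM host ("VMHost01" is an example name) is powered off.
Stop-VM -Name "VMHost01"

# Expose VT-x to the guest so it can load its own hypervisor
Set-VMProcessor -VMName "VMHost01" -ExposeVirtualizationExtensions $true

# Dynamic Memory is not compatible with nested virtualization (assumption)
Set-VMMemory -VMName "VMHost01" -DynamicMemoryEnabled $false

Start-VM -Name "VMHost01"
```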

Russinovich demonstrates the security isolation. Earlier in the video he queries the processes running in a Windows Server Container. There is a single CSRSS process in the container. He shows that this process instance is also visible on the VM host (same process ID). He then does the same test with a Hyper-V Container – the container’s CSRSS process is not visible on the VM host because it is contained and isolated by the child boundary of Hyper-V.
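You can reproduce the gist of that check from the VM host by comparing the CSRSS process IDs the host kernel can see with the one reported inside the container. A sketch of the host side:

```powershell
# On the VM host: list every CSRSS instance the host kernel can see
Get-Process -Name csrss | Select-Object Id, ProcessName

# For a Windows Server Container, the container's CSRSS PID appears in
# the list above (shared kernel). For a Hyper-V Container it does not -
# that process lives behind the hypervisor's child partition boundary.
```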

What about Azure? Microsoft wants Azure to be the best place to run containers – he didn’t limit this statement to Windows Server or Hyper-V, because Microsoft wants you to run Linux containers in Azure too. Microsoft announced the Azure Container Service, with investments in Docker and Mesosphere for deployment and automation of Linux, Windows Server, and Hyper-V containers. Russinovich mentions that Azure Automation and Machine Learning will leverage containers – this makes sense because it will allow Microsoft to scale out services very quickly, in a secure manner, but with less resource and management overhead.

That was a good video, and I recommend that you watch it.


7 thoughts on “Russinovich on Hyper-V Containers”

  1. Thanks for sharing the video. I hope Microsoft releases more information about:
    -Resource Management in containers.
    -HA solutions (Maybe Containers failovercluster + Containers Live Migration?).
    -Containers images management across physical servers(SCVMM?).
    -Containers and AD relationship. (It sounds strange that containers cannot be joined to AD, when 90% of Microsoft products need AD, so I guess containers will work with AD in some way.)


    1. We’re only at pre-pre-pre release of containers. Think of TPv3 as pre-alpha.

      -Resource Management in containers.
      Docker & mesosphere

      -HA solutions (Maybe Containers failovercluster + Containers Live Migration?).
      I doubt that containers will be HA or do live migration. VM hosts – yes, but not containers. Containers are supposed to run born-in-the-cloud apps where HA is done at the application layer – lots of small containers, and a service can survive some going offline.

      -Containers images management across physical servers(SCVMM?).
      Docker + Mesosphere.

      -Containers and AD relationship. (Sounds strange that containers cannot be joined to an AD, when the 90% of Microsoft products need an AD, so in any way i guess Containers will work with AD)
      See born-in-the-cloud apps. Containers are supposed to be short-lived so legacy AD membership makes no sense. Think about computer accounts multiplying like rabbits. Do you want that? You need to move on to new methods of authentication and authorization.

  2. Interesting video, interesting technology. Outside of large clouds and devs wanting to test their apps, I don’t see how containers will be useful, though.

  3. Suspicious that compartments won’t be HA or do live movement. VM has – yes, however not holders. Holders should run conceived in-the-cloud applications where HA is done at the HA layer – heaps of little compartments and an administration can survive some going disconnected from the

    1. Nothing suspicious at all. Hyper-V Containers are not virtual machines. Containers are for born-in-the-cloud apps – HA is done in the app layer and you deploy lots of few instead of few big.

