Introducing Windows Server Containers

Technical Preview 3 of Windows Server 2016 is out and one of the headline feature additions to this build is Windows Server Containers. What are they? And how do they work? Why would you use them?


Windows Server Containers are Microsoft’s implementation of an open source technology that has been made famous by a company called Docker. In fact:

  • Microsoft’s work is a result of a partnership with Docker, one which was described to me as being “one of the fastest negotiated partnerships” and one that has had encouragement from CEO Satya Nadella.
  • Windows Server Containers will be compatible with Linux containers.
  • You can manage Windows Server Containers using Docker, which has a Windows command line client. Don’t worry – you won’t have to go down this route if you don’t want to install horrid prerequisites such as Oracle VirtualBox (!!!).

What are Containers?

Containers have been around for a while, but most of us who live outside of the Linux DevOps world won’t have had any interaction with them. The technology is a new kind of virtualisation that enables rapid (near instant) deployment of applications.

Like most virtualisation, Containers take advantage of the fact that most machines are over-resourced; we over-spec a machine, install software, and then the machine is under-utilized. 15 years ago, lots of people attempted to install more than one application per server. That bad idea usually ended up in P45s (“pink slips”) being handed out – otherwise known as a “career-ending event”. That’s because complex applications make poor neighbours on a single operating system with no inter-app isolation.

Machine virtualisation (vSphere, Hyper-V, etc) takes these big machines and uses software to carve the physical hosts into lots of virtual machines; each virtual machine has its own guest OS and this isolation provides a great place to install applications. The positives are we have rock solid boundaries, including security, between the VMs, but we have more OSs to manage. We can quickly provision a VM from a template, but then we have to install lots of pre-reqs and install the app afterwards. OK – we can have VM templates of various configs, but a hundred templates later, we have a very full library with lots of guest OSs that need to be managed, updated, etc.

Containers are a kind of virtualisation that resides one layer higher; it’s referred to as OS virtualisation. The idea is that we provision a container on a machine (physical or virtual). The container is given a share of CPU, RAM, and a network connection. Into this container we can deploy a container OS image. And then onto that OS image we can install prerequisites and an application. Here’s the cool bit: everything is really quick (typing the command takes longer than the deployment) and you can easily capture images to a repository.

How easy is it? It’s very easy – I recently got hands-on access to Windows Server Containers in a supervised lab and I was able to deploy and image stuff using a PowerShell module without any documentation and with very little assistance. It helped that I’d watched a session on Containers from Microsoft Ignite.

How Do Containers Work?

There are a few terms you should get to know:

  • Windows Server Container: The Windows Server implementation of containers. It provides application isolation via OS virtualisation, but it does not create a security boundary between applications on the same host. Containers are stateless, so stateful data is stored elsewhere, e.g. SMB 3.0.
  • Hyper-V Container: This is a variation of the technology that uses Hyper-V virtualization to securely isolate containers from each other – this is why nested virtualisation was added to WS2016 Hyper-V.
  • Container OS Image: This is the OS that runs in the container.
  • Container Image: Customisations of a container (installing runtimes, services, etc) can be saved off for later reuse. This is the mechanism that makes containers so powerful.
  • Repository: This is a flat file structure that contains container OS images and container images.

Note: This is a high level concept post and is not a step-by-step instructional guide.

We start off with:

  • A container host: This machine will run containers. Note that a Hyper-V virtual switch is created to share the host’s network connection with containers, thus network-enabling those containers when they run.
  • A repository: Here we store container OS images and container images. This repository can be local (in TPv3) or can be an SMB 3.0 file share (not in TPv3, but hopefully in a later release).


The first step is to create a container. This is accomplished, natively, using a Containers PowerShell module, which, from experience, is pretty logically laid out and easy to use. Alternatively you can use Docker. I guess System Center will add support too.

When you create the container you specify the name and can offer a few more details, such as a network connection to the host’s virtual switch (you can add this retrospectively), RAM, and CPU.

You then have a blank and useless container. To make it useful you need to add a container OS image. This is retrieved from the Repository, which can be local (in a lab) or on an SMB 3.0 file share (real world). Note that an OS is not installed in the container. The container points at the repository and only differences are saved locally.
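From what I saw in the lab, the create-and-start flow looks something like this – the cmdlet names are from the TPv3 Containers module and may change, and the image and switch names here are my own assumptions:

```powershell
# Preview-era cmdlets; "WindowsServerCore" and the switch name are
# illustrative, not prescriptive.
$image = Get-ContainerImage -Name "WindowsServerCore"
$container = New-Container -Name "Container1" -ContainerImage $image -SwitchName "Virtual Switch"
Start-Container -Name "Container1"
```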

How long does it take to deploy the container OS image? You type the command, press return, and the OS is sitting there, waiting for you to start the container. Folks, Windows Server Containers are FAST – they are Vin Diesel parachuting a car from a plane fast.


Now you can use Enter-PSSession to log into a container using PowerShell and start installing and configuring stuff.

Let’s say you want to install PHP. You need to:

  1. Get the installer available to the container, maybe via the network
  2. Ensure that the installer either works silently (unattended) or works from command line

Install the program, e.g. PHP, and then configure it the way you want it (from command line).
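As a sketch of the above, assuming an MSI-based installer and preview-era cmdlets (the URL, file names, and switches are placeholders for your real installer):

```powershell
# Get a PowerShell session inside the running container (TPv3-era syntax).
$container = Get-Container -Name "Container1"
Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator

# Inside the container: fetch the installer and run it silently.
Invoke-WebRequest -Uri "http://example.com/php-installer.msi" -OutFile "C:\php-installer.msi"
msiexec /i C:\php-installer.msi /qn
```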


Great, we now have PHP in the container. But there’s a good chance that I’ll need PHP in lots of future containers. We can create a container image from that PHP install. This process will capture the changes from the container as it was last deployed (the PHP install) and save those changes to the repository as a container image. The very quick process is:

  1. Stop the container
  2. Capture the container image
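Sketched with the preview cmdlets – the publisher and version metadata are illustrative:

```powershell
Stop-Container -Name "Container1"
# Capture the container's changes (the PHP install) as a reusable image.
New-ContainerImage -Container (Get-Container -Name "Container1") -Name "PHP" -Publisher "Demo" -Version "1.0"
```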

Note that the container image now has a link to the container OS image that it was installed on, i.e. there is a dependency link and I’ll come back to this.

Let’s deploy another container, called Container2, from the same container OS image.


For some insane reason, I want to install the malware gateway known as Java into this container.


Once again, I can shut down this new container and create a container image from this Java installation. This new container image also has a link to the required container OS image.


Right, let’s remove Container1 and Container2 – something that takes seconds. I now have a container OS image for Windows Server 2012 R2 and container images for PHP and Java. Let’s imagine that a developer needs to deploy an application that requires PHP. What do they need to do? It’s quite easy – they create a container from the PHP container image. Windows Server Containers knows that PHP requires the Windows Server container OS image, and that is deployed too.

The entire deployment is near instant because nothing is deployed; the container links to the images in the repository and saves changes locally.
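That deployment is a one-liner along these lines (preview syntax; the container and switch names are assumptions):

```powershell
# The PHP image's dependency on the Windows Server container OS image
# is resolved automatically from the repository.
New-Container -Name "Container3" -ContainerImage (Get-ContainerImage -Name "PHP") -SwitchName "Virtual Switch"
```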


Think about this for a second – we’ve just deployed a configured OS in little more time than it takes to type a command. We’ve also modelled a fairly simple application dependency. Let’s complicate things.

The developer installs WordPress into the new container.


The dev plans on creating multiple copies of their application (dev, test, and production) and, like many test/dev environments, they need an easy way to reset, rebuild, and spin up variations; there’s nothing like containers for this sort of work. The dev shuts down Container3 and then creates a new container image. This process captures the changes since the last deployment and saves a container image in the repository – the WordPress installation. Note that this container image doesn’t include the contents of PHP or Windows Server, but it does link to PHP, and PHP links to Windows Server.


The dev is done and resets the environment. Now she wants to deploy 1 container for dev, 1 for test, and 1 for production. Simple! This requires 3 commands, each of which creates a new container from the WordPress container image, which logically uses the required PHP image and PHP’s required Windows Server image.

Nothing is actually deployed to the containers; each container links to the images in the repository and saves changes locally. Each container is isolated from the other to provide application stability (but not security – this is where Hyper-V Containers comes into play). And best of all – the dev has had the experience of:

  • Saying “I want three copies of WordPress”
  • Getting the OS and all WordPress pre-requisites
  • Getting them instantly
  • Getting 3 identical deployments
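Using the preview cmdlets, those three deployments might be sketched as a simple loop (the image and container names are illustrative):

```powershell
# One identical container per environment, all from the WordPress image.
$wp = Get-ContainerImage -Name "WordPress"
foreach ($env in "Dev", "Test", "Prod") {
    New-Container -Name "WordPress-$env" -ContainerImage $wp -SwitchName "Virtual Switch"
    Start-Container -Name "WordPress-$env"
}
```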


From the administrator’s perspective, they’ve not had to be involved in the deployment, and the repository is pretty simple. There’s no need for a VM with Windows Server, another with Windows Server & PHP, and another with Windows Server, PHP & WordPress. Instead, there is an image for Windows Server, an image for PHP, and an image for WordPress, with links providing the dependencies.

And yes, the repository is a flat file structure so there’s no accidental DBA stuff to see here.

Why Would You Use Containers?

If you operate in the SME space then keep moving, and don’t bother with Containers unless they’re in an exam you need to pass to satisfy the HR drones. Containers are aimed at larger environments where there is application sprawl and repetitive installations.

Is this similar to what SCVMM 2012 introduced with Server App-V and service templates? At a very high level, yes, but Windows Server Containers is easy to use and probably a heck of a lot more stable.

Note that Containers are best suited for stateless workloads. If you want to save data then save it elsewhere, e.g. on SMB 3.0 storage. What about MySQL and SQL Server? Based on what was stated at Ignite, there’s a solution (or one in the works); they are probably using SMB 3.0 to save the databases outside of the container. This might require more digging, but I wonder if databases would really be a good fit for containers. And I wonder, much like with Azure VMs, if there will be a later revision that brings us stateful containers.

I don’t imagine that my market at work (SMEs) will use Windows Server Containers, but if I was back working as an admin in a large enterprise then I would definitely start checking out this technology. If I worked in a software development environment then I would also check out containers for a way to rapidly provision new test and dev labs that are easy to deploy and space efficient.


Here is a link to the Windows Server containers page on the TechNet Library.

We won’t see Hyper-V containers in TPv3 – that will come in a later release, I believe later in 2015.

Setting Up WS2016 Storage Spaces Direct SOFS

In this post I will show you how to set up a Scale-Out File Server using Windows Server 2016 Storage Spaces Direct (S2D). Note that:

  • I’m assuming you have done all your networking. Each of my 4 nodes has 4 NICs: 2 for a management NIC team called Management and 2 un-teamed 10 GbE NICs. The two un-teamed NICs will be used for cluster traffic and SMB 3.0 traffic (inter-cluster and from Hyper-V hosts). The un-teamed networks do not have to be routed, and do not need the ability to talk to DCs; they do need to be able to talk to the Hyper-V hosts’ equivalent 2 * storage/clustering rNICs.
  • You have read my notes from Ignite 2015
  • This post is based on WS2016 TPv2

Also note that:

  • I’m building this using 4 x Hyper-V Generation 2 VMs. In each VM SCSI 0 has just the OS disk and SCSI 1 has 4 x 200 GB data disks.
  • I cannot virtualize RDMA. Ideally the S2D SOFS is using rNICs.

Deploy Nodes

Deploy at least 4 identical storage servers with WS2016. My lab consists of machines that have 4 DAS SAS disks. You can tier storage using SSD or NVMe, and your scalable/slow tier can be SAS or SATA HDD. There can be a maximum of two tiers: SSD/NVMe and SAS/SATA HDD.

Configure the IP addressing of the hosts. Place the two storage/cluster networks into two different VLANs/subnets.

My nodes are Demo-S2D1, Demo-S2D2, Demo-S2D3, and Demo-S2D4.

Install Roles & Features

You will need:

  • File Services
  • Failover Clustering
  • Failover Cluster Manager if you plan to manage the machines locally.

Here’s the PowerShell to do this:

Add-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools

You can use -ComputerName <computer-name> to speed up deployment by doing this remotely.
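For example, a loop like this (a sketch; it assumes WinRM is enabled on the nodes) installs the roles on all four of my lab nodes:

```powershell
foreach ($node in "Demo-S2D1", "Demo-S2D2", "Demo-S2D3", "Demo-S2D4") {
    Add-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName $node
}
```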

Validate the Cluster

It is good practice to do this … so do it. Here’s the PoSH code to validate a new S2D cluster:
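Something like this, using the Storage Spaces Direct test category that appeared in the preview builds:

```powershell
Test-Cluster -Node Demo-S2D1, Demo-S2D2, Demo-S2D3, Demo-S2D4 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
```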


Create your new cluster

You can use the GUI, but it’s a lot quicker to use PowerShell. You are implementing Storage Spaces so DO NOT ADD ELIGIBLE DISKS. My cluster will be called Demo-S2DC1 and will be assigned a static IP address:

New-Cluster -Name Demo-S2DC1 -Node Demo-S2D1, Demo-S2D2, Demo-S2D3, Demo-S2D4 -NoStorage -StaticAddress <IPAddress>

There will be a warning that you can ignore:

There were issues while creating the clustered role that may prevent it from starting. For more information view the report file below.

What about Quorum?

You will probably use the default of dynamic quorum. You can either use a cloud witness (a storage account in Azure) or a file share witness, but realistically, Dynamic Quorum with 4 nodes and multiple data copies across nodes (fault domains) should do the trick.

Enable Client Communications

The two cluster networks in my design will also be used for storage communications with the Hyper-V hosts. Therefore I need to configure these IPs for Client communications:
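A sketch of that configuration – the cluster network names here are assumptions, so check Get-ClusterNetwork for the real names in your cluster:

```powershell
# Role 3 = cluster and client traffic.
(Get-ClusterNetwork -Cluster Demo-S2DC1 -Name "Cluster Network 1").Role = 3
(Get-ClusterNetwork -Cluster Demo-S2DC1 -Name "Cluster Network 2").Role = 3
```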


Doing this will also enable each server in the S2D SOFS to register its A record with the cluster/storage NIC IP addresses, and not just the management NIC.

Enable Storage Spaces Direct

This is not on by default. You enable it using PowerShell:
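The cmdlet, as it appears in the technical preview (the name could change before RTM):

```powershell
Enable-ClusterStorageSpacesDirect
```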


Browsing Around FCM

Open up FCM and connect to the cluster. You’ll notice lots of stuff in there now. Note the new Enclosures node, and how each server is listed as an enclosure. You can browse the Storage Spaces eligible disks in each server/enclosure.


Creating Virtual Disks and CSVs

I then create a pool called Pool1 on the cluster Demo-S2DC1 using PowerShell – this is because there are more options available to me than in the UI:

New-StoragePool  -StorageSubSystemName Demo-S2DC1.demo.internal -FriendlyName Pool1 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem  -Name Demo-S2DC1.demo.internal | Get-PhysicalDisk)

Get-StoragePool Pool1 | Get-PhysicalDisk |? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal

Then you create the CSVs that will be used to store file shares in the SOFS. Rules of thumb:

  • 1 share per CSV
  • At least 1 CSV per node in the SOFS to optimize flow of data: SMB redirection and redirected IO for mirrored/clustered storage spaces

Using this PoSH you will lash out your CSVs in no time:

$CSVNumber = "4"
$CSVName = "CSV"
$CSV = "$CSVName$CSVNumber"

New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName $CSV -PhysicalDiskRedundancy 2 -FileSystem CSVFS_REFS -Size 200GB
Set-FileIntegrity "C:\ClusterStorage\Volume$CSVNumber" -Enable $false

The last line disables ReFS integrity streams to support the storage of Hyper-V VMs on the volumes. You’ll see from the screenshot what my 4 node S2D SOFS looks like, and that I like to rename things:


Note how each CSV is load balanced. SMB redirection will redirect Hyper-V hosts to the owner of a CSV when the host is accessing files for a VM that is stored on that CSV. This is done for each VM connection by the host using SMB 3.0, and ensures optimal flow of data with minimized/no redirected IO.

There are some warnings from Microsoft about these volumes:

  • They are likely to become inaccessible on later Technical Preview releases.
  • Resizing of these volumes is not supported. 

Oops! This is a technical preview and this should be pure lab work that you’re willing to lose.

Create a Scale-Out File Server

The purpose of this post is to create a SOFS from the S2D cluster, with the cluster existing solely to store Hyper-V VMs that are accessed by Hyper-V hosts via SMB 3.0. If you are building a hyper-converged cluster (not supported by the current TPv2 preview release) then you stop here and proceed no further.

Each of the S2D cluster nodes and the cluster account object should be in an OU just for the S2D cluster. Edit the advanced security of the OU and grant the cluster account object Create Computer Objects and Delete Computer Objects rights. If you don’t do this then the SOFS role will not start after this next step.

Next, I am going to create an SOFS role on the S2D cluster, and call it Demo-S2DSOFS1.

New-StorageFileServer -StorageSubSystemName Demo-S2DC1.demo.internal -FriendlyName Demo-S2DSOFS1 -HostName Demo-S2DSOFS1 -Protocols SMB

Create and Permission Shares

Create 1 share per CSV. If you need more shares then create more CSVs. Each share needs the following permissions:

  • Each Hyper-V host
  • Each Hyper-V cluster
  • The Hyper-V administrators

You can use the following PoSH to create and permission your shares. I name the share folder and share name after the CSV that it is stored on, so simply change the $ShareName variable to create lots of shares, and change the permissions as appropriate.

$RootPath = "C:\ClusterStorage" # assumption: the CSV mount root on the SOFS nodes
$ShareName = "CSV1"
$SharePath = "$RootPath\$ShareName\$ShareName"

md $SharePath
New-SmbShare -Name $ShareName -Path $SharePath -FullAccess Demo-Host1$, Demo-Host2$, Demo-HVC1$, "Demo\Hyper-V Admins"
Set-SmbPathAcl -ShareName $ShareName

Create Hyper-V VMs

On your hosts/clusters create VMs that store all of their files on the path of the SOFS, e.g. \\Demo-S2DSOFS1\CSV1\VM01, \\Demo-S2DSOFS1\CSV1\VM02, etc.

Remember that this is a Preview Release

This post was written not long after the release of TPv2:

  • Expect bugs – I am experiencing at least one bad one by the looks of it
  • Don’t expect support for a rolling upgrade of this cluster
  • Bad things probably will happen
  • Things are subject to change over the next year

New Features in Windows Server 2016 (WS2016) Hyper-V

I’m going to do my best (no guarantees – I only have one body and pair of ears/eyes and NDA stuff is hard to track!) to update this page with a listing of each new feature in Windows Server 2016 (WS2016) Hyper-V and Hyper-V Server 2016 after they are discussed publicly by Microsoft. The links will lead to more detailed descriptions of each feature.

Note, that the features of WS2012 can be found here and the features of WS2012 R2 can be found here.

This list was last updated on 25/May/2015 (during Technical Preview 2).


Active memory dump

Windows Server 2016 introduces a dump type of “Active memory dump”, which filters out most memory pages allocated to VMs, making the memory.dmp file much smaller and easier to save/copy.


Azure Stack

A replacement for Windows Azure Pack (WAPack), bringing the code of the “Ibiza” “preview portal” of Azure to on-premises for private cloud or hosted public cloud. Uses providers to interact with Windows Server 2016. Does not require System Center, but you will want management for some things (monitoring, Hyper-V Network Virtualization, etc).


Azure Storage

A post-RTM update (flight) will add support for blobs, tables, and storage accounts, allowing you to deploy Azure storage on-premises or in hosted solutions.


Backup Change Tracking

Microsoft will include change tracking so third-party vendors do not need to update/install dodgy kernel level file system filters for change tracking of VM files.


Binary VM Configuration Files

Microsoft is moving away from text-based files to increase scalability and performance.


Cluster Cloud Witness

You can use Azure storage as a witness for quorum for a multi-site cluster. Stores just an incremental sequence number in an Azure Storage Account, secured by an access key.


Cluster Compute Resiliency

Prevents the cluster from failing a host too quickly after a transient error. A host will go into isolation, allowing services to continue to run without disruptive failover.


Cluster Functional Level

A rolling upgrade requires mixed-mode clusters, i.e. WS2012 R2 and Windows Server vNext hosts in the same cluster. The cluster will stay at the WS2012 R2 functional level until you finish the rolling upgrade and then manually increase the cluster functional level (one-way).


Cluster Quarantine

If a cluster node is flapping (going into & out of isolation too often) then the cluster will quarantine a node, and drain it of resources (Live Migration – see MoveTypeThreshold and DefaultMoveType).


Cluster Rolling Upgrade

You do not need to create a new cluster or do a cluster migration to get from WS2012 R2 to Windows Server vNext. The new process allows hosts in a cluster to be rebuilt IN THE EXISTING cluster with Windows Server vNext.



Containers

Deploy born-in-the-cloud stateless applications using Windows Server Containers or Hyper-V Containers.


Converged RDMA

Remote Direct Memory Access (RDMA) NICs (rNICs) can be converged to share both tenant and host storage/clustering traffic roles.


Delivery of Integration Components

This will be done via Windows Update.


Differential Export

Export just the changes between 2 known points in time. Used for incremental file-based backup.


Distributed Storage QoS

Enable per-virtual hard disk QoS for VMs stored on a Scale-Out File Server, possibly also available for SANs.


File-Based Backup

Hyper-V is decoupling from volume backup for scalability and reliability reasons.


Host Resource Protection

An automated process for restricting resource availability to VMs that display unwanted “patterns of access”.


Hot-Add & Hot-Remove of vNICs

You can hot-add and hot-remove virtual NICs to/from a running virtual machine.



Hyper-Converged Deployment

This is made possible with Storage Spaces Direct and is aimed initially at smaller deployments.


Hyper-V Cluster Management

A new administration model that allows tools to abstract the cluster as a single host. Enables much easier VM management, visible initially with PowerShell (e.g. Get-VM, etc).


Hyper-V Replica & Hot Add of Disks

You can add disks to a virtual machine that is already being replicated. Later you can add the disks to the replica set using Set-VMReplication.


Hyper-V Manager Alternative Credentials

With CredSSP-enabled PCs and hosts, you can connect to a host with alternative credentials.


Hyper-V Manager Down-Level Support

You can manage Windows Server vNext, WS2012 R2 and WS2012 Hyper-V from a single console.


Hyper-V Manager WinRM

WinRM is used to connect to hosts.



MS-SQOS (Storage QoS Protocol)

This is a new protocol for Microsoft Storage QoS. It uses SMB 3.0 as a transport, and it describes the conversation between Hyper-V compute nodes and the SOFS storage nodes. IOPS, latency, initiator names, and initiator node information are sent from the compute nodes to the storage nodes. The storage nodes send back the enforcement commands to limit flows, etc.


Nested Virtualization

Yes, you read that right! Required for Hyper-V containers in a hosted environment, e.g. Azure. Side-effect is that WS2016 Hyper-V can run in WS2016 via virtualization of VT-X.


Network Controller

A new fabric management feature built into Windows Server, offering many new features that we see in Azure. Examples are a distributed firewall and software load balancer.


Online Resize of Memory

Change memory of running virtual machines that don’t have Dynamic Memory enabled.


Power Management

Hyper-V has expanded support for power management, including Connected Standby.


PowerShell Direct

Target PowerShell at VMs via the hypervisor (VMbus) without requiring network access. You still need local admin credentials for the guest OS.
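For example (the VM name is illustrative, and you are prompted for guest credentials):

```powershell
# Runs inside the guest over VMbus, with no network path to the VM needed.
Invoke-Command -VMName "VM01" -Credential (Get-Credential) -ScriptBlock { Get-Service }
```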


Pre-Authentication Integrity

A security feature of SMB 3.1.1 that uses checks on the sender & recipient side to ensure that there is no man-in-the-middle when one machine talks to the next.


Production Checkpoints

Using VSS in the guest OS to create consistent snapshots that workload services should be able to support. Applying a checkpoint is like performing a VM restore from backup.


Nano Server

A new installation option that allows you to deploy headless Windows Servers with tiny install footprint and no UI of any kind. Intended for storage and virtualization scenarios at first. There will be a web version of admin tools that you can deploy centrally.


RDMA to the Host

Remote Direct Memory Access will be supported to the management OS virtual NICs via converged networking.


ReFS Accelerated VHDX Operations

Operations are accelerated by converting them into metadata operations: fixed VHDX creation, dynamic VHDX extension, merge of checkpoints (better file-based backup).



RemoteFX

OpenGL 4.4 and OpenCL 1.1 APIs are supported.


Replica Support for Hot-Add of VHDX

When you hot-add a VHDX to a running VM that is being replicated by Hyper-V Replica, the VHDX is available to be added to the replica set (MSFT doesn’t assume that you want to replicate the new disk).


Replica support for Cross-Version Hosts

Your hosts can be of different versions.


Runtime Memory Resize

You can increase or decrease the memory assigned to Windows Server vNext guests.


Secure Boot for Linux

Enable protection of the boot loader in Generation 2 VMs.


Shared VHDX Improvements

You will be able to do host-based snapshots of Shared VHDX (so you get host-level backups) and guest clusters. You will be able to hot-resize a Shared VHDX.

Shared VHDX will have its own hardware category in the UI. Note that there is a new file format for Shared VHDX. There will be a tool to upgrade existing files.


Shielded Virtual Machines

A new security model that hardens Hyper-V and protects virtual machines against unwanted tampering at the fabric level.


SMB 3.1.1

This is a new version of the data transport protocol. The focus has been on security. There is support for mixed mode clusters so there is backwards compatibility. SMB 3.02 is now called SMB 3.0.2.


SMB Negotiated Encryption

Moving from AES CCM to AES GCM (Galois Counter Mode) for efficiency and performance. It will leverage new modern CPUs that have instructions for AES encryption to offload the heavy lifting.


SMB Forced Encryption

In older versions of SMB, SMB encryption was opt-in on the client side. This is no longer the case in the next version of Windows Server.


Storage Accounts

A later release of WS2016 will bring support for hosting Azure-style Storage accounts, meaning that you can deploy Azure-style storage on-premises or in a hosted cloud.


Storage Replica

Built-in, hardware agnostic, synchronous and asynchronous replication of Windows Storage, performed at the file system level (volume-based). Enables campus or multi-site clusters.

Requires GPT. Source and destination need to be the same size. Need low latency. Finish the solution with the Cluster Cloud Witness.


Storage Spaces Direct (S2D)

A “low cost” solution for VM storage. A cluster of nodes using internal (DAS) disks (SAS or SATA, SSD, HDD, or NVMe) to create consistent storage pools that stretch across the servers. Compute is normally on a different cluster (converged) but it can be on the same tier (hyper-converged).


Storage Transient Failures

Avoid VM bugchecks when storage has a transient issue. The VM freezes while the host retries to get storage back online.


Stretch Clusters

The preferred term for when Failover Clustering spans two sites.


System Center 2016

Those of you who can afford the per-host SMLs will be able to get System Center 2016 to manage your shiny new Hyper-V hosts and fabric.


System Requirements

The system requirements for a server host have been increased. You now must have support for Second-Level Address Translation (SLAT), known as Intel EPT or AMD RVI or NPT. Previously SLAT (Intel Nehalem and later) was recommended but not required on servers and required on Client Hyper-V. It shouldn’t be an issue for most hosts because SLAT has been around for quite some time.


Virtual Machine Groups

Group virtual machines for operations such as orchestrated checkpoints (even with shared VHDX) or group checkpoint export.


Virtual Machine ID Management

Control whether a VM has the same or a new ID as before when you import it.


Virtual Network Adapter Identification

Not vCDN! You can create/name a vNIC in the settings of a VM and see the name in the guest OS.


Virtual Secure Mode (VSM)

A feature of Windows 10 Enterprise that protects LSASS (secret keys) from pass-the-hash attacks by storing the process in a stripped down Hyper-V virtual machine.


Virtual TPM (vTPM)

A feature of shielded virtual machines that enables secure boot, disk encryption within the virtual machine, and VSC.


VM Storage Resiliency

A VM will pause when the physical storage of that VM goes offline. Allows the storage to come back (maybe Live Migration) without crashing the VM.


VM Upgrade Process

VM versions are upgraded manually, allowing VMs to be migrated back down to WS2012 R2 hosts with support from Microsoft.


VXLAN Support

The new Network Controller will support VXLAN as well as the incumbent NVGRE for network virtualization.


Windows Containers

This is Docker in Windows Server, enabling services to run in containers on a shared set of libraries on an OS, giving you portability, per-OS density, and fast deployment.

Windows Server 2012 R2 Licensing

As usual, I am not answering any questions about licensing. That’s the job of your reseller or distributor, so ask them.

Microsoft released the updated licensing details for WS2012 R2 several weeks ago.  Remember that once released, you will be buying WS2012 R2, even if you plan to downgrade to W2008 R2.  In this post, I’m going to cover the licensing for “core” editions of Windows Server.

The Core Editions

There aren’t any huge changes to the “core” editions of Windows Server (Datacenter and Standard).  As with WS2012, the two editions are identical technically, having the same scalability and features … except one.


Both the Standard and Datacenter edition cover a licensed server for 2 processors.  Processors are CPUs or sockets.  Cores are not processors.  A server with 2 Intel Xeon E5 processors with 10 cores each has 2 processors.  It requires one Windows Server license.  A server with 4 * 16 core AMD processors has 4 processors.  It needs 2 Windows Server licenses.

This applies no matter what downgraded version you plan to install.

Downgrade Rights

According to Microsoft:

If you have Windows Server 2012 R2 Datacenter edition you will have the right to downgrade software bits to any prior version or lower edition. If you have Windows Server 2012 R2 Standard edition, you will have the right to downgrade the software to use any prior version of Enterprise, Standard or Essentials editions.


The One Technical Feature That Is Unique To Datacenter Edition

Technically the Datacenter and Standard editions of WS2012 R2 are identical.  With one exception, which is really due to the exceptional virtualization licensing rights granted with the Datacenter edition.

If you use the Datacenter edition of WS2012 R2 (via any licensing program) for the management OS of your Hyper-V hosts, then you get a feature called Automatic Virtual Machine Activation (AVMA).  With this you get an AVMA key that you install into your template VMs (the guest OS must be WS2012 R2 DC/Std/Essentials) using SLMGR.  When that template is deployed onto WS2012 R2 Datacenter hosts, the guest OS will automatically activate without using KMS or online activation.  Very nice for multi-tenant or Network Virtualization-enabled clouds.

Virtualization Rights

Everything in this section applies to Windows Server licensing on all virtualization platforms on the planet outside of the SPLA (hosting) licensing program.  The key difference between Std and DC is the virtualization rights.  Any host licensed with DC gets unlimited VOSEs.  A VOSE (Virtual Operating System Environment) is licensing speak for a guest OS.  In other words:

  1. Say you license a host with the DC edition of Windows Server.
  2. You can install Windows Server (DC or Std) on an unlimited number of VMs that run on that host.
  3. You cannot transfer those VOSEs (licenses) to another host.
  4. You can transfer a volume license of DC (or Standard for that matter) once every 90 days to another host.  The VOSEs move with that host.

The Standard edition comes with 2 VOSEs.  That means you can install the Std edition of Windows Server in two VMs that run on a licensed host:

  1. Say you license a host with the Std edition of Windows Server.
  2. You can install Windows Server Standard on up to 2 VMs that run on that host.
  3. You cannot transfer those VOSEs (licenses) to another host.
  4. You can transfer a volume license of Standard (or DC for that matter) once every 90 days to another host.  The VOSEs move with that host.

You can stack Windows Server Standard edition licenses to get more VOSEs on a host:

    1. Say you license a host with 3 copies of the Std edition of Windows Server.  This is an accounting operation.  You do not install Windows 3 times on the host.  You do not install 3 license keys on the host.
    2. You can install Windows Server Standard on up to 6 (3 Std * 2 VOSEs) VMs that run on that host.
    3. You cannot transfer those VOSEs (licenses) to another host.
    4. You can transfer a volume license of Standard (or DC for that matter) once every 90 days to another host.  The VOSEs move with that host.
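The stacking arithmetic in the list above can be sketched as follows (function and constant names are mine, for illustration only):

```python
import math

VOSES_PER_STD_LICENSE = 2  # each Standard license grants 2 VOSEs

def stacked_std_licenses(vms_on_host: int) -> int:
    """Standard licenses to stack on one host to cover its VMs.

    Stacking is purely an accounting operation; nothing extra is
    installed on the host, and no extra license keys are entered.
    """
    return math.ceil(vms_on_host / VOSES_PER_STD_LICENSE)

print(stacked_std_licenses(6))  # 3 Std licenses -> 6 VOSEs
print(stacked_std_licenses(7))  # 4 Std licenses (VOSEs come in pairs)
```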

There is a sweet spot (different for every program/region/price band) where it is cheaper to switch from Std licensing to DC licensing for each host.

If you need HA or Live Migration then you license all hosts for the maximum number of VMs that can (not will) run on each host, even for 1 second.  The simplest solution is to license each host for the DC edition.

Upgrade Scenarios

WS2012 CALs do not need an upgrade.  WS2012 server licenses require one of the following to be upgraded:

  • Software Assurance (SA)
  • A new purchase

In my opinion, anyone using virtualization is a dummy for not buying SA on their Windows Server licensing.  If you plan on availing of new Hyper-V features (assuming you are using Hyper-V), or you want to install even 1 newer version of Windows Server, then you need to buy the licenses all over again … SA would have been cheaper, and remember that upgrades are just one of the rights included in SA.


This is what everyone wants to know about!  The US Open NL pricing (the most expensive volume license program) is shown in US$, as it’s the most commonly used example:


The Standard edition went up a small amount from W2008 R2 to WS2012.  It has not increased with WS2012 R2.

The Datacenter edition did not increase from W2008 R2 to WS2012.  It has increased with the release of WS2012 R2.  However, think of how much you’re getting with the DC edition: unlimited VOSEs!

Reminder: There is no difference in Windows Server pricing no matter what virtualization you use.  The price of Windows Server on a Hyper-V host is the same as it is on a VMware host.  Please send me the company name/address of your employer or customers if you disagree – I’d love an easy $10,000 for reporting software piracy!

Calculating License Requirements

Do the following on a per-server basis.  This applies whether you are using virtualization or not, and no matter what virtualization you plan to use.

Step 1: Count your physical processors

If you have 1 or 2 physical processors in a server then your server needs 1 copy of Windows Server.  If your server will have 4 processors then you need 2 copies of Windows Server.  If your server will have 8 processors then you will need 4 copies of Windows Server.

Step 2: Count your virtual machines

Count how many virtual machines running Windows Server could possibly run on the host.  This includes VMs that normally run on another host, but could be moved there (Quick Migration, Live Migration, vMotion) manually or automatically, or failed over due to cluster high availability (HA).

Say you have 2 hosts in a cluster.  Each normally runs 2 VMs, but during failover each could run 4 VMs, so you need to license each host for 4 VMs.  A copy of Windows Server Standard gives you 2 VOSEs.  Each host will need 4 VOSEs because 4 VMs could run on each host.  Therefore you need 2 copies of Standard per host.

When is the sweet spot?  That depends on your pricing.  Datacenter costs $6,155 and Standard costs $882 under US Open NL.  $6,155 / $882 = 6.97, so 7 copies of Windows Std roughly equal the price of Windows DC.  Therefore the sweet spot for switching is around 14 VMs per host.  Once you get close to 14 VMs that could run on a host, you would be better off economically buying the DC edition.
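That break-even calculation can be sketched in Python, using the US Open NL prices quoted in this post (the function name and price constants are mine, for illustration only – plug in your own program/region/price band figures):

```python
import math

# Prices quoted in the post (US Open NL, in US$).
STD_PRICE = 882    # Standard: 2 VOSEs per license, stackable
DC_PRICE = 6155    # Datacenter: unlimited VOSEs

def cheapest_edition(max_vms_on_host: int):
    """Return (edition, cost) for licensing one 2-processor host
    for the maximum number of VMs that could run on it."""
    std_cost = math.ceil(max_vms_on_host / 2) * STD_PRICE
    if std_cost < DC_PRICE:
        return ("Standard", std_cost)
    return ("Datacenter", DC_PRICE)

print(cheapest_edition(4))   # ('Standard', 1764)
print(cheapest_edition(14))  # ('Datacenter', 6155)
```

Multiply the result by the processor-based license count if the host has more than 2 processors.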

Windows Server 2012 R2 Hyper-V Feature List Glossary

I’m going to do my best (no guarantees – I only have one body and one pair of ears/eyes, and NDA stuff is hard to track!) to update this page with a listing of each new WS2012 R2 Hyper-V and Hyper-V Server 2012 R2 (and related) feature as it is revealed by Microsoft (starting with TechEd North America 2013).  Note that the features of WS2012 can be found here.

This list was last updated on 05/September/2013.


3rd party Software Defined Networking – Supported by the extensibility of the virtual switch.
Automatic Guest Activation – Customers running WS2012 R2 Datacenter can automatically activate their WS2012 R2 guests without using KMS. Works with OEM and volume licenses. Great for multi-tenant clouds.
Azure Compatibility – Azure runs the same Hyper-V as on-premise deployments, giving you VM mobility from private cloud, to hosted cloud, to Microsoft Azure.
Built-In NVGRE Gateway – A multi-tenant aware NVGRE gateway role is available in WS2012 R2. Offers site-to-site VPN, NAT for Internet access, and a VM Network to physical network gateway.
Clustering: Configurable GUM Mode – Global Update Manager (GUM) is responsible for synchronizing cluster resource updates.  With Hyper-V enabled, all nodes must receive and process an update before it is committed, to avoid inconsistencies.
Clustering: Larger CSV Cache Percentage – WS2012 allows a maximum of 20% of RAM to be allocated to CSV Cache.  This rises to 80% in WS2012 R2.
Clustering: CSV Load Balancing – CSV ownership (coordinators) will be automatically load balanced across nodes in the cluster.
Clustering: CSV & ReFS – ReFS is supported on CSV.  Probably still not preferable to NTFS for most deployments, but it is CHKDSK free!
Clustering: Dynamic Witness – The votes of cluster nodes are automatically changed as required by the cluster configuration.  Enabled by default.  This can be used to break 50/50 votes when a witness fails.
Clustering: Hyper-V Cluster Heartbeat – Clusters running Hyper-V have a longer heartbeat to avoid needless VM failovers on latent/contended networks. SameSubnetThreshold is 10 (normally 5) and CrossSubnetThreshold is 20 (normally 5).
Clustering: Improved logging – Much more information is recorded during host add/remove operations.
Clustering: Pause action – Pausing a node will no longer use Quick Migration for “low” priority VMs by default; Live Migration is used, as most people expect. You can raise the threshold to force Quick Migration if you want to.
Clustering: Proactive Server Service Health Detection – The health of a destination host will be verified before moving a VM to it.
Clustering: Protected Networks – Virtual NICs are marked as being on protected networks by default. If a virtual NIC’s virtual switch becomes disconnected then the cluster will Live Migrate that VM to another host with a healthy identical virtual switch.
Clustering: Virtual Machine Drain on Host Shutdown – Shutting down a host will cause all virtual machines to Live Migrate to other hosts in the cluster.
Compressed Live Migration – Using only idle CPU resources on the host, Hyper-V can compress Live Migration traffic to make it quicker. Could provide up to 2x faster migrations on 1 GbE networks.
Cross-Version Live Migration – You can perform a Live Migration from WS2012 to WS2012 R2. This is one-way, and enables zero-downtime upgrades from a WS2012 host/cluster to a WS2012 R2 host/cluster.
Dynamic Mode NIC Teaming – In addition to Hyper-V Port mode and Address Hashing. Uses “flowlets” to give fine-grained inbound and outbound traffic distribution.
Enhanced Session Mode – The old Connect limited you to KVM access to a VM. Now Connect can use Remote Desktop routed via the Hyper-V stack, even without a network connection to the VM. Copy/paste and USB redirection are supported. Disabled on servers and enabled by Client Hyper-V by default.
Generation 2 VM – A G2 virtual machine is a VM with no legacy “hardware”. It uses UEFI boot, has no emulated devices, boots from SCSI, and can PXE boot from a synthetic NIC. You cannot convert from a G1 VM (due to UEFI, I am guessing).
HNV Diagnostics – A new PoSH cmdlet enables an operator to diagnose VM connectivity in a VM Network without network access to that VM.
HNV: Dynamic Learning of CAs – Hyper-V Network Virtualization can learn the IPs of VM Network VMs. Enables guest DHCP and guest clustering in the VM Network.
HNV: NIC Teaming – Inbound and outbound traffic can traverse more than one team member in a NIC team for link aggregation.
HNV: NVGRE Task Offloads – A new type of physical NIC will offload NVGRE encapsulation and de-encapsulation from the host processor.
HNV: Virtual Switch extensions – The HNV filter has been included in the Hyper-V Virtual Switch. This enables 3rd party extensions to work with HNV CAs and PAs.
Hyper-V Replica Extended Replication – You can configure a VM in Site A to replicate to Site B, and then replicate it from Site B to Site C.
Hyper-V Replica Finer Grained Interval controls – You can change the replication interval from the default 5 minutes to every 30 seconds or every 15 minutes.
IPAM – IP Address Management was extended in WS2012 R2 to manage physical and virtual networking, with built-in integration into SCVMM 2012 R2.
Linux Dynamic Memory – All features of Dynamic Memory are supported on WS2012 R2 hosts with up-to-date Linux Integration Services.
Linux Kdump/kexec – Allows you to create kernel dumps of Linux VMs.
Linux Live VM backup – You can back up a running Linux VM with no pause, with a file system “freeze” giving file system consistency. Linux does not have VSS.
Linux Specification of Memory Mapped I/O (MMIO) gap – Provides fine-grained control over available RAM for virtual appliance manufacturers.
Linux Non-Maskable Interrupt (NMI) – Allows delivery of manually triggered interrupts to Linux virtual machines running on Hyper-V.
Linux Video Driver – A synthetic frame buffer driver for Linux guest OSs provides improved performance and mouse support.
Live Resizing of VHDX – You can expand or shrink (if there’s un-partitioned space) a VHDX attached to a running VM. It must be SCSI attached.  This applies to Windows and Linux.
Live Virtual Machine Cloning – You can clone a running virtual machine. Useful for testing and diagnostics.
Remote Live Monitoring – Remote monitoring of VM network traffic is made easier with Message Analyzer.
Service Provider Foundation (SPF) – SPF provides an API in front of SCVMM. It is required for the Windows Azure Pack. A hosting company can share their infrastructure with clients, who can interact with SPF via on-premise System Center – App Controller.
Shared VHDX – Up to 8 VMs can share a VHDX (on shared storage like CSV/SMB) to create guest clusters. It appears like a shared SAS drive.
SMB Live Migration – Uses SMB to perform Live Migration over 10 GbE or faster networks. It uses SMB Multichannel if there are multiple Live Migration networks, and SMB Direct if RDMA is available.  SMB Multichannel gives the fastest VM movement possible, and SMB Direct offloads the work from the CPU. Now moving that 1 TB RAM VM doesn’t seem so scary!
SMB 3.0: Automatic rebalancing of Scale-Out File Server clients – SMB clients of the scalable and continuously available active/active SOFS are rebalanced across nodes after the initial connection. Tracking is done per-share for better alignment of server/CSV ownership.
SMB 3.0: Bandwidth controls – QoS previously treated all SMB 3.0 traffic as a single flow. New filters for default, live migration, and virtual machine traffic allow you to manage bandwidth over converged networks.
SMB 3.0: Improved RDMA performance – Improves performance for small I/O workloads such as OLTP running in a VM. Very noticeable on 40/56 Gbps networks.
SMB 3.0: Multiple SMB instances on SOFS – The Scale-Out File Server has an additional SMB instance for CSV management, improving scalability and overall reliability. The default instance handles SMB clients.
Storage Spaces: Tiered Storage – You can mix 1 tier of SSD with 1 tier of HDD to get a blend of expensive extreme speed and economic capacity.  You define how much (if any) SSD and how much HDD a virtual disk will take from the pool.  Data is promoted/demoted between the tiers at 1am by default.  You can pin entire files to a tier.
Storage Spaces: Parallelized Restore – Instead of using slow hot-spare disks in a pool, you can use the cumulative write IOPS of the pool to restore virtual disk fault tolerance over the remaining healthy disks. The replacement disk is seen as new blank capacity.
Storage Spaces: Write-Back Cache – Hyper-V is write-through, avoiding controller caches on writes.  With tiered storage, you get Write-Back Cache.  The SSD tier can absorb spikes in write activity.  Supported with CSV.
Storage QoS – You can set an IOPS limit on individual virtual hard disks to avoid one disk consuming all resources, or to price-band your tenants. Minimum alerts will notify you if virtual hard disks cannot get enough storage bandwidth.
System Center alignment – System Center and Windows Server were developed together and will be released very closely together.
Network Diagnostics – New PowerShell tools for testing the networking of VMs, including Get-VMNetworkAdapter, Test-NetConnection, Test-VMNetworkAdapter, and Ping -P.
VDI & Deduplication – Deduplication can be enabled in VDI scenarios (only) where the VMs are stored on dedicated (only) WS2012 R2 storage servers.
Virtual Machine Exports – You can export a VM with its snapshots/checkpoints.
Virtual Switch Extended Port ACLs – ACLs now include the socket port number.  You can now configure stateful rules that are unidirectional and provide a timeout parameter. Compatible with Hyper-V Network Virtualization.
vRSS – Virtual Receive Side Scaling leverages DVMQ on the host NIC to enable a VM to use more than 1 vCPU to process traffic. Improves network scalability of a VM.
Windows Azure Pack – This was previously called Windows Azure Services for Windows Server, and is sometimes called “Katal”. It is based on the source code of the Azure IaaS portal, and allows companies (such as hosting companies) to provide a self-service portal (with additional cloud traits) for their cloud.

