Windows 10 Build 10565 Makes Nested Hyper-V Virtualisation … Possible!

One of the most popular articles on my site, written in 2009 (!!!), is “Can You Install Hyper-V in a VM?”. The short answer has always been “yes, if you know how”, but the longer, complete answer continues with “the hypervisor will not start and you will not be able to boot any virtual machines”.

This was because Hyper-V did not support nested virtualization – the ability to run Hyper-V in a VM that is itself running on Hyper-V (yes, I know there are hacks to get Hyper-V to run in a VM on VMware). Hyper-V requires a processor virtualization feature: VT-x from Intel or AMD-V from AMD. Hyper-V takes control of this feature and does not reveal it to the guests running on the host. This means that a system requirement of Hyper-V is not present in the virtual machine, so you cannot use the virtual machine as a real host.

Microsoft released Build 10565 of Windows 10 to Windows Insiders this week and announced that the much-anticipated nested Hyper-V virtualization is included. Yup, I’ve tried it and it works. Microsoft has made this work by revealing processor virtualization, on a per-VM basis, to VMs that will be Hyper-V hosts – let’s call these VM hosts to keep it consistent with the language of Windows Server Containers. This means that I can:

  1. Install Hyper-V on a physical host
  2. Create a VM
  3. Enable nested virtualization for that VM, making it a VM host
  4. Install a guest OS in that VM host and enable Hyper-V
  5. Create VMs that will actually run in the VM host.

Applications of Nested Virtualization

I know lots of you have struggled to learn Hyper-V due to a lack of equipment. You might have a single PC with some RAM, CPU, and a fast disk, and you can’t afford more – so how can you learn about Live Migration, SOFS, clustering, and so on? With nested virtualization, you can run lots of VMs on that single physical machine, and some of those VMs can be VM hosts, in turn hosting more VMs that you can run, back up, migrate, fail over, and so on (eventually, because there are limitations at this point).

Consultants and folks like me have struggled with doing demonstrations on the road. At TechEd Europe and Ignite, I used a VPN connection back to a lab in Dublin where a bunch of physical machines resided. I know one guy who travels with a Peli case full of Intel NUC PCs (a “cloud in a case”). Now, one high-spec laptop with lots of SSD could do the same job, without relying on dodgy internet connections at event venues!

A big part of my job is delivering training. In the recent past, we nearly bought 20 rack servers (less space consumed than PCs, and more NICs than a NUC can offer) to build a hands-on training lab. With a future release of WS2016, all I need is some CPU and RAM, and maybe I’ll build a near-full-experience hands-on training lab that I can teach Hyper-V, Failover Clustering, and SOFS with, instead of the limited solution that Microsoft uses with Azure VMs (no nested virtualization there at this time). Personally, I think this feature could revolutionize how Hyper-V training is delivered, finally giving Microsoft something that is badly needed (official Hyper-V training is insufficient at this time).

Real world production uses include:

  • The possibility of hosted private cloud: Imagine running Hyper-V on Azure, so you can do private cloud in a public cloud! I think that might be pricey, but who knows!
  • Hyper-V Containers: Expected with TPv4 of WS2016, Hyper-V Containers will secure the boundaries between containerized apps.

It’s the latter that has motivated Microsoft to finally listen to our cries for this feature.

Release Notes

  • Nested virtualization is a preview feature and not to be used in production.
  • AMD-V is not supported at this time. Intel VT-x must be present and enabled in the physical host.
  • You cannot virtualize third-party hypervisors at this time – expect VMware to work on this.
  • The physical host and the VM host must be running Build 10565 or later. You cannot use Windows 10 GA, WS2012 R2 or WS2016 TPv3 as the physical host or the VM host.
  • Dynamic Memory is not supported.
  • The following features don’t work yet: Hot-memory resize, Live Migration, applying checkpoints, save/restore.
  • MAC spoofing must be enabled on the VNIC of the VM host.
  • Virtual Secure Mode (VSM) / Virtualization Based Security (VBS) / Credential Guard (a Windows 10 Enterprise feature) must be disabled to allow virtualization extensions.

Enabling Nested Virtualization

1 – Install the Physical Host

Install Windows 10 Build 10565 or later on the physical host. Enable the Hyper-V role and configure a virtual switch.

2 – Create a VM Host

Deploy a VM (with static RAM) running Build 10565 or later as the guest OS. Connect the VM to the virtual switch of the physical host.
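
A sketch of that VM creation in PowerShell – the VM name, sizes, path, and switch name are my own examples; note the static memory:

# Create the VM host with static memory (Dynamic Memory is not supported for nested virtualization)
New-VM -Name "VMHost1" -Generation 2 -MemoryStartupBytes 4GB -NewVHDPath "C:\VMs\VMHost1.vhdx" -NewVHDSizeBytes 60GB -SwitchName "External Switch"
Set-VM -VMName "VMHost1" -StaticMemory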


3 – Enable Nested Virtualization

Run the following, using an elevated PowerShell window, on the physical host to execute the enablement script (shared on GitHub):

Invoke-WebRequest https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/hyperv-tools/Nested/Enable-NestedVm.ps1 -OutFile ~/Enable-NestedVm.ps1

~/Enable-NestedVm.ps1 -VmName <VmName>


4 – Enable MAC Spoofing

Run the following on the physical host, targeting the VM host; this will enable MAC spoofing on the VM host. Modify this cmdlet to specify a vNIC if the VM will have a NIC just for nested VMs to communicate on.

Set-VMNetworkAdapter -VMName <VMName> -MacAddressSpoofing On

5 – Enable Hyper-V in the VM Host

Enable the Hyper-V role in the VM host and configure a virtual switch on the vNIC that is enabled for MAC spoofing.
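
If you prefer to script this step, here’s a hedged sketch – the feature name is the one used on client Windows (remember, the VM host runs Build 10565), and “Ethernet” is my assumed vNIC name:

# Inside the VM host: enable the Hyper-V feature (a reboot will be required) ...
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
# ... and after rebooting, create a virtual switch on the MAC-spoofing-enabled vNIC
New-VMSwitch -Name "NestedSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true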


6 – Create Nested VMs

Create VMs in the VM host, power them up and deploy operating systems. Have fun!


And bingo, there you go!

How Useful is Nested Virtualization Now?

I won’t be rushing out to buy a new laptop or re-deploy the lab yet. I want to run this with WS2016 so I have to wait. I’ll wait longer for Live Migration support. So right now, it’s cool, but with WS2016 TPv4 (hopefully), I’ll have something substantial.

Create a WS2016 Nano Server Hyper-V VM

Setting up a Nano Server VM requires running some PowerShell. The instructions that I found out there aren’t that clear for a non-PowerShell guru, are wrong, or are incomplete. So let me clear everything up by showing you exactly what I am using to deploy Nano Server as a Windows Server 2016 (TPv3/Technical Preview 3) Hyper-V virtual machine.

Note: The process will probably change after I publish this post.

Step 1 – Make Folders

Create three folders on a computer with a fast disk. Note that I’ll use C:, but maybe you should use a D: or something faster – you can create them all at once with the snippet after this list.

  • C:\Nano
  • C:\Nano\Base
  • C:\Scripts
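
A sketch (adjust the drive letter to suit):

New-Item -ItemType Directory -Path C:\Nano\Base, C:\Scripts -Force   # C:\Nano is created as the parent of C:\Nano\Base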

Step 2 – Copy Scripts

Mount the WS2016 ISO – let’s assume that it mounts as E:. Copy two scripts from E:\NanoServer on the ISO to C:\Scripts (a sketch follows this list):

  • new-nanoserverimage.ps1
  • convert-windowsimage.ps1
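
A hedged sketch of this step – the ISO path is my own assumption, and E: is the mount point from above:

Mount-DiskImage -ImagePath "C:\ISO\WS2016_TPv3.iso"   # skip this if you mounted the ISO via Explorer
Copy-Item -Path E:\NanoServer\new-nanoserverimage.ps1, E:\NanoServer\convert-windowsimage.ps1 -Destination C:\Scripts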

Step 3 – Dot The Scripts

Note that I missed this step because I had never encountered this sort of thing before – I’m an advocate of PowerShell, but I’m no guru! If you do not run this step, New-NanoServerImage.ps1 will do nothing at all and wreck your head for three hours (it did for me!).

Open a PowerShell window with elevated privileges. Navigate to C:\Scripts. Run the following:

. .\convert-windowsimage.ps1

I know – it looks funny. Enter it exactly as above. This is dot-sourcing: the script runs in the current scope, so the function it defines becomes available in your session like a cmdlet.

Do the same again for new-nanoserverimage.ps1:

. .\new-nanoserverimage.ps1

Now we can build a new VHD with Nano Server pre-installed.

Step 4 – Create a VHD

You can now run New-NanoServerImage. Here’s what I ran:

New-NanoServerImage -MediaPath e:\ -BasePath C:\Nano\Base -TargetPath C:\Nano\Nano1 -GuestDrivers -ComputerName "Nano1" -DomainName "prev.internal" -EnableIPDisplayOnBoot -AdministratorPassword (convertto-securestring -string "AVerySecurePassPhrase" -asplaintext -force) -EnableRemoteManagementPort -Language EN-US

The above will prep a VHD for a VM called Nano1. I have configured the VM to join the prev.internal domain – note that this will require me to have suitable domain creds, because a computer account is created in the domain. I enabled the Hyper-V guest drivers and allowed the IP of the VM to appear on the console. The VHD will be stored in C:\Nano\Nano1. Note that if this folder already exists, then the process will abort:

WARNING: The target directory already exists. If you want to rebuild this image, delete the directory first.
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

Note that I had to specify EN-US because, at this time, my default region of EN-IE was not available:

WARNING: The ‘en-ie’ directory does not exist in the ‘Packages’ directory (‘g:\NanoServer\Packages’).
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

I could have added other roles/packages to the VHD such as:

  • -Storage: For a SOFS cluster.
  • -Compute: To enable Hyper-V … useful when TPv4 (we guess) introduces guest virtualization.
  • -Clustering: To enable failover clustering in the VM.
  • -Defender: Adding security to the guest OS.

A minute or so later, a 439 MB VHD was created in the newly created C:\Nano\Nano1.

Recreating a Nano Server VM

If you’re playing with Nano Server in a lab, then you’ll create VMs with reused names. If you do this with domain join, then you might encounter a failure:

WARNING: Failed with 2224.
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

Open the log and you’ll find:

Provisioning the computer…

Failed to provision [Nano1] in the domain [prev.internal]: 0x8b0.
It may be necessary to specify /REUSE when running djoin.exe again with the same machine name.
Computer provisioning failed: 0x8b0.
The account already exists.

That’s one of those “ding-ding-ding, aha!” moments. The computer account already exists in AD, so delete the account and start over.
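
If the RSAT Active Directory PowerShell module is handy, the cleanup is a one-liner – a sketch, assuming the stale account is Nano1:

Remove-ADComputer -Identity Nano1 -Confirm:$false   # delete the stale computer account so provisioning can recreate it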

Creating Additional VMs

Once you have run the above process, C:\Nano\Base will be populated with files from the ISO (\NanoServer). This means that you can drop the -MediaPath flag and eject the ISO.

New-NanoServerImage -BasePath C:\Nano\Base -TargetPath C:\Nano\Nano2 -GuestDrivers -ComputerName "Nano2" -DomainName "prev.internal" -EnableIPDisplayOnBoot -AdministratorPassword (convertto-securestring -string "AVerySecurePassPhrase" -asplaintext -force) -EnableRemoteManagementPort -Language EN-US

Step 5 – Move the Computer Account

In AD, move the computer account for the new Nano Server to the required OU so that it gets any required policies on first boot – remember that this sucker has no UI, so GPO and stuff like Desired State Configuration (DSC) will eventually be the best way to configure Nano Server.

Step 6 – Create a VM

The above process prepares a VHD for a Generation 1 virtual machine. Create a Generation 1 VM and attach the VHD as the boot device. Connect to the VM and power it up. A couple of seconds will pass and a log-in screen will appear.
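
A sketch of that VM creation in PowerShell – the VHD file name, memory size, and switch name are my own assumptions:

New-VM -Name Nano1 -Generation 1 -MemoryStartupBytes 512MB -VHDPath "C:\Nano\Nano1\Nano1.vhd" -SwitchName "External Switch"   # Generation 1, because the prepared VHD boots via BIOS
Start-VM -Name Nano1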


Log in with your local admin or domain credentials and you’ll be greeted by the console. Note that I enabled the IP address to be displayed during setup.


Step 7 – Manage the Nano Server VM

If you want to do some management work then you’ll need to:

  • Wait for the eventual remote management console that was briefly shown at Ignite 2015.
  • Use PowerShell remoting.
  • Use PowerShell Direct (new in WS2016).

If you have network access to the VM then you can use remoting:

Enter-PSSession -ComputerName Nano1 -Credential prev\administrator

Troubleshooting network issues with Nano Server can be a dog because there is no console that you can log into. However, you can use PowerShell Direct with no network access to the VM, via the Hyper-V guest OS integration components:

Enter-PSSession -VMName Nano1 -Credential prev\administrator

Tip: Most AD veterans start network troubleshooting with DNS – it’s nearly always the cause. In my lab, I have three domains, so three sets of DNS. My DHCP scope sets one domain’s DNS server as the primary, and that can cause issues. Some PowerShell Direct into the VM with some Set-DnsClientServerAddress sorted things out.
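
For example (a hedged sketch – the interface alias and DNS server address are from my lab, so substitute your own):

Enter-PSSession -VMName Nano1 -Credential prev\administrator
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 172.16.0.10
Exit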

Configuring Windows Server Containers To Use DHCP Instead Of NAT

Read on if you want to learn how to connect Windows Server containers to an external virtual switch so that you don’t use NAT, and the containers talk directly to the LAN via DHCP-assigned addresses. You’ll also see why a DHCP-enabled container can fail to get an address and end up with a 169.254.x.x APIPA IPv4 configuration.

If you use Microsoft’s setup scripts for Windows Server 2016 (WS2016) Technical Preview 3 (TPv3), the default configuration for container networking is that each VM host has a virtual switch (inside the VM), connected to the VM’s vNIC. The virtual switch works in NAT mode and uses a private network range to dynamically address containers that connect to it. This setup requires each container to have NAT rules on the VM host so that external clients can connect to the services running in the containers. That … could be messy. In some ways it allows for huge network scalability (with tens of thousands of possible ports per VM host), but in others it could be a nightmare to orchestrate.

What if you wanted your containers to talk directly on the LAN – in other words, no NAT? Yes, your containers can do this, and it’s known as a DHCP configuration. Your containers are stateless, so it’s pointless assigning them static IP addresses; instead, the containers will get their addressing from DHCP services on the LAN.

Remember that there are two scripts that we can run to set up a VM host.

  • Method 1: You download New-ContainerHost.ps1 and run it. This downloads a bunch of stuff, creates a VM host, and then runs Install-ContainerHost.ps1. By default, this will configure the VM host with NAT networking.
  • Method 2: You create your own VM, download and run Install-ContainerHost.ps1. By default, you’ll get NAT networking.

But …

Install-ContainerHost.ps1 includes the option for a -UseDHCP flag.

If you use method 2, then you can run Install-ContainerHost in the new VM host with the -UseDHCP flag set to $true, and the behaviour of the script will change: by default it creates the VM host’s virtual switch in NAT mode, but enabling this flag creates an external virtual switch instead.
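
For example, a sketch of the method 2 call, run inside the new VM host (the -UseDHCP:$true form works whether the parameter is a switch or a boolean):

.\Install-ContainerHost.ps1 -UseDHCP:$true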

In my lab, I like to create my VM hosts using New-ContainerHost because it’s very quick (thanks to the use of differencing disks) and automates the entire setup. But New-ContainerHost doesn’t include the option for -UseDHCP. You could edit every call of Install-ContainerHost from New-ContainerHost, but I do it another way.

Instead I edit Install-ContainerHost. One small change will do the trick. Not far from the top is where the parameters are set as script variables. Look for a line that reads:

$UseDHCP,

Modify this line so it reads:

$UseDHCP = $true,


Now, every time I run either Install-ContainerHost or New-ContainerHost, I’ll get the DHCP networking configuration instead of NATing.

So try this: create/configure a VM host, create a container, use Enter-PSSession to connect to the container, run IPConfig and … voilà, you’ll have no DHCP address. Say what?

I was stumped. I tried it again. Nothing. I asked for help, and by the time I got home, I had a tip from one of the folks in Redmond. It proved to be my “I’m a moron” moment of the day. If I’d thought about it: DHCP is all about broadcasts and MAC addresses. I have a single VLAN set up in the lab, so broadcasts weren’t the issue. So what’s going on with MACs? A VM host has a MAC for itself, and then each container on the VM host that connects to the virtual switch has its own MAC address … but the network sees only one interface. Have you figured it out yet?

By default, Hyper-V has MAC spoofing disabled on every virtual NIC – a virtual NIC can have only one MAC address. What I needed to do was run the following at the host level to enable MAC spoofing on the VM host’s virtual NIC:

Get-VMNetworkAdapter -VMName containers3 | Set-VMNetworkAdapter -MacAddressSpoofing On

Now everything works!

Windows Server Containers – “Enter-PSSession : The term ‘Measure-Object’ Is Not Recognized”

If you’ve been working with Windows Server Containers in Windows Server 2016 (WS2016) Technical Preview 3 (TPv3) then you’ve probably experienced something like this:

  1. You create a new container
  2. Then start the container
  3. And try to create a PowerShell session into the container using Enter-PSSession

And then there’s lots of red on the screen:

enter-pssession : The term ‘Measure-Object’ is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.
At line:1 char:1
+ enter-pssession -ContainerId $container.ContainerId
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (Measure-Object:String) [Enter-PSSession], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException


Strangely, I have not been able to recreate this with Invoke-Command, so it appears to be unique to how Enter-PSSession sets up a session in the container.

So how do you solve the issue? It’s simple – you rushed from starting your container to trying to log into it. Wait a few seconds and then try again.

Logging Into Windows Server Containers

How do you log into a container to install software? Ah … you don’t actually log into a container because a container is not a virtual machine. Confusing? Slightly!

What you actually do is remotely execute commands inside of a container; this works something like PowerShell Direct, a new feature in Windows Server 2016 (WS2016).

There are two ways to run commands inside of a container.

Which Container?

In Technical Preview 3 (TPv3), the methods we use to execute commands inside of a container don’t use the name of the container; instead, they use a unique container ID. This is because containers can have duplicate names – I really don’t like that!

So, if you want to know which container you’re targeting, then do something along the lines of the following to store the container ID. The first example creates a new container and stores the resulting container’s metadata in a variable object called $Container.

$Container = New-Container -Name TestContainer -ContainerImageName WindowsServerCore

Note that I didn’t connect this container to a virtual switch!

The following example retrieves a container, assuming that it has a unique name.

$Container = Get-Container TestContainer

Invoke-Command

If you want to fire a single command into a container, then Invoke-Command is the cmdlet to use. This method sends a single instruction – a command or a script block – into the container. Here’s a script block example:

Invoke-Command -ContainerID $Container.ContainerId -RunAsAdministrator -ScriptBlock { New-Item -Path C:\RemoteTest -ItemType Directory }

Note how I’m using the ContainerID attribute of $Container to identify the container.

The nice thing about Invoke-Command is that it is not interactive; it remotely runs the script block without an interactive login. That makes Invoke-Command perfect for scripting: you write a script that deploys a container, starts it, does some stuff inside of the container, and then configures networking on the VM host. Lots of nice automation there!
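
A sketch of that sort of automation, using the cmdlets from this post – the container name and script block are hypothetical:

$c = New-Container -Name Web1 -ContainerImageName WindowsServerCore -SwitchName "Virtual Switch"
Start-Container $c
Sleep 30   # containers need a few seconds to boot before they accept sessions
Invoke-Command -ContainerId $c.ContainerId -RunAsAdministrator -ScriptBlock { New-Item -Path C:\Web -ItemType Directory }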

Enter-PSSession

If you want an interactive session with a container then Enter-PSSession is the way to go. Using this cmdlet you get a PowerShell session in the container where you can run commands and see the results. This is great for once-off stuff and troubleshooting, but it’s no good for automation/scripting.

Enter-PSSession -ContainerID $Container.ContainerId -RunAsAdministrator

Warning – in TPv3 we’ve seen that rushing into running this cmdlet after creating your new container can lead to an error. Wait a few seconds before trying to connect to the container.

No Network Required!

These methods use something like PowerShell Direct, a new feature in WS2016 – it’s actually PowerShell via a named pipe. The above example deliberately created a container that has no networking. I can still run commands inside of the container, or get an interactive PowerShell session inside of it, without connectivity – I just need to be able to get onto the VM host.

Creating & Deploying Windows Server Containers Using NAT and PowerShell

This post will show you how to use PowerShell to deploy Windows Server Containers using Windows Server 2016 (WS2016) Technical Preview 3 (TPv3).

Note: I wanted to show you how to deploy IIS, but I found that IIS would only work on my first container, and fail on the others.

This example will deploy multiple containers running the nginx web server on the same VM host. NAT will be used to network the containers, using a private IP range on the VM host’s internal virtual switch.

Note: The VM host is created at this point, with a working NATing virtual switch that has an IP range of 192.168.250.0/24, with 192.168.250.1 assigned to the VM host.

Create the nginx Container Image

The beauty of containers is that you create a set of reusable container images that have a parent-child relationship. The images are stored in a flat-file repository.

Note: In TPv3, the repository is local on the VM host. Microsoft will add a shared repository feature in later releases of WS2016.
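
To see what is already in the local repository, run the following on the VM host (Get-ContainerImage comes from the same Containers PowerShell module as the other cmdlets in this post):

Get-ContainerImage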

Log into the VM host (which runs Server Core) and launch PowerShell:

PowerShell

In this example, I will create a new container using the default WindowsServerCore container OS image. Note that I capture the instance of the new container in $Container; this allows me to easily reference the container and its attributes in later cmdlets:

$Container = New-Container -Name nginx -ContainerImageName WindowsServerCore -SwitchName "Virtual Switch"

The container is linked to the virtual switch in the VM host called “Virtual Switch”. This virtual switch is associated with the VM’s sole virtual NIC, and sharing is enabled to allow the VM to also have network connectivity. The switch is enabled for NATing, meaning that containers that connect to the switch will have an IP of 192.168.250.x (in my setup). More on this stuff later.

Start the new container:

Start-Container $Container

Wait 30 seconds for the container to boot up and then remote into it:

Enter-PSSession -ContainerId $Container.ContainerId -RunAsAdministrator

I would normally use IIS here, but I had trouble with IIS in Windows Server Containers (TPv3). So instead, I’m going to deploy nginx web server. Run the following to download the installer (zip file):

WGet -Uri 'http://nginx.org/download/nginx-1.9.3.zip' -OutFile "c:\nginx-1.9.3.zip"

The next command will expand the zip file to c:\nginx-1.9.3:

Expand-Archive -Path C:\nginx-1.9.3.zip -DestinationPath c:\ -Force

There isn’t really an installer; nginx is an executable that you run, as you’ll see later. The service “install” is done, so now we’ll exit from the remote session:

Exit

We now have a golden container that we want to capture. To do this, we must first shut down the container:

Stop-Container $Container

Now we create a new reusable container image called nginx:

New-ContainerImage -Container $Container -Publisher AFinn -Name nginx -Version 1.0

The process only captures the differences between the original container (created from the WindowsServerCore container OS image) and where the machine is now. The new container image will be linked to the image that created the container; so, my new container image called nginx will have a parent of WindowsServerCore.


I’m done with the nginx container so I’ll remove it:

Remove-Container $Container -Force

Deploying A Service Using A Container

The beauty of containers is how quick it is to deploy a new service. We can deploy a new nginx web server by simply deploying a new container from the nginx container image. All dependencies, WindowsServerCore in this case, will also be automatically deployed in the container.

Actually, “deploy” is the wrong word. In fact, a link is created to the images in the repository, and only the changes are saved with the container. So, if I add web content to a new nginx container, the container will store that content, while using the service data from the nginx container image in the repository, and OS data from the VM host and the container OS image.

Let’s deploy a new container with nginx. Once again, I will store the resulting object in a variable for later use:

$Web2 = New-Container -Name Web2 -ContainerImageName nginx -SwitchName "Virtual Switch"

Then we start the container:

Start-Container $Web2

Wait 30 seconds before you attempt to remote into the container:

Enter-PSSession -ContainerId $Web2.ContainerId -RunAsAdministrator

Now I browse into the extracted nginx folder:

cd c:\nginx-1.9.3\

And then I start up the web service:

start nginx

Yes, I could have figured out how to auto-start nginx in the original template container. Let’s move on …

I want to confirm that nginx is running, so I check what ports are listening using:

NetStat -AN

I then retrieve the IP of the container:

IPConfig

Remember that the container lives in the NAT network of the virtual switch. In my lab, the LAN is 172.16.0.0/16. My VM host has 192.168.250.0/24 configured (Install-ContainerHost.ps1) as the NAT range. In this case, the new container, Web2 has an IP of 192.168.250.2.

I then exit the remote session:

Exit

There are two steps left to allow HTTP traffic to reach the web service in the container. First, we need to create a NAT rule. The container communicates with the LAN via the IP of the VM host, so we need a rule that says that any TCP traffic arriving on a selected port (TCP 82 here) will be forwarded to TCP 80 of the container (192.168.250.2). Run this on the VM host:

Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.250.2 -InternalPort 80 -ExternalPort 82

Finally, I need to create a firewall rule in the VM host to allow inbound TCP 82 traffic:

New-NetFirewallRule -Name "TCP82" -DisplayName "HTTP on TCP/82" -Protocol tcp -LocalPort 82 -Action Allow -Enabled True

Now, if I open up a browser on the LAN, I can browse to the web service in the container. My VM host has an IP of 172.16.250.27, so I browse to http://172.16.250.27:82/ and the default nginx page appears.
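
You can run the same check from PowerShell on any machine on the LAN – the URL is from my lab, so adjust it for yours:

Invoke-WebRequest -Uri http://172.16.250.27:82/ -UseBasicParsing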

Deploy More of the Service

OK, we got one web server up. The beauty of containers is that you can quickly deploy lots of identical services. Let’s do that again. The next snippet of code will deploy an additional nginx container, start it, wait 30 seconds, and then log into it via session remoting:

$Web3 = New-Container -Name Web3 -ContainerImageName nginx -SwitchName "Virtual Switch"

Start-Container $Web3

Sleep 30

Enter-PSSession -ContainerId $Web3.ContainerId -RunAsAdministrator

I then start nginx, verify that it’s running, and get the NAT IP of the container (192.168.250.3).

cd c:\nginx-1.9.3\

start nginx

NetStat -AN

IPconfig

exit

Now I can create a NAT mapping for the container in the networking of the VM host. In this case, we will forward traffic arriving on TCP 83 to TCP 80 on 192.168.250.3 (the container):

Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.250.3 -InternalPort 80 -ExternalPort 83

And then we open up a firewall rule on the VM host to allow inbound traffic on TCP 83:

New-NetFirewallRule -Name "TCP83" -DisplayName "HTTP on TCP/83" -Protocol tcp -LocalPort 83 -Action Allow -Enabled True

Now I can browse to identical but independent nginx web services at http://172.16.250.27:82/ and http://172.16.250.27:83/, all accomplished with very little work and a tiny footprint. One might be production; one might be test. I could fire up another for development. And there’s nothing stopping me firing up more to troubleshoot, branch code, test upgrades, and more, getting a quick and identical deployment every time – deployments that I can dump in seconds:

Remove-Container $Web2, $Web3

If you have apps that are suitable (stateless and no AD requirement) then containers could be very cool.

“Install-WindowsFeature : An unexpected error has occurred” Error When You Run Install-WindowsFeature In A Windows Server Container

This is one of those issues that makes me question a lot of the step-by-step blog posts on Windows Server Containers that are out there – plenty of people were quick to publish guides on containers without mentioning this issue, which I always encounter. I suspect that there’s a lot of copy/pasting from Microsoft sites, with little actual testing, in a rush to be first to publish. It’s clear that many bloggers didn’t try to install anything in a container that required administrator rights, because UAC was blocking those actions. In my case, it was installing IIS in a Windows Server 2016 (WS2016) Technical Preview 3 (TPv3) container.

In my lab, I created a new container and then logged in using the following (I had already populated $Container by storing the returned container object in the variable):

Enter-PSSession -ContainerId $Container.ContainerId -RunAsAdministrator

And then I tried to install a role/feature, such as IIS, using Install-WindowsFeature:

Install-WindowsFeature -Name Web-Server

I logged in using -RunAsAdministrator, so I should have no issues with UAC, right? Wrong! The installation fails as follows:

Install-WindowsFeature : An unexpected error has occurred. The system cannot find the file specified.  Error: 0x80070002
+ CategoryInfo          : InvalidResult: (@{Vhd=; Credent…Name=localhost}:PSObject) [Install-WindowsFeature], Exception
+ FullyQualifiedErrorId : RegistryKey_OpenSubKey_Failed,Microsoft.Windows.ServerManager.Commands.AddWindowsFeatureCommand


What’s the solution? When you are remoted into the container, you need to elevate your privileges to get past UAC. You can do this as follows, after you log into the container:

Start-Process Powershell.exe -Verb runAs

Run Install-WindowsFeature now and it will complete.


Sorted!

Note: I have found in my testing that IIS behaves poorly in TPv3. This might be why Microsoft’s getting-started guides on MSDN use the nginx web server instead of IIS! I’ve confirmed that nginx works perfectly well.

Speaking At Experts Live 2015 in The Netherlands

An awesome looking event called Experts Live 2015 will be running in The Netherlands (CineMex, Ede), covering many aspects of Microsoft infrastructure solutions:

  • Azure
  • Office 365
  • OMS (and more Azure)
  • Azure Stack and Windows Azure Pack
  • Hyper-V
  • Windows

I’ll be speaking as a part of the Hyper-V track:

  • Less known Hyper-V best practices: Mike Resseler, MVP
  • SMB Direct – The Secret Decoder Ring: Didier Van Hoye, MVP
  • Notes from your Program Manager: Jeff Woolsey, Microsoft/Redmond
  • What’s New in Hyper-V 2016: Aidan Finn (Me!), MVP
  • Storage Spaces Direct and Hyper-V – The Perfect Couple?: Carsten Rachfahl, MVP
  • Would you like Nano Server with Containers?: Thomas Maurer, MVP

In other words, it’s a whole bunch of Hyper-V MVPs from around Europe plus one of the senior Windows Server PMs from Redmond; that’s quite a cast of characters! I would register if I wasn’t one of the speakers.

I had a great time the last time I presented at a Dutch community event during the lead-up to WS2012, so I’m really looking forward to this trip. Hopefully I’ll see you there!

ReFS Accelerated VHDX Operations

One of the interesting new features in Windows Server 2016 (WS2016) is ReFS Accelerated VHDX Operations (which also work with VHD). This feature is not ODX (VAAI for you VMware-bods), but it offers the same sort of benefits for VHD/X operations. In other words: faster creation and copying of VHDX files, particularly fixed VHDX files.

Reminder: while Microsoft continually tells us that dynamic VHD/Xs are just as fast as fixed VHDX files, we know from experience that the fixed alternative gives better application performance – some of Microsoft’s product groups even refuse to support dynamic VHD/X files. The benefit of dynamic disks is that they start out as a small file that is extended as required, whereas fixed VHDX files take up their full space immediately. The big problem with fixed VHD/X files is that they take an age to create or extend, because they must be zeroed out.

Those of you with a nice SAN have seen how ODX can speed up VHD/X operations, but the Microsoft world is moving (somewhat) to SMB 3.0 storage where there is no SAN for hardware offloading.

This is why Microsoft has added Accelerated VHDX Operations to ReFS. If you format your CSVs with ReFS, then ReFS will speed up the creation and extension of these files for you. How much? Well, this is why I built a test rig!
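
For reference, formatting a volume with ReFS and a 64 KB allocation unit size is a one-liner – the drive letter and label here are my own examples:

Format-Volume -DriveLetter V -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "CSV1"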

The back-end storage is a pair of physical servers that are SAS (6 Gb) connected to a shared DataON DNS-1640 JBOD with tiered storage (SSD and HDD); I built a WS2016 TPv3 Scale-Out File Server with 2 tiered virtual disks (64 KB interleave) using this gear. Each virtual disk is a CSV in the SOFS cluster. CSV1 is formatted with ReFS and CSV2 is formatted with NTFS, 64 KB allocation unit size on both. Each CSV has a file share, named after the CSV.

I had another WS2016 TPv3 physical server configured as a Hyper-V host. I used Switch Embedded Teaming to aggregate a pair of iWARP NICs (RDMA/SMB Direct, each offering 10 GbE connectivity to the SOFS) and created a pair of virtual NICs in the host for SMB Multichannel.

I ran a script on the host to create fixed VHDX files against each share on the SOFS, measuring the time required for each disk (a sketch of the approach follows this list). The disks created were of the following sizes:

  • 1 GB
  • 10 GB
  • 100 GB
  • 500 GB
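
A minimal sketch of the measurement approach, assuming \\SOFS\CSV1 (ReFS) and \\SOFS\CSV2 (NTFS) are the shares:

foreach ($share in "\\SOFS\CSV1", "\\SOFS\CSV2") {
    foreach ($size in 1GB, 10GB, 100GB, 500GB) {
        # Time the creation of a fixed VHDX of each size on each share
        $t = Measure-Command { New-VHD -Path "$share\test.vhdx" -SizeBytes $size -Fixed }
        Write-Output ("{0}: {1} GB took {2:N0} seconds" -f $share, ($size / 1GB), $t.TotalSeconds)
        Remove-Item "$share\test.vhdx"
    }
}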

Using the share on the NTFS-formatted CSV, the results were poor.

A 500 GB VHDX file, nothing that unusual for most of us, took 40 minutes to create. Imagine you work for an IT service provider (which could be a hosting company or an IT department) and the customer (which can be your employer) says that they need a VM with a 500 GB disk to deal with an opportunity or a growing database. Are you going to say “let me get back to you in an hour”? Hmm … an hour might sound good to some but for the customer it’s pretty rubbish.

Let’s change it up. The next test used the share on the ReFS-formatted volume.

Whoah! Creating a 500 GB fixed VHDX now takes 13 seconds instead of 40 minutes. The CSVs are almost identical; the only difference is that one is formatted with ReFS (fast VHD/X operations) and the other with NTFS (unenhanced). Didier Van Hoye has also done some testing using direct CSV volumes (no SMB 3.0), comparing Compellent ODX and ReFS. What the heck is going on here?

The zeroing-out process that is done while creating a fixed VHDX has been converted into a metadata operation – this is how some SANs optimize the same process using ODX. So, instead of writing zeros out to the disk file, ReFS updates metadata that effectively says “nothing to see here” to anything (such as Hyper-V) that reads those parts of the VHD/X.

Accelerated VHDX Operations also works in other subtle ways. Merging a checkpoint is now done without moving data around on the disk – another metadata operation. This means that merges should be quicker and use fewer IOPS. This is nice because:

  • Production Checkpoints (on by default) will lead to more checkpoint usage in DevOps
  • Backup uses checkpoints and this will make backups less disruptive

Does this feature totally replace ODX? No, I don’t think it does. Didier’s testing proves that ReFS’s metadata operation is even faster than the incredible performance of ODX on a Compellent, but the SAN offers more. ReFS is limited to operations inside a single volume. Say you want to move storage from one LUN to another, or provision a new VM from a VMM library? ODX can help in those scenarios; ReFS cannot. I cannot yet say whether the two technologies will be compatible (and stable together) at the time of GA and offer the best of both worlds (I suspect that they will, but SAN OEMs will have the biggest impact here!).

This stuff is cool and it works without configuration out of the box!

Starting Lab Work With WS2016 TPv3

You might have assumed that I’ve had Windows Server 2016 (WS2016) running in my lab since TPv1 launched. That would have been nice, but although I spend more time in a lab than most, I didn’t have the time or resources. All I had time to play with was a virtual S2D SOFS using VMs and VHDX files.

What resources I had have been allocated to WS2012 R2, because that’s what most people use. For my writing on Petri.com, I’ve stayed mostly with WS2012 R2 because WS2016 is still too fluid.

My day job has been 95% Azure since January of last year so that’s consumed a lot of time. Any hybrid stuff I’ve been doing has required a GA OS so that’s why I’ve had so much WS2012 R2.

But in the last few weeks (sandwiching some vacation time) I’ve been deploying WS2016 in the lab. Right now I have:

  • Some VMs running Windows 10 with RSAT and a WS2016 DC
  • A SOFS running WS2016 with a DataON DNS-1640
  • A pair of Hyper-V hosts using the SOFS (SMB 3.x) and StarWind (iSCSI) for storage

There’s plenty of fun stuff to start looking at. Things I want to play with are Network Controller and Containers. I’ve already had a play with Switch Embedded Teaming – it’s pretty easy to set up. More to come!
