Optimize Hyper-V VM Placement To Match CSV Ownership

This post shares a PowerShell script to automatically live migrate clustered Hyper-V virtual machines to the host that owns the CSV that the VM is stored on. The example below should work nicely with a 2-node cluster, such as a cluster-in-a-box.

For lots of reasons, you get the best performance for VMs on a Hyper-V cluster if:

  • Host X owns CSV Y AND
  • The VMs that are stored on CSV Y are running on Host X.

This continues into WS2016, as we’ve seen by analysing the performance enhancements of ReFS for VHDX operations. In summary, the ODX-like enhancements work best when the CSV and VM placement are identical as above.

I wrote a script, with little bits taken from several places (scripting is the art of copy & paste), to analyse a cluster and then move virtual machines to the best location. The method of the script is:

  1. Move CSV ownership to what you have architected.
  2. Locate the VMs that need to move.
  3. Order that list of VMs based on RAM. I want to move the smallest VMs first in case there is memory contention.
  4. Live migrate VMs based on that ordered list.

What’s missing? Error handling 🙂

What do you need to do?

  • You need to add variables for your CSVs and hosts.
  • Modify/add lines to move CSV ownership to the required hosts.
  • Balance the deployment of your VMs across your CSVs.

Here’s the script. I doubt the code is optimal, but it works. Note that the Live Migration command (Move-ClusterVirtualMachineRole) has been commented out so you can see what the script will do without it actually doing anything to your VM placement. Feel free to use, modify, etc.

#List your CSVs 
$CSV1 = "CSV1" 
$CSV2 = "CSV2"

#List your hosts 
$CSV1Node = "Host01" 
$CSV2Node = "Host02"

function ListVMs () 
{ 
    Write-Host "`n`n`n`n`n`nAnalysing the cluster $Cluster ..."

    $Cluster = Get-Cluster 
    $AllCSV = Get-ClusterSharedVolume -Cluster $Cluster | Sort-Object Name

    $VMMigrationList = @()

    ForEach ($CSV in $AllCSV) 
    { 
        $CSVVolumeInfo = $CSV | Select -Expand SharedVolumeInfo 
        $CSVPath = ($CSVVolumeInfo).FriendlyVolumeName

        $FixedCSVPath = $CSVPath -replace '\\', '\\'

        #Get the VMs where VM placement doesn't match CSV ownership
        $VMsToMove = Get-ClusterGroup | ? {($_.GroupType -eq 'VirtualMachine') -and ($_.OwnerNode.Name -ne $CSV.OwnerNode.Name)} | Get-VM | Where-Object {($_.Path -match $FixedCSVPath)} 

        #Build up a list of VMs including their memory size 
        ForEach ($VM in $VMsToMove) 
        { 
            $VMRAM = (Get-VM -ComputerName $VM.ComputerName -Name $VM.Name).MemoryAssigned

            $VMMigrationList += ,@($VM.Name, $CSV.OwnerNode.Name, $VMRAM) 
        }

    }

    #Order the VMs based on memory size, ascending 
    $VMMigrationList = $VMMigrationList | sort-object @{Expression={$_[2]}; Ascending=$true}

    Return $VMMigrationList 
}

function MoveVM ($TheVMs) 
{

    foreach ($VM in $TheVMs) 
        { 
        $VMName = $VM[0] 
        $VMDestination = $VM[1] 
        Write-Host "`nMove $VMName to $VMDestination" 
        #Move-ClusterVirtualMachineRole -Name $VMName -Node $VMDestination -MigrationType Live 
        }

}

cls

#Configure which node will own which CSV 
Move-ClusterSharedVolume -Name $CSV1 -Node $CSV1Node | Out-Null 
Move-ClusterSharedVolume -Name $CSV2 -Node $CSV2Node | Out-Null

$SortedVMs = @{}

#Get a sorted list of VMs, ordered by assigned memory 
$SortedVMs = ListVMs

#Live Migrate the VMs, so that their host is also their CSV owner 
MoveVM $SortedVMs

Possible improvements:

  • My ListVMs algorithm probably can be improved.
  • The Live Migration piece can also be improved. It only moves one VM at a time, but you could implement parallelism using jobs – see the sketch after this list.
  • Quick Migration should be used for non-running VMs. I haven’t handled that situation.
  • You could opt to use Quick Migration for low priority VMs – if that’s your policy.
  • The script could be modified to start using parameters, e.g. Analyse (not move), QuickMigrateLow, QuickMigrate (instead of Live Migrate), etc.
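
Here’s a minimal, untested sketch of what that parallel approach could look like – a variation on the MoveVM function that uses background jobs, throttled to two concurrent Live Migrations. The throttle value and the job structure are my assumptions, not part of the original script:

function MoveVMParallel ($TheVMs) 
{ 
    $MaxConcurrent = 2 #Assumed throttle - tune for your cluster

    foreach ($VM in $TheVMs) 
    { 
        #Wait for a free job slot 
        while ((Get-Job -State Running).Count -ge $MaxConcurrent) 
        { 
            Start-Sleep -Seconds 5 
        }

        Start-Job -ArgumentList $VM[0], $VM[1] -ScriptBlock { 
            param ($VMName, $VMDestination) 
            Move-ClusterVirtualMachineRole -Name $VMName -Node $VMDestination -MigrationType Live 
        } 
    }

    #Wait for the remaining migrations to finish and show the results 
    Get-Job | Wait-Job | Receive-Job 
}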

Playing with WS2016 Hyper-V – Nested Virtualization, Nano, SET, and PowerShell Direct

I have deployed Technical Preview 5 (TP5) of Windows Server 2016 (WS2016) to most of the hardware in my lab. One of the machines, a rather old DL380 G6, is set up as a standalone host. I’m managing it using Remote Server Administration Toolkit (RSAT) for Windows 10 (another VM).

I enabled Hyper-V on that host. I then deployed 4 x Generation 2 VMs running Nano Server (domain pre-joined using djoin files) – this keeps the footprint tiny and the boot times are crazy fast.

Hyper-V is enabled in the Nano VMs – thanks to the addition of nested virtualization. I’ve also clustered these machines. Networking-wise, I have given each VM 2 x vNICs, each with MAC spoofing (for nested VMs) and NIC teaming enabled.
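
In case it helps, this is roughly how that vNIC configuration can be done on the physical host – a hedged one-liner, and the Nano* name filter is an assumption based on my VM naming:

#Enable MAC spoofing (for nested VMs) and guest NIC teaming on every vNIC of the Nano VMs 
Get-VM Nano* | Get-VMNetworkAdapter | Set-VMNetworkAdapter -MacAddressSpoofing On -AllowTeaming On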

I launched PowerShell ISE and used Enter-PSSession to connect to the host from the admin PC. From the host, I used Enter-PSSession -VMName to use PowerShell Direct to get into each VM – this gives me connectivity without depending on the network, which is what I needed in order to deploy Switch Embedded Teaming (SET) and provision networking in the Nano VMs. This script configures each VM with 3 vNICs for the management OS, connected to a vSwitch that uses both of the Nano VM’s vNICs as teamed uplinks:

#Host-specific octet used in the IP addresses below
$idx = 54

#Create a SET-enabled vSwitch that teams both vNICs, with no default management vNIC
New-VMSwitch -Name External -NetAdapterName "Ethernet","Ethernet 2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

#Add management OS vNICs for management and SMB traffic
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName External
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName External
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName External

#Give the new vNICs a moment to initialise
Sleep 10

#Configure the management network and DNS
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 172.16.2.$idx -PrefixLength 16  -DefaultGateway 172.16.1.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses "172.16.1.40"

#Configure the SMB networks
New-NetIPAddress -InterfaceAlias "vEthernet (SMB1)" -IPAddress 192.168.3.$idx -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (SMB2)" -IPAddress 192.168.4.$idx -PrefixLength 24

Note: there’s no mention of RDMA because I’m working in a non-RDMA scenario – a test/demo lab. Oh yes; you can learn Hyper-V, Live Migration, Failover Clustering, etc on your single PC now!

And in no time, I had myself a new Hyper-V cluster with a tiny physical footprint, thanks to 4 new features in WS2016.

Cannot Bind Parameter ‘ForegroundColor’ Error When Creating Nano Server Image

You get the following error when running New-NanoServerImage in PowerShell ISE to create a new Windows Server 2016 (WS2016) Nano Server image:

Write-W2VError : Cannot bind parameter ‘ForegroundColor’. Cannot convert the “#FFFF0000” value of type
“System.Windows.Media.Color” to type “System.ConsoleColor”.

image

The fix (during TP5) is to not use PowerShell ISE. Use an elevated PowerShell prompt instead. The reasoning is explained here by “daviwil”.

Russinovich on Hyper-V Containers

We’ve known since Ignite 2015 that Microsoft was going to have two kinds of containers in Windows Server 2016 (WS2016):

  • Windows Server Containers: Providing OS and resource virtualization and isolation.
  • Hyper-V Containers: The hypervisor adds security isolation on top of the OS and resource isolation.

Beyond that general description, we knew almost nothing about Hyper-V Containers, other than to expect them in preview during Q4 of 2015 – Technical Preview 4 (TPv4) – and that they are the primary motivation for Microsoft to give us nested virtualization.

That also means that nested virtualization will come to Windows Server 2016 Hyper-V in TPv4.

We have remained in the dark since then, but Mark Russinovich appeared on Microsoft Mechanics (a YouTube webcast by Microsoft) and he explained a little more about Hyper-V Containers and he also did a short demo.

Some background first. Normally, a machine has a single user mode running on top of kernel mode. This is what restricts us to the “one app per OS” best practice/requirement, depending on the app. When you enable Containers on WS2016, an enlightenment in the kernel allows multiple user modes. This gives us isolation:

  • Namespace isolation: Each container sees its own file system and registry (the hives are in the container’s hosted files).
  • Resource isolation: How much process, memory, and CPU a container can use.

Kernel mode is already running when you start a new container, which improves the time to start up a container, and thus its service(s). This is great for deploying and scaling out apps because a containerised app can be deployed and started in seconds from a container image with no long-term commitment, versus minutes for an app in a virtual machine with a longer-term commitment.

image

But Russinovich goes on to say that while containers are great for some things that Microsoft wants to do in Azure, they also have to host “hostile multi-tenant code” – code uploaded by Microsoft customers that Microsoft cannot trust and that could be harmful or risky to other tenants. Windows Server Containers, like their Linux container cousins, do not provide security isolation.

In the past, Microsoft has placed such code into Hyper-V (Azure) virtual machines, but that comes with a management and direct cost overhead. Ideally, Microsoft wants to use lightweight containers with the security isolation of machine virtualization. And this is why Microsoft created Hyper-V Containers.

Hyper-V provides excellent security isolation (far fewer vulnerabilities found than vSphere) that leverages hardware isolation. DEP is a requirement. WS2016 is introducing IOMMU support, VSM, and Shielded Virtual Machines, with a newly hardened hypervisor architecture.

Hyper-V Containers use the exact same code and container images as Windows Server Containers. That makes your code interchangeable – Russinovich shows a Windows Server Container being switched into a Hyper-V Container by using PowerShell to change the run type (container attribute RuntimeType).

The big difference between the two types, other than the presence of Hyper-V, is that Hyper-V Containers get their own optimized instance of Windows running inside of them, as the host for the single container that they run.

image

The Hyper-V Container is not a virtual machine – Russinovich demonstrates this by searching for VMs with Get-VM. It is a container, and is manageable by the same commands as a Windows Server Container.

In his demos he switches a Windows Server Container to a Hyper-V Container by running:

Set-Container -Name <Container Name> -RuntimeType HyperV

And then he queries the container with:

Get-Container -Name <Container Name> | fl Name, State, RuntimeType

So the images and the commands are common across Hyper-V Containers and Windows Server Containers. Excellent.

It looked to me that starting this Hyper-V Container is a slower operation than starting a Windows Server Container. That would make sense because the Hyper-V Container requires its own operating system.

I’m guessing that Hyper-V Containers either require or work best with Nano Server. And you can see why nested virtualization is required. A physical host will run many VM hosts. A VM host might need to run Hyper-V containers – therefore the VM Host needs to run Hyper-V and must have virtualized VT-x instructions.

Russinovich demonstrates the security isolation. Earlier in the video he queries the processes running in a Windows Server Container. There is a single CSRSS process in the container. He shows that this process instance is also visible on the VM host (same process ID). He then does the same test with a Hyper-V Container – the container’s CSRSS process is not visible on the VM host because it is contained and isolated by the child boundary of Hyper-V.

What about Azure? Microsoft wants Azure to be the best place to run containers – he didn’t limit this statement to Windows Server or Hyper-V, because Microsoft wants you to run Linux containers in Azure too. Microsoft announced the Azure Container Service, with investments in Docker and Mesosphere for deployment and automation of Linux, Windows Server, and Hyper-V containers. Russinovich mentions that Azure Automation and Machine Learning will leverage containers – this makes sense because it will allow Microsoft to scale out services very quickly, in a secure manner, but with less resource and management overhead.

That was a good video, and I recommend that you watch it.

 

Create a WS2016 Nano Server Hyper-V VM

Setting up a Nano Server VM requires running some PowerShell. The instructions that I found out there aren’t that clear for a non-PowerShell guru, are wrong, or are incomplete. So let me clear up everything by showing you exactly what I am using to deploy Nano Server as a Windows Server 2016 (TPv3/Technical Preview 3) Hyper-V virtual machine.

Note: The process will probably change after I publish this post.

Step 1 – Make Folders

Create three folders on a computer with a fast disk (a PowerShell one-liner for this follows the list). Note that I’ll use C:, but maybe you should use D: or something.

  • C:\Nano
  • C:\Nano\Base
  • C:\Scripts
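
If you want to script the folder creation, a minimal example:

New-Item -ItemType Directory -Path C:\Nano, C:\Nano\Base, C:\Scripts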

Step 2 – Copy Scripts

Mount the WS2016 ISO – let’s assume that it mounts as E:. Copy two scripts from E:\NanoServer on the ISO to C:\Scripts (an example follows the list):

  • new-nanoserverimage.ps1
  • convert-windowsimage.ps1
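
For example, assuming the ISO mounted as E::

Copy-Item -Path E:\NanoServer\new-nanoserverimage.ps1, E:\NanoServer\convert-windowsimage.ps1 -Destination C:\Scripts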

Step 3 – Dot The Scripts

Note that I missed out on this step because I had never encountered this sort of thing before – I’m an advocate of PowerShell but I’m no guru! If you do not run this step, New-NanoServerImage.ps1 will do nothing at all and wreck your head for 3 hours (it did for me!).

Open a PowerShell window with elevated privileges. Navigate to C:\Scripts. Run the following:

. .\convert-windowsimage.ps1

I know – it looks funny. Enter it exactly as above. This “dot-sources” the script, loading the function that it contains into the current session so that it can be run later.

Do the same again for new-nanoserverimage.ps1:

. .\new-nanoserverimage.ps1

Now we can build a new VHD with Nano Server pre-installed.
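
If you want to confirm that the dot-sourcing worked (my own sanity check, not part of the official instructions, and it assumes the scripts define functions with matching names), both functions should now be visible in the session:

Get-Command New-NanoServerImage, Convert-WindowsImage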

Step 4 – Create a VHD

You can now run New-NanoServerImage. Here’s what I ran:

New-NanoServerImage -MediaPath e:\ -BasePath C:\Nano\Base -TargetPath C:\Nano\Nano1 -GuestDrivers -ComputerName "Nano1" -DomainName "prev.internal" -EnableIPDisplayOnBoot -AdministratorPassword (convertto-securestring -string "AVerySecurePassPhrase" -asplaintext -force) -EnableRemoteManagementPort -Language EN-US

The above will prep a VHD for a VM called Nano1. I have configured the VM to join the prev.internal domain – note that this will require me to have suitable domain creds – a computer account is created in the domain. I enabled the Hyper-V guest drivers and allowed the IP of the VM to appear on the console. The VHD will be stored in C:\Nano\Nano1. Note that if this folder already exists then the process will abort:

WARNING: The target directory already exists. If you want to rebuild this image, delete the directory first.
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

Note that I had to specify EN-US because, at this time, my default region of EN-IE was not available:

WARNING: The ‘en-ie’ directory does not exist in the ‘Packages’ directory (‘g:\NanoServer\Packages’).
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

I could have added other roles/packages to the VHD (an example follows this list) such as:

  • -Storage: For a SOFS cluster.
  • -Compute: To enable Hyper-V … useful when TPv4 (we guess) introduces nested virtualization.
  • -Clustering: To enable failover clustering in the VM.
  • -Defender: Adding security to the guest OS.
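
For example, here is a hedged variation of my earlier command that would bake the storage and clustering packages into the image – Nano3 and its target path are made up for illustration:

New-NanoServerImage -MediaPath e:\ -BasePath C:\Nano\Base -TargetPath C:\Nano\Nano3 -GuestDrivers -Storage -Clustering -ComputerName "Nano3" -DomainName "prev.internal" -EnableIPDisplayOnBoot -AdministratorPassword (convertto-securestring -string "AVerySecurePassPhrase" -asplaintext -force) -EnableRemoteManagementPort -Language EN-US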

A minute or so later, a 439 MB VHD was created in the newly created C:\Nano\Nano1.

Recreating a Nano Server VM

If you’re playing with Nano Server in a lab then you’ll create VMs with name reuse. If you do this with domain join then you might encounter a failure:

WARNING: Failed with 2224.
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

Open the log and you’ll find:

Provisioning the computer…

Failed to provision [Nano1] in the domain [prev.internal]: 0x8b0.

It may be necessary to specify /REUSE when running

djoin.exe again with the same machine name.

Computer provisioning failed: 0x8b0.

The account already exists.

That’s one of those “ding-ding-ding aha!” moments. The computer account already exists in AD so delete the account and start over.
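
If you have the Active Directory PowerShell module handy, that clean-up is a one-liner (the account name is whatever you used for the VM):

Remove-ADComputer -Identity Nano1 -Confirm:$false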

Creating Additional VMs

Once you have run the above process, C:\Nano\Base will be populated with files from the ISO (\NanoServer). This means that you can drop the -MediaPath flag and eject the ISO.

New-NanoServerImage -BasePath C:\Nano\Base -TargetPath C:\Nano\Nano2 -GuestDrivers -ComputerName "Nano2" -DomainName "prev.internal" -EnableIPDisplayOnBoot -AdministratorPassword (convertto-securestring -string "AVerySecurePassPhrase" -asplaintext -force) -EnableRemoteManagementPort -Language EN-US

Step 5 – Move the Computer Account

In AD, move the computer account for the new Nano Server to the required OU so it gets any required policies on the first boot – remember that this sucker has no UI, so GPO and stuff like Desired State Configuration (DSC) will eventually be the best way to configure Nano Server.

Step 6 – Create a VM

The above process prepares a VHD for a Generation 1 virtual machine. Create a Generation 1 VM, and attach the VHD to the boot device. Connect to the VM and power it up. A couple of seconds will pass and a login screen will appear:

image
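
For reference, creating and starting that VM in PowerShell might look something like this – a hedged sketch where the VHD file name, memory size, and switch name are my assumptions:

New-VM -Name Nano1 -Generation 1 -MemoryStartupBytes 512MB -VHDPath C:\Nano\Nano1\Nano1.vhd -SwitchName External 
Start-VM -Name Nano1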

Log in with your local admin or domain credentials and you’ll be greeted with the console. Note that I enabled the IP address to be displayed during the setup:

image

Step 7 – Manage the Nano Server VM

If you want to do some management work then you’ll need to:

  • Wait for the eventual remote management console that was quickly shown at Ignite 2015.
  • Use PowerShell remoting.
  • Use PowerShell Direct (new in WS2016).

If you have network access to the VM then you can use remoting:

Enter-PSSession -ComputerName Nano1 -Credential prev\administrator

Troubleshooting network issues with Nano Server can be a dog because there is no console that you can log into. However … you can use PowerShell Direct with no network access to the VM, via the Hyper-V guest OS integration components:

Enter-PSSession -VMName Nano1 -Credential prev\administrator

Tip: Most AD veterans start network troubleshooting with DNS – it’s nearly always the cause. In my lab, I have 3 domains, so 3 sets of DNS. My DHCP scope sets up one domain’s DNS server as the primary, and that can cause issues. A PowerShell Direct session to the VM and some Set-DnsClientServerAddress sorted things out.
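
That fix looked roughly like this – the interface alias and DNS server address are assumptions from my lab:

Enter-PSSession -VMName Nano1 -Credential prev\administrator 
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses "172.16.1.40" 
Exit-PSSession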

Microsoft News – 30 September 2015

Microsoft announced a lot of stuff at AzureCon last night so there’s lots of “launch” posts to describe the features. I also found a glut of 2012 R2 Hyper-V related KB articles & hotfixes from the last month or so.

Hyper-V

Windows Server

Azure

Office 365

EMS

Upgrade An Azure-Hosted Service By Moving A VIP To A New Cloud Service

Last Friday I talked about how you could reserve and manipulate cloud service VIPs. In this post I’m going to show you how to “upgrade” a service by moving to a new installation of that service running in a new cloud service – this can be done by moving the VIP of the original cloud service to the new cloud service.

Have you wondered how you will upgrade your WS2012 R2 VMs to WS2016 in Azure? The answer is that you won’t. You will have to migrate services to new VMs. Here’s a way to do that migration. This process will keep the original installation running while the new service is being built. Once ready, the VIP (the public IP of the original service) is migrated to the newer cloud service. If all goes well, you remove the old cloud service. If all sucks, you migrate the VIP back to the original cloud service.

In my lab I have two cloud services:

  • OldWeb: This runs a WS2012 R2 VM with IIS
  • NewWeb2016: This runs a WS2016 VM with IIS

image

image

Let’s say I have a site called http://www.joeelway.com. The A records for joeelway.com and http://www.joeelway.com will point to the VIP of the OldWeb cloud service; this is what allows a browser to connect to that site. If I don’t have a reserved VIP then I can create one easily enough with:

New-AzureReservedIP -ReservedIPName "WebsiteVIP" -Location "North Europe" -ServiceName "OldWeb"

This will reserve the existing IPv4 address that is used by OldWeb. It is a non-disruptive change that simply fixes the existing dynamic IP address to the cloud service. I can continue to browse to the website using the same VIP as when it was dynamic.

image

image

Now I can build up a new web application using the NewWeb2016 cloud service. This has zero impact on the OldWeb cloud service, running side-by-side but using a different (probably dynamic) VIP:

image

The A records for the joeelway.com domain continue to point at the reserved VIP for OldWeb, so users are still going to the old service.

And then we plan a switchover, with all of the necessary data copy/replication/synchronisation, change controls, reviews, communications, etc. How do I make the change? It’s simple; we run two cmdlets to change the reserved IP association.

The first cmdlet will remove the association of the reserved VIP from the OldWeb cloud service. This forces the old service to get a new dynamic VIP:

Remove-AzureReservedIPAssociation -ReservedIPName "WebsiteVIP" -ServiceName "OldWeb"

This cmdlet takes a few minutes to run so plan for the associated outage that will be caused. The A records for the joeelway.com domain continue to point at the reserved VIP, which is no longer associated with a service. If you browse to the VIP the connection will time out:

image

We want to avoid such a time out experience for the site’s users so we will very quickly associate the VIP with the new cloud service to minimise downtime (scripting is perfect for this!):

Set-AzureReservedIPAssociation -ReservedIPName "WebsiteVIP" -ServiceName "NewWeb2016"

The A records continue to resolve to the reserved VIP, and now the VIP is associated to the new cloud service:

image

If all goes well, you can decommission the old cloud service (VMs, etc), but you can leave them running for a little while as a rollback plan (sketched after this list):

  1. Remove the VIP association from the new cloud service
  2. Set the VIP association with the old cloud service
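
In PowerShell, that rollback simply reverses the two association cmdlets from earlier:

Remove-AzureReservedIPAssociation -ReservedIPName "WebsiteVIP" -ServiceName "NewWeb2016" 
Set-AzureReservedIPAssociation -ReservedIPName "WebsiteVIP" -ServiceName "OldWeb"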

You have to admit that, even if you are a PowerShell hater, this is a nice way to switch clients to a new version of a service.

Configuring Windows Server Containers To Use DHCP Instead Of NAT

Read on if you want to learn how to connect Windows Server Containers to an external virtual switch so that you don’t use NAT, and the containers actually talk directly to the LAN via DHCP-assigned addresses. You’ll also see why a DHCP-enabled container fails to get an address and ends up with a 169.254.x.x APIPA IPv4 configuration.

If you use Microsoft’s setup scripts for Windows Server 2016 (WS2016) Technical Preview 3 (TPv3), the default configuration for container networking is that each VM host will have a virtual switch (in the VM), connected to the VM’s vNIC. The virtual switch works in NAT mode, and uses a private network range to dynamically address containers that connect to the virtual switch. This setup requires each container to have NAT rules on the VM host so that external clients can connect to the services running in the containers. That … could be messy. On one hand, it could allow for huge network scalability (with tens of thousands of possible ports per VM host), but on the other, it could be a nightmare to orchestrate.

What if you wanted your containers to talk directly on the LAN? In other words: no NAT. Yes, your containers can do this, and it’s known as a DHCP configuration – your containers are stateless, so it’s pointless to assign them static IP addresses; instead, the containers will get their addressing from DHCP services on the LAN.

Remember that there are two scripts that we can run to set up a VM host.

  • Method 1: You download New-ContainerHost.ps1 and run it. This downloads a bunch of stuff, creates a VM host, and then runs Install-ContainerHost.ps1. By default, this will configure the VM host with NAT networking.
  • Method 2: You create your own VM, download and run Install-ContainerHost.ps1. By default, you’ll get NAT networking.

But …

Install-ContainerHost.ps1 includes the option for a flag:

image

If you use method 2 then you could run Install-ContainerHost in the new VM host with the -UseDHCP flag set to $true; the behaviour of the script will change. By default it creates the VM host’s virtual switch in NAT mode. But enabling this flag creates an external virtual switch.
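
For method 2, that call would look something like this (assuming you run the script from the folder you downloaded it to):

.\Install-ContainerHost.ps1 -UseDHCP $true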

In my lab, I like to create my VM hosts using New-ContainerHost because it’s very quick (thanks to the use of differencing disks) and automates the entire setup. But New-ContainerHost doesn’t include the option for UseDHCP. You could edit any call of Install-ContainerHost from New-ContainerHost, but I do it another way.

Instead I edit Install-ContainerHost. One small change will do the trick. Not far from the top is where the parameters are set as script variables. Look for a line that reads:

$UseDHCP,

Modify this line so it reads:

$UseDHCP = $true,

image

Now every time I either run Install-ContainerHost or New-ContainerHost I’ll get the DHCP networking configuration instead of NATing.

So try this to create/configure a VM host, create a container, use Enter-PSSession to connect to the container, run IPConfig and … voilà, you’ll have no DHCP address. Say what?

I was stumped. I tried it again. Nothing. I asked for help and by the time I got home, I got a tip from one of the folks in Redmond. It proved to be my “I’m a moron” moment of the day. If I’d thought about it, DHCP is all about broadcasts and MAC addresses. I have a single VLAN set up in the lab, so broadcasts weren’t the issue. What’s going on with MACs? A VM host has a MAC for itself. And then each container on the VM host that connects to the virtual switch has its own MAC address … but the network sees only one interface. Have you figured it out yet?

By default, Hyper-V has MAC spoofing disabled on every virtual NIC – a virtual NIC can only have 1 MAC address. What I needed to do was, at the host level, run the following to enable MAC spoofing on the VM host’s virtual NIC:

Get-VMNetworkAdapter -VMName containers3 | Set-vmNetworkAdapter -MacAddressSpoofing On

Now everything works 🙂

Windows Server Containers – “Enter-PSSession : The term ‘Measure-Object’ Is Not Recognized”

If you’ve been working with Windows Server Containers in Windows Server 2016 (WS2016) Technical Preview 3 (TPv3) then you’ve probably experienced something like this:

  1. You create a new container
  2. Then start the container
  3. And try to create a PowerShell session into the container using Enter-PSSession

And then there’s lots of red on the screen:

enter-pssession : The term ‘Measure-Object’ is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.
At line:1 char:1
+ enter-pssession -ContainerId $container.ContainerId
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (Measure-Object:String) [Enter-PSSession], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException

image

Strangely, I have not been able to recreate this with Invoke-Command, so it appears to be unique to how Enter-PSSession sets up a session in the container.

So how do you solve the issue? It’s simple – you rushed from starting your container to trying to log into it. Wait a few seconds and then try again.
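
In a script, something as crude as this gets past it – a hedged workaround rather than an official fix, and it assumes you stored the container object in $Container as in my other container posts:

Start-Container -Name TestContainer 
Start-Sleep -Seconds 10 
Enter-PSSession -ContainerId $Container.ContainerId -RunAsAdministrator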

Logging Into Windows Server Containers

How do you log into a container to install software? Ah … you don’t actually log into a container because a container is not a virtual machine. Confusing? Slightly!

What you actually do is remotely execute commands inside of a container; this is actually something like PowerShell Direct, a new feature in Windows Server 2016 (WS2016).

There are two ways to run commands inside of a container.

Which Container?

In Technical Preview 3 (TPv3), the methods we will use to execute commands inside of a container don’t use the name of the container; instead they use a unique container ID. This is because containers can have duplicate names – I really don’t like that!

So, if you want to know which container you’re targeting then do something along the lines of the following to store the container ID. The first example creates a new container and stores the resulting container’s metadata in a variable object called $Container.

$Container = New-Container -Name TestContainer -ContainerImageName WindowsServerCore

Note that I didn’t connect this container to a virtual switch!

The following example retrieves a container, assuming that it has a unique name.

$Container = Get-Container TestContainer

Invoke-Command

If you want to fire a single command into a container then Invoke-Command is the cmdlet to use. This method sends a single instruction into a container. This can be a command or a script block. Here’s a script block example:

Invoke-Command -ContainerID $Container.ContainerId -RunAsAdministrator -ScriptBlock { New-Item -Path C:\RemoteTest -ItemType Directory }

Note how I’m using the ContainerID attribute of $Container to identify the container.

The nice thing about Invoke-Command is that it is not interactive; the command remotely runs the script block without an interactive login. That makes Invoke-Command perfect for scripting; you write a script that deploys a container, starts it, does some stuff inside of the container, and then configures networking in the VM host. Lots of nice automation, there!

Enter-PSSession

If you want an interactive session with a container then Enter-PSSession is the way to go. Using this cmdlet you get a PowerShell session in the container where you can run commands and see the results. This is great for once-off stuff and troubleshooting, but it’s no good for automation/scripting.

Enter-PSSession -ContainerID $Container.ContainerId –RunAsAdministrator

Warning – In TPv3 we’ve seen that rushing into running this cmdlet after creating your new container can lead to an error. Wait a few seconds before trying to connect to the container.

No Network Required!

These methods use something like PowerShell Direct, a new feature in WS2016 – it’s actually PowerShell via a named pipe. The above example deliberately created a container that has no networking. I can still run commands inside of the container or get an interactive PowerShell session inside of the container without connectivity – I just need to be able to get onto the VM host.