Optimize Hyper-V VM Placement To Match CSV Ownership

This post shares a PowerShell script to automatically live migrate clustered Hyper-V virtual machines to the host that owns the CSV that the VM is stored on. The example below should work nicely with a 2-node cluster, such as a cluster-in-a-box.

For lots of reasons, you get the best performance for VMs on a Hyper-V cluster if:

  • Host X owns CSV Y AND
  • The VMs that are stored on CSV Y are running on Host X.

This continues into WS2016, as we’ve seen by analysing the performance enhancements of ReFS for VHDX operations. In summary, the ODX-like enhancements work best when the CSV and VM placement are identical as above.

I wrote a script, with little bits taken from several places (scripting is the art of copy & paste), to analyse a cluster and then move virtual machines to the best location. The method of the script is:

  1. Move CSV ownership to what you have architected.
  2. Locate the VMs that need to move.
  3. Order that list of VMs based on RAM. I want to move the smallest VMs first in case there is memory contention.
  4. Live migrate VMs based on that ordered list.

What’s missing? Error handling 🙂

What do you need to do?

  • You need to add variables for your CSVs and hosts.
  • Modify/add lines to move CSV ownership to the required hosts.
  • Balance the deployment of your VMs across your CSVs.

Here’s the script. I doubt the code is optimal, but it works. Note that the Live Migration command (Move-ClusterVirtualMachineRole) has been commented out so you can see what the script will do without it actually doing anything to your VM placement. Feel free to use, modify, etc.

#List your CSVs 
$CSV1 = "CSV1" 
$CSV2 = "CSV2"

#List your hosts 
$CSV1Node = "Host01" 
$CSV2Node = "Host02"

function ListVMs () 
{ 
    $Cluster = Get-Cluster 
    Write-Host "`n`n`n`n`n`nAnalysing the cluster $Cluster ..."

    $AllCSV = Get-ClusterSharedVolume -Cluster $Cluster | Sort-Object Name

    $VMMigrationList = @()

    ForEach ($CSV in $AllCSV) 
    { 
        $CSVVolumeInfo = $CSV | Select -Expand SharedVolumeInfo 
        $CSVPath = ($CSVVolumeInfo).FriendlyVolumeName

        $FixedCSVPath = $CSVPath -replace '\\', '\\'

        #Get the VMs where VM placement doesn't match CSV ownership
        $VMsToMove = Get-ClusterGroup | ? {($_.GroupType -eq 'VirtualMachine') -and ($_.OwnerNode -ne $CSV.OwnerNode.Name)} | Get-VM | Where-Object {($_.Path -match $FixedCSVPath)} 

        #Build up a list of VMs including their memory size 
        ForEach ($VM in $VMsToMove) 
        { 
            $VMRAM = (Get-VM -ComputerName $VM.ComputerName -Name $VM.Name).MemoryAssigned

            $VMMigrationList += ,@($VM.Name, $CSV.OwnerNode.Name, $VMRAM) 
        }

    }

    #Order the VMs based on memory size, ascending 
    $VMMigrationList = $VMMigrationList | sort-object @{Expression={$_[2]}; Ascending=$true}

    Return $VMMigrationList 
}

function MoveVM ($TheVMs) 
{

    foreach ($VM in $TheVMs) 
        { 
        $VMName = $VM[0] 
        $VMDestination = $VM[1] 
        Write-Host "`nMove $VMName to $VMDestination" 
        #Move-ClusterVirtualMachineRole -Name $VMName -Node $VMDestination -MigrationType Live 
        }

}

cls

#Configure which node will own which CSV 
Move-ClusterSharedVolume -Name $CSV1 -Node $CSV1Node | Out-Null 
Move-ClusterSharedVolume -Name $CSV2 -Node $CSV2Node | Out-Null

$SortedVMs = @()

#Get a sorted list of VMs, ordered by assigned memory 
$SortedVMs = ListVMs

#Live Migrate the VMs, so that their host is also their CSV owner 
MoveVM $SortedVMs

Possible improvements:

  • My ListVMs algorithm probably can be improved.
  • The Live Migration piece also can be improved. It only does 1 VM at a time, but you could implement parallelism using jobs – see the sketch after this list.
  • Quick Migration should be used for non-running VMs. I haven’t handled that situation.
  • You could opt to use Quick Migration for low priority VMs – if that’s your policy.
  • The script could be modified to start using parameters, e.g. Analyse (not move), QuickMigrateLow, QuickMigrate (instead of Live Migrate), etc.
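
For example, here’s a rough sketch of that jobs idea. It assumes the same $SortedVMs array produced by ListVMs and picks an arbitrary throttle of 4 simultaneous migrations – treat it as a starting point, not tested production code.

#Sketch: run up to 4 Live Migrations at once using background jobs
$MaxParallel = 4

foreach ($VM in $SortedVMs)
{
    #Wait for a free job slot before starting the next migration
    while (@(Get-Job -State Running).Count -ge $MaxParallel)
    {
        Start-Sleep -Seconds 5
    }

    Start-Job -ArgumentList $VM[0], $VM[1] -ScriptBlock {
        param ($VMName, $VMDestination)
        Move-ClusterVirtualMachineRole -Name $VMName -Node $VMDestination -MigrationType Live
    }
}

#Wait for the remaining migrations to finish and show the results
Get-Job | Wait-Job | Receive-Job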

Don’t Deploy KB3161606 To Hyper-V Hosts, VMs, or SOFS

Numerous sources have reported that KB3161606, an update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 (WS2012 R2), is breaking the upgrade of Hyper-V VM integration components. This has been confirmed & Microsoft is aware of the situation.

As noted below by the many comments, Microsoft eventually released a superseding update to resolve these issues.

The scenario is:

  1. You deploy the update to your hosts – which upgrades the ISO for the Hyper-V ICs.
  2. You deploy the update to your VMs because it contains many Windows updates, not just the ICs.
  3. You attempt to upgrade the ICs in your VMs to stay current. The upgrade will fail.

Note that if you upgrade the ICs before deploying the update rollup inside of the VM, then the upgrade works.
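
If you want to see where the rollup has already landed before deciding anything, a quick check is to query installed hotfixes remotely – a sketch only, where the computer names are placeholders and remote access to each machine is assumed:

#Check which hosts and VMs already have KB3161606 installed - names are examples
$Computers = "Host01", "Host02", "VM01", "VM02"
Get-HotFix -Id KB3161606 -ComputerName $Computers -ErrorAction SilentlyContinue | Select-Object Source, HotFixID, InstalledOn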

My advice is the same as it has been for a while now. If you have the means to manage updates, then do not approve them for 2 months (I used to say 1 month, but System Center Service Manager decided to cause havoc a little while ago). Let someone else be the tester that gets burned and fired.

Here’s hoping that Microsoft re-releases the update in a way that doesn’t require uninstalls. Those who have done the deployment already in their VMs won’t want another painful maintenance window that requires uninstall-reboot-install-reboot across all of their VMs.
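
For anyone who does end up rolling it back, the uninstall itself is at least scriptable – a sketch to run inside each guest during a maintenance window (reboot separately when it suits you):

#Remove the rollup from the guest; /norestart lets you schedule the reboot yourself
wusa.exe /uninstall /kb:3161606 /quiet /norestart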

EDIT (6/7/2016)

Microsoft is working on a fix for the Hyper-V IC issue. After multiple reports of issues on scale-out file servers (SOFS), it’s become clear that you should not install KB3161606 on SOFS clusters either.

Playing with WS2016 Hyper-V – Nested Virtualization, Nano, SET, and PowerShell Direct

I have deployed Technical Preview 5 (TP5) of Windows Server 2016 (WS2016) to most of the hardware in my lab. One of the machines, a rather old DL380 G6, is set up as a standalone host. I’m managing it using Remote Server Administration Toolkit (RSAT) for Windows 10 (another VM).

I enabled Hyper-V on that host. I then deployed 4 x Generation 2 VMs using Nano Server (domain pre-joined using .djoin files) – this keeps the footprint tiny and the boot times are crazy fast.

Hyper-V is enabled in the Nano VMs – thanks to the addition of nested virtualization. I’ve also clustered these machines. Networking-wise, I have given each VM 2 x vNICs, each with MAC spoofing (for nested VMs) and NIC teaming enabled.
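
For reference, clustering the Nano VMs uses the same Failover Clustering cmdlets as any other WS2016 machines – a minimal sketch, where the node names, cluster name, and cluster IP address are assumptions rather than my actual lab values:

#Minimal sketch - node names, cluster name, and IP address are assumptions
Test-Cluster -Node Nano1, Nano2, Nano3, Nano4
New-Cluster -Name NanoClu -Node Nano1, Nano2, Nano3, Nano4 -StaticAddress 172.16.2.60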

I launched PowerShell ISE and then used Enter-PSSession to connect to the host from the admin PC. From the host, I used Enter-PSSession -VMName to use PowerShell Direct to get into each VM – this gives me connectivity without depending on the network, which matters because I wanted to deploy Switch Embedded Teaming (SET) and provision networking in the Nano VMs. This script configures each VM with 3 vNICs for the management OS, connected to a vSwitch that uses both of the Nano VM’s vNICs as teamed uplinks:

#Host-specific index used in the IP addresses below
$idx = 54

#Create a SET vSwitch from both of the VM's vNICs
New-VMSwitch -Name External -NetAdapterName "Ethernet","Ethernet 2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

#Add management OS vNICs for management and SMB traffic
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName External
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName External
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName External

Sleep 10

#Configure IP addressing for each management OS vNIC
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 172.16.2.$idx -PrefixLength 16 -DefaultGateway 172.16.1.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses "172.16.1.40"

New-NetIPAddress -InterfaceAlias "vEthernet (SMB1)" -IPAddress 192.168.3.$idx -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (SMB2)" -IPAddress 192.168.4.$idx -PrefixLength 24

Note: there’s no mention of RDMA because I’m working in a non-RDMA scenario – a test/demo lab. Oh yes; you can learn Hyper-V, Live Migration, Failover Clustering, etc on your single PC now!

And in no time, I had myself a new Hyper-V cluster with a tiny physical footprint, thanks to 4 new features in WS2016.

Windows Server Technical Preview 5 is Out

Microsoft has released Technical Preview 5 of Windows Server 2016 and Hyper-V Server 2016. There is also an Essentials edition preview.

image

As you can see, plans for licensing have not changed since I last spoke about this topic … and you voted … a lot.

Here is a listing of what’s new in the technical preview (this includes TP1-4). And here are the official listings for:

 

 


Linux Integration Services 4.1 for Hyper-V

Microsoft has released a new version of the integration components for Linux guest operating systems running on Hyper-V (2008, 2008 R2, 2012, 2012 R2, and 2016 Technical Preview, Windows 8, Windows 8.1, and Azure).

What’s new?

  • Expanded Releases: now applicable to Red Hat Enterprise Linux, CentOS, and Oracle Linux with Red Hat Compatible Kernel versions 5.2, 5.3, 5.4, and 7.2.
  • Hyper-V Sockets.
  • Manual Memory Hot Add.
  • SCSI WWN.
  • lsvmbus.
  • Uninstallation scripts.

Broadcom & Intel Network Engineers Need A Good Beating

Your virtual machines lost network connectivity.

Yeah, Aidan Smash … again.

READ HERE: I’m tired of having to tell people to:

Disable VMQ on 1 GbE NICs … no matter what … yes, that includes you … I don’t care what your excuse is … yes; you.

That’s because VMQ on 1 GbE NICs is:

  • On by default, despite the requests and advice of Microsoft
  • Known to break Hyper-V networking

Here’s what I saw on a brand new Dell R730, factory fresh with a NIC firmware/driver update:

image

Now what do you think is the correct action here? Let me give you the answer:

  1. Change Virtual Machine Queues to Disabled
  2. Click OK
  3. Repeat on each 1 GbE NIC on the host.
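
If you would rather script it than click through driver property pages, something like this should do the same job – a sketch only, so check the adapter names and link speeds that Get-NetAdapter reports on your own host before running it:

#Disable VMQ on specific adapters - the names are placeholders for your host
Disable-NetAdapterVmq -Name "NIC1", "NIC2"

#Or find the 1 GbE NICs by link speed and disable VMQ on each of them
Get-NetAdapter | Where-Object { $_.LinkSpeed -eq "1 Gbps" } | ForEach-Object { Disable-NetAdapterVmq -Name $_.Name }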

Got any objections to that? Go to READ HERE above. Still got questions? Go to READ HERE above. Got some objections? Go to READ HERE above. Want to comment on this post? Go to READ HERE above.

This BS is why I want Microsoft to disable all hardware offloads by default in Windows Server. The OEMs cannot be trusted to deploy reliable drivers/firmware, and neither can many of you be trusted to test/configure the hosts correctly. If the offloads are off by default then you’ve opted to change the default, and it’s up to you to test – all blame goes on your shoulders.

So what modification do you think I’m going to make to these new hosts? See READ HERE above 😀

EDIT:

FYI, basic 1 GbE networking was broken on these hosts when I installed WS2012 R2 with all Windows Updates – the 10 GbE NICs were fine. I had to deploy firmware and driver updates from Dell to get the R730 to reliably talk on the network … before I did what is covered in READ HERE above.

My WS2016 Hyper-V Session at Future Decoded

I had fun presenting at this Microsoft UK event in London. Here’s a recording of my session on Windows Server 2016 (WS2016) Hyper-V, featuring failover clustering, storage, and networking:

 

More sessions can be found here.

Russinovich on Hyper-V Containers

We’ve known since Ignite 2015 that Microsoft was going to have two kinds of containers in Windows Server 2016 (WS2016):

  • Windows Server Containers: Providing OS and resource virtualization and isolation.
  • Hyper-V Containers: The hypervisor adds security isolation on top of that OS & resource isolation.

Beyond that general description, we knew almost nothing about Hyper-V Containers, other than to expect them in preview during Q4 of 2015 – Technical Preview 4 (TPv4) – and that they are the primary motivation for Microsoft to give us nested virtualization.

That also means that nested virtualization will come to Windows Server 2016 Hyper-V in TPv4.

We have remained in the dark since then, but Mark Russinovich appeared on Microsoft Mechanics (a YouTube webcast by Microsoft) and he explained a little more about Hyper-V Containers and he also did a short demo.

Some background first. Normally, a machine has a single user mode running on top of kernel mode. This is what restricts us to the “one app per OS” best practice/requirement, depending on the app. When you enable Containers on WS2016, an enlightenment in the kernel allows multiple user modes. This gives us isolation:

  • Namespace isolation: Each container sees its own file system and registry (the hives are in the container’s hosted files).
  • Resource isolation: How much process, memory, and CPU a container can use.

Kernel mode is already running when you start a new container, which improves the time to start up a container, and thus its service(s). This is great for deploying and scaling out apps because a containerised app can be deployed and started in seconds from a container image with no long-term commitment, versus minutes for an app in a virtual machine with a longer-term commitment.

image

But Russinovich goes on to say that while containers are great for some things that Microsoft wants to do in Azure, they also have to host “hostile multi-tenant code” – code uploaded by Microsoft customers that Microsoft cannot trust and that could be harmful or risky to other tenants. Windows Server Containers, like their Linux container cousins, do not provide security isolation.

In the past, Microsoft has placed such code into Hyper-V (Azure) virtual machines, but that comes with a management and direct cost overhead. Ideally, Microsoft wants to use lightweight containers with the security isolation of machine virtualization. And this is why Microsoft created Hyper-V Containers.

Hyper-V provides excellent security isolation (far fewer vulnerabilities found than vSphere) that leverages hardware isolation. DEP is a requirement. WS2016 is introducing IOMMU support, VSM, and Shielded Virtual Machines, with a newly hardened hypervisor architecture.

Hyper-V Containers use the exact same code and container images as Windows Server Containers. That makes your code interchangeable – Russinovich shows a Windows Server Container being switched into a Hyper-V Container by using PowerShell to change the run type (container attribute RuntimeType).

The big difference between the two types, other than the presence of Hyper-V, is that Hyper-V Containers get their own optimized instance of Windows running inside of them, as the host for the single container that they run.

image

The Hyper-V Container is not a virtual machine – Russinovich demonstrates this by searching for VMs with Get-VM. It is a container, and is manageable by the same commands as a Windows Server Container.

In his demos he switches a Windows Server Container to a Hyper-V Container by running:

Set-Container -Name <Container Name> -RuntimeType HyperV

And then he queries the container with:

Get-Container -Name <Container Name> | fl Name, State, RuntimeType

So the images and the commands are common across Hyper-V Containers and Windows Server Containers. Excellent.

It looked to me that starting this Hyper-V Container is a slower operation than starting a Windows Server Container. That would make sense because the Hyper-V Container requires its own operating system.

I’m guessing that Hyper-V Containers either require or work best with Nano Server. And you can see why nested virtualization is required. A physical host will run many VM hosts. A VM host might need to run Hyper-V containers – therefore the VM Host needs to run Hyper-V and must have virtualized VT-x instructions.

Russinovich demonstrates the security isolation. Earlier in the video he queries the processes running in a Windows Server Container. There is a single CSRSS process in the container. He shows that this process instance is also visible on the VM host (same process ID). He then does the same test with a Hyper-V Container – the container’s CSRSS process is not visible on the VM host because it is contained and isolated by the child boundary of Hyper-V.

What about Azure? Microsoft wants Azure to be the best place to run containers – he didn’t limit this statement to Windows Server or Hyper-V, because Microsoft wants you to run Linux containers in Azure too. Microsoft announced the Azure Container Service, with investments in Docker and Mesosphere for deployment and automation of Linux, Windows Server, and Hyper-V containers. Russinovich mentions that Azure Automation and Machine Learning will leverage containers – this makes sense because it will allow Microsoft to scale out services very quickly, in a secure manner, but with less resource and management overhead.

That was a good video, and I recommend that you watch it.

 

Windows 10 Build 10565 Makes Nested Hyper-V Virtualisation … Possible!

One of the biggest hitting articles on my site, written in 2009 (!!!) is “Can You Install Hyper-V in a VM?”. The short answer has always been “yes, if you know how”, but the long/complete answer continues with “the hypervisor will not start and you will not be able to boot any virtual machines”.

This was because Hyper-V did not support nested virtualization – the ability to run Hyper-V in a VM that is running on Hyper-V (yes, I know there are hacks to get Hyper-V to run in a VM on VMware). A requirement of Hyper-V is a processor feature, VT-x from Intel or AMD-V from AMD. Hyper-V takes control of this feature and does not reveal it to the guests running on the host. This means that a system requirement of Hyper-V is not present in the virtual machine, and you cannot use the virtual machine as a real host.

Microsoft released Build 10565 of Windows 10 to Windows Insiders this week and announced that the much anticipated nested Hyper-V virtualization is included. Yup, I’ve tried it and it works. Microsoft has made this work by revealing processor virtualization on a per-VM basis to VMs that will be Hyper-V hosts – let’s call these VM hosts to keep it consistent with the language of Windows Server Containers. This means that I can:

  1. Install Hyper-V on a physical host
  2. Create a VM
  3. Enable nested virtualization for that VM, making it a VM host
  4. Install a guest OS in that VM host and enable Hyper-V
  5. Create VMs that will actually run in the VM host.

Applications of Nested Virtualization

I know lots of you have struggled with learning Hyper-V due to a lack of equipment. You might have a PC with some RAM/CPU/fast disk and can’t afford more, so how can you learn about Live Migration, SOFS, clustering, etc.? With nested virtualization, you can run lots of VMs on that single physical machine, and some of those VMs can be VM hosts, in turn hosting more VMs that you can run, back up, migrate, failover, and so on (eventually, because there are limitations at this point).

Consultants and folks like me have struggled with doing demonstrations on the road. At TechEd Europe and Ignite, I used a VPN connection back to a lab in Dublin where a bunch of physical machines resided. I know one guy that travels with a Pelicase full of Intel NUC PCs (a “cloud in a case”). Now, one high spec laptop with lots of SSD could do the same job, without relying on dodgy internet connections at event venues!

A big part of my job is delivering training. In the recent past, we nearly bought 20 rack servers (less space consumed than PCs, and more NICs than NUC can do) to build a hands-on training lab. With a future release of WS2016, all I need is some CPU and RAM, and maybe I’ll build a near-full experience hands-on training lab that I can teach Hyper-V, Failover Clustering, and SOFS with, instead of using the limited experience solution that Microsoft uses with Azure VMs (no nested virtualization at this time). Personally I think this feature could revolutionize how Hyper-V training is delivered, finally giving Microsoft something that is extremely badly required (official Hyper-V training is insufficient at this time).

Real world production uses include:

  • The possibility of hosted private cloud: Imagine running Hyper-V on Azure, so you can do private cloud in a public cloud! I think that might be pricey, but who knows!
  • Hyper-V Containers: Expected with TPv4 of WS2016, Hyper-V Containers will secure the boundaries between containerized apps.

It’s the latter that has motivated Microsoft to finally listen to our cries for this feature.

Release Notes

  • Nested virtualization is a preview feature and not to be used in production.
  • AMD-V is not supported at this time. Intel VT-x must be present and enabled in the physical host.
  • You cannot virtualize third-party hypervisors at this time – expect VMware to work on this.
  • The physical host and the VM host must be running Build 10565 or later. You cannot use Windows 10 GA, WS2012 R2 or WS2016 TPv3 as the physical host or the VM host.
  • Dynamic Memory is not supported.
  • The following features don’t work yet: Hot-memory resize, Live Migration, applying checkpoints, save/restore.
  • MAC spoofing must be enabled on the VNIC of the VM host.
  • Virtual Secure Mode (VSM) / Virtualization Based Security (VBS) / Credential Guard (a Windows 10 Enterprise feature) must be disabled to allow virtualization extensions.

Enabling Nested Virtualization

1 – Install the Physical Host

Install Build 10565 of Windows or later on the physical host. Enable the Hyper-V role and configure a virtual switch.

2 – Create a VM Host

Deploy a VM (static RAM) with Build 10565 or later as the guest OS. Connect the VM to the virtual switch of the physical host.

2 - Create VM with Static RAM

3 – Enable Nested Virtualization

Run the following, using an elevated PowerShell window, on the physical host to execute the enablement script (shared on GitHub):

Invoke-WebRequest https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/hyperv-tools/Nested/Enable-NestedVm.ps1 -OutFile ~/Enable-NestedVm.ps1

~/Enable-NestedVm.ps1 -VmName <VmName>

3 - Enable Nested Virtualization
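
For what it’s worth, the heart of what that script does appears to be exposing the processor’s virtualization extensions to the VM. On builds that surface it, the equivalent cmdlet – run on the physical host while the VM is powered off – looks like this (treat this as an assumption; the script also checks other prerequisites):

#Assumed equivalent of the enablement script's core step - run on the physical host while the VM is off
Set-VMProcessor -VMName <VmName> -ExposeVirtualizationExtensions $true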

4 – Enable MAC Spoofing

Run the following on the physical host, targeting the VM host. This will enable MAC spoofing on the VM host. Modify this cmdlet to specify a vNIC if the VM will have a NIC just for nested VMs to communicate on.

Set-VMNetworkAdapter -VMName <VMName> -MacAddressSpoofing on

5 – Enable Hyper-V in the VM Host

Enable the Hyper-V role in the VM host and configure a virtual switch on the vNIC that is enabled for MAC spoofing.

1 - Enable Hyper-V

6 – Create Nested VMs

Create VMs in the VM host, power them up and deploy operating systems. Have fun!

10 - Nested Virtualization in Action

And bingo, there you go!

How Useful is Nested Virtualization Now?

I won’t be rushing out to buy a new laptop or re-deploy the lab yet. I want to run this with WS2016 so I have to wait. I’ll wait longer for Live Migration support. So right now, it’s cool, but with WS2016 TPv4 (hopefully), I’ll have something substantial.

Create a WS2016 Nano Server Hyper-V VM

Setting up a Nano Server VM requires running some PowerShell. The instructions that I found out there aren’t that clear for a non-PowerShell guru, are wrong, or are incomplete. So let me clear up everything by showing you exactly what I am using to deploy Nano Server as a Windows Server 2016 (TPv3/Technical Preview 3) Hyper-V virtual machine.

Note: The process will probably change after I publish this post.

Step 1 – Make Folders

Create three folders on a computer with a fast disk. Note that I’ll use C: but maybe you should use a D: or something.

  • C:\Nano
  • C:\Nano\Base
  • C:\Scripts

Step 2 – Copy Scripts

Mount the WS2016 ISO – let’s assume that it mounts as E:. Copy two scripts from E:\NanoServer on the ISO to C:\Scripts:

  • new-nanoserverimage.ps1
  • convert-windowsimage.ps1

Step 3 – Dot The Scripts

Note that I missed out on this step because I had never encountered this sort of thing before – I’m an advocate of PowerShell but I’m no guru! If you do not run this step, New-NanoServerImage.ps1 will do nothing at all and wreck your head for 3 hours (it did for me!).

Open a PowerShell window with elevated privileges. Navigate to C:\Scripts. Run the following:

. .\convert-windowsimage.ps1

I know – it looks funny. Enter it exactly as above. This dot-sources the script, loading the function it contains into your session so that it can be run later.

Do the same again for new-nanoserverimage.ps1:

. .\new-nanoserverimage.ps1

Now we can build a new VHD with Nano Server pre-installed.

Step 4 – Create a VHD

You can now run New-NanoServerImage. Here’s what I ran:

New-NanoServerImage -MediaPath e:\ -BasePath C:\Nano\Base -TargetPath C:\Nano\Nano1 -GuestDrivers -ComputerName "Nano1" -DomainName "prev.internal" -EnableIPDisplayOnBoot -AdministratorPassword (convertto-securestring -string "AVerySecurePassPhrase" -asplaintext -force) -EnableRemoteManagementPort -Language EN-US

The above will prep a VHD for a VM called Nano1. I have configured the VM to join the prev.internal domain – note that this will require me to have suitable domain creds – a computer account is created in the domain. I enabled the Hyper-V guest drivers and allowed the IP of the VM to appear on the console. The VHD will be stored in C:\Nano\Nano1. Note that if this folder exists then the process will abort:

WARNING: The target directory already exists. If you want to rebuild this image, delete the directory first.
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

Note that I had to specify EN-US because, at this time, my default region of EN-IE was not available:

WARNING: The ‘en-ie’ directory does not exist in the ‘Packages’ directory (‘g:\NanoServer\Packages’).
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

I could have added other roles/packages to the VHD such as:

  • -Storage: For a SOFS cluster.
  • -Compute: To enable Hyper-V … useful when TPv4 (we guess) introduces guest virtualization.
  • -Clustering: To enable failover clustering in the VM.
  • -Defender: Adding security to the guest OS.

A minute or so later, a 439 MB VHD was created in the newly created C:\Nano\Nano1.

Recreating a Nano Server VM

If you’re playing with Nano Server in a lab then you’ll create VMs with name reuse. If you do this with domain join then you might encounter a failure:

WARNING: Failed with 2224.
WARNING: Terminating due to an error. See log file at:
C:\Users\ADMINI~1.LAB\AppData\Local\Temp\2\New-NanoServerImage.log

Open the log and you’ll find:

Provisioning the computer…

Failed to provision [Nano1] in the domain [prev.internal]: 0x8b0.

It may be necessary to specify /REUSE when running

djoin.exe again with the same machine name.

Computer provisioning failed: 0x8b0.

The account already exists.

That’s one of those “ding-ding-ding aha!” moments. The computer account already exists in AD so delete the account and start over.
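
If you have the Active Directory PowerShell module handy, deleting the stale account is a one-liner – a sketch, using the Nano1 name from the example above and assuming you have rights to delete computer objects:

#Delete the stale computer account so provisioning can recreate it
Remove-ADComputer -Identity Nano1 -Confirm:$false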

Creating Additional VMs

Once you have run the above process, C:\Nano\Base will be populated with files from the ISO (\NanoServer). This means that you can drop the -MediaPath flag and eject the ISO.

New-NanoServerImage -BasePath C:\Nano\Base -TargetPath C:\Nano\Nano2 -GuestDrivers -ComputerName "Nano2" -DomainName "prev.internal" -EnableIPDisplayOnBoot -AdministratorPassword (convertto-securestring -string "AVerySecurePassPhrase" -asplaintext -force) -EnableRemoteManagementPort -Language EN-US

Step 5 – Move the Computer Account

In AD, move the computer account for the new Nano server to the required OU so it gets any required policies on the first boot – remember that this sucker has no UI, so GPO and stuff like Desired State Configuration (DSC) will eventually be the best way to configure Nano Server.
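
If you would rather script that move than drag the account around in Active Directory Users and Computers, a sketch along these lines should do – the OU path is an assumption for my prev.internal lab:

#Move the Nano server's computer account to the target OU - the OU path is an example
Get-ADComputer -Identity Nano1 | Move-ADObject -TargetPath "OU=NanoServers,DC=prev,DC=internal"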

Step 6 – Create a VM

The above process prepares a VHD for a Generation 1 virtual machine. Create a Generation 1 VM, and attach the VHD to the boot device. Connect to the VM and power it up. A couple of seconds will pass and a log in screen will appear:

image
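
If you prefer to create that VM with PowerShell rather than Hyper-V Manager, here is a minimal sketch – the VM name, memory, VHD file name, and switch name are all assumptions:

#A sketch: create a Generation 1 VM from the prepared VHD and start it
New-VM -Name "Nano1" -Generation 1 -MemoryStartupBytes 512MB -VHDPath "C:\Nano\Nano1\Nano1.vhd" -SwitchName "External"
Start-VM -Name "Nano1"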

Log in with your local admin or domain credentials and you’ll be greeted with the console. Note that I enabled the IP address to be displayed during the setup:

image

Step 7 – Manage the Nano Server VM

If you want to do some management work then you’ll need to:

  • Wait for the eventual remote management console that was quickly shown at Ignite 2015.
  • Use PowerShell remoting.
  • Use PowerShell Direct (new in WS2016).

If you have network access to the VM then you can use remoting:

Enter-PSSession -ComputerName Nano1 -Credential prev\administrator

Troubleshooting network issues with Nano Server can be a dog because there is no console that you can log into. However … you can use PowerShell Direct with no network access to the VM, via the Hyper-V guest OS integration components:

Enter-PSSession -VMName Nano1 -Credential prev\administrator

Tip: Most AD veterans start network troubleshooting with DNS – it’s nearly always the cause. In my lab, I have 3 domains, so 3 sets of DNS. My DHCP scope sets up one domain’s DNS server as the primary, and that can cause issues. Some PowerShell Direct to the VM with some Set-DnsClientServerAddress sorted things out.
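
In practice, the fix amounts to something like this – the interface alias and DNS server address are examples based on my lab, so adjust them for your own environment:

#A sketch of the DNS fix over PowerShell Direct - alias and addresses are examples
Enter-PSSession -VMName Nano1 -Credential prev\administrator
Get-DnsClientServerAddress
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses "172.16.1.40"
Exit-PSSession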