Windows Server 8 Hyper-V Management Improvements

Another big investment by Microsoft in Windows Server 8 Hyper-V was in how we interact with the product.  With lots more functionality, some of it very advanced and not required by everyone, Microsoft had to decide how to present it in the GUI.  And with huge cluster scale-out (up to 64 hosts per cluster) and target markets such as hosting and large enterprise, automation was of great importance.

The GUI – Hyper-V Manager console (HMC)

On the face of it, the GUI has not changed much.  There is no ribbon bar and things can be found where they previously were in the Windows Server 2008 Hyper-V and Windows Server 2008 R2 Hyper-V HMCs.

Often we fire up the HMC just to look for information.  Tabs have been added to the lower centre pane of the HMC to show us information, e.g. summary, memory, networking, and Hyper-V Replica (aka Replica).

Nested Nodes

When you open the settings of a VM to change its configuration, you will notice that the CPU and Networking nodes on the left are nested.  There are sub-nodes with more settings.  This was done for a few reasons, including:

  • It cleans up the GUI.  Even with newly added scroll bars, there’s only so much you can squeeze into a single screen without making things messy and unusable.
  • It hides away advanced features that should only be used by engineers who know what they do and know that they need them, e.g. the NUMA override settings.

Clustering Interaction

A classic problem on the forums was when a person would edit the settings of a VM in the HMC and then live migrate the VM from one host to another in a host cluster.  Their new settings were lost because the cluster database was not updated.  You had to either use Failover Cluster Manager (FCM) to edit the VM settings (which auto-updates the cluster) or remember to manually refresh the VM resource in FCM after editing the VM in the HMC.

Now, the HMC will detect that a VM is clustered and prevent you from editing the settings.  You must use FCM instead, and quite right too!

VMConnect

Have you ever been ticked off when you use VMConnect to get a console connection to a VM and then you fail it over to another node in a Hyper-V Cluster?  Actually, ticked off isn’t the right word the first time you see this: you crap yourself when VMConnect loses the console connection to the VM and confusingly tells you that the VM must have been deleted!  That’s changing in Windows Server 8.  Yes, VMConnect will disconnect – briefly.  The source host for the VM will redirect the VMConnect session to the VM on the destination host.  No more tingling in the left arm or tightening of the heart when working on a VM at midnight.  Hyper-V engineers and their doctors thank you, Microsoft!

PowerShell 3.0 Cmdlets

The big change is that Hyper-V will have built-in PowerShell (POSH) cmdlets for the very first time in Windows Server 8.  Even to a POSH-disabled person like me, the cmdlets looked easy to use and very powerful in the hands-on lab that I did.  For you POSH purists, I’ve been told that the Hyper-V POSH cmdlet specs were written so things would be done the POSH way.

With some help I created a script that read in some specs from a CSV file, created a bunch of differencing disks, created lots of VMs, and connected them to those disks.  One test lab up and running in a few minutes, and it could be recreated at a moment’s notice.  I’m sure with more practice, I could have made the script much more elegant than I did in the limited time window.
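For illustration, here’s a rough sketch of the kind of script I mean, using the new Hyper-V cmdlets.  The CSV column names (Name, MemoryMB), all the file paths, and the parent disk are my own inventions for this example:

```powershell
# Sketch only: column names, paths, and the parent VHDX are invented.
$specs = Import-Csv "C:\Lab\VMSpecs.csv"

foreach ($spec in $specs) {
    # Create a differencing disk that points at a prepared parent disk
    $disk = "C:\Lab\VMs\$($spec.Name).vhdx"
    New-VHD -Path $disk -ParentPath "C:\Lab\Parent.vhdx" -Differencing

    # Create the VM and attach the differencing disk to it
    New-VM -Name $spec.Name -MemoryStartupBytes ([int64]$spec.MemoryMB * 1MB) -VHDPath $disk
}
```

A handful of lines, and a whole lab appears.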

It’s this sort of thing that the POSH cmdlets are intended to enable.  Big hosting companies can automate deployment from their “control panels”.  Enterprises can automate bulk configuration changes.  We people who demo can deploy a new lab in seconds.  And with POSH 3.0 Workflows you can build complex scripts that work reliably and in an orchestrated manner across many machines and applications.
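As a taster, the workflow syntax looks something like this — a trivial, hedged sketch with invented VM names, just to show the shape of a POSH 3.0 workflow:

```powershell
# Trivial sketch of POSH 3.0 workflow syntax; the VM names are invented.
workflow Start-LabVMs {
    # Start the lab VMs in parallel rather than one at a time
    foreach -parallel ($name in @("DC01", "SQL01", "WEB01")) {
        Start-VM -Name $name
    }
}
```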

Just like Exchange, not every admin function is in the GUI.  Some things will have to be done in POSH.  I guess I will have to learn it after all these years of saying “I will have to learn PowerShell”.

Windows Server 2012 Hyper-V Virtual Fibre Channel

You now have the ability to virtualise a fibre channel adapter in WS2012 Hyper-V.  This synthetic fibre channel adapter allows a virtual machine to directly connect to a LUN in a fibre channel SAN.

Benefits

It is one thing to make a virtual machine highly available.  That protects it against hardware failure or host maintenance.  But what about the operating system or software in the VM?  What if they fail or require patching/upgrades?  With a guest cluster, you can move the application workload to another VM.  This requires connectivity to shared storage.  Windows 2008 R2 clusters, for example, require SAS, fibre channel, or iSCSI attached shared storage.  SAS is out of the question for connecting VMs to shared storage.  Those on iSCSI were OK.  But those who had made the huge investment in fibre channel were left out in the cold, sometimes having to implement an iSCSI gateway to their FC storage.  Wouldn’t it be nice to allow them to use the FC HBAs in the host to create guest clusters?

Another example is where we want to provision really large LUNs to a VM.  As I posted a little while ago, VHDX expands out to 64 TB, so really you would need a requirement for LUNs beyond 64 TB to justify presenting physical LUNs to a VM and limiting its mobility.  But I guess with the expanded scalability of VMs, big workloads like OLTP can be virtualised on Windows 8 Hyper-V, and they require big disks.

What It Is

Virtual Fibre Channel allows you to virtualise the HBA in a Windows 8 Hyper-V host, have a virtual fibre channel adapter in the VM with its own WWN (actually, 2 to be precise), and connect the VM directly to LUNs in a FC SAN.

Windows Server 2012 Hyper-V Virtual Fibre Channel is not intended or supported to do boot from SAN.

The VM will share bandwidth on the host’s HBA (unless, I guess, you spend extra on additional HBAs) and cross the SAN to connect to the controllers in the FC storage solution.

The SAN must support NPIV (N_Port ID Virtualization).  Each VM can have up to 4 virtual HBAs.  Each virtual HBA has its own identification on the SAN.

How It Works

You create a virtual SAN on the host (parent partition) for each HBA on the host that will be virtualised for VM connectivity to the SAN.  This is a 1:1 binding between virtual SAN and physical HBA, similar to the old model of virtual network and physical NIC.  You then create virtual HBAs in your VMs and connect them to the virtual SANs.

And that’s where things can get interesting.  When you get into the FC world, you want fault tolerance with MPIO.  A mistake people will make is to create two virtual HBAs and put them both on the same virtual SAN, and therefore on a single FC path through a single HBA.  If that single cable breaks, or that physical HBA port fails, then the VM’s MPIO is pointless because both virtual HBAs are on the same physical connection.

The correct approach for fault tolerance will be:

  1. 2 or more HBA connections in the host
  2. 1 virtual SAN for each HBA connection in the host.
  3. 1 virtual HBA in the VM for each virtual SAN (so 2 or more per VM), with each one connected to a different virtual SAN
  4. MPIO configured in the VM’s guest OS.  In fact, you can (and should) use your storage vendor’s MPIO/DSM software in the VM’s guest OS.

Now you have true SAN path fault tolerance at the physical, host, and virtual levels.
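To put the above in cmdlet terms, here’s a hedged sketch.  The virtual SAN names and VM name are my own inventions, and the cmdlet and property names are as I saw them in the beta, so they may change:

```powershell
# Find the host's physical FC HBA ports (beta-era cmdlets; may change)
$ports = Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" }

# One virtual SAN per physical HBA port - a 1:1 binding
New-VMSan -Name "vSAN-A" -WorldWideNodeName $ports[0].NodeAddress -WorldWidePortName $ports[0].PortAddress
New-VMSan -Name "vSAN-B" -WorldWideNodeName $ports[1].NodeAddress -WorldWidePortName $ports[1].PortAddress

# Two virtual HBAs in the VM, each on a different virtual SAN, for MPIO
Add-VMFibreChannelHba -VMName "SQL01" -SanName "vSAN-A"
Add-VMFibreChannelHba -VMName "SQL01" -SanName "vSAN-B"
```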

Live Migration

One of the key themes of Hyper-V is “no new features that prevent Live Migration”.  So how does a VM that is connected to a FC SAN move from one host to another without breaking the IO stream from VM to storage?

There’s a little bit of trickery involved here.  Each virtual HBA in your VM must have 2 WWNs (either automatically created or manually defined), not just one.  And here’s why.  There is a very brief period where a VM exists on two hosts during live migration.  It is running on HostA and waiting to start on HostB.  The switchover process is that the VM is paused on A and started on B.  With FC, we need to ensure that the VM is able to connect and process IO.

So in the example below, the VM is connecting to storage using WWN A.  During Live Migration, the new instance of the VM on the destination host is set up with WWN B.  When the live migration un-pauses the VM on the destination host, the VM can instantly connect to the LUN and continue IO uninterrupted.  Each subsequent live migration, either back to the original host or to any other host, will cause the VM to alternate between WWN A and WWN B.  That holds true for each virtual HBA in the VM.  You can have up to 64 hosts in your Hyper-V cluster, but each virtual fibre channel adapter will alternate between just 2 WWNs.

Alternating WWN addresses during a live migration

What you need to take from this is that each VM’s LUNs need to be masked or zoned for both WWNs of every virtual HBA in that VM.
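If you want to gather those WWNs for the zoning/masking work, something like this sketch should do it (the cmdlet and property names are as I saw them in the beta, and the VM name is invented):

```powershell
# List both WWPN sets (A and B) of each virtual HBA in the VM so the
# storage admin can zone/mask the LUNs for both.  "SQL01" is invented.
Get-VMFibreChannelHba -VMName "SQL01" |
    Select-Object SanName, WorldWidePortNameSetA, WorldWidePortNameSetB
```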

Technical Requirements and Limits

First and foremost, you must have a FC SAN that supports NPIV.  Your host must run Windows Server 2012.  The host must have a FC HBA with a driver that supports Hyper-V and NPIV.  You cannot use virtual fibre channel adapters to boot VMs from the SAN; they are for data LUNs only.  The only supported guest operating systems for virtual fibre channel at this point are Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.

This is a list of the HBAs that have support built into the Windows Server 2012 Beta:

Vendor | Model
Brocade | BR415 / BR815
Brocade | BR425 / BR825
Brocade | BR804
Brocade | BR1860-1p / BR1860-2p
Emulex | LPe16000 / LPe16002
Emulex | LPe12000 / LPe12002 / LPe12004 / LPe1250
Emulex | LPe11000 / LPe11002 / LPe11004 / LPe1150 / LPe111
QLogic | Qxx25xx Fibre Channel HBAs

Summary

With supported hardware, virtual fibre channel allows supported Windows Server 2012 Hyper-V guests to connect to and use fibre channel SAN LUNs for data, enabling extremely scalable storage and in-guest clustering without compromising the uptime and mobility of Live Migration.

The Virtualisation Smackdown – Hyper-V VHDX Scales Out to 64 TB – Yes, I said 64 Terabytes!

I was gobsmacked when I learned this week that the new Windows 8/Windows Server 2012 (WS2012) format for virtual disks, VHDX, would have a maximum size of 64 TB.  64 TB!  Damn, I was impressed with the Build announcement that it would go out to 16 TB.  Even then, it was dwarfing the paltry 2040 GB that a vSphere 5.0 VMDK can do.  Wow, Hyper-V has vSphere smacked down on storage scalability; isn’t that a shocker!?

Back to the serious side of things … what does this mean?  One of the big reasons that people have implemented virtualisation (28.19% in the Great Big Hyper-V Survey of 2011) was flexibility.  What makes that possible is that virtual machines are normally just files, unbound to the hardware they reside on, unlike legacy physical OS installations and data storage.  A limiting factor on that has been the scalability of virtual disks.  Both VHD (pre-Windows 8 Hyper-V) and VMDK (all current versions of vSphere) are limited to 2040 GB.  The alternative is Raw Device Mapping (vSphere) or Passthrough disk (Hyper-V).

I hate this type of storage.  It’s bound to hardware because it’s just a raw LUN presented to a VM, and therefore it’s a hardware boundary that limits mobility and flexibility, and precludes other things that we can do such as Hyper-V Replica, snapshots, VSS backups of running VMs, etc.  Way too often I see people using Passthrough for “performance” reasons (usually with no assessment done and based on pure guesswork) without realising that even VHD has great performance (and I cannot wait for VHDX performance results to be published publicly).  The only real reason to use Passthrough disk, in my opinion, has been to scale a VM’s LUN beyond 2040 GB.

That changes with Windows Server 2012 Hyper-V.  I am thinking that Passthrough disk will become one of those things that is theoretical to 99.999999% of us.  It’ll be that exam question that no one can answer because no one ever does it.  Think about it: a 64 TB virtual disk that performs nearly as well as the physical disk it sits on.  Wow!
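Creating one of these monsters should be a one-liner with the new cmdlets — a hedged sketch, with an invented path:

```powershell
# Sketch: create a dynamically expanding 64 TB VHDX (the path is invented)
New-VHD -Path "D:\VMs\BigData.vhdx" -SizeBytes 64TB -Dynamic
```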

Question: So which virtualisation platform isn’t scalable or enterprise ready? *sniggers* I cannot wait to see the excuses that the competition come up with next.

There are other benefits to the VHDX format:

  • “Larger block sizes for dynamic and differencing disks, which allows these disks to attune to the needs of the workload.
  • A 4-KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4-KB sectors.
  • The ability to store custom metadata about the file that the user might want to record, such as operating system version or patches applied.
  • Efficiency in representing data (also known as “trim”), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks, and trim-compatible hardware.)”

There are other things I’d love to share about VHDX, but I’m not sure of their NDA status at the moment so I’ll be sitting on those facts until later.  Being an MVP ain’t easy!

Looking Back On MVP Summit 2012

This week was the highlight of the calendar for any Microsoft Most Valuable Professional (MVP): the annual MVP Summit in Redmond.  I cannot go into details on the sessions.  But I can say it was excellent, with some seriously deep stuff being presented by the product groups.

Throw into the mix meeting and chatting with many of the program managers who help set the direction of the products, chatting with Jeff Woolsey over dinner, hanging out with Ben “Virtual PC Guy” Armstrong before a session, spending time with lots and lots of my fellow MVPs, asking the hard questions so I can give the easy answers (trademark pending!), listening to serious experts who were genuinely interested in sharing and hearing our feedback, and even learning some PowerShell!

The Summit is a great chance to give feedback.  There were certain things (which I won’t go into) that we did get a chance to talk about.  It’s not often you get a chance to sit down with decision makers, talk over things they might not have considered, or try to get an understanding of why they went in a certain direction.  That helps us MVPs get your message to the product groups, and helps us explain product direction to you.

And oh yeah: thanks to Didier van Hoye (Virtual Machine MVP), a bunch of us had a visit to the Bill and Melinda Gates Foundation on Monday to tour their IT installation and chat about what they are doing.  Wow!  That was some impressive installation (again, sharing details is not possible).

I had my last Summit session this afternoon: an impromptu presentation by a PM that I had asked for, which he jumped at, and it helped me assemble a bunch of things in my mind.  Tonight, it’s a relaxed night out with Didier, Carsten Rachfahl (Virtual Machine) and Kerstin Rachfahl (Office 365).  I start the journey back to Ireland tomorrow night.  And then it’s the countdown to the System Center 2012 launch events with MSFT Ireland and another VAD Roadshow for work later this month.

BTW, I cannot wait to return next year.

Windows 8 App Switching Made Easy – AKA App Peekaboo

I found switching between apps on a busy touch-only machine kind of frustrating in the developer preview release of Windows 8.  There was no keyboard to hit ALT-TAB, and flipping between each app with the swipe-from-left gesture was annoying when there were more than just a couple of running apps.

When I was showing my updated slate to my MVP Summit roommate, I accidentally stumbled onto a feature I’d never noticed before, and I am now officially dubbing it App Peekaboo.

Normally, you swipe from the left of the screen to flip between apps.  If you pause a quarter of the way across the screen, then the new app is docked/snapped to take up a small piece of the screen for a side-by-side view.  However, if you just pull an app out and then return it back to the left, a different thing happens.  The side-by-side docked/snapped pane appears and shows all of your other running apps.  You just have to touch one to bring it up.  A big improvement.  At least I think it is … I never noticed this gesture before installing the Consumer Preview (beta).
