Windows Server 2012 R2 Hyper-V – Storage QoS

A common fear among cloud administrators is that some tenant or VM will go nuts and eat up a host’s bandwidth to the storage.  System Center has the ability to deal with this.  VMM Dynamic Optimization is like DRS in vSphere; it load balances workloads when a configured threshold is crossed.  And Performance and Resource Optimization (PRO) allows OpsMgr to detect an immediate issue and instruct VMM to use Intelligent Placement to react to it.

But maybe we want to prevent the issue from happening at all.  Maybe we want to cap storage bandwidth based on price bands – you pay more and you get faster storage.  Maybe a VM has gone nuts and we want to limit the damage it does while we figure out what has gone wrong.  Maybe we want alerts when certain VMs don’t have enough bandwidth; we could have an automated response in System Center to deal with that.

WS2012 R2 Hyper-V gives us Storage QoS.  We can configure Storage QoS on a per-virtual hard disk basis using the IOPS measurement:

  • Maximum: This is a hard cap on how many IOPS a virtual hard disk can perform
  • Minimum alert: We will get an alert if a virtual hard disk cannot perform at this minimum level

The settings can be configured while a virtual machine is running.  That allows a tenant to upgrade their plan and get more storage bandwidth without downtime.
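
Here is a minimal PowerShell sketch of setting both values on a running VM’s disk (the VM name and controller location are hypothetical examples; Storage QoS normalizes IOPS in 8 KB units):

    # Cap the disk at 1,000 IOPS and alert if it cannot achieve 200 IOPS
    # ("Tenant01" and the controller location are hypothetical values)
    Set-VMHardDiskDrive -VMName "Tenant01" `
        -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
        -MaximumIOPS 1000 -MinimumIOPS 200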

Note: there are IOPS PerfMon counters to help you figure out what good and bad metrics actually are.

Comparing WS2012 R2 Hyper-V and vSphere 5.1

Bring on the hate!  (which gets *ahem* moderated but those vFanboys will attempt to post anyway).  Matt McSpirit of Microsoft did his now regular comparison of the latest versions of Microsoft Windows Server 2012 R2 Hyper-V and VMware vSphere 5.1 at TechEd NA 2013 (original video & deck here).  Here are my notes on the session, where I contrast the features of Microsoft’s and VMware’s hypervisors.

Before we get going, remember that the free Hyper-V Server 2012 R2, Windows Server Standard, and Windows Server Datacenter all have the exact same Hyper-V, Failover Clustering, and storage client functionality.  And you license your Windows VMs on a per-host basis – and that’s the same on Hyper-V, VMware, XenServer, etc.  Therefore, if you run Windows VMs, you already have the right to run Hyper-V on Std/DC editions, and therefore Hyper-V is always free.  Don’t bother BSing me with contradictions to the “Hyper-V is free” fact … if you disagree then send me your employer’s name and address so I can call the Business Software Alliance to make an easy $10,000 reward.

Scalability

Most of the time this information is just “Top Gear” numbers.  Do I need a 1000 BHP car that can accelerate to 100 MPH in 4 seconds?  Nope, but it’s still nice to know that the muscle is there if I need it.  Microsoft agrees, so they haven’t done any work on these basic figures to extend maximum capacities from where they are with WS2012 Hyper-V.  The focus instead is on cloud, efficiency, and manageability.  But here you go anyway:

[Image: scalability comparison table]

If you want to compare like with like, then the free Hyper-V crushes the free vSphere hypervisor in every way, shape, and form.

The max VM numbers per host are a bit of a stretch.  But interestingly, I did encounter someone last year in London who would have used the maximum VM configuration.

Storage

Storage is the most expensive piece of the infrastructure, so it has had a lot of focus from Microsoft over the past 2 releases (WS2012 and WS2012 R2).

[Image: storage feature comparison]

In the physical world, WS2012 added virtual Fibre Channel, with support for Live Migration.  MPIO is possible using the SAN vendor’s own solution in the guest OS of the VM.  In the vSphere world, MPIO is only available in the most expensive versions of vSphere.  VMware still does not support native 4K-sector disks.  That rules out new storage hardware, and limits them to the slow read-modify-write (RMW) process on 512e disks.

In the VM space, Microsoft dominates.  WS2012 R2 allows complete resizing of VHDX attached to SCSI controllers (remember that Gen 2 VMs only use SCSI controllers, and data disks should always be on a SCSI controller in Gen 1 VMs).  In the vSphere world, you can grow your storage, but that cloud customer doesn’t get elasticity … no shrink I’m afraid, so keep on paying for that storage you temporarily used!

VHDX scales out to 64 TB.  Meanwhile, VMware is stuck in the 1990s with a 2 TB VMDK file.  I hate passthrough disks (raw device mapping) so I’m not even bothering to mention that Microsoft wins there too … oh wait … I just did :-)

ODX is supported in all versions of Hyper-V (that’s the way Hyper-V rolls) but you’ll only get that support in the 2 most expensive versions of vSphere.  Going without it will slow down your cloud deployments; for example, VMM 2012 R2 will deploy VMs/services from a library via ODX, and we can near-instantly create zeroed-out fixed VHD/X files on ODX-enabled storage.

Both platforms support boot from USB.  To be fair, this is only supported by MSFT if it is done with Hyper-V Server by an OEM, and no OEM offers this option that I know of.  VMware, on the other hand, offers boot from SD, which OEMs do ship.  VMware wins that minor one.

When you look at file-based storage, SMB 3.0 versus NFS, then Microsoft’s Storage Spaces crushes not just VMware, but the block storage market too.  Tiered storage is added in WS2012 R2 for read performance (hot blocks promoted to SSD) and write performance (a Write-Back Cache where data is temporarily written to SSD during write-activity spikes).

Memory

The biggest work a vendor can do on hypervisor efficiency is in memory, because host RAM is normally the first bottleneck to VM:host density.  VMware offers:

  • Memory overcommit: the closest thing to Hyper-V Dynamic Memory that VMware offers.  However, DM does not overcommit – overcommitting forces hosts to do second-level paging, which requires fast disk and reduces VM performance.  That’s why Hyper-V focuses on assigning memory based on demand without lying to the guest OS, and why DM does not overcommit.
  • Compression
  • Swapping
  • Transparent Page Sharing (TPS): This deduplication is not in Hyper-V.  I wonder how useful it is when the guest OS is Windows 8/Server 2012 or later?  Address space layout randomization and large memory pages render this feature pretty useless.  The deduplication also requires CPU effort (4K page comparison) … and it only occurs when host memory is under pressure.

[Image: memory feature comparison]

Hyper-V does do Resource Metering, and surfaces that data in System Center (Windows Azure Pack and Operations Manager).  To be fair, VMware makes the data more readily available via vCenter in a simpler virtualization (versus cloud) installation.  The free vSphere hypervisor does not present this data because there is no vCenter, whereas the data is gathered and available in all versions of Hyper-V.
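
For those without System Center, the raw metering data is a couple of cmdlets away.  A minimal sketch (the VM name is a hypothetical example):

    # Start gathering usage data for a VM, then read it back later
    Enable-VMResourceMetering -VMName "Tenant01"
    Measure-VM -VMName "Tenant01" | Format-List
    # Reset the counters at the start of a new billing period
    Reset-VMResourceMetering -VMName "Tenant01"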

Network QoS is a key piece in the converged networks story of Hyper-V, in all editions.  You’ll need the most expensive edition of vSphere to do Network QoS.
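
As a sketch of what that looks like on Hyper-V (names and values are hypothetical; the virtual switch must be created with -MinimumBandwidthMode Weight for the weight setting to apply):

    # Guarantee the VM a relative share of bandwidth and cap its maximum
    # (-MaximumBandwidth is specified in bits per second)
    Set-VMNetworkAdapter -VMName "Tenant01" `
        -MinimumBandwidthWeight 10 -MaximumBandwidth 1GB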

Before the vFanboys get all fired up, WS2012 R2 (all editions of Hyper-V) adds Storage QoS, configurable on a per virtual hard disk basis.  vSphere Enterprise Plus is required for Storage QoS.  Cha-ching!

Security & Multi-tenancy

Hyper-V is designed from the network up for multi-tenancy and tenant isolation:

  • Extensible virtual switch – add (not replace as with vSphere vSwitch) 3rd party functionality (more than 1 if you want) to the Hyper-V virtual switch
  • Hyper-V Network Virtualization (HNV aka Software Defined Networking aka SDN) – to be fair it requires VMM 2012 R2 to be used in production

[Image: security and multi-tenancy comparison]

Don’t give me guff about the number of partners; WS2012 Hyper-V had more network extension partners at RTM than vSphere did after years of supporting the replacement of its vSwitch.

[Image: virtual switch extensibility partners]

So, we keep the Hyper-V virtual switch and all of its functionality (such as QoS and HNV) if we add 3rd party network functionality, e.g. Cisco Nexus 1000v for Hyper-V.  On the other hand, the vSphere vSwitch is thrown out if you add 3rd party network functionality, e.g. Cisco Nexus 1000v for vSphere.

The number of partner extensions for Hyper-V shown above is actually out of date (it’s higher now).  I also think that the VMware number is now 3 – I’d heard something about IBM adding a product.

I’m not going line-by-line with this one.  Long-story short on cloud/security networking:

  • All versions of Hyper-V: yes
  • vSphere free: no or very restricted
  • vSphere: pay up for add-ons and/or the most expensive edition of vSphere

Networking Performance

Lots of asterisks for VMware on this one:

[Image: network performance comparison]

DVMQ automatically and elastically scales acceleration and hardware offload of inbound traffic to VMs beyond core 0 on the host.  Meanwhile in VMware-land, you’re bottlenecked to core 0.

On a related note, WS2012 R2 leverages DVMQ on the host to give us VRSS (virtual receive side scaling) in the guest OS.  That allows VMs to elastically scale processing of inbound traffic beyond just vCPU 0 in the guest OS.
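
A hedged sketch of checking and enabling RSS inside a WS2012 R2 guest OS to use vRSS (“Ethernet” is the typical default adapter name – adjust to suit):

    # Run inside the guest OS, not on the host
    Get-NetAdapterRss -Name "Ethernet"
    Enable-NetAdapterRss -Name "Ethernet"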

IPsec Task Offload remains a Hyper-V-only feature; it offloads to the NIC the CPU processing that is required when IPsec is enabled in a guest OS for security reasons.

SR-IOV allows host scalability and low latency VM traffic.  vSphere supports Single-Root IO Virtualization, but vMotion is disabled for those enabled VMs.  Not so on Hyper-V; all Hyper-V features must support Live Migration.

BitLocker is supported for the storage where VM files are placed in Hyper-V, including on CSV (the Hyper-V alternative to clustered VMFS).  In the VMware world, VM files are there for anyone to take if they have physical access – not great for insecure locations like branch offices or frontline military.

Linux

Let’s do myth debunking: Linux is supported on Hyper-V.  There is an ever-increasing number of explicitly supported (meaning you can call MSFT support for assistance, not just “works on Hyper-V”) distros.  And the Hyper-V Linux Integration Services have been part of the Linux kernel since version 3.3.  That means lots of other distros work just as well as the explicitly supported ones.  Features include:

  • 64 vCPU per VM
  • Virtual SCSI, hot-add, and hot-resize of VHDX
  • Full support for Dynamic Memory
  • File system consistent hot-backup of Linux VMs
  • Hyper-V Linux Integration Services already in the guest OS

Flexibility

The number one reason for virtualization: flexibility.  And that is heavily leveraged to enable self-service, a key trait of cloud computing.  Flexibility starts with vSphere (not the free edition) vMotion and Hyper-V (all editions) Live Migration:

[Image: live migration comparison]

WS2012 Hyper-V added unlimited simultaneous/concurrent Live Migrations (the only limits are hardware and network capacity).  vSphere has arbitrary limits of 4 (1 GbE) or 8 (10 GbE) vMotions at a time.  This is where VMware’s stealth marketing asks if draining your host more quickly is really necessary.  Cover your jewels, you-know-who …

WS2012 R2 Hyper-V adds support for doing Live Migration even more quickly:

  • Live Migration will be compressed by default, using any available CPU on the involved hosts, while prioritizing host/VM functionality.
  • With RDMA-enabled NICs, you can turn on SMB Live Migration.  This is even quicker because the copy is offloaded to the NICs, and it can leverage SMB Multichannel over multiple NICs.

Neither of these features is in vSphere 5.1.

vCenter has DRS and DPM.  Hyper-V itself does not, but here we get into the apples-vs-oranges debate: System Center Virtual Machine Manager (the equivalent of vCenter, and more) gives us Dynamic Optimization and Power Optimization (OpsMgr not required).

Storage Live Migration was added in WS2012 Hyper-V.  I love that feature.  Shared-Nothing Live Migration allows us to move between hosts that are clustered or not – I hear that the VMware equivalent doesn’t allow you to vMotion a VM between vSphere clusters.  That seems restrictive in my opinion.

And There’s More On Flexibility

[Image: flexibility feature comparison]

All versions of 2012 R2 Hyper-V allow us to do Live VM cloning.  For example, you can clone an entire VM from a snapshot deep down in a snapshot tree.  DevOps will love that feature.

Network Virtualization was added in WS2012.  Yes, the real world requires VMM to coordinate the lookups and the gateway.  While third-party NVGRE gateways now exist (F5 and Iron Networks), WS2012 R2 adds a built-in NVGRE gateway (in RRAS) that you can run in VMs placed in an edge network.  The VMware solution requires more than just vCenter (vCloud Networking & Security), and it has the same need for a gateway.

High Availability

[Image: high availability comparison]

Ideally, you want your hosts to be fault tolerant (not everyone does this, because of the cost of storage/redundant hosts).  This is provided by HA in vSphere (paid editions only) and Failover Clustering in Hyper-V (all versions).

Failover Prioritization, Affinity, and NIC teaming are found in both vSphere and Hyper-V.

Hyper-V can do guest OS application monitoring.  To me, this is a small feature because it’s not a cloud feature … the boundary between physical and virtual is crossed (not just blurred).  Moving on …

Cluster-aware patching is there in both vSphere (paid) and Hyper-V (Cluster-Aware Updating); VMs are live migrated around the cluster to allow zero-downtime maintenance of hosts.  Note that Hyper-V will:

  • Support third party updates.  Dell in particular has done quite a bit in this space to update their hardware via CAU
  • Take advantage of Live Migration enhancements to make this process very quick in even the biggest of clusters

With CAU, you don’t dread the fact that MSFT identifies and fixes issues on a monthly basis.  The host update process is quick and automated, with no impact on the business.

That’s just the start …

[Image: cluster scalability comparison]

A Hyper-V cluster can scale out way beyond that of a vSphere cluster.  Not many will care, but those people will like having fewer administration units.  A Hyper-V cluster scales to 64 nodes and 8,000 VMs, compared to 32 nodes and 4,000 VMs in vSphere.

HA is more than a host requirement.  Guest OSs fail too.  Guest OSs need maintenance.  So Hyper-V treats guest clusters just like physical clusters, supporting iSCSI, Fibre Channel, and SMB 3.0/NFS shared storage with up to 64 guest cluster nodes … all with Live Migration.  Meanwhile vSphere supports iSCSI guest clusters only if you use nothing newer than W2008 R2 (with a 16-node restriction).  Fibre Channel guest clusters are supported up to 5 nodes.  Guest clusters with file-based storage (SMB 3.0 or NFS) are not supported.  Ouch!

Oh yeah … Hyper-V guest clusters do support Live Migration and vSphere does not support vMotion of guest clusters.  There goes your flexibility in a vWorld!  Host maintenance will impact tenant services in vSphere in this case.

Hyper-V adds support for Shared VHDX guest clusters.  This comes with 2 limitations:

  • No Storage Live Migration of the Shared VHDX
  • You need to back up the guest cluster from within the guest OS

Sounds like VMware might be better here?  Not exactly: you lose vMotion and memory overcommit (their primary memory optimization) if you use a shared VMDK.  Ouch!  I hope that not too many tenants choose to deploy guest clusters, or you’re going to (a) need to blur the lines of physical/virtual with block storage or (b) charge them lots for non-optimized memory usage.
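
On the Hyper-V side, attaching a shared data disk to each guest cluster node is a one-liner per VM.  A hedged sketch (VM names and the CSV path are hypothetical examples):

    # Attach the same VHDX (on CSV or SMB 3.0 storage) to both nodes;
    # -SupportPersistentReservations marks it as a shared disk
    Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI `
        -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName "GuestNode2" -ControllerType SCSI `
        -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations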

DR & Backup

[Image: DR and backup comparison]

Both Hyper-V and vSphere have built-in backup and VM replication DR solutions.

In the case of 2012 R2 Hyper-V, the replication is built into the host, rather than delivered as a virtual appliance.  Asynchronous replication is every 30 seconds, 5 minutes, or 15 minutes in the case of Hyper-V, and just every 15 minutes in vSphere.  Hyper-V allows A->B->C replication whereas vSphere only allows A->B.

Hyper-V Replica is much more flexible and usable in the real world, allowing all sorts of failover, reverse replication/failback, and IP address injection.  Not so with vSphere.  Hyper-V Replica also offers historical copies of VMs in the DR site, something you won’t find in vSphere.  vSphere requires SRM for orchestration.  Hyper-V Replica offers you a menu:

  • PowerShell
  • System Center Orchestrator
  • Hyper-V Recovery Manager (Azure SaaS)
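
The PowerShell option is as simple as it gets.  A hedged sketch of enabling the new 30-second frequency with historical copies (server and VM names are hypothetical; the replica host must already be configured to accept replication):

    # Replicate the VM every 30 seconds and keep 4 hourly recovery points
    Enable-VMReplication -VMName "Tenant01" `
        -ReplicaServerName "drhost.contoso.com" -ReplicaServerPort 80 `
        -AuthenticationType Kerberos -ReplicationFrequencySec 30 -RecoveryHistory 4
    Start-VMInitialReplication -VMName "Tenant01"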

Cross-Premises

I’m adding this.  Hyper-V offers 1 consistent platform across:

  • On-premise
  • Hosting company public cloud
  • Windows Azure IaaS

With HNV, a company can pick and choose where to place their services, and even elements of services, in this hybrid cloud.  Hyper-V is tested at scale more than any other hypervisor: it powers Windows Azure and that’s one monster footprint that even Godzilla has to respect.

Summary

Hyper-V wins, wins, wins.  If I was a CIO then I’d have to question any objection to Hyper-V:

  • Are my techies vFanboys whose preferences run contrary to the best needs of the business?
  • Is the consultant pushing vSphere Enterprise Plus because they get a nice big cash rebate from VMware for just proposing the solution, even without a sale?  Yes, this is a real thing and VMware promote it at partner events.

I think I’d want an open debate with both sides (Hyper-V and vSphere) being fairly represented at the table if I was in that position.  Oh – and all that’s covered here is the highlights of Hyper-V versus the vSphere hypervisor.  vCenter and the vCloud suite haven’t a hope against System Center.  That’s like putting a midget wrestler up against The Rock.

Anywho, let the hate begin :-)

Oh wait … why not check out Comparing Microsoft Cloud with VMware Cloud.

The First 2012 R2 Doc – Test Lab Guide For System Center 2012 R2 & WS2012 R2 Hyper-V Network Virtualization

*sniff sniff*  It’s that time in the schedule when documentation starts to appear right before a scheduled Microsoft release; this time it’s the preview of Windows Server 2012 R2 and System Center 2012 R2 (WSSC 2012 R2).

Microsoft has released a step-by-step guide for building a test lab to help you learn & evaluate Hyper-V Network Virtualization (HNV aka software defined networking aka SDN), using:

  • Windows Server 2012 R2
  • Windows Server 2012 R2 Hyper-V
  • System Center 2012 R2 – Virtual Machine Manager

This document contains instructions for setting up the Windows Server® 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM test lab by deploying four physical server computers running Windows Server 2012 R2 and ten virtual machines running Windows Server 2012 R2. The resulting configuration simulates two customer private intranets, one simulated hoster datacenter environment, and the Internet.

[Image: test lab topology]

The lab requires 4 physical servers:

  • WNVHOST1: Running Windows Server 2012 R2 Hyper-V, DC, and DNS
  • WNVHOST2: WS2012 R2 Hyper-V host, SQL server, IPAM server, and System Center 2012 R2 Virtual Machine Manager.  Some tenant VMs (simulated on-premise) are also running here.
  • WNVHOST3: Another WS2012 R2 host, but this is going to run the VM running the new WS2012 R2 HNV Gateway role, integrating the on-premise networks with the hosted VM Networks.
  • WNVHOST4: Another host running a bunch of “hosted” tenant VMs in isolated VM Networks.

The doc goes step-by-step through building the lab.  Bet you can’t wait to get your hands on WSSC 2012 R2 now :-)

Windows Server 2012 R2 Hyper-V – Linux Support Improvements

Yes, Hyper-V supports many Linux distros, architectures, and versions, and that support has been improved in WS2012 R2 Hyper-V.

It’s no secret that there were some changes to the Linux Integration Services that are built into the Linux kernel.  Those changes were intended for and supported on WS2012 R2 Hyper-V (not WS2012 Hyper-V).  Those two changes are:

  • Dynamic Memory: Linux guest OSs can use the balloon driver to get the exact same support for Dynamic Memory as Windows (add and remove) – see the sketch after this list.  Bear in mind the constraints of the Linux distro itself, and remember the distro’s own recommendations when assigning large amounts of CPU/RAM to a machine.  These are Linux recommendations/limits, not Hyper-V ones.
  • Online backup: You can now perform a file system freeze in the Linux guest OS to get a file-system-consistent backup of a Linux guest OS without pausing the VM.  Linux does not have VSS (like Windows) so we cannot get application consistency, but this is still a huge step forward.  According to Microsoft, WS2012 R2 Hyper-V is now the best way to virtualize and back up Linux; you can use any backup tool that supports Hyper-V to reliably back up your Linux VMs without using some script that does a dumb file copy.
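
Because Dynamic Memory is a host-side setting, enabling it for a Linux VM looks exactly like it does for Windows.  A minimal sketch (the VM name and sizes are hypothetical examples):

    # Enable Dynamic Memory on a Linux VM with a modern kernel
    # (the VM must be off to change this setting)
    Set-VMMemory -VMName "Web01-Ubuntu" -DynamicMemoryEnabled $true `
        -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB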

Remember that online VHDX resizing is a host function, so Linux guest OSs support this too.  Don’t ask me how to resize Linux partitions or make use of the new free space :-)

There is also a new video driver for Linux.  This gives you a better video experience, as with Windows guest OSs, including better mouse support – but hey, real Linux admins don’t click!

To take advantage of these features, make sure you have an up-to-date Linux kernel in your VMs, and that you’re running them on WS2012 R2 Hyper-V.

Windows Server 2012 R2 Hyper-V – Live Virtual Machine Export/Clone

This is a feature that DevOps, administrators, app-owners, and testers are going to like.  You can clone a running virtual machine!  This produces an exact copy of the running virtual machine without a sysprep.  You can then power up this virtual machine on an isolated virtual switch or VM Network, connect to it via the new Remote Desktop-enabled Connect, and do whatever you need to do:

  • Test an OS upgrade
  • Test an app upgrade
  • Test patches or hotfixes
  • Test backup/restore
  • Troubleshoot an OS or service
  • Troubleshoot an issue

And you can do all that without affecting production systems because you’re doing it with an exact clone of the production VM(s).  This is great because you don’t need to delay doing diagnostics.  You can figure out the fix, and then implement it really quickly on the production system.  Those of you with change control will have had the opportunity to test those upgrades/fixes and the rollback plan too.

A very cool sub-feature of live cloning/export is that you can export a snapshot of a virtual machine.  Get your head around that!  It exports a clone of the VM as it was when the snapshot was created.
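
Both operations are exposed via PowerShell too.  A hedged sketch (VM, snapshot, and path names are hypothetical examples):

    # Export a clone of a running VM - no downtime required
    Export-VM -Name "Prod-SQL01" -Path "D:\Exports"
    # Or export the VM as it was when a particular snapshot was taken
    Export-VMSnapshot -VMName "Prod-SQL01" -Name "Before SP1" -Path "D:\Exports"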

When you create a clone of a running VM, the wizard allows you to configure what to do with the resulting VM so that you can keep that identical IP address, computer name, and SID on a different virtual switch/VM Network.  If you clone to a VMM library then the resulting VM is in a saved state. If you clone to another computer then it will be auto-started.  The wizard allows you to select a different network to avoid conflicts … so be careful.

Upgrading From Windows Server 2012 Hyper-V To Windows Server 2012 R2 Hyper-V

This post is being written before the preview release is out, and before guidance has been written. It is based on what we know from TechEd NA 2013.

Upgrading a non-clustered Hyper-V host has never been easier.  Microsoft did some work to increase compatibility of VM states between WS2012 and WS2012 R2.  That means you don’t need to delete snapshots.  You don’t need to power up VMs that were in saved states and shut them down.  Those files are compatible with WS2012 R2 Hyper-V.

There are 2 ways to upgrade a WS2012 Hyper-V host to WS2012 R2.

Do An In-Place Upgrade

You log into your WS2012 host, shut down your VMs or put them in a saved state, pop in the WS2012 R2 media, and do the upgrade.  The benefit is that you retain all your settings, and the VMs are right there in Hyper-V Manager with no effort.  The downside is that any crap you might have had in the Management OS is retained.  Microsoft always recommends a fresh install over an in-place upgrade.

Replace The Management OS & Import/Register The VMs

I prefer this one.  But be careful – do not use this approach if any of your VM files/settings are on the C: drive of the host – I hate those default locations in Hyper-V host settings.

You shut down the host, pop in the media, and do a fresh install over the C: drive of the host.  This gives you a completely fresh install.  Yes, you have to rebuild your settings, but that can all be scripted if you’re doing this a lot.  The final step is to import the VMs using the register option.  This simply loads up the VMs, and then you start whatever VMs you require.
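
The register option keeps the VM files where they are instead of copying them.  A hedged sketch, assuming the VMs live under a hypothetical D:\VMs path:

    # Register every VM found on the data drive, in place (no file copy)
    Get-ChildItem "D:\VMs\*\Virtual Machines\*.xml" |
        ForEach-Object { Import-VM -Path $_.FullName -Register }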

Upgrading to Windows 8.1 Client Hyper-V

This is a little off-topic but it’s related.  You can upgrade a PC from Windows 8 with Client Hyper-V to Windows 8.1.  The upgrade will automatically put running VMs into a saved state.  After the upgrade, the previously running VMs will be running.

Integration Components

The final step in any Hyper-V upgrade is to upgrade the Hyper-V Integration Components in the guest OS of each virtual machine.

Windows Server 2012 R2 Hyper-V – Online Resizing of VHDX

The most common excuse given for using pass-through disks instead of VHDX files was “I want to be able to change the size of my disks without shutting down my VMs”.  OK, WS2012 R2 Hyper-V fixes that by adding hot-resizing of VHDX files.

Yes, on WS2012 R2 Hyper-V, you can resize a VHDX file that is attached to a VM’s SCSI controller without shutting down the VM.  There’s yet another reason to place data in dedicated VHDX files on the SCSI controller.

You can:

  • Expand a VHDX – you’ll need to expand the partition in Disk Manager (or PoSH) in the VM – maybe there’s an Orchestrator runbook possibility
  • Shrink a VHDX – the VHDX must have un-partitioned space to be shrunk

This resize is a function of the host and has no integration component dependencies.
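
A hedged sketch of both operations, with a hypothetical path (run on the host while the VM is up, as long as the VHDX is attached to a SCSI controller):

    # Grow the data disk to 200 GB while the VM keeps running
    Resize-VHD -Path "D:\VMs\Tenant01\Data.vhdx" -SizeBytes 200GB
    # Shrink it as far as the un-partitioned space at the end allows
    Resize-VHD -Path "D:\VMs\Tenant01\Data.vhdx" -ToMinimumSize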

That’s one more excuse for using pass-through disks eliminated.

Windows Server 2012 R2 Hyper-V – Live Migration Improvements (Compression and SMB 3.0)

Live Migration is pretty quick in WS2012 Hyper-V, with support for huge bandwidth with no arbitrary hard limit on the number of concurrent Live Migrations.  With the potential for huge hosts (4 TB RAM) and huge VMs (1 TB), Microsoft wanted to make it quicker to move VMs, vacate hosts, and to do general maintenance, such as Cluster Aware Updating.  That’s why they added support for faster Live Migration using SMB 3.0 and compression in WS2012 R2 Hyper-V.

Out of the box, WS2012 R2 Hyper-V is able to compress Live Migration traffic.  It does this by taking any free CPU resources that are available on the host – typically CPU is underutilized on hosts.  Hyper-V will prioritize other tasks when scheduling the processor.  That means if a VM needs more CPU, then Live Migration compression will get less processor access and not impact production systems.  Compression is enabled by default and does not require any special hardware.  It is expected that Live Migration compression will halve the time it takes to move a VM.

[Image: Live Migration compression]

If you have access to high end networking then you will want to enable Live Migration over SMB.  This will leverage the improvements in SMB from WS2012:

  • SMB Multichannel: Live Migration will be able to use more than one NIC which means it can get more overall bandwidth.  SMB Multichannel automatically discovers new NICs between the SMB client and server and automatically deals with NIC/path failure.
  • SMB Direct: This is where you have an RDMA-enabled NIC/network, such as iWARP (10 Gbps), RoCE (10/40 Gbps), or InfiniBand (56 Gbps).  The flow of traffic is faster (less latency) and has less impact on the CPU of the SMB client and server.

[Image: Live Migration over SMB 3.0]

On servers with PCIe 3.0 slots, 3 of these NICs can give you Live Migration speeds where RAM access speed becomes the bottleneck :-D

There are 3 scenarios I can think of now, and here are the recommendations from Microsoft for them:

10 GbE or Slower NIC for Live Migration

Use the default compressed Live Migration.  This applies even if you have lots of 1 GbE NICs – compression will be more effective than SMB 3.0 at these speeds.

2 or More 10 GbE NICs for Live Migration

Use SMB Live Migration.  This will leverage SMB Multichannel to span all of the Live Migration NICs.  But watch that CPU utilization on the host.  And that leads us to …

1 or More RDMA NICs for Live Migration

Use SMB Live Migration.  This will leverage SMB Direct for really fast Live Migration with low CPU utilization (RDMA offloads processing to the NIC).  And if you have more than one NIC you get the best of both worlds by also leveraging SMB Multichannel.
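
Whichever scenario applies, the choice is a single per-host setting.  A minimal sketch:

    # The default is Compression; switch to SMB on hosts with RDMA or
    # multiple dedicated 10 GbE Live Migration NICs
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
    # Verify the current setting
    (Get-VMHost).VirtualMachineMigrationPerformanceOption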

Long story short: Live Migration will be very fast on WS2012 R2 Hyper-V, and you’ll see those improvements even on typical 1 GbE networking.

Windows Server 2012 R2 Hyper-V – Cross-Version Live Migration

With the increased release cadence (that’s 1 shot in the WS2012 R2 drinking game) that Microsoft has adopted, they want to make it easier for us to upgrade our hosts.  The first critical steps in enabling that were actually delivered in WS2012 Hyper-V:

  • VHDX: You’re a sucker if you’re still deploying passthrough disks.  You’re being negligent, in my opinion, if you’re a so-called expert (like a consultant) and still tying your customers’ VMs to specific hardware devices.  I won’t pull any punches on this, and I don’t have time for defensive excuses.
  • Shared-Nothing Live Migration: We can move virtual machine files and the VMs themselves between any mix of clustered and non-clustered hosts.

That latter one is important, because we still cannot do an in-place upgrade of a Failover Cluster.  Yes, Microsoft has heard the feedback.  Just give them more of it if you are talking to them directly, especially if it’s a local/visiting rep from the Windows Server & System Center product group (giving feedback to a person from the local subsidiary is pointless).

We can use cross-version Live Migration from WS2012 Hyper-V to WS2012 R2 Hyper-V to get zero-downtime “upgrades”.  Scenarios include (see the sketch after this list):

  • Doing a Shared-Nothing Live Migration from a WS2012 Hyper-V cluster to a Hyper-V Server 2012 R2 cluster
  • Performing a Live Migration from a non-clustered Hyper-V Server 2012 host to a non-clustered WS2012 R2 Hyper-V host where they share common SMB 3.0 (WS2012/WS2012 R2) storage.
  • And more!
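
A hedged sketch of the shared-nothing variant, with hypothetical host/VM names and paths (run from the WS2012 source host):

    # Move the VM and its storage to a WS2012 R2 host with no downtime
    Move-VM -Name "Tenant01" -DestinationHost "NewHost-R2" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\Tenant01"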

[Image: “Upgrading” a Hyper-V cluster using Cross-Version Live Migration]

All mixes of Hyper-V Server and Windows Server 2012 and 2012 R2 are included.  This is a one-way Live Migration, from 2012 to 2012 R2.  And it means you can move your VMs from 2012 to 2012 R2 without impacting uptime for your services and business operations.

This also means that those of you planning WS2012 upgrades/installs shouldn’t stop.  You, of course, should be buying Software Assurance for your VM Windows Server licensing (which includes the host), and you can use this zero-downtime migration to get from a great hypervisor to an even better one with minimal effort.

Windows Server 2012 R2 Hyper-V – Automatic Activation

A pain point for virtualization and cloud administrators is the activation of Windows.  In particular, large enterprises and hosters that are running multi-tenant clouds with network isolation find this particularly troubling.  You could activate VMs by hand but that would eliminate self-service.  You could fire up KMS – but that means you need to route all your VM Networks to an administrative network to have access to your KMS machine.  Even smaller companies hate the added complexity – “I’ve paid for Datacenter edition so why do I need to activate all the VMs that I’m entitled to?”.  Microsoft engineering agreed.

Windows Server 2012 R2 Datacenter edition hosts (no matter what source of license you have) will provide automatic activation of the VMs running on them, as long as the host is activated.  There’s nothing more to it than that … deploy a Windows Server 2012 R2 VM on WS2012 R2 Datacenter edition, and it will activate automatically.  Your licensing, your networking, your administrative costs all just got easier and lower.  You gotta love that!  And there’s a reason to deploy WS2012 R2 in your VMs :-)

Note: you don’t enter product keys at all in the VMs.  This is great for hybrid cloud and VM mobility.  Say a customer creates a VM in a hosting company cloud from a template/gallery item.  It is automatically activated, with no product key entered, using this new feature.  Say the customer then downloads the VM.  Now it’ll need re-activation (maybe automatic if placed on WS2012 R2 Hyper-V), and possibly a product key, depending on the customer’s own virtualization platform.