Windows Server Technical Preview – Hot Memory Resizing

Dynamic Memory was added in WS2008 R2 SP1 to allow Hyper-V to manage the assignment of memory to Dynamic Memory-enabled virtual machines based on the guest OS's demands. Today in WS2012 R2, a VM boots up with a start-up amount of RAM; it can then grow, based on in-guest pressure and host availability, up to a maximum amount allowed by the host administrator, and shrink back down to a minimum amount.

But what if we need more flexibility? Not all workloads are suitable for Dynamic Memory. And in my estimation, only about half of the environments I encounter are using the feature.

The next version of Hyper-V includes hot memory resizing. This allows you to add memory to and remove memory from a running virtual machine. The operation is done using the normal memory administration tools (a PowerShell sketch follows the list below). Some notes:

  • At this time you need a vNext guest OS
  • You cannot add more memory than is available on the host
  • Hyper-V cannot remove memory that is being used – you are warned about this if you try, and any free memory will be de-allocated.
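Here's a minimal PowerShell sketch of what the operation could look like, assuming the feature is surfaced through the existing memory cmdlets; the VM name and sizes are made up:

  # Check the current memory assignment of the running VM (VM01 is a hypothetical name)
  Get-VMMemory -VMName "VM01"

  # Grow the assigned memory while the VM is running
  Set-VMMemory -VMName "VM01" -StartupBytes 8GB

  # Shrink it again later; per the notes above, only memory the guest is not using can be reclaimed
  Set-VMMemory -VMName "VM01" -StartupBytes 4GB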

This new feature will save a lot of downtime for virtualised services when administrators/operators need to change the memory for a production VM. I also wonder if it might lead to a new way of implementing Dynamic Memory.

Windows Server Technical Preview – Network Adapter Identification

Have you ever built a Hyper-V virtual machine with 2 or more NICs, each on a different network, and struggled with assigning IP stacks in the guest OS? I sure have. When I was writing materials that used virtual SOFS clusters, I often had 3-6 NICs per VM, each requiring an IPv4 stack for a different network, and I needed to ensure that the VMs were aligned and able to talk on those networks.

With modern physical hardware, we get a feature called Consistent Device Naming (CDN). This allows the BIOS to name the NIC and that’s how the NIC appears in the physical WS2012 (or later) install. Instead of the random “Local Area Connection” or “Ethernet” name, you get a predictable “Slot 1 1” or similar, based on the physical build of the server.

With Windows Server vNext, we are getting something similar, but not identical. This is not vCDN, as some commentators have called it, because it does require some work in the guest OS to enable the feature. Here’s how it works (all via PowerShell):

  1. You create a vNIC for a VM, and label that vNIC in the VM settings (you can actually do that now on WS2012 or later, as readers of WS2012 Hyper-V Installation and Configuration Guide might know!).
  2. Run a single cmdlet in the guest OS to instruct Windows Server vNext to name the connection after the adapter (see the sketch after this list).
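As a sketch, the two steps could look like this; the VM, switch, and adapter names are invented, and the guest-side mechanism is an assumption (I'm assuming the host-assigned label is surfaced to the guest as the "Hyper-V Network Adapter Name" advanced property, and that device naming may also need to be switched on per vNIC with Set-VMNetworkAdapter -DeviceNaming On):

  # Step 1, on the host: add and label the vNIC (names are hypothetical)
  Add-VMNetworkAdapter -VMName "SOFS01" -SwitchName "ClusterSwitch" -Name "Cluster1"

  # Step 2, in the guest: rename each connection after the label advertised by the host
  Get-NetAdapterAdvancedProperty -DisplayName "Hyper-V Network Adapter Name" |
      ForEach-Object { Rename-NetAdapter -Name $_.Name -NewName $_.DisplayValue }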

Armed with this feature, the days of disconnecting virtual NICs in Hyper-V Manager to rename them in the guest OS are numbered. Thankfully!

Windows Server Technical Preview – Hot-Add & Hot-Remove of vNICs

You might have noticed a trend: there are a lot of features in the next version of Hyper-V that increase uptime. Some of this is by avoiding unnecessary failovers. Some of this is by reducing the need to shut down a VM/service in order to do engineering. This is one of the latter.

Most of the time, we only need single-homed (single vNIC) VMs. But there are times when we want to assign a VM to multiple networks. If we want to do this right now on WS2012 R2, we have to shut down the VM, add the NIC, and start it up again.

With Hyper-V vNext we will be able to hot-add and hot-remove vNICs. This much-requested feature will save administrators some time, and some grief from service owners.
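A quick sketch of what this should look like, using the existing vNIC cmdlets against a running VM (the VM, switch, and adapter names are hypothetical):

  # Hot-add a second vNIC to a running VM
  Add-VMNetworkAdapter -VMName "WEB01" -SwitchName "BackupNetwork" -Name "Backup"

  # Hot-remove that vNIC later, again without shutting the VM down
  Remove-VMNetworkAdapter -VMName "WEB01" -Name "Backup"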

Windows Server Technical Preview – Distributed Storage QoS

One of the bedrocks of virtualization or a cloud is the storage that the virtual machines (and services) are placed on. Guaranteeing storage performance is tricky; some niche storage manufacturers, such as Tintrí (Irish for lightning), charge a premium for their products because they handle this via black-box intelligent management.

In the Microsoft cloud, we have started to move towards software-defined storage based on the Scale-Out File Server (SOFS) with SMB 3.0 connectivity. This is based on commodity hardware, and with WS2012 R2, we currently have a very basic form of storage performance management. We can set:

  • Maximum IOPS per VHD/X: to cap storage performance
  • Minimum IOPS per VHD/X: not enforced, purely informational (an example of both settings follows this list)
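For reference, on WS2012 R2 both settings are applied per virtual hard disk with Set-VMHardDiskDrive; the VM name and figures below are just an example:

  # Cap every disk of a VM at 500 IOPS and flag a 100 IOPS minimum (WS2012 R2)
  Get-VM -Name "SQL01" | Get-VMHardDiskDrive |
      Set-VMHardDiskDrive -MaximumIOPS 500 -MinimumIOPS 100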

This all changes with vNext where we get distributed storage QoS for SOFS deployments. No, you do not get this new feature with legacy storage system deployments.

A policy manager runs on the SOFS. Here you can set storage rules for:

  • Tenants
  • Virtual machines
  • Virtual hard disks

Using a new protocol, MS-SQOS, the SOFS passes storage rule information back to the relevant hosts. This is where rate limiters will enforce the rules according to the policies, set once, on the SOFS. No matter which host you move the VM to, the same rules apply.


The result is that you can:

  • Guarantee performance: Important in a service-centric world
  • Limit damage: Cap those bad boys that want everything to themselves
  • Create a price banding system: Similar to Azure, you can set price bands where there are different storage performance capabilities
  • Offer fairly balanced performance: Every machine gets a fair share of storage bandwidth

At this point, all management is via PowerShell, but we’ll have to wait and see what System Center brings forth for the larger installs.
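To give a feel for it, here is a sketch of how a policy might be defined on the SOFS and then applied to a VM's disks on a host. The cmdlet and parameter names follow the Storage QoS cmdlets Microsoft has shown and should be treated as assumptions for the preview; the policy name, VM name, and IOPS figures are made up, and the two halves would run on different machines (copy the policy GUID across) rather than in one script:

  # On the SOFS cluster: define a policy with a floor and a cap, and keep its ID
  $policy = New-StorageQosPolicy -Name "Silver" -MinimumIops 200 -MaximumIops 500

  # On a Hyper-V host: tag a VM's disks with that policy ID so the rate limiters enforce it
  Get-VM -Name "FILE01" | Get-VMHardDiskDrive |
      Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId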

Windows Server Technical Preview – Cluster Quarantine

Most of us have dealt with some piece of infrastructure that is flapping, be it a switch port that’s causing issues or a driver that’s causing a server to bug-check. These are disruptive issues. Cluster Compute Resiliency is a feature that prevents unwanted failovers when a host is having a transient issue. But what if that transient issue is repetitive? For example, what if a host keeps going into network isolation and its VMs therefore keep going offline?

If a clustered host goes into isolation too many times within a set time frame, then the cluster will place that host into quarantine. The cluster will move virtual machines off a quarantined host, ideally using your pre-defined migration method (defaulting to Live Migration, but allowing you to set Quick Migration for a given VM priority or for selected VMs).

The cluster will not place further VMs onto the quarantined host, and this gives administrators time to fix the root cause of the transient issues.
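As a sketch of the tuning involved, I'd expect the thresholds to be exposed as cluster common properties; the property names and values below are assumptions:

  # How many isolation events are tolerated before a node is quarantined (assumed property name)
  (Get-Cluster).QuarantineThreshold = 3

  # How long, in seconds, a quarantined node is kept out before it can rejoin (assumed property name)
  (Get-Cluster).QuarantineDuration = 7200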

This feature and Cluster Compute Resiliency are what I would call “maturity features”. They’re the sorts of features that make life easier … and might lead to fewer calls at 3am when a host misbehaves, because the cluster is doing the remediation for you.

Two New Hyper-V Books

I am not writing a WS2012 R2 Hyper-V book, but some of my Hyper-V MVP colleagues have been busy writing. I haven’t read these books, but the authors are more than qualified and greatly respected in the Hyper-V MVP community.

Hyper-V Security

By Eric Siron & Andy Syrewicze

Available on Amazon.com and Amazon.co.uk


Keeping systems safe and secure is a new challenge for Hyper-V Administrators. As critical data and systems are transitioned from traditional hardware installations into hypervisor guests, it becomes essential to know how to defend your virtual operating systems from intruders and hackers.

Hyper-V Security is a rapid guide on how to defend your virtual environment from attack.

This book takes you step by step through your architecture, showing you practical security solutions to apply in every area. After the basics, you’ll learn methods to secure your hosts, delegate security through the web portal, and reduce malware threats.

Hyper-V Best Practices

By Benedict Berger

Available on Amazon.com and Amazon.co.uk


Hyper-V Server and Windows Server 2012 R2 with Hyper-V provide best in class virtualization capabilities. Hyper-V is a Windows-based, very cost-effective virtualization solution with easy-to-use and well-known administrative consoles.

With an example-oriented approach, this book covers all the different guides and suggestions to configure Hyper-V and provides readers with real-world proven solutions. After applying the concepts shown in this book, your Hyper-V setup will run on a stable and validated platform.

The book begins with setting up single and multiple High Availability systems. It then takes you through all the typical infrastructure components, such as storage and network, and the necessary processes, such as backup and disaster recovery, for optimal configuration. The book not only shows you what to do and how to plan the different scenarios, but it also provides in-depth configuration options. These scalable and automated configurations are then optimized via performance tuning and central management.

Windows Server Technical Preview – Distributed Storage QoS

In a modern data centre, there is more and more resource centralization happening. Take a Microsoft cloud deployment, for example, such as what Microsoft does with CPS or what you can do with Windows Server (and maybe System Center). A chunk of a rack can contain over a petabyte of raw storage in the form of a Scale-Out File Server (SOFS), and the rest of the rack is either hosts or TOR (top-of-rack) networking. With this type of storage consolidation, we have a challenge: how do we ensure that each guest service gets the storage IOPS that it requires?

From a service provider’s perspective:

  • How do we provide storage performance SLAs?
  • How do we price-band storage performance (pay more to get more IOPS)?

Up to now with Hyper-V, you required a SAN (such as Tintrí) to do some magic on the back end. WS2012 R2 Hyper-V added a crude storage QoS method (only the maximum rule is enforced) that was applied at the host and not at the storage. So:

  • There was no minimum or SLA-type rule, only a cap.
  • QoS rules were not distributed, so host X had no accounting of what hosts A-W were doing to the shared storage system.

Windows Server vNext is adding Distributed Storage QoS, which works through a partnership between Hyper-V hosts and a SOFS. Yes: you need a SOFS – but remember that a SOFS can be 2-8 clustered Windows Servers that are sharing a SAN via SMB 3.0 (no Storage Spaces in that design).

Note: the hosts use a new protocol called MS-SQOS (based on SMB 3.0 transport) to partner with the SOFS.


Distributed Storage QoS is actually driven from the SOFS. There are multiple benefits from this:

  • Centralized monitoring (enabled by default on the SOFS)
  • Centralized policy management
  • Unified view of all storage requirements of all hosts/clusters connecting to this SOFS

Policy is created on the SOFS via PowerShell (System Center vNext will add management and monitoring support for Storage QoS), based on your monitoring or service plans. An IO Scheduler runs on each SOFS node, and the policy manager data is distributed across the cluster. The Policy Manager (an HA cluster resource on the SOFS cluster) pushes policy, via MS-SQOS, up to the Hyper-V hosts, where Rate Limiters restrict the IOPS of virtual machines or virtual hard disks.
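Because monitoring is centralized and on by default, the SOFS is also where you would go to see what every host is doing to the storage. A sketch, assuming a Get-StorageQosFlow cmdlet on the SOFS (the property names used for the table are assumptions):

  # On the SOFS: list the current storage flows and their IOPS, grouped by the host that owns them
  Get-StorageQosFlow | Sort-Object InitiatorName |
      Format-Table InitiatorName, FilePath, StorageNodeIOPs -AutoSize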


There are two kinds of QoS policy that you can create:

  • Single-Instance: The resources of the rule are distributed or shared between VMs. Maybe a good one for a cluster/service or a tenant, e.g. a tenant gets 500 IOPS that must be shared by all of their VMs
  • Multi-Instance: All VMs/disks get the same rule, e.g. each targeted VM gets a maximum of 500 IOPS. Good for creating VM performance tiers, e.g. bronze, silver, gold with each tier offering different levels of performance for an individual VM

You can create child policies. Maybe you set a maximum for a tenant. Then you create a sub-policy that is assigned to a VM within the limits of the parent policy.
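A sketch of how the two policy types might be created on the SOFS; the cmdlet, the SingleInstance/MultiInstance parameter values, and the names and IOPS figures are all assumptions based on what has been shown of the preview:

  # Single-Instance: 500 IOPS shared by everything the policy is applied to (e.g. one tenant's VMs)
  New-StorageQosPolicy -Name "TenantContoso" -PolicyType SingleInstance -MaximumIops 500

  # Multi-Instance: every targeted VM/disk gets its own 300 IOPS cap (e.g. a "Bronze" tier)
  New-StorageQosPolicy -Name "Bronze" -PolicyType MultiInstance -MaximumIops 300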

Note that some of this feature comes from the Predictable Data Centers effort by Microsoft Research in Cambridge, UK.

Hyper-V storage PM, Patrick Lang, presented the topic of Distributed Storage QoS at TechEd Europe 2014.

MVP Carsten Rachfahl Interviews Me About Windows Server vNext

While at the MVP Summit in Redmond, my friend Carsten Rachfahl (also a Hyper-V MVP) recorded a video interview with me to talk about Windows Server vNext and some of our favourite new features.


Windows Server Technical Preview – Replica Support for Hot-Add of VHDX

Shared VHDX was introduced in WS2012 R2 to enable easier and more flexible deployments of guest clusters; that is, clusters that are made from virtual machines. The guest cluster allows you to make services highly available, because sometimes an HA infrastructure is just not enough (we are supposed to be all about the service, after all).

We’ve been able to do guest clusters with iSCSI, SMB 3.0, or Fibre Channel/FCoE LUNs/shares, but this crosses the line between guest/tenant and infrastructure/fabric. That causes a few issues:

  • It reduces flexibility (Live Migration of storage, backup, replication, etc)
  • There’s a security/visibility issue for service providers
  • Self-service becomes a near impossibility for public/private clouds

That’s why Microsoft gave us Shared VHDX. Two virtual machines can connect to the same VHDX that contains data. That disk appears in the guest OS of the two VMs as a shared SAS disk, i.e. cluster-supported storage. Now we have moved into the realm of software: we get easy self-service and flexibility, and we no longer cross the hardware boundary.
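For context, attaching the shared disk is a per-disk setting on WS2012 R2; a rough sketch, with hypothetical VM names and path (the shared data VHDX has to live on a CSV or a SOFS share):

  # Attach the same data VHDX to both guest-cluster nodes as a shared disk (WS2012 R2)
  Add-VMHardDiskDrive -VMName "GC-NODE1" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared\Data.vhdx" -SupportPersistentReservations
  Add-VMHardDiskDrive -VMName "GC-NODE2" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared\Data.vhdx" -SupportPersistentReservations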

But …

Shared VHDX was a version 1.0 feature in Windows Server 2012 R2. It wasn’t a finished product; Microsoft gave us what they had ready at the time. Feedback was unanimous: we need backup and replication support for Shared VHDX, and we’d also like Live Migration support.

The Windows Server vNext Technical Preview gives us support to replicate Shared VHDX files using Hyper-V Replica (HVR). This means that you can add these HA guest clusters to your DR replication set(s) and offer a new level of availability to your customers.

I cannot talk about how Microsoft is accomplishing this feature yet … all I can report is what I’ve seen announced.