Webinar – Affordable Hyper-V Clustering for the Small/Medium Enterprise & Branch Office

I will be presenting another MicroWarehouse webinar on August 4th at 2PM (UK/Ireland), 3PM (central Europe) and 9AM (Eastern). The topic of the next webinar is how to make highly available Hyper-V clusters affordable for SMEs and large enterprise branch offices. I’ll talk about the benefits of the solution, and then delve into what you get from this hardware + software offering: better uptime, lower cost, and better performance than the SAN that you might have priced from HPE or Dell.


Interested? Then make sure that you register for our webinar.

RunAsRadio Podcast – Hyper-V in Server 2016

I recently recorded an episode of the RunAsRadio podcast with Richard Campbell on the topic of Windows Server 2016 (WS2016) Hyper-V. We covered a number of areas, including containers, nested virtualization, networking, security, and PowerShell.


Optimize Hyper-V VM Placement To Match CSV Ownership

This post shares a PowerShell script to automatically live migrate clustered Hyper-V virtual machines to the host that owns the CSV that the VM is stored on. The example below should work nicely with a 2-node cluster, such as a cluster-in-a-box.

For lots of reasons, you get the best performance for VMs on a Hyper-V cluster if:

  • Host X owns CSV Y AND
  • The VMs that are stored on CSV Y are running on Host X.

This continues into WS2016, as we’ve seen by analysing the performance enhancements of ReFS for VHDX operations. In summary, the ODX-like enhancements work best when the CSV and VM placement are identical as above.

I wrote a script, with little bits taken from several places (scripting is the art of copy & paste), to analyse a cluster and then move virtual machines to the best location. The method of the script is:

  1. Move CSV ownership to what you have architected.
  2. Locate the VMs that need to move.
  3. Order that list of VMs based on RAM. I want to move the smallest VMs first in case there is memory contention.
  4. Live migrate VMs based on that ordered list.

What’s missing? Error handling 🙂

What do you need to do?

  • You need to add variables for your CSVs and hosts.
  • Modify/add lines to move CSV ownership to the required hosts.
  • Balance the deployment of your VMs across your CSVs.

Here’s the script. I doubt the code is optimal, but it works. Note that the Live Migration command (Move-ClusterVirtualMachineRole) has been commented out so you can see what the script will do without it actually doing anything to your VM placement. Feel free to use, modify, etc.

#List your CSVs 
$CSV1 = "CSV1" 
$CSV2 = "CSV2"

#List your hosts 
$CSV1Node = "Host01" 
$CSV2Node = "Host02"

function ListVMs () 
{ 
    $Cluster = Get-Cluster 
    Write-Host "`n`n`n`n`n`nAnalysing the cluster $Cluster ..."

    $AllCSV = Get-ClusterSharedVolume -Cluster $Cluster | Sort-Object Name

    $VMMigrationList = @()

    ForEach ($CSV in $AllCSV) 
    { 
        $CSVVolumeInfo = $CSV | Select -Expand SharedVolumeInfo 
        $CSVPath = ($CSVVolumeInfo).FriendlyVolumeName

        #Escape each backslash so the path can be used as a regex with -match below
        $FixedCSVPath = $CSVPath -replace '\\', '\\'

        #Get the VMs where VM placement doesn't match CSV ownership
        $VMsToMove = Get-ClusterGroup | ? {($_.GroupType -eq 'VirtualMachine') -and ($_.OwnerNode -ne $CSV.OwnerNode.Name)} | Get-VM | Where-Object {($_.Path -match $FixedCSVPath)} 

        #Build up a list of VMs including their memory size 
        ForEach ($VM in $VMsToMove) 
        { 
            $VMRAM = (Get-VM -ComputerName $VM.ComputerName -Name $VM.Name).MemoryAssigned

            $VMMigrationList += ,@($VM.Name, $CSV.OwnerNode.Name, $VMRAM) 
        }

    }

    #Order the VMs based on memory size, ascending 
    $VMMigrationList = $VMMigrationList | sort-object @{Expression={$_[2]}; Ascending=$true}

    Return $VMMigrationList 
}

function MoveVM ($TheVMs) 
{

    foreach ($VM in $TheVMs) 
        { 
        $VMName = $VM[0] 
        $VMDestination = $VM[1] 
        Write-Host "`nMove $VMName to $VMDestination" 
        #Move-ClusterVirtualMachineRole -Name $VMName -Node $VMDestination -MigrationType Live 
        }

}

cls

#Configure which node will own which CSV 
Move-ClusterSharedVolume -Name $CSV1 -Node $CSV1Node | Out-Null 
Move-ClusterSharedVolume -Name $CSV2 -Node $CSV2Node | Out-Null

$SortedVMs = @()

#Get a sorted list of VMs, ordered by assigned memory 
$SortedVMs = ListVMs

#Live Migrate the VMs, so that their host is also their CSV owner 
MoveVM $SortedVMs

Possible improvements:

  • My ListVMs algorithm probably can be improved.
  • The Live Migration piece also can be improved. It only does 1 VM at a time, but you could implement parallelism using jobs.
  • Quick Migration should be used for non-running VMs. I haven’t handled that situation.
  • You could opt to use Quick Migration for low priority VMs – if that’s your policy.
  • The script could be modified to start using parameters, e.g. Analyse (not move), QuickMigrateLow, QuickMigrate (instead of Live Migrate), etc.
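As a starting point for the parallelism suggestion, here’s a hedged sketch that wraps the Live Migration in background jobs with a simple throttle. The concurrency limit of 2 is an arbitrary assumption, and `$TheVMs` is the same ordered list that `MoveVM` receives in the script above:

```powershell
#How many simultaneous Live Migrations to allow (an assumption - tune for your hosts)
$MaxConcurrent = 2

foreach ($VM in $TheVMs)
{
    #Wait until a job slot frees up before starting the next migration
    while ((Get-Job -State Running).Count -ge $MaxConcurrent)
    {
        Start-Sleep -Seconds 5
    }

    #Each migration runs in its own background job
    Start-Job -ScriptBlock {
        param ($Name, $Destination)
        Move-ClusterVirtualMachineRole -Name $Name -Node $Destination -MigrationType Live
    } -ArgumentList $VM[0], $VM[1]
}

#Wait for the remaining migrations to finish and collect their output
Get-Job | Wait-Job | Receive-Job
```

As with the main script, test this with the `Move-ClusterVirtualMachineRole` line commented out first, and remember that your hosts need enough spare RAM to absorb several incoming VMs at once.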

Webinar – What’s New In Windows Server 2016 Hyper-V

I’ll be joining fellow Cloud and Datacenter Management (Hyper-V) MVP Andy Syrewicze for a webcast by Altaro on June 14th at 3PM UK/Irish time, 4PM CET, and 10AM Eastern. The topic: What’s new in Windows Server 2016 Hyper-V (and related technologies). There’s quite a bit to cover in this new OS, which we expect to be released around Microsoft Ignite 2016. I hope to see you there!


Cloud & Datacenter Management 2016 Videos

I recently spoke at the excellent Cloud and Datacenter Management conference in Düsseldorf, Germany. There were 5 tracks full of expert speakers from around Europe, and a few Microsoft US people, talking Windows Server 2016, Azure, System Center, Office 365 and more. Most of the sessions were in German, but many of the speakers (like me, Ben Armstrong, Matt McSpirit, Damian Flynn, Didier Van Hoye and more) were international and presented in English.


You can find my session, Azure Backup – Microsoft’s Best Kept Secret, and all of the other videos on Channel 9.

Note: Azure Backup Server does have a cost for local backup that is not sent to Azure. You are charged for the instance being protected, but there is no storage charge if you don’t send anything to Azure.

DataOn CiB-9112 V12 Cluster-in-a-Box

In this post I’ll tell you about the cluster-in-a-box solution from DataON Storage that allows you to deploy a Hyper-V cluster for a small-to-midsize business or branch office in just 2U, at a lower cost than you’ll pay to the likes of Dell/HP/EMC/etc, and with more performance.

Background

So you might have noticed on social media that my employers are distributing storage/compute solutions from both DataON and Gridstore. While some might see them as competitors, I see them as complementary solutions in our portfolio that are aimed at two different markets:

  • Gridstore: Their hyper-converged infrastructure (HCI) products remove fear and risk by giving you a pre-packaged solution that is easy and quick to scale out.
  • DataON: There are two offerings, in my opinion. SMEs want HA at a budget they can afford – I’ll focus on that area in this article. And then there are the scaled-out Storage Spaces offerings, which, with some engineering and knowledge, allow you to build out a huge storage system at a fraction of the cost of the competition – assuming you buy from distributors that aren’t more focused on selling EMC or NetApp 🙂

The Problem

There is a myth out there that the cloud has removed, or will remove, servers from SMEs. The category “SME” covers a huge variety of companies. Outside of the USA, it’s described as a business with 5-250 users. I know that some in Microsoft USA describe it as a company with up to 2,500 users. So, sure, a business with 5-50 users might go server-less pretty easily today (assuming broadband availability), but other organizations might continue to keep their Hyper-V (more likely in SME) or vSphere (less likely in SME) infrastructures for the foreseeable future.

These businesses have the same demands for applications, and HA is no less important to a 50 user business than it is for a giant corporation; in fact, SMEs are hurt more when systems go down because they probably have a single revenue operation that gets shut down when some system fails.

So why isn’t the Hyper-V (or vSphere) cluster the norm in an SME? It’s simple: cost. It’s one thing to go from one host to two, but throw in the cost of a modest SAS/iSCSI SAN and that solution just became unaffordable – in case you don’t know, the storage companies allegedly make 85% margin on the list price of storage. SMEs just cannot justify the cost of SAN storage.

Storage Spaces

I was at the first Build conference in LA when Microsoft announced Windows 8 and Windows Server 2012. WS2012 gave us Storage Spaces, and Microsoft implored the hardware vendors to invest in this new technology, mainly because Microsoft saw it as the future of cluster storage. A Storage Spaces-certified JBOD can be used instead of a SAN as shared cluster storage, and this can greatly bring down the cost of Hyper-V storage for customers of all sizes. Tiered storage (SSD and HDD), which combines the speed of SSD with the economy of large hard drives (now up to 10 TB) via transparent, automatic, demand-based block tiering, means that economy doesn’t come with a drop in performance – it actually increases performance!
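For illustration, here’s a minimal sketch of how a tiered, 2-way mirrored Storage Spaces virtual disk can be built with PowerShell. The pool, tier, and disk names, and the tier sizes, are assumptions for the example, not a prescription:

```powershell
#Create a pool from all disks that are capable of being pooled
$Disks = Get-PhysicalDisk -CanPool $True
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "*Spaces*" -PhysicalDisks $Disks

#Define the SSD and HDD tiers by media type
$SSDTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$HDDTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

#Create a 2-way mirrored virtual disk that spans both tiers (sizes are examples)
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV1" `
    -ResiliencySettingName Mirror -StorageTiers $SSDTier, $HDDTier `
    -StorageTierSizes 200GB, 4TB
```

Hot blocks are promoted to the SSD tier and cold blocks demoted to the HDD tier automatically, on a schedule, which is what produces the “economy plus performance” result.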

Cluster-in-a-Box

One of the sessions, presented by Microsoft Clustering Principal PM Lead Elden Christensen, focused on a new type of hardware solution that MSFT wanted to see vendors develop. A Cluster-in-a-Box (CiB) would provide a small storage or Hyper-V cluster in a single pre-packaged and tested enclosure. That enclosure would contain:

  • Up to 2 or 4 independent blade servers
  • Shared storage in the form of a Storage Spaces “JBOD”
  • Built in cluster networking
  • Fault tolerant power supplies
  • The ability to expand via SAS connections (additional JBODs)

I loved this idea; here was a hardware solution that was perfect for a Hyper-V cluster in an SME or a remote office/branch office (ROBO), and the deployment could be really simple – there are few decisions to make about the spec, performance would be awesome via storage tiering, and deployment could be really quick.

DataON CiB-9112 V12

This is the second generation of CiBs that I have worked with from DataON, a company that specialises in building state-of-the-art and Microsoft-certified Storage Spaces hardware. My employers, MicroWarehouse Ltd. (an Irish company that has nothing to do with an identically named UK company) distributes DataON hardware to resellers around Europe – everywhere from Galway in west Ireland to Poland so far.

The CiB concept is simple. There are two blade servers in the 2U enclosure. Each has the following spec:

  • Dual Intel® Xeon® E5-2600v3 (Haswell-EP)
  • DDR4 Reg. ECC memory up to 512GB
  • Dual 1G SFP+ & IPMI management “KVM over IP” port
  • Two PCI-e 3.0 x8 expansion slots
  • One 12Gb/s SAS x4 HD expansion port
  • Two 2.5” 6Gb/s SATA OS drive bays

Networking-wise, there are 4 NICs per blade:

  • 2 x LAN-facing Intel 1 GbE NICs, which I team for a virtual switch with management OS sharing enabled (with QoS enabled).
  • 2 x internal Intel 10 GbE NICs, which I use for cluster communications and SMB 3.0 Live Migration. These NICs are internal copper connections, so you do not need an external 10 GbE switch. I do not team these NICs, and they should be on 2 different subnets for cluster compatibility.

You can use the PCI-e expandability to add more SAS or NIC interfaces, as required, e.g. DataON work closely with Mellanox for RDMA networking.

The enclosure also has:

  • 12-bay 3.5”/2.5“ shared drive slots (with caddies)
  • 1023W (1+1) redundant power


Typically, the 12 shared drive bays are used as a single storage pool with 4 x SSDs (performance) and 8 x 7200 RPM HDDs (capacity). Tiering in Storage Spaces works very well. Here’s an anecdote I heard while in a pre-sales meeting with one of our resellers:

They put a CiB (6 Gb SAS, instead of 12 Gb as on the CiB-9112) into a customer site last year. That customer had a regular batch job that would normally take hours, and they had gotten used to working around that dead time. Things changed when the VMs were moved onto the CiB. The batch job ran so quickly that the customer was sure that it hadn’t run correctly. The reseller double-checked everything, and found that Storage Spaces tiering and the power of the CiB blades had greatly improved the performance of the database in question, and everything was actually fine – great, actually!

And here was the kicker – that customer got a 2 node Hyper-V cluster with shared storage in the form of a DataON CiB for less than the cost of a SAN, let alone the cost of the 2 Hyper-V nodes.

How well does this scale? I find that CPU/RAM are rarely the bottlenecks in the SME. There are plenty of cores/logical processors in the E5-2600v3, and 512 GB RAM is more than enough for any SME. Disk is usually the bottleneck. With a modest configuration (not the max) of 4 x 200 GB SSDs and 8 x 4 TB drives you’re looking at around 14 TB of usable 2-way mirrored (like RAID 10) storage. Or you could have 4 x 1.6 TB SSDs and 8 x 8 TB HDDs and have around 32 TB of usable 2-way mirrored storage. That’s plenty!
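If you want to sanity-check those numbers, here’s the rough arithmetic: drives are sold in decimal terabytes (10^12 bytes), Windows reports binary TiB, and a 2-way mirror writes every block twice, so usable space is about half of raw:

```powershell
#Drives are marketed in decimal bytes; Windows reports binary TiB,
#which is where "4 TB" drives lose ~9% on paper before any mirroring
$SSDBytes = 4 * 200e9          #4 x 200 GB SSDs (performance tier)
$HDDBytes = 8 * 4e12           #8 x 4 TB HDDs (capacity tier)
$RawBytes = $SSDBytes + $HDDBytes

#A 2-way mirror halves the raw capacity
$UsableTiB = ($RawBytes / 2) / 1TB
"{0:N1} TiB usable before pool overhead" -f $UsableTiB
```

That lands at roughly 14.9 TiB before pool metadata and the free space you hold back for rebuilds and expansion, which is where the ~14 TB figure comes from.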

And if that’s not enough, then you can expand the CiB using additional JBODs.

My Hands-On Experience

Lots of hardware goes through our warehouse that I never get to play with. But on occasion, a reseller will ask for my assistance. A couple of weeks ago, I got to do my first deployment of the 12 Gb SAS CiB-9112. We got it out of the box, and immediately I was impressed. It was obvious that the engineers had designed this hardware for the admins who manage it. It really is a very clever and modular design.


The two side-bezels on the front of the 2U enclosure have a power switch and USB port for each blade server.

On the top, you can easily access the replaceable fans via a dedicated hinged panel. At the back, both fault-tolerant power supplies are in the middle, away from the clutter at the side of a rack. The blades can be removed separately from their SAS controllers. And each of the RAID1 disks for the blades’ OS (the management OS for a Hyper-V cluster) can be replaced without removing the blade.

Racking a CiB is a simple task – the entire Hyper-V cluster is a single 2U enclosure, so there are no SAN controllers, SAN switches, SAN cables, or multiple servers. You slide a single 2U enclosure into its rail kit, plug in power, networking, and KVM, and you’re done.

Windows Server is pre-installed and you just need to modify the installation type (from eval) and enter your product key using DISM. Then you prep the cluster – DataON pre-installs MPIO, Hyper-V, and Failover Clustering to make your life easy.

My design is simple:

  • The 1 GbE NICs are teamed, connected to a weight-based QoS Hyper-V switch, and shared with the parent. A weight of 50 is assigned to the default bucket QoS rule, and 50 is assigned to the management OS virtual NIC.
  • The 10 GbE NICs are on 2 different subnets.
  • I enable SMB 3.0 Live Migration on both nodes in Hyper-V Manager.
  • MPIO is configured with the Least Blocks (LB) policy.
  • I ensure that VMQ is disabled on the 1 GbE NICs and enabled on the 10 GbE NICs.
  • I form the cluster with no disks, and configure the 10 GbE NICs for Live Migration.
  • A single clustered storage pool is created in Failover Cluster Manager.
  • A 1 GB (the resulting disk is always slightly bigger) 2-way mirrored virtual disk is created and configured as the witness disk in the cluster.
  • I create 2 virtual disks to be used as CSVs in the cluster, with 64 KB interleaves and formatted with 64 KB allocation unit size. The CSVs are tiered with some SSD and some HDD … I always leave free space in the pool to allow expandability of one CSV over the other. HA VMs are balanced between the 2 CSVs.
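The networking and cluster pieces of the design above can be sketched in PowerShell roughly as follows. All of the NIC, team, switch, host, and cluster names here are assumptions for the example – substitute your own:

```powershell
#Team the two LAN-facing 1 GbE NICs and build a weight-based QoS virtual switch
New-NetLbfoTeam -Name "LANTeam" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent
New-VMSwitch -Name "External1" -NetAdapterName "LANTeam" -MinimumBandwidthMode Weight -AllowManagementOS $True

#Assign a weight of 50 to the default bucket and 50 to the management OS vNIC
Set-VMSwitch "External1" -DefaultFlowMinimumBandwidthWeight 50
Set-VMNetworkAdapter -ManagementOS -Name "External1" -MinimumBandwidthWeight 50

#VMQ off on the 1 GbE NICs, on for the internal 10 GbE NICs
Disable-NetAdapterVmq -Name "NIC1", "NIC2"
Enable-NetAdapterVmq -Name "10GbE1", "10GbE2"

#Form the cluster with no disks; the pool, witness, and CSVs are added afterwards
New-Cluster -Name "CiBCluster" -Node "Host01", "Host02" -NoStorage
```

Run the networking steps on both blades; the cluster creation is run once. The 10 GbE cluster/Live Migration subnets are configured on the NICs themselves before the cluster is formed.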

What about DCs? If the customer is keeping external DCs then everything is done. If they want DCs running on the CiB then I always deploy them as non-HA DCs that are stored on the C: of each CiB blade. I know that since WS2012, we are supposed to be able to run DCs as HA VMs on the cluster, but I’ve experienced issues with that.

With some PowerShell, the above process is very quick, and to be honest, the slowest bit is always the logistics of racking the CiB. I’m usually done in the early afternoon, and that includes some show’n’tell.

Summary

If you want a tidy, quick & easy to deploy, and affordable HA solution for an SME or ROBO then the DataON CiB-9112 V12 is an awesome option. If I were doing our IT from scratch, this is what I would use (we had existing servers and added a DataON JBOD, and recently replaced the servers while retaining the JBOD). I love how tidy the solution is, and how simple it is to set up, especially with some fairly basic PowerShell. So check it out, and see what it can do for you.

My WS2016 Hyper-V Session at Future Decoded

I had fun presenting at this Microsoft UK event in London. Here’s a recording of my session on Windows Server 2016 (WS2016) Hyper-V, featuring failover clustering, storage, and networking:


More sessions can be found here.

Microsoft News – 19 October 2015

It turns out that Microsoft has been doing some things that are not Surface-related. Here’s a summary of what’s been happening in the last while …

Hyper-V


Windows Server

Windows Client

Azure

Office 365

Miscellaneous

Microsoft News – 30 September 2015

Microsoft announced a lot of stuff at AzureCon last night so there’s lots of “launch” posts to describe the features. I also found a glut of 2012 R2 Hyper-V related KB articles & hotfixes from the last month or so.

Hyper-V

Windows Server

Azure

Office 365

EMS

Microsoft News – 28 September 2015

Wow, the year is flying by fast. There’s a bunch of stuff to read here. Microsoft has stepped up the amount of information being released on WS2016 Hyper-V (and related) features. EMS is growing in terms of features and functionality. And Azure IaaS continues to release lots of new features.

Hyper-V

Windows Client

Azure

System Center

Office 365

EMS

Security

Miscellaneous