Microsoft Ireland Partner Event: Virtualisation & Management

This is a follow-on from my post earlier today on the 2010 Microsoft Ireland Partner event.  This post focuses on the virtualisation track.

Ronan Geraghty (owner of the Server business in Ireland, former DPE) introduces the session.  Wilbour Craddock (partner technical sales) takes over to talk about Windows Server 2008 R2.

The story for Windows Server 2008 R2 is:

  • Streamlined management
  • Robust web platform
  • “Better Together” with Windows 7
  • Virtualisation with Hyper-V

It’s an evolution of Windows Server 2008, not an entirely new operating system.  However, there is a lot more in there.  Read the details at that link; it’ll save me typing it all out.

The key to the MS platform is System Center. OpsMgr for fault/performance/audit collection, VMM for virtualisation, DPM for backup and ConfigMgr for deployment, auditing and reporting.  Service Desk will be a complete helpdesk solution implementing MOF/ITIL.

Liam Cronin (Compete Lead) takes over to talk about the compete message.  MS Ireland is big on competing with VMware.

VMware

Strategy:

  • 100% of Fortune 100 and 96% of Fortune 1000
  • Win the desktop through VDI
  • Win the cloud

Evolution:

  • Rich virtualisation portfolio
  • Acquiring a lot of technology through company takeovers

Partners:

  • 700+ tech partners
  • 65% of partner revenue through OEM’s
  • Rich virtual appliance market

MS Differentiators

  • MS is a platform company.  VMware is a product company, not a platform one.
  • MS is competitive with Windows Server 2008 R2 – Liam claims that MS is ahead on features … I’m as pro-Hyper-V as it gets and I disagree.  MS has the core stuff and it works excellently, but it does not have the same feature set as VMware.
  • MS is more cost effective
  • Management & security (very true)

Why pay a “vTax” to VMware when virtualisation is built into Windows?

Objection Handling

Made a commitment to VMware already: Don’t need to rip/replace.  You can use System Center to manage, maybe use Hyper-V for newer stuff.  The virtualisation platform isn’t as important as the management of it.

4 questions to ask VMware customers:

  • Why does VMware have a mandatory support contract that doesn’t include upgrades?
  • Why do they have to pay more money for VMotion?  Live Migration is in the free Hyper-V Server 2008 R2.
  • How does VMware provide management for operating systems and applications running on their hypervisor?
  • Ask VMware what their virtualised desktop solution is for roaming or remote users who are disconnected.

Citrix V-Alliance

Matthew Brenchley – Strategic Alliances Manager from the UK.

Citrix has 21 years of partnership with Microsoft. 

Essentials For Hyper-V

OK – I want a Citrix person to say exactly what this is.  I have yet to see a clear explanation.  Where is the comic book store guy when you need him …. oh … “Worst Marketing Ever”.  We get the pitch on VDI and how Citrix can work on the MS platform.  Not much meat on these bones; the trend continues unfortunately.  I guess I’ll have to wait until PubForum to hear technical information on the Citrix side of things.

Over to marketing person, Karen Reilly.  This is a pitch for recruiting members into the V-Alliance.  Focus appears to be on desktop virtualisation.  Lead generation support and POC funding.  I’m glad I have guest wifi access.  Seriously, VDI is an expensive model and is a very niche solution.

Afterwards

I chat with Will and he tells me what Citrix Essentials is about.  (a) It allows block level and de-duplicated replication of VM’s between sites.  You can use different storage systems that don’t have replication engines and you do not need dark fibre – unlike controller based replication systems (b) It provides a lab/development deployment solution where the MS solution is purely developer driven in Visual Studio 2010.

Publish Internet Explorer From XP Mode

Do you want to use an older version of IE on Windows 7?  You cannot install an older version natively; you have to use IE8.  Compatibility mode may fix most things, but there’s possibly that one LOB application that won’t play nice.  You can try to get the app vendor to update it (maybe they are gone, maybe it takes too long or will cost too much).  You can run an older generation OS on Terminal Services. 

However, a tidy way to do it is to use XP Mode and use the older versions of IE that will run on XP.  The shortcut can be published to the Windows 7 start menu to make it easy to use for the end user.  We showed this and explained this scenario in some of the Windows 7/Server 2008 R2 launch events in Ireland.  I put it together at the last second in the Dublin events as one of the speakers talked about the scenario … yeah we were that “seat of the pants” and we reacted to questions being asked.  Flexibility rules.

Ben Armstrong (product manager AKA the Virtual PC Guy) blogged how to do this.  It is very easy.

First 5000 Downloads Free: Partition Manager 10 for Virtual Machines

I’d normally post this one in the evening after work but it is a limited time offer.  I just got an email and the contents were:

“Partition Manager 10 for Virtual Machines is out.

Now all IT administrators have a great chance to have Partition Manager 10 for Virtual Machines for FREE – currently we’re announcing this giveaway for up to 5000 copies.

It is a special version of our Linux/DOS bootable environment that contains fully functional Paragon Partition Manager 10 Professional. It is optimized to work with virtual disks of any virtualization software vendor: backup/restore virtualized systems, re-partition and clone virtual disks, fix boot problems, optimize performance of NTFS and FAT file systems, etc.

The software and user manual can be downloaded from here.

Please, note that it requires registration”.

It is for non-commercial use only.


Windows Server 2008 R2 Licensing Overview

“The Windows Server 2008 R2 licensing guide provides an in-depth overview of the Windows Server 2008 R2 core product offerings, including product names, available sales channels, licensing models, and number of running instances allowed per license in physical and virtual operating system environments (POSEs and VOSEs)”.

I Prefer Working With VM’s

Today I was working with one of my colleagues to upgrade an application we are running on some physical servers.  We’re both working from home with a VPN connection into the data centre.  Reboots were required.  This is the bit I hate … a continuous ping times out for what feels like an eternity.  Eventually that first response appears and the tightening of the chest relaxes 🙂

VM reboots are so quick because there is no hardware to POST.  I could also take a copy of the VM to test the upgrade process before hitting production.

Using VMM 2008 R2 For V2V

It is possible using Virtual Machine Manager 2008 R2 to migrate virtual machines from one hardware virtualisation platform to another.  This is known as Virtual to Virtual or V2V.  The possible migrations you can do are:

  • Migrate from Virtual Server 2005 R2 SP1 to Hyper-V
  • Migrate a VMware Virtual Machine from the VMM Library to Virtual Server 2005 R2 SP1 or to Hyper-V
  • Migrate a VMware Virtual Machine from a VMware host to Virtual Server 2005 R2 SP1 or to Hyper-V

This is a one-way process.  You cannot go from Hyper-V back to the original host platform.

Supported V2V VM Operating Systems

Just like with P2V, there is a matrix of supported operating systems:

Operating System | VMM 2008 | VMM 2008 R2
Microsoft Windows 2000 Server with Service Pack 4 (SP4) or later | Yes | Yes
Microsoft Windows 2000 Advanced Server SP4 or later | Yes | Yes
Windows XP Professional with Service Pack 2 (SP2) or later | Yes | Yes
Windows XP 64-Bit Edition SP2 or later | Yes | Yes
Windows Server 2003 Standard Edition (32-bit x86) | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Enterprise Edition (32-bit x86) | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Datacenter Edition (32-bit x86) | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 x64 Standard Edition | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Enterprise x64 Edition | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Datacenter x64 Edition | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Web Edition | Yes | Yes
Windows Small Business Server 2003 | Yes | Yes
Windows Vista with Service Pack 1 (SP1) | Yes | Yes
64-bit edition of Windows Vista with Service Pack 1 (SP1) | Yes | Yes
Windows Server 2008 Standard 32-Bit | Yes | Yes
Windows Server 2008 Enterprise 32-Bit | Yes | Yes
Windows Server 2008 Datacenter 32-Bit | Yes | Yes
64-bit edition of Windows Server 2008 Standard | Yes | Yes
64-bit edition of Windows Server 2008 Enterprise | Yes | Yes
64-bit edition of Windows Server 2008 Datacenter | Yes | Yes
Windows Web Server 2008 | Yes | Yes
Windows 7 | No | Yes
64-bit edition of Windows 7 | No | Yes
64-bit edition of Windows Server 2008 R2 Standard | No | Yes
64-bit edition of Windows Server 2008 R2 Enterprise | No | Yes
64-bit edition of Windows Server 2008 R2 Datacenter | No | Yes
Windows Web Server 2008 R2 | No | Yes

Not Got VMM?

There is a manual process to convert Virtual Server 2005 R2 SP1 VM’s to Hyper-V if you do not have VMM.  There are 3rd party and free tools for this.  There are also 3rd party and free tools you can use to V2V from VMware to Hyper-V without VMM.  However, these are very manual processes and VMM makes it all that much easier through its job process.

Destination Host Requirements

The destination machine should have the disk and the RAM to cater for the VM.  MS actually recommends RAM of the VM + 256MB for the conversion process.  The host should also be in a network that allows all necessary communications with the VMM server.

Original VM Requirements

Before you migrate any VMware machine to a Microsoft platform you must uninstall the VMware additions/tools.  That’s the VMware equivalent of the Microsoft integration components/services.  You also need to remove any checkpoints.

Library V2V

There are then two possible ways to do the conversion.  As I stated earlier, you can copy a VMware VM into the library and V2V the VM from there.  To do this in VMM, choose to use the Convert Virtual Machine Wizard.  You cannot V2V a VMware VM that uses raw disks (same idea as pass through disks).  You need access to the .VMX file (describes the VM) and the VMDK file(s) (the virtual hard disks).  Each VMDK will be converted into a VHD.

Host V2V

If your VM is on another host, e.g. Virtual Server 2005 R2 SP1 or VMware, then make sure the source host is being managed by VMM.  You can then use an offline migration, i.e. power off the VM, right-click the VM and Migrate it.  Make sure the hosts filter is adjusted to show your destination Microsoft virtualisation host.

Integration Components

As the job completes, you’ll see that VMM installs the integration components/services for Hyper-V.  That optimises the performance of the VM and cuts down on the manual labour.

Linux VM’s

Interestingly, Microsoft says you can V2V a Linux VM.  However, any OS not in the above table will not get the integration components.  And remember, only certain enterprise versions of SUSE (no IC’s) and RedHat (no IC’s) are supported.  If you V2V a supported SLES VM you will have to manually install the Linux integration components.

Monitor CSV Free Space?

This is something that struck me today.  I was doing some checks in Operations Manager to see what free space was like on some of the servers we run online backup services with.  Then I thought – let’s have a look at the cluster shared volume on our Hyper-V cluster.  The problem is that Operations Manager deals with logical drives that have a letter.  It seems to ignore drives such as the CSV: a mounted drive that appears as a folder in C:\ClusterStorage\Volume1, Volume2, etc.

There are two ways to check this manually that I have found so far.  The first is to open up the Failover Clustering MMC and connect to the cluster.  You’ll see the size and free space for the Cluster Shared Volume there. 

[screenshot]

You can also do it in VMM by right-clicking on the cluster object and viewing the properties.

[screenshot]

You can ignore the witness disk (at the top); I really hope you’re not so desperate for VM storage that you consider that!

I cannot find anything in Operations Manager for tracking this critical function.  It’s not in the Failover Clustering MP (where it probably should be), Hyper-V or VMM management packs.

I’d advise that you keep an eye on this, especially if you are experiencing growth or using self service in VMM.  For example, I’ve switched to using dynamic VHD’s.  Yeah, early on that means I save on storage space.  My C: VHD’s are half the size they were with Windows Server 2008 fixed VHD’s.  But eventually they will grow and consume space on the CSV.  You need to know when to trigger a growth of the LUN on the SAN and expand the NTFS volume before we reach critical levels.  Bad things happen when a growing VHD doesn’t have any space left.
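Until one of those management packs covers CSV free space, a scheduled script watching the mount point itself can fill the gap.  Here’s a minimal sketch of the logic in Python (the mount point path and the 20% threshold are my own assumptions, not anything shipped by Microsoft):

```python
import shutil

def csv_free_percent(path):
    """Free space on a volume or mount point, as a percentage."""
    usage = shutil.disk_usage(path)  # also works on folder mount points
    return 100.0 * usage.free / usage.total

def needs_growth(path, threshold_pct=20.0):
    """True when free space has dropped below the alert threshold."""
    return csv_free_percent(path) < threshold_pct

# Hypothetical CSV mount point on a cluster node:
# if needs_growth(r"C:\ClusterStorage\Volume1"):
#     print("Time to grow the LUN and expand the NTFS volume")
```

Run something like that on a schedule and you get your early warning before a growing dynamic VHD hits the wall.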

Analyse Memory Of Saved State VM’s – And Host Security

Ben Armstrong (MS virtualisation whiz, The Virtual PC Guy) blogged overnight about a tool that allows administrators or developers to get at and analyse the contents of RAM in a saved state Hyper-V VM.  The tool is called VM2DMP.  It will convert a Hyper-V saved state memory to a DMP file that DMP analysis tools can load up.

This brings up a question: security.  Let’s forget about TV shows like 24 and movies like The Net.  That stuff can be fun.  Sit back and think: what is the easiest way to gain access to some piece of data or files?  The answer is simple.  Gain physical access and literally steal the disks.

If I had access to a saved state VM then in theory (if I had the skills) I could use that tool to convert the memory, poke around and gain access to sensitive items that were stored in RAM.

Virtualisation makes this even easier.  You don’t have to remove the disks because they’re files.  Gain access to the host and away you go.  I remember when I started working on server virtualisation and having a chat with my cousin who is a senior security consultant with a major international company.  His previous role had him working in a lab and projects were to think up scenarios and find threats.  So he asked me: “how do you secure VM’s when they are only files?”.

It’s possible.  But you’ve got to do all the right things.

Security starts and ends with physical access.  Control access to the computer room(s) and monitor that access.  Be very strict about it.  The data centre I work in doesn’t care if they see you every day.  If you are not expected or not properly processed then you don’t get past the front door.  It sounds inflexible and it is.  But damn is that place secure!

Hyper-V runs on Windows Server 2008 and Windows Server 2008 R2.  You have the option of enabling BitLocker on the host.  That’ll work on standalone hosts but not on a cluster.

Maintain control of who can log into the host.  You’ve got to treat host logon permissions the same way as you would treat computer room access.  That logon prompt and those drive access rights must be at least as important as access through the door.  If you can log into a host or gain access to drives remotely then the door is wide open to play. 

There is no need to give access (administrative or interactive logon) to a host beyond the virtualisation team.  Rights can be delegated.  The ideal solution for that is VMM.  You can allow delegated administrators to do admin work via the VMM console.  Members of self-service roles can use the portal to deploy and manage VM’s.  If you don’t have VMM then you can use the Hyper-V authorisation manager to delegate access.

And yes, you can enable Remote Desktop and RDP into a VM.

Most of this stuff goes back to the basics of what you should be doing already.  Membership of domain admins should be very limited.  Nested groups and local group population via Group Policy (restricted groups) allows delegation.  Give only the access that is required.  Treat physical access like getting into somewhere like the NSA.  Use the right tools for the right reasons and don’t be lazy.  And the stuff I’m talking about here is not unique to Hyper-V.  You need to take precautions with all hardware virtualisation solutions.

The tool that Ben blogged about has legitimate uses; just be sure that only the right people get to use it on your Hyper-V hosts.


0x0000007C BUGCODE_NDIS_DRIVER Blue Screen on Windows Server 2008 R2 with NLB

There is a blog post by a Microsoft employee that describes an issue where a virtual machine (Hyper-V or VMware) running Windows Server 2008 R2 will crash.  The VM is configured with Windows Network Load Balancing.  Their research found that the problem occurred with “certain” antivirus packages installed.  They didn’t (and probably won’t) specify which ones.  The two proposed solutions are:

  1. Configure NLB before installing the antivirus package
  2. Uninstall the antivirus package

Rough Guide To Setting Up A Hyper-V Cluster

EDIT: 18 months after I wrote it, this post continues to be one of my most popular.  Here is some extra reading for you if this topic is of interest:

A lot of people who will be doing this have never set up a cluster before.  They know of clusters from stories dating back to the NT4 Wolfpack, Windows Server 2000 and Windows Server 2003 days, when consultants made a fortune from 5-day-per-cluster projects for things like Exchange and SQL.

Hyper-V is getting more and more widespread.  And that means setting up highly available virtual machines (HAVM) on a Hyper-V cluster will become more and more common.  This is like Active Directory.  Yes, it can be a simple process.  But you have to get it right from the very start or you have to rebuild from scratch.

So what I want to do here is walk through what you need to do in a basic deployment for a Windows Server 2008 R2 Hyper-V cluster running a single Cluster Shared Volume (CSV) and Live Migration.  There won’t be screenshots – I have a single laptop I can run Hyper-V on and I don’t think work would be too happy with me rebuilding a production cluster for the sake of blog post screenshots 🙂  This will be rough and ready but it should help.

Microsoft’s official step by step guide is here.  It covers a lot more detail but it misses out on some things, like “how many NIC’s do I need for a Hyper-V cluster?”, “how do I set up networking in a Hyper-V cluster?”, etc.  Have a read of it as well to make sure you have covered everything.

P2V Project Planning

Are you planning to convert physical machines to virtual machines using Virtual Machine Manager 2008 R2?  If so and you are using VMM 2008 R2 and Operations Manager 2007 (R2), deploy them now (yes, before the Hyper-V cluster!) and start collecting information about your server network.  There are reports in there to help you identify what can be converted and what your host requirements will be.   You can also use the free MAP toolkit for Hyper-V to do this.  If your physical machine uses 50% of a quad core Xeon then the same VM will use 50% of the same quad core Xeon in a Hyper-V host (actually, probably a tiny bit more to be safe).

Buy The Hardware

This is the most critical part.  The requirements for Hyper-V are simple:

  • Size your RAM.  Remember that a VM has a RAM overhead of up to 32MB for the first GB of RAM and up to 8MB for each additional GB of RAM in that VM.
  • Size the host machine’s “internal” disk for the parent partition or host operating system.  See the Windows Server 2008 R2 requirements for that.
  • The CPU(s) should be x64 and support hardware-assisted virtualisation.  All of the CPU’s in the cluster should be from the same manufacturer.  Ideally they should all be the same spec, but things happen over time as new hardware becomes available and you’re expanding a cluster.  There’s a tick box for disabling advanced features in a virtual machine’s CPU to take care of that during a VM migration.
  • It should be possible to enable Data Execution Prevention (DEP) in the BIOS and it should work.  Make that one a condition of sale for the hardware.  DEP is required to prevent break out attacks in the hypervisor.  Microsoft took security very, very seriously when it came to Hyper-V.
  • The servers should be certified for Windows Server 2008 R2.
  • You should have shared storage that you will connect to the servers using iSCSI or Fibre Channel.  Make sure the vendor certifies it for Windows Server 2008 R2.  It is on this shared storage (a SAN of some kind) that you will store your virtual machines.  Size it according to your VM’s storage requirements.  If a VM has 2GB of RAM and 100GB of disk then size the SAN to be 102GB plus some space for ISO images (up to 5GB) and some free space for a healthy volume.
  • The servers will be clustered.  That means you should have a private network for the cluster heartbeat.  A second NIC is required in the servers for that.
  • The servers will need to connect to the shared storage.  That means either a fibre channel HBA or a NIC suitable for iSCSI.  The faster the better.  You may go with 2 instead of 1 to allow MPIO in the parent partition.  That allows storage path failover for each physical server.
  • Microsoft recommends a 4th NIC to create another private physical network between the hosts.  It would be used for Live Migration.  See my next page link for more information.  I personally don’t have this in our cluster and have not had any problems.  This is supported AFAIK.
  • Your servers will have virtual machines that require network access.  That requires at least a third NIC in the physical servers.  A virtual switch will be created in Hyper-V and that connects the virtual machines to the physical network.  You may add a 4th NIC for NIC teaming.  You may add many NIC’s here to deal with network traffic.  I’ve talked a good bit about this, including this post.  Just search my blog for more.
  • Try to get the servers to be identical.  And make sure everything has Windows Server 2008 R2 support and support for failover clustering.
  • You can have up to 16 servers in your cluster.  Allow for either N+1 or N+2.  The latter is ideal, i.e. there will be capacity for two hosts to be offline and everything is still running.  Why 2?  (a) stuff happens in large clusters and Murphy is never far away.  (b) if a Windows 8 migration is similar to a Windows Server 2008 R2 migration then you’ll thank me later – it involved taking a host from the old cluster and rebuilding it to be a host in a new cluster with the new OS.  N+1 clusters lost their capacity for failover during the migration unless new hardware was purchased.
  • Remember that a Hyper-V host can scale out to 64 logical processors (cores in the host) and 1TB RAM.
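The RAM overhead rule in the first bullet can be turned into a quick sizing calculation.  A rough sketch, using the worst-case figures quoted above (this is a planning estimate, not an exact Hyper-V formula, and the 2GB parent partition reserve is my own rule of thumb):

```python
def vm_ram_overhead_mb(vm_ram_gb):
    """Worst-case overhead: up to 32MB for the first GB of VM RAM,
    up to 8MB for each additional GB."""
    if vm_ram_gb < 1:
        raise ValueError("size VM RAM in whole GB, minimum 1")
    return 32 + 8 * (vm_ram_gb - 1)

def host_ram_needed_gb(vm_sizes_gb, parent_reserve_gb=2):
    """Total host RAM: each VM's RAM plus its overhead, plus a reserve
    for the parent partition (the 2GB default is an assumption)."""
    total_mb = sum(gb * 1024 + vm_ram_overhead_mb(gb) for gb in vm_sizes_gb)
    return total_mb / 1024 + parent_reserve_gb
```

Feed in the VM sizes your P2V reports predict and you have a first-pass host RAM figure.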

The Operating System

This one will be quick.  Remember that the Web and Standard editions don’t support failover clustering.

  • Hyper-V Server 2008 R2 is free, is based on the Core installation type and adds Failover Clustering for the first time in the free edition.  It also has support for CSV and Live Migration.  It does not give you any free licensing for VM’s.  I’d only use it for VDI, Linux VM’s or for very small deployments.
  • Windows Server 2008 R2 Enterprise Edition supports 8 CPU sockets and 2TB RAM.  What’s really cool is that you get 4 free Windows Server licenses to run on VM’s on the licensed host.  A host with 1 Enterprise license effectively gets 4 free VM’s.  You can over license a host too: 2 Enterprise licenses = 8 free VM’s.  These licenses are not transferable to other hosts, i.e. you cannot license 1 host and run the VM’s on another host.
  • Windows Server 2008 R2 DataCenter Edition allows you to reach the maximum scalability of Hyper-V, i.e. 64 logical processors (cores in the host) and 1TB RAM.  DataCenter edition as a normal OS has greater capacities than this; don’t be fooled into thinking Hyper-V can reach those.  It cannot do that despite what some people are claiming is supported.
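The Enterprise edition maths is worth encoding when you are pricing up a cluster.  A trivial sketch of the rule quoted above (4 virtual instances per Enterprise licence assigned to a host; the function name is mine):

```python
def free_vm_instances(enterprise_licences):
    """Each Windows Server 2008 R2 Enterprise licence assigned to a host
    grants 4 Windows Server virtual instances on that host only."""
    return 4 * enterprise_licences

# 1 licence = 4 free VM's on that host; over-licensing with 2 = 8.
```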

All hosts in the cluster should be running the same operating system and the same installation type.  That means all hosts will be either Server Core or full installations.  I’ve talked about Core before.  Microsoft recommends it because of the smaller footprint and less patching.  I recommend a full installation because the savings are a few MB of RAM and a few GB of disk.  You may have fewer patches with Core but they are probably still every month.  You’ll also find it’s harder to repair a Core installation and 3rd party hardware management doesn’t have support for it.

Install The Hardware

First things first: get the hardware installed.  If you’re unsure of anything then get the vendor to install it.  You should be buying from a vetted vendor with cluster experience.  Ideally they’ll also be a reputable seller of enterprise hardware, not just honest Bob who has a shop over the butchers.  Hardware for this stuff can be fiddly.  Firmware across the entire hardware set has to be matching and compatible.  Having someone who knows this stuff rather than searches the Net for it makes a big difference.  You’d be amazed by the odd things that can happen if this isn’t right.

As the network stuff is being done, get the network admins to check switch ports for trouble.  Ideally you’ll use cable testers to test any network cables being used.  Yes, I am being fussy but little things cause big problems.

Install The Operating Systems

Make sure they are all identical.  An installation that is done using an answer file helps there.  Now you should identify which physical NIC maps to which Local Area Connection in Windows.  Take care of any vendor specific NIC teaming – find out exactly what your vendor prescribes for Hyper-V.  Microsoft has no guidance on this because teaming is a function of the hardware vendor.  Rename each Local Area Connection to its role, e.g.

  • Parent
  • Cluster
  • Virtual 1

What you’ll have will depend on how many NIC’s you have and what roles you assigned to them.  Disable everything except for the first NIC.  That’s the one you’ll use for the parent partition.  Don’t disable the iSCSI ones.

Patch the hosts for security fixes.  Configure TCP/IP for the parent partition NIC.  Join the machines to the domain.  I strongly recommend setting up the constrained delegation for ISO file sharing over the network.

Do whatever antivirus you need to.  Remember you’ll need to disable scanning of any files related to Hyper-V.  I personally advise against putting AV on a Hyper-V host because of the risks associated with this.  Search my blog for more.  Be very sure that the AV vendor supports scanning files on a CSV.  And even if they do, there’s no need to be scanning that CSV.  Disable it.

Enable the Cluster NIC for the private heartbeat network.  This will either be a crossover cable between 2 hosts in a 2 host cluster or a private VLAN on the switch dedicated just to these servers and this task.  Configure TCP/IP on this NIC on all servers with an IP range that is not routed on your production network.  For example, if your network is 172.16.0.0/16 then use 192.168.1.0/24 for the heartbeat network.  Ping test everything to make sure every server can see every other server.
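You can sanity-check that heartbeat addressing with a few lines of code.  A sketch using Python’s ipaddress module (the function name and the sample subnets are illustrative, not a prescribed tool):

```python
import ipaddress

def heartbeat_ip_ok(heartbeat_ip, production_net):
    """The heartbeat address should be in private space and must not
    fall inside the routed production subnet."""
    ip = ipaddress.ip_address(heartbeat_ip)
    return ip.is_private and ip not in ipaddress.ip_network(production_net)
```

Run it over every host’s heartbeat address before you bother the network admins with a ping-test session.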

If you have a Live Migration NIC (labelled badly as CSV in my example diagrams) then set it up similarly to the Cluster NIC.  It will have its own VLAN and its own IP range, e.g. 192.168.2.0/24.

Enable the Virtual NIC.  Unbind every protocol you can from it, e.g. if using NIC teaming you won’t unbind that.  This NIC will not have a TCP configuration so IPv4 and IPv6 must be unbound.  You’re also doing this for security and simplicity reasons.

Here’s what we have now:

[diagram]

Once you have reached here with all the hosts we’re ready for the next step.

Install Failover Clustering

You’ll need to figure out how your cluster will gain a quorum, i.e. be able to make decisions about failover and whether it is operational or not.  This is to do with host failure and how the remaining hosts vote.  It’s done in 2 basic ways.  There are actually 4 ways but it breaks down to 2 ways for most companies and installations:

  1. Node majority: This is used when there are an odd number of hosts in the cluster, e.g. 5 hosts, not 4.  The hosts can vote and there will always be a majority winner, e.g. 3 to 2.
  2. Node majority + Disk: This is used when there are an even number of hosts, e.g. 16.  It’s possible there would be an 8 to 8 vote with no majority winner.  The disk acts as a tie breaker.
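The voting arithmetic behind both models is simple majority maths.  A sketch (the disk witness simply contributes one extra vote; the function is mine, not a Microsoft formula):

```python
def votes_needed(node_count, with_disk_witness=False):
    """Smallest number of votes that forms a strict majority.
    The disk witness, if present, adds one extra vote."""
    total_votes = node_count + (1 if with_disk_witness else 0)
    return total_votes // 2 + 1

# 5 nodes, node majority: 3 votes carry the cluster (3-to-2 always resolves).
# 16 nodes + disk witness: 17 votes, 9 needed, so an 8-8 tie is impossible.
```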

Depending on who you talk to or what GUI in Windows you see, this disk is referred to either as a Witness Disk or a Quorum Disk.  I recommend creating it in a cluster no matter what.  Your cluster may grow or shrink to an uneven number of hosts and may need it.  You can quickly change the quorum configuration based on the advice in the Failover Clustering administration MMC console.

The disk only needs to be 500MB in size.  Create it on the SAN and connect the disk to all of your hosts.  Log into a host and format the disk with NTFS.  Label it with a good name like Witness Disk.

I’m ignoring the other 2 methods because they’ll only be relevant in stretch clusters that span a WAN link and I am not talking about that here.

Use Server Manager to install the role on all hosts.  Now you can set up the cluster.  The wizard is easy enough.  You’ll need a computer name/DNS name for your cluster and an IP address for it.  This is on the same VLAN as the Parent NIC in the hosts.  You’ll add in all of the hosts.  Part of this process does a check on your hardware, operating system and configuration.  If this passes then you have a supported cluster.  Save the results as a web archive file (.MHT).

The cluster creation will include the quorum configuration.  If you have an even number of hosts then go with the + Disk option and select the witness disk you just created.  Once it’s done your cluster is built.  It is not hard and only takes about 5 to 10 minutes.  Use the Failover Clustering MMC to check the health of everything.  Pay attention to the networks.  Stray networks may appear if you didn’t unbind IPv4 or IPv6 from the virtual network NIC in the hosts.

If you went with Node Majority then here’s my tip.  Go ahead and launch the Failover Clustering MMC.  Add in the storage for the witness disk.  Label it with the same name you used for the NTFS volume.  Now leave it there should you ever need to change the quorum configuration.  A change is no more than 2 or 3 mouse clicks away.

Now you have:

[diagram]

Install Hyper-V

Enable the Hyper-V role on each of your hosts, one at a time.  Make sure the logs are clean after the reboot.  Don’t go experimenting yet, please!

Cluster Shared Volume

CSV is seriously cool.  Most installations will have most, if not all, VM’s stored on a CSV.  CSV is only supported for Hyper-V and not for anything else as you will be warned by Microsoft.

Set up your LUN on the physical storage for storing your VM’s.  This will be your CSV.  Connect the LUN to your hosts.  Format the LUN with NTFS.  Set it to use GPT so it can grow beyond 2TB.  Label it with a good name, e.g. CSV1.  You can have more than 1 CSV in a cluster.  In fact, a VM can have its VHD files on more than one CSV.  Some are doing this to attempt to maximise performance.  I’m not sold that will improve performance but you can test it for yourself and do what you want here.

DO NOT BE TEMPTED TO DEPLOY A VM ON THIS DISK YET.  You’ll lose it after the next step.

Use the Failover Clustering MMC to add the disk in.  Label it in Failover Clustering using the same name you used when you formatted the NTFS volume.  Now configure the CSV.  When you’re done you’ll find the disk has no drive letter.  In fact, it’ll be “gone” from the Windows hosts.  It’ll actually be mounted as a folder on the C: drive of all of your hosts in the cluster, e.g. C:\ClusterStorage\Volume1.  This can be confusing at first.  It’s enough to know that all hosts will have access to this volume and that your VM’s are not really on your C: drive.  They are really on the SAN.  C:\ClusterStorage\Volume1 is just a mount point to a letterless drive.
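The same steps can be done from PowerShell with the FailoverClusters module. The resource name is an assumption based on the label used above:

```powershell
Import-Module FailoverClusters

# Enable CSV on the cluster - a one-time switch for the whole cluster
(Get-Cluster).EnableSharedVolumes = "Enabled"

# Convert the clustered disk into a Cluster Shared Volume
Add-ClusterSharedVolume -Name "CSV1"
```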

Now we have this:

image

Virtual Networking

Hopefully you have read the previously linked blog post about networking in Hyper-V.  You should be fully educated about what’s going on here.

Here’s the critical things to know:

  • You really shouldn’t put private or internal virtual networks on a Hyper-V cluster when using more than one VM on those virtual networks.  Why?  A private or internal virtual network on host A cannot talk with a private or internal network on host B.  If you set up VM1 and VM2 on such a virtual network on host A what happens when one of those VM’s is moved to another host?  It will not be able to talk to the other VM.
  • If you create a virtual network on one host then you need to create it on all hosts.  You also must use identical names across all hosts.  So, if I create External Network 1 on host 1 then I must create it on host 2.

Create your virtual network(s) and bind them to your NIC’s.  In my case, I’m binding External Network 1 to the NIC we called Virtual 1.  That gives me this:

image

All of my VM’s will connect to External Network 1.  An identically named external virtual network exists on all hosts.  The physical Cluster 1 NIC is switched identically on all servers on the physical network.  That means if VM1 moves from host 1 to host 2 it will be able to reconnect to the virtual network (because of the identical name) and be able to reach the same places on the physical network.  What I said for virtual network names also applies to tags and VLAN ID’s if you use them.
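Once VMM 2008 R2 is in place (see the VMM section later), virtual network creation can be scripted too, which guarantees the names really are identical on every host. This is a sketch using the VMM cmdlets — the VMM server, host names and the property used to match the NIC are all assumptions, so verify them in your environment:

```powershell
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmm01" | Out-Null

foreach ($HostName in "Host1", "Host2") {
    $VMHost = Get-VMHost -ComputerName $HostName

    # Find the physical NIC we renamed to "Virtual 1" during the OS install
    $Adapter = Get-VMHostNetworkAdapter -VMHost $VMHost |
        Where-Object { $_.Name -match "Virtual 1" }

    # Identical virtual network name on every host in the cluster
    New-VirtualNetwork -Name "External Network 1" -VMHost $VMHost `
        -VMHostNetworkAdapter $Adapter
}
```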

Get Busy!

Believe it or not, you have just built a Hyper-V cluster.  Go ahead and build your VM’s.  Use the Failover Clustering MMC as much as possible.  You’ll see it has Hyper-V features in there.  Test live migration of the VM between hosts.  Do continuous pings to/from the VM during a migration.  Do file copies during a migration (pre-Vista OS on the VM is perfect for this test).  Make sure the VM’s have the integration components/integration services/enlightenments (or additions for you VMware people) installed.  You should notice no downtime at all.
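A live migration can also be kicked off from PowerShell, which is handy while you have a continuous ping (`ping -t vm1`) running in another window. The VM and host names are made up:

```powershell
Import-Module FailoverClusters

# Live migrate the VM1 cluster group to Host2 - no downtime expected
Move-ClusterVirtualMachineRole -Name "VM1" -Node Host2
```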

Remember that for Linux VM’s you need to set the MAC in the VM properties to be static or they’ll lose the binding between their IP configuration and the virtual machine NIC after a migration between hosts.
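You can set the static MAC in the VM’s properties in the GUI, or sketch it with the VMM cmdlets. The VM name and MAC address here are made up, and the exact parameter names should be checked against your VMM build — the VM should be stopped first:

```powershell
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmm01" | Out-Null

# Pin the Linux VM's NIC to a static MAC so the IP binding survives migrations
$VM = Get-VM -Name "Linux1"
Get-VirtualNetworkAdapter -VM $VM |
    Set-VirtualNetworkAdapter -PhysicalAddressType Static `
        -PhysicalAddress "00:15:5D:0A:0B:01"
```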

Administration of VM’s

I don’t know why some people can’t see or understand this.  You can enable remote desktop in your VM’s operating system to do administration on them.  You do not need to use the Connect feature in Hyper-V Manager to open the Virtual Machine Connection.  Think of that tool as your virtual KVM.  Do you always use a KVM to manage your physical servers?  You do?  Oh, poor, poor you!  You know there’s about 5 of you out there.

Linux admins always seem to understand that they can use SSH or VNC.

Virtual Machine Manager 2008 R2

VMM 2008 R2 will allow you to manage Hyper-V clusters as well as VMware and Virtual Server 2005 R2 SP1.  There’s a workgroup edition for smaller installations.  It’s pretty damned powerful and simplifies many tasks we have to do in Hyper-V.  Learn to love the library because it’s a time saver: creating templates, sharing ISO’s (see constrained delegation above during the OS installation), administration delegation, the self service portal, etc.

You can install VMM 2008 R2 as a VM on the cluster but I don’t recommend it.  If you do, then use the Failover Clustering and Hyper-V consoles to manage the VMM virtual machine.  I prefer that VMM be a physical box.  I hate the idea of chicken and egg scenarios.  Can I think of one now?  No, but I’m careful.

To deploy the VMM agent you just need to add the Hyper-V cluster.  All the hosts will be imported and the agent will be deployed.  Now you can do all of your Hyper-V management via PowerShell, the VMM console and the Self Service console.
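Adding the cluster can itself be done from the VMM PowerShell console. The VMM server and cluster FQDN below are assumptions:

```powershell
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmm01" | Out-Null

# Add the whole cluster; VMM imports every node and deploys the agent to each
Add-VMHostCluster -Name "HVC1.demo.local" -Credential (Get-Credential)
```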

You also can use VMM to do a P2V conversion as mentioned earlier.  VSS capable physical machines that don’t run transactional databases can be converted using a live or online conversion.  Those other physical machines can be converted using an offline migration that uses Windows PE (pre-installation environment).  Additional network drivers may need to be added to WinPE.

You can enable PRO in your host group(s) to allow VMM to live migrate VM’s around the cluster based on performance requirements and bottlenecks.  I have set it to fully automatic on our cluster.  Windows 2008 quick migration clusters were different: automatic moves meant a VM could be offline for a small amount of time.  Live Migration in Windows Server 2008 R2 solves that one.

Figure out your administration model and set up your delegation model using roles.  Delegated administrators can use the VMM console to manage VM’s on hosts.  Self service users can use the portal.

Populate your library with hardware templates, VHD’s and machine templates.  Add in ISO images for software and operating systems.  An ISO create and mounting tool will prove very useful.

Operations Manager 2008 R2

My advice is “YES, use it if you can!”.  It’s System Center that makes Hyper-V so much better.  OpsMgr will give you all sorts of useful information on performance and health.  Import the management packs for Windows Server, clustering, and your hardware (HP and Dell do a very nice job on this; IBM don’t do so well at all – big surprise!).  Use the VMM integration to let OpsMgr and VMM work together.  VMM will use performance information from OpsMgr for intelligent placement of VM’s and for PRO.

I leave the OpsMgr agent installation as a last step on the Hyper-V cluster.  I want to know that all my tweaking is done … or hopefully done.  Otherwise there’s lots of needless alerts during the engineering phase.

Backup

Deploy your backup solution.  I’ve talked about this before so check out that blog post.  You will also want to back up VMM.  Remember that DPM 2007 cannot back up VM’s on a CSV.  You will need DPM 2010 for that.  Check with your vendor if you are using backup tools from another company.

Pilot

Don’t go running into production.  Test the heck out of the cluster.  Deploy lots of VM’s using your templates.  Spike the CPU in some of them (maybe a floating point calculator or a free performance tool) to test OpsMgr and VMM PRO.  Run live migrations.  Test P2V.  Test the CSV coordinator failover.  Test CSV path failover by disconnecting a running host from the SAN – the storage path should switch to using the Ethernet and route via another host.  Get people involved and have some fun with this stage.  You can go nuts while you’re not yet in production.
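If you don’t have a load tool to hand, a crude CPU spike for a test VM is a one-liner — a tight loop that pegs one core until you hit Ctrl+C. Run it inside the guest to exercise OpsMgr alerts and VMM PRO:

```powershell
# Crude CPU burner for pilot testing - pegs one core; Ctrl+C to stop
while ($true) { [void][Math]::Sqrt([double](Get-Random)) }
```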

Go Into Production

Kick up your feet, relax, and soak in the plaudits for a job well done.

EDIT #1:

I found this post by a Microsoft Failover Clustering program manager that goes through some of this if you want some more advice.

My diagrams do show 4 NIC’s, including the badly named CSV (Live Migration dedicated).  But as I said in the OS installation section, you only need 3 for a reliable system: (1) parent, (2) heartbeat/live migration, and (3) virtual switch.

EDIT #2

There are some useful troubleshooting tips on this page.  Two things should be noted.  First, many security experts advise that you disable NTLM in group policy across the domain — this solution requires NTLM.  Second, there are quotes out there about Windows Server 2008 failover clusters not needing a heartbeat network, but: “If CSV is configured, all cluster nodes must reside on the same non-routable network. CSV (specifically for re-directed I/O) is not supported if cluster nodes reside on separate, routed networks”.