Multi-Site Live Migration Clustering Whitepaper By HP

I just saw a tweet by one of the Microsoft virtualisation feeds, announcing that HP had released a white paper on how to do Hyper-V Live Migration/clustering across multiple sites using the HP StorageWorks Cluster Extension.

“This paper briefly touches upon the Hyper-V and Cluster Extension (CLX) key features, functionality, best practices, and various use cases. It also objectively describes the various scenarios in which Hyper-V and CLX complement each other to achieve the highest level of disaster tolerance.

This paper concludes with the step-by-step details for creating a CLX solution in Hyper-V environment from scratch. There are many references also provided for referring to relevant technical details”.

HP StorageWorks VDS & VSS Hardware Providers

You can find this here on the HP site.  The EVA download for VDS and VSS is here.

HP StorageWorks Arrays support Microsoft Virtual Disk Service (VDS) and Volume Shadow Copy Service (VSS) for Windows Server 2003 / 2008 / 2008 R2 Enterprise and Datacenter Editions.

VDS hardware providers enable volume and logical unit management of HP StorageWorks arrays from a central Windows Server Microsoft Management Console. Administrators can discover, configure and monitor supported HP storage devices from Windows Server operating environments.

VSS hardware providers enable point-in-time copies with nearly instant recovery of a single volume or multiple volumes. VSS providers are typically used in combination with a requestor application such as backup and recovery. Microsoft VSS services enable business applications to interface seamlessly with HP StorageWorks Arrays to perform point-in-time copies with nearly instant recovery.

Hardware Monitoring Using System Center Operations Manager

Hardware management is the one thing I am most worried about.  Sure, I could deploy each manufacturer’s management solution.  But do I really want a separate console for every type of system?  You don’t.  You want one central point, and that can be the Operations Manager console.

I’m most familiar with what HP does so I’ll explain it.  They provide an Insight Manager agent that detects health and performance issues in the hardware.  This covers all of the components, e.g. CPU, fans, disks, network cards, etc.  You can deploy an OpsMgr agent to the same server.  If you install the HP Insight Manager management pack then, after discovery, OpsMgr will be aware of the Insight Manager agent and everything that agent collects is surfaced in OpsMgr.  So now, if a disk fails you learn about it in OpsMgr.  If memory degrades, you learn about it in OpsMgr.  This is so handy because it is also where you get performance and health alerts for Windows, SQL, Exchange, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, etc.  You can extend it with 3rd party solutions to include your Cisco network and more.  Heck, there’s even a coffee pot management pack!!!

Back in the day, there appeared to be only support from HP and Dell.  But that has changed.

  • HP: Hewlett Packard has management packs for ProLiant servers, BladeSystem, and Integrity.  There is also a management pack for StorageWorks systems (e.g. EVA SAN).
  • Dell: I’ve never managed Dell machines with OpsMgr.  But I am told that Dell did a very nice job.  They are significant Microsoft partners.
  • IBM: I’m not the biggest fan of IBM – we have some X series stuff which I detest.  We had to get an IBM employee to download the management pack for us because all external links failed.  At the time, it appeared their “shared” download was only available from the IBM corporate network.  A Dutch friend had the same issue and I ended up sending him what I was given by IBM.  I’ll be honest, the IBM Director management pack is poor compared to the HP one.  IBM wants you to spend lots of money on consultancy-led Tivoli.  IBM Director is pretty poor too.  IBM Ireland employees have been unable to figure out how to monitor IBM DAS or to give me the documentation to do it.
  • Fujitsu: I have not seen a Fujitsu server since 2005.  Back then there was no MOM management pack for the Fujitsu Siemens servers; they wanted you to use a native solution only.  That has changed.  They now have ServerView Integration for Microsoft System Center Operations Manager 2007 and System Center Essentials 2007, and a ServerView Integration Pack for Microsoft System Center Operations Manager 2007.

That should get you started.  Each of the manufacturers seems to do things differently.  HP, for example, uses the above system for ProLiants, but blade enclosures require a piece of middleware.  Make sure you read the accompanying documentation from the OEM before you do anything.

Thanks to fellow MVP Mark Wilson for finding the links for Fujitsu. 

Planning HP EVA SAN Firmware Update

HP have contacted us requesting that we do a firmware update on our HP EVA SAN.  They seem pretty eager; they’re going to send in an engineer for free to do the update.  We’ll be responsible for the blade HBA mezzanine card and Virtual Connect updates.  There are compatibility lists for the entire set of firmwares in the blade enclosure and the SAN, so I’m expecting precise instructions and schedules for this work.

Thinking About HP and Microsoft’s Announcement

You’ve probably already read about the announcement where HP and Microsoft are aligning their technologies on the virtualisation, management, storage and deployment front.  Slap-bang in the middle of this is Hyper-V.

This was interesting.  Even up to last month, HP in Ireland was pretending Hyper-V didn’t exist.  Every bit of their marketing was 100% around VMware.  Their biggest enterprise storage/server reseller only started to play with Hyper-V for the very first time in December, and only to see if it was something they wanted to sell or not.

Now we read that HP considers Hyper-V to be their primary virtualisation platform.  I’d wonder if that’s something to do with EMC and VMware cosying up to each other.  HP would prefer their LeftHand, EVA and XP SANs to be in that position, I’m sure.  If they’d hung around then I’m sure Dell would have taken a strong position with Microsoft on this front.  They’re equally capable with their server, storage and System Center integration, which you could argue is as good as, if not better than, HP’s.  And there is the NetApp alliance with Microsoft on the Hyper-V front.  Talking with HP people from time to time and reading HP blogs, they really do not like NetApp!

I’m an HP Blade, EVA, Hyper-V and System Center customer, so the HP/MS announcement is good for me.  I’d guess we won’t see anything of substance this year.  I’d hope whatever comes won’t just be some paid-for bolt-on in the HP catalog.  I’d expect to see developments on EVA CLX to give us CSV between EVA SANs in different sites (it only supports per-LUN VM deployments at the moment) and a solution for the XP.  There’ll probably be some Hyper-V-branded LeftHand as well; HP are really pushing LeftHand iSCSI storage and I can see why.  It’s an attractive-looking package.

HP Updates Sizing Tool For W2008 Hyper-V

HP has released an updated version of their Hyper-V sizing tool to include Windows Server 2008 R2 Hyper-V.

“The HP Sizer for Microsoft Hyper-V 2008 R2 is an automated, downloadable tool that provides quick and helpful sizing guidance for “best-fit” HP server and storage configurations running in a Hyper-V R2 environment. The tool is intended to assist with the planning of a Hyper-V R2 virtual server deployment project. It enables the user to quickly compare different solution configurations and produces a customizable server and storage solution complete with a detail Bill of Materials (BOM) that includes part numbers and prices.

The HP Sizer for Microsoft Hyper-V 2008 R2 allows users to create new solutions, open already existing solutions, or use other types of performance data collecting tools, such as the Microsoft Assessment and Planning tool (MAP), to build rich Hyper-V R2 configurations based on HP server and storage technology. The tool allows rapid comparison of Hyper-V R2 characterizations using various HP server and storage choices”.

It is available for download now.  An older version for Windows Server 2008 is still available.

W2008 R2 Hyper-V Network Tests on HP G6 Blades

I posted earlier today about my network transfer tests on HP ProLiant BL460C G5 blade servers with Windows Server 2008 R2 Hyper-V.  Hans Vredevoort also did some tests, this time using BL460C G6 blades.  This gave Hans the hardware to take advantage of some of the new technologies from Microsoft.  Check out his results.

W2008 R2 Hyper-V Network Speed Comparisons

Hans Vredevoort asked what sort of network speed comparisons I was getting with Windows Server 2008 R2 Hyper-V.  With W2008 R2 Hyper-V you get new features like Jumbo Frames and VMQ (Virtual Machine Queue), but these rely on hardware support.  Hans is running HP G6 ProLiant servers, so he has that support.  Our current hardware is HP G5 ProLiant servers.  I decided this was worth a test.

I set up a test on our production systems.  It’s not a perfect test lab because there are VMs doing their normal workloads and things like continuous backup agents running.  This means other factors beyond my control have played their part in the test.

The hardware was a pair of HP BL460C “G5” blades in a C7000 enclosure with Ethernet Virtual Connects.  The operating system was Windows Server 2008 R2.  The two virtual machines were also running Windows Server 2008 R2.  I set them up with just 512MB RAM and a single virtual CPU each.  Both VMs had one virtual NIC, both in the same VLAN.  They had dynamic VHDs.  The test task was to copy the W2008 R2 ISO file, 2.79 GB in size, from one machine to the other.

There were three tests.  In each one I would copy the file 3 times to get an average time required.
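
If you want to repeat something similar, a minimal Python sketch like the one below will do the timing and averaging.  To be clear, this is not the tooling I used; the source and destination paths are placeholders you would replace with your own ISO and share.

```python
import shutil
import statistics
import time

# Placeholder paths - swap these for your own ISO and destination share.
SOURCE = r"C:\ISO\Windows2008R2.iso"
DESTINATION = r"\\vm2\share\Windows2008R2.iso"
RUNS = 3

durations = []
for run in range(1, RUNS + 1):
    start = time.perf_counter()
    shutil.copyfile(SOURCE, DESTINATION)   # copy the ISO across the network share
    elapsed = time.perf_counter() - start
    durations.append(elapsed)
    print(f"Run {run}: {elapsed:.1f} seconds")

print(f"Average over {RUNS} runs: {statistics.mean(durations):.3f} seconds")
```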

Scenario 1: Virtual to Virtual on the Same Host

I copied the ISO from VM1 to VM2 while both VM’s were running on host one.  After I ran this test I realised something.  The first iteration took slightly longer than all other tests.  The reason was simple enough – the dynamic VHD probably had to expand a bit.  I took this into account and reran the test.

With this test the data stream would never reach the physical Ethernet.  All data would stay within the physical host.  Traffic would route from the virtual NIC in VM1, over its VMBus to the virtual switch, and then over VM2’s VMBus back to the virtual NIC in VM2.

The times (seconds) taken were 51, 55 and 50 with an average of 52 seconds.

Scenario 2: Virtual to Virtual on Different Hosts

I used live migration to move VM2 to a second physical host in the cluster.  This means that data from VM1 would leave the virtual NIC in VM1, traverse VMBus and the Virtual Switch and physical NIC in host 1, the Ethernet (HP C7000 backplane/Virtual Connects) and then the physical NIC and virtual switch in physical host 2 to reach the virtual NIC of VM2 via its VMBus. 

I repeated the tests.  The times (seconds) taken were 52, 54 and 66 with an average of 57.333 seconds.  We appear to have added 5.333 seconds to the operation by introducing physical hardware transitions.

Scenario 3: Virtual to Virtual During Live Migration

With this test we would start with the scenario from the first set of tests and use Live Migration to move VM2 from physical host 1 to physical host 2 during the copy.  This is why I used only 512MB RAM in the VMs; I wanted to be sure the live migration end-to-end task would complete during the file copy.  The resulting scenario would have VM2 on physical host 2, matching the second test scenario.  I wanted to see what impact Live Migration would have on getting from scenario 1 to scenario 2.

The times (seconds) taken were 59, 59 and 61 with an average of 59.666 seconds.  This is 7.666 seconds slower than scenario 1 and 2.333 seconds slower than scenario 2.

Note that Live Migration is routed via a different physical NIC than the virtual switch.

Scenario 4: Physical to Physical

This time I would copy the ISO file from one parent partition to another, i.e. from host 1 to host 2 via the parent partition NIC.  This removes the virtual NIC, virtual switch and the VMBus from the equation.

The times (seconds) taken were 34, 28 and 27 with an average of 29.666 seconds.  That makes the physical-to-physical transfer 22.334 seconds faster than the fastest of the virtual scenarios (scenario 1).

Comparison

Average time required per scenario (seconds):

  • Virtual to Virtual on Same Host: 52
  • Virtual to Virtual on Different Hosts: 57.333
  • Virtual to Virtual During Live Migration: 59.666
  • Physical to Physical: 29.666
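
For anyone who wants to check the arithmetic, here is a small Python snippet that recomputes those averages from the raw times recorded above and shows the gap versus the physical-to-physical baseline:

```python
import statistics

# Raw copy times (seconds) recorded in the scenarios above.
results = {
    "Virtual to Virtual on Same Host": [51, 55, 50],
    "Virtual to Virtual on Different Hosts": [52, 54, 66],
    "Virtual to Virtual During Live Migration": [59, 59, 61],
    "Physical to Physical": [34, 28, 27],
}

baseline = statistics.mean(results["Physical to Physical"])
for scenario, times in results.items():
    average = statistics.mean(times)
    print(f"{scenario}: {average:.3f}s average "
          f"({average - baseline:+.3f}s versus physical to physical)")
```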

Disclaimer

As I mentioned, these tests were not done in lab conditions.  The parent partition NICs had no traffic to deal with other than an OpsMgr agent.  The virtual switch NICs had to deal with application, continuous backup, antivirus and OpsMgr agent traffic.

It should also be noted that this is not a comment on the new features of Windows Server 2008 R2 Hyper-V.  Using HP G5 hardware I cannot avail of the new hardware offloading improvements such as VMQ and Jumbo Frames.  I guess I have to wait until our next host purchase to see some of that in play!

This is just a test of how things compare on the hardware that I have in a production situation.  I’m actually pretty happy with it and I’ll be happier when we can add some G6 hardware.

Planning W2008 R2 Hyper-V on HP ProLiant Blade Servers

I’ve not been keeping up with my reading as of late.  I missed that this document from HP came out – I was distracted with actually deploying a Windows Server 2008 R2 Hyper-V cluster on HP ProLiant Blade Servers and HP EVA SAN storage instead of reading about it 🙂

This document appears to be essential reading for any engineer or consultant who is sizing, planning or deploying Windows Server 2008 R2 Hyper-V onto HP Blade servers and HP EVA, MSA or LeftHand storage.

It starts off with a sizing tool.  That’s probably the trickiest bit of the whole process.

Disk used to be easy because we normally would have used fixed VHDs in production.  But now we can use dynamic VHDs knowing that the performance is almost indistinguishable.  The best approach to disk sizing now is to base it on data, not the traditional approach of asking how many disks you need.  Allow some budget for purchasing more disk; you can quickly expand a LUN, then the CSV, and then the VHD/file system.

Next comes the memory.  Basically, each GB of VM RAM costs a few MB in overhead charges.  You also need to allow 2GB for the host or parent partition.  What that means is that a host with 32GB of RAM realistically has about 29GB available for VMs (there’s a worked example of this arithmetic below).

The HP tool is pretty cool because it will pull in information from Microsoft’s MAP.  The free Microsoft Assessment and Planning Toolkit for Hyper-V will scan your servers and identify potential virtualisation candidates.  This gives you a very structured approach to planning.
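
To make that memory arithmetic concrete, here is a rough Python sketch.  The per-VM overhead figures (roughly 32MB for a VM’s first GB of RAM plus about 8MB per additional GB) are the commonly quoted Hyper-V numbers, not something taken from the HP document, so treat them as assumptions and plug in your own values.

```python
# Rough memory sizing sketch for a Hyper-V host. The overhead figures below are
# assumptions (commonly quoted as ~32MB for a VM's first GB plus ~8MB per extra GB).
HOST_RAM_GB = 32
PARENT_PARTITION_RESERVE_GB = 2
OVERHEAD_FIRST_GB_MB = 32
OVERHEAD_PER_EXTRA_GB_MB = 8

def vm_memory_cost_gb(vm_ram_gb: int) -> float:
    """RAM a VM really consumes on the host: its configured RAM plus hypervisor overhead."""
    overhead_mb = OVERHEAD_FIRST_GB_MB + OVERHEAD_PER_EXTRA_GB_MB * (vm_ram_gb - 1)
    return vm_ram_gb + overhead_mb / 1024.0

available_gb = HOST_RAM_GB - PARENT_PARTITION_RESERVE_GB
print(f"RAM left for VMs after the parent partition reserve: {available_gb} GB")

# Example: how many 4GB VMs would fit on this host?
cost = vm_memory_cost_gb(4)
print(f"A 4GB VM costs about {cost:.2f} GB, so roughly {int(available_gb // cost)} fit")
```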

The document talks about the blade components and blade servers.  There are three types of blade from HP.

  • Full height: These are expensive but powerful.  You can get 8 of them in an enclosure.  Their size means you can get more into them.
  • Half height: You can get 16 of these into an enclosure, the same kind used by the full heights.  16 is coincidentally the maximum number of nodes you can put in a Windows cluster.  These are the ones we use at work.  Using mezzanine cards you can add enough HBAs and NICs to build a best-practice W2008 R2 Hyper-V cluster.
  • Quarter height or Shorties: These machines are smaller and thus can have fewer components.  Using some of the clever 10Gig Ethernet stuff you can oversubscribe their NICs to create virtual NICs for iSCSI and virtual switches.  I’d say these are OK for deployments with limited requirements.  Their custom enclosure can be a nice all-in-one featuring storage and tape drives (note you can also do this with the other blades, but you’ll never get the capacities to match the server numbers).

What is really cool is that HP then gives you reference architectures:

  • Small: A single C3000 enclosure with internalised storage.  MSA or JBOD (un-clustered hosts) storage is something I would also consider.
  • Medium: A single C7000 enclosure with LeftHand storage.  I’d also consider MSA or EVA storage here.  LeftHand is incredibly flexible and scalable but it is expensive.
  • Large: I’m drooling while looking at this.  Multiple C7000 enclosures (you can get four in a 42U rack, with 64 half-height blades) and two racks of EVA 8400 storage.  Oooh Mama!

There’s even a bill of materials for all this!  It’s a great starting point.  Every environment is going to be different so make sure you don’t just order from the menu.

It’s not too long of a document.  The only thing really missing is a setup guide.  But hey, that’s all the more reason to read my blog 😉