Windows 2008 User Group Event: Hyper-V and Virtual Machine Manager 2008

The Windows Server 2008 User Group (Ireland) will be running an event on Hyper-V and System Center VMM 2008.  There will be 3 sessions:

  • Dave Northey (Microsoft): Hyper-V – we can go a little deeper on this topic now that the product has been released.
  • Aidan Finn (ME) (C Infinity): Lessons I’ve learned about Hyper-V – Aidan will share his experiences with the product and things you should be aware of when setting up a lab or production environment.
  • Mark Gibson (Microsoft): Virtual Machine Manager 2008 – System Center VMM 2008 is due to be released in Q3 2008.  It is Microsoft’s answer to VMware’s Virtual Center and will be an essential tool for managing production Hyper-V deployments.

Attending The Event

The session is free to attend for members of the Windows Server 2008 User Group.  Membership is free.  Once you have joined, we will send an invite out to you – assuming there are still places free.

Places are limited to 20 so book now while you can.

Patch For Hyper-V in Clustered Environments

I was told a little while ago to watch out for this patch from Microsoft.  It improves how Hyper-V works in a clustered host environment.  KB951308 can be downloaded once you accept a EULA.  You should have a read because there is a long list of improvements.

Note that:

  • If you apply this update to a computer that is functioning as a Windows Server 2008 failover cluster, the failover cluster service must be restarted before the changes will take effect.
  • If you apply this update to a system that is running the Failover Cluster Management console (the Cluadmin.msc file), any open instances of this management console must be closed and reopened before the changes will take effect.

Make sure you test this update before you or your company decide to deploy it.

Hyper-V Clusters – There Are Only 26 Letters In the Alphabet

If you’ve looked at putting Hyper-V in a cluster you might have read Jose Barreto’s blog post on clustering options, viewed Dave Northey’s videos demonstrating it in action or considered trying to recreate what ESX with Virtual Center does.  You’ll soon see that to have failover or mobility on a per-VM basis with Hyper-V on Windows Server 2008, each VM must reside on its own disk/LUN on your shared storage.  Windows Server 2008 doesn’t (yet) have a shared cluster file system like ESX’s VMFS.

You’ll now think … I can have 16 nodes in a cluster and potentially dozens of VMs in my N+1 or N+2 architecture.  Wait … how many drive letters am I going to need?  I’ve already consumed A, B, C and D … does this mean a cluster can have only 22 VMs?  This is probably something where some certain-product fanatic gets to write some blog FUD without digging just a little deeper.  It’s amazing to see how prejudice is tainting the commentary and reviews that are out there right now 🙂

You have the option to use "letterless" drives in Windows Server 2008.  Instead of using a drive letter to identify the physical drive that each VM can reside on, you can use a GUID to identify the drives. 

The only question now is, how do you use these drives?  VirtuallyAware has done a post on the subject.  The hardest part of the process is getting the GUID of the LUN you’re working with.  Who really wants to type out something nasty like "fc247e42-0a5e-11dd-94db-001b785788b0"?  PowerShell helps there, as the blog post indicates.
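As a rough illustration of the fiddly part – getting the GUID path into your hands without retyping it – here’s a small sketch that pulls \\?\Volume{…}\ paths out of the kind of text Windows’ mountvol tool prints.  The sample text and function name are my own inventions for illustration; the GUID is the example one from above.

```python
import re

# Sample of the kind of output mountvol prints on Windows.
# (The GUID is the illustrative one from the text, not a real volume.)
sample_output = """
    \\\\?\\Volume{fc247e42-0a5e-11dd-94db-001b785788b0}\\
        *** NO MOUNT POINTS ***
"""

def volume_guid_paths(mountvol_output):
    """Return every \\\\?\\Volume{...}\\ path found in the given text."""
    pattern = r"\\\\\?\\Volume\{[0-9a-fA-F-]+\}\\"
    return re.findall(pattern, mountvol_output)

print(volume_guid_paths(sample_output))
```

Once you have the path as a string you can paste it wherever it is needed instead of typing the GUID by hand.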

You’ll now have a virtually unlimited set of drive identifiers that will allow your cluster to scale out to the limits of your CPU, storage and RAM.

On a tangent, this is just another example of why PowerShell is a necessary skill – not just for Hyper-V but for all new MS technologies.  I’ve started learning it.  It’s different, that’s for sure, but it’s not optional any longer.

Beware Anti-Virus and Hyper-V

I released the July updates onto our network this past weekend.  I’d also deployed our new AV the previous week.  Let’s just say that AV mixed with Hyper-V and followed by a reboot made for a nice mess.

I logged into the Hyper-V lab this morning to find half of my VMs were missing.  They’re sitting fine (but idle) on the storage.  It’s just that Hyper-V has "forgotten" that they ever existed.

I trawled through the Windows Event logs (Applications and Services Logs – Microsoft – Windows – Hyper-V-Config – Admin) and found a series of these:

Source: Hyper-V-Config

Event ID: 4096

Level: Error

The Virtual Machines configuration <big long GUID> at <path to VM> is no longer accessible: The requested operation cannot be performed on a file with a user-mapped section open. (0x800704C8)

OK.  A bit of googling found an entry on the TechNet forums that says you need to disable scanning for the VHDs and the XML files of your VMs.  Ouch!

OK, so I did that and rebooted my lab server.  Still no dice.  Actually, Hyper-V doesn’t even bother attempting to load these VMs now.  OK, I’ll do what I would in any other virtualisation product; I’ll open them.  Ick … no open command.  Import?  Nope; because MS in their wisdom (!) decided that the import/export format should be different to that of a normal VM.

So I’ve got a plethora of VMs sitting on my disk in a saved state that I cannot load up.  My only way forward is to re-add the virtual hard disks as new VMs.  This is a pain:

  • I lose my saved states.
  • I have to reconfigure every single VM that is missing.
  • Each VM has to do the PNP dance with a "new" NIC and I have to reconfigure IPv4 addressing.
  • It’s just lots of work I shouldn’t have to do.

I’ve logged a bug report with MS.  I’m open to any constructive suggestions.

Hyper-V RAM Calculator

This download has been superseded by a newer Hyper-V Calculator spreadsheet.

I’ve previously discussed how RAM is used by Hyper-V in terms of:

  • The parent partition
  • Hyper-V services
  • Drivers
  • Guest RAM allocation overhead.

I’ve put together an Excel spreadsheet that calculates how much RAM is consumed by a VM as you load it onto a host.  Using it is easy:

  1. Specify how much RAM is in the physical host machine.
  2. Add each guest VM and enter how much RAM (in GB) you want to allocate to the guest.
  3. The RAM utilised by the guest is calculated and the amount remaining on the host is presented.

The numbers you need to enter are highlighted in yellow.

The formula used assumes maximum RAM overhead, i.e. the worst-case scenario of 32MB for the first GB and 8MB for each GB after that, on a per-VM basis.  I’m also allowing 300MB in addition to the 2GB recommended as the reserve for the parent partition.  Often, this can be considered part of the 2GB.  You can recalculate things by adding another line item to specify driver requirements for the parent OS if you want.
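To make the arithmetic concrete, here’s a quick sketch of the same worst-case formula in Python.  The function names and the 32GB-host example are mine, not from the spreadsheet.

```python
def vm_ram_cost_mb(guest_ram_gb):
    """Worst-case RAM cost of one VM: its allocation plus overhead of
    32 MB for the first GB and 8 MB for each GB after that."""
    overhead_mb = 32 + 8 * (guest_ram_gb - 1)
    return guest_ram_gb * 1024 + overhead_mb

def host_ram_remaining_mb(host_ram_gb, guest_ram_gbs,
                          parent_reserve_mb=2048 + 300):
    """RAM left on the host after the parent partition reserve
    (2 GB plus 300 MB, as above) and each guest's worst-case cost."""
    used_mb = parent_reserve_mb + sum(vm_ram_cost_mb(g) for g in guest_ram_gbs)
    return host_ram_gb * 1024 - used_mb

# Example: a 32 GB host running four guests with 4 GB each.
print(host_ram_remaining_mb(32, [4, 4, 4, 4]))  # 13812 MB left
```

Each 4GB guest costs 4096MB plus 56MB of overhead, so four of them plus the 2348MB parent reserve leaves 13812MB free on a 32GB host.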

EDIT:

I’ve done some testing on hosts with 32GB RAM and the theory seems to match the practice.

Hyper-V Controllers: IDE or SCSI?

There have been plenty of blog posts out there saying that there is no support for SCSI in Hyper-V.  That’s not true.  What is true is this: you can use SCSI controllers for disks, but not for your boot disk.  Your boot disk must be on an IDE controller.

When using emulated storage controllers, i.e. no integration components, IDE is slower than SCSI.  However, there is no discernible difference between SCSI and IDE when using synthetic drivers, i.e. integration components or VM additions.

Setting Up VMs

How do you set up your VMs?  You have no choice about your boot disk: you must use a disk connected to the IDE controller.  You can’t move that to the SCSI controller because you cannot boot from a Hyper-V SCSI controller.  Lightweight VMs can probably put everything on one virtual disk and run on the IDE controller.

However, best practice is to separate your data/workload from your operating system.  Consider a virtual application server where the operating system is on C: and the workload is on D:.  C: will be a virtual disk on the IDE controller.  D: should be a virtual disk on a SCSI controller if you don’t have integration components.  This makes the most of the underlying Hyper-V architecture and optimises CPU utilisation on the host server.  However, if you have integration components then it makes no difference whether you use SCSI or IDE for the workload disk.

What really makes a difference is the underlying physical storage and the types of VHD that you use.  Passthrough disks run at physical disk speed.  Fixed-size VHDs currently get to within 6% of the speed of the underlying physical LUN, assuming you have one VHD per LUN.  Dynamic and differencing VHDs have a significant negative impact on performance.