Hyper-V and VMM 2008: 6 Months Later

Much like the upcoming NFL draft, I think it’s impossible to judge something in advance.  Those draft grades that’ll be all over the US sports press in just over two weeks’ time are pointless.  The only grade that counts is the one handed out to your draft from a few years back.

The same goes for complex IT solutions.  You need to use them and evaluate them over time.  I put Windows Server 2008 Hyper-V and System Center Virtual Machine Manager 2008 into production about 6 months ago.  Earlier today I thought it might be a good idea to look back.

My preparation took months.  We got in a single DL380 G5 which I ran the beta and RC builds of Hyper-V on.  I was able to build up a lab environment and get to grips with things.  I saw how OEMs could cause problems.  The HP NC373i NIC required some fudging with VLAN settings, similar to an Intel NIC, to get VLAN trunking working through to the virtual network and the VMs.  The OEM NIC teaming solutions failed to work with Hyper-V – in fact I found you should keep that software off the server completely.  Microsoft continued their policy of not supporting NIC teaming with Hyper-V.  That’s a pity and I strongly hope MS reconsiders and builds NIC teaming into the virtual network, making it more like the ESX virtual switch.
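Jumping ahead a little: with VMM 2008 now in place, the per-VM half of that tagging can be scripted rather than clicked through.  Here’s a minimal sketch using the VMM 2008 PowerShell snap-in; the VMM server name, VM name and VLAN ID are all made-up placeholders, and the cmdlet parameters are worth checking against the script that VMM’s own wizards generate for you:

    # Tag a VM's synthetic NIC for a VLAN via VMM (all names here are placeholders)
    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
    Get-VMMServer -ComputerName "VMM01" | Out-Null

    # Find the VM and its virtual NIC, then enable tagging for VLAN 20
    $vm  = Get-VM -Name "WEB01"
    $nic = Get-VirtualNetworkAdapter -VM $vm
    Set-VirtualNetworkAdapter -VirtualNetworkAdapter $nic -VLanEnabled $true -VLanID 20

The important thing is that the tag lives on the virtual network adapter, not on the physical NIC driver, which is exactly where the OEM tooling tripped me up.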

My lab machine proved something to me.  Hyper-V could realistically be a production virtualisation environment.  The performance was excellent.  It was stable and the architecture screams security.  I downloaded the RTM build of Hyper-V as soon as it was released and installed it.  Things looked good.

Something was evident though.  I’ve noticed that Microsoft has become poor at documenting their software for installation, configuration and operations.  I’d learned in college that nothing was finished until it was documented.  All we had were scatterings of blog posts from many sources, and all too often they focused on developing for Hyper-V rather than answering the questions that administrators actually had.  That left an opportunity for people like me to do some blogging about our experiences and a bit of writing.  I’d been entered into the IT Pro Momentum program and that gave me a great mechanism for getting questions answered and testing out scenarios.

My employers then made the strategic decision to go with Hyper-V as our virtualised server hosting solution.  We saw where things were going.  Microsoft System Center had proven itself to the directors and to our customers.  We wanted to continue this throughout the infrastructure.

We purchased our servers and storage for our Hyper-V cluster.  We had all read that we should use a Core installation for the parent partition, so that’s what I started with.  I liked using a tiny C: drive, leaving more space for customers.  Pretty soon I saw this wasn’t going to work.  The hardware management software provided by the OEMs assumed there was a GUI and couldn’t be configured without one.  I ditched that and went with a Full installation instead, something I’ve heard many others are doing.

I built up WDS images and used those to deploy the servers.  I went through 3 iterations of the host server builds.  The first showed up more issues with an OEM NIC, the NC326m.  I’d tested the network trunk by configuring a VLAN tag on the driver.  That permanently broke the ability to pass the VLANs through to Hyper-V.  Rebuild.  I then did some serious testing, trying out everything I could think of.  When I was ready, I rebuilt the production system.  This was so quick and so easy that I did it from a hotel room in one hour, in between speaking sessions.

The main complaint from people about Hyper-V was Quick Migration.  Instead of Live Migration or VMotion we had something where moving a VM from one node to another on a Windows 2008 Failover Cluster took a little time.  The VM would save its state, the disk would transfer ownership and the VM would restart.  I reckon 90% of servers can tolerate this.  Not every server needs Live Migration – and that’s coming in Windows Server 2008 R2 anyway.  Building the cluster takes minutes; it’s so easy with Windows Server 2008.  The one thing I haven’t liked is that you really need to have one storage LUN for every VM.  You can put many VMs on one LUN but they all must fail over at once, and that configuration isn’t supported by VMM 2008 either.  This one-VM-per-LUN thing is a little tedious and hampers my ability to do true self service because of the dependence on storage management.  R2’s Cluster Shared Volumes (CSV) will sort that out.
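When a move does have to happen outside of the console, VMM can script it.  Below is a minimal sketch with the VMM 2008 cmdlets; the VM and host names are invented, and depending on the type of move a -Path may also be needed, so treat the output of the wizard’s View Script button as the authoritative version:

    # Quick Migration of a clustered VM to another node (placeholder names)
    $vm   = Get-VM -Name "SQL01"
    $node = Get-VMHost -ComputerName "HVNODE2"

    # VMM saves the VM's state, moves LUN ownership to the target node and
    # restores the VM - the short outage described above.
    Move-VM -VM $vm -VMHost $node -RunAsynchronously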

The next criticism of Hyper-V has been non-Microsoft OS support.  I think this one is valid.  It is changing, though.  Not long after RTM we got the integration components for SUSE Linux Enterprise 10.  In my testing I saw how RedHat and CentOS ran on Hyper-V but not with great performance.  SLES was the same until I installed the integration components.  It then ran quite nicely.  I did away with the GUI for SLES because I had no mouse driver.  A joint program between MS and Citrix recently released a mouse driver.

We got the release of VMM 2008 in November 2008.  I deployed it and it allowed us to finish up our testing before going live.  This gave us a single management point for the virtualisation layer: Hyper-V and Failover Clustering.  We also got integration with Operations Manager 2007 SP1, giving us top-to-bottom and cradle-to-grave management of the infrastructure.  It immediately proved its worth.

Deploying VMs became easier.  I’ve made great use of the library; it’s stuffed full of compressed VHDs and ISOs.  I haven’t found the PowerShell functionality at all useful, to be honest.  ISO sharing doesn’t work with Hyper-V unfortunately; ISOs must be copied to the VM and that’s a lot of wasted time when dealing with DVD images.  I’ve seen some funnies in VMM but my participation in IT Pro Momentum allowed me to report them and some have even turned into hotfixes.  In fact, while writing this I was told one would be released to the general public next week.  One other which has stung me is a weird one related to synthetic NIC drivers and VLAN tagging.  For some reason, a NIC created from a hardware template sometimes (rarely) acts up when deployed and needs to be recreated.  It’s immediately visible in a new VM; it does not turn up later.
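For anyone who does want to drive deployment from PowerShell (the VMM wizards will generate the full script for you via the View Script button), a trimmed-down sketch of deploying from a library template looks roughly like this; the template, host, VM name and path are all placeholders:

    # Deploy a new VM from a library template (all names are placeholders)
    $template = Get-Template | where { $_.Name -eq "W2008-STD-Base" }
    $vmhost   = Get-VMHost -ComputerName "HVNODE1"

    # -Path points at the VM's own dedicated LUN, per the one-VM-per-LUN layout above
    New-VM -Template $template -Name "CUST-APP01" -VMHost $vmhost -Path "V:\CUST-APP01" -RunAsynchronously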

We did a P2V of the single machine that was a candidate – we’re not your typical server network.  That ran seamlessly in terms of VMM and Hyper-V.  The only thing to watch out for is that you strip out all of the OEM software before you P2V if you want a clean migration.  Otherwise, be prepared for some safe modes and some “hacking”.
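If you do script a P2V, the wizard’s View Script button again gives you the full (and very long) version.  Heavily trimmed and with placeholder names, the shape of it with the VMM 2008 cmdlets is roughly this; treat the parameter list as incomplete:

    # Rough shape of a P2V conversion (placeholder names; the wizard-generated
    # script adds volume, memory and CPU parameters that are omitted here)
    $cred   = Get-Credential
    $config = New-MachineConfig -SourceComputerName "oldserver.domain.local" -Credential $cred
    $vmhost = Get-VMHost -ComputerName "HVNODE1"

    New-P2V -MachineConfig $config -VMHost $vmhost -Name "OLDSERVER" -Path "V:\OLDSERVER" -RunAsynchronously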

Hyper-V Server 2008 was released and I tried it.  It was simple to get going and it’s basically the same as Hyper-V on a Windows Server 2008 Standard Core installation in terms of architecture and performance.  It’s not something I’d use in production because I couldn’t manage it with System Center: the VMM agent would constantly crash and the OpsMgr agent can’t be installed on it.

Don’t get caught up in the negatives I’ve mentioned.  Hyper-V has worked very nicely.  Before this I ran VMware ESX 3.X with Virtual Center.  I’ve seen similar performance.  However I get better management with Hyper-V thanks to System Center.  Server deployment is rapid.  Our licensing costs are low.  And it’s been very stable.  We’re a server hosting company and customers are running happily on it.  I’d easily recommend Hyper-V and VMM.

But don’t think I’m bashing the likes of ESX.  I think it’s a great solution too.  VMware do some other things better than MS.  Both seem to have a different focus at the moment, so pick the one that suits your current needs – taking account of the future.

I’m looking forward to W2008 R2 and VMM 2008 R2.  We’ll likely be upgrading to W2008 R2 as soon as VMM 2008 R2 is released with integrated support for OpsMgr 2007 R2 – yeah, I’ll be deploying that ASAP.  We’re also looking forward to the release of the RedHat integration components, probably later this year.  It’ll be another fun year of new software.  And I can’t wait to see what the VHD’s strategic role in the data centre will bring us in Windows Server 8!
