Hyper-V, ML370 and Wireless

The after-work project I’m working on right now requires as many VMs as I can throw at it.  I’ve got my previously mentioned Latitude E6500 laptop running W2008 R2 Hyper-V.  It’s also my domain controller and my VMM 2008 R2 server/library.  That’s not best practice, but it’s fine for a domestic lab.

I need even more VMs than I can run on there.  So I’ve got an HP ML370 G5 that was spare from work.  It’s got as much memory as I could scrape together, and I put Windows Server 2008 R2 on it.  One problem: I do not have a wired house.  And I do not want to work beside the noisy server.  I’ll be using Office on my laptop for documentation and I can sit with that in my sitting room.  The server will stay upstairs in my office.  Just how will they communicate?

That’s easy.  I have an old Belkin 11G wifi NIC which I put into the ML370.  Windows detected it as a Broadcom.  That ain’t right, but it works!  I’m going to set the server up as a member of my laptop’s domain.  That will allow me to put a VMM agent on there for remote management. 

My VM templates are small enough (dynamic VHDs), but I might not want to copy them over wifi.  I might just configure the wired NICs with another subnet range and connect the machines with a hub/switch when I need to deploy stuff.  Or maybe I’ll copy the templates over to the server using a USB disk and set up a library share on the server for a faster local copy.  That might just work!

The Sanbolic Melio FS (File System) With Hyper-V

I was contacted last month by Eva Helen at Sanbolic to see if I’d be interested in learning more about their Melio FS product.  I knew little about it and was keen to learn.  So Eva got David Dupuis to give me a demo.  Dave just ran an hour-long demo on LiveMeeting with me and I learned a lot.  I gotta say, I’m impressed with this solution.

There were two Sanbolic products that Dave focused on:

  • La Scala = Cluster Volume Manager, used instead of Disk Manager.  It’s a shared volume manager and is aware of which nodes are attached to it. 
  • Melio = Cluster File System. 

La Scala

  • La Scala can mirror volumes across two SANs, allowing it to survive a total SAN failure.  Each server has two controllers or a dual-channel HBA, with one path going to each SAN.  One write is converted to two writes on two paths.  In theory, there’s no noticeable performance hit for amazing fault tolerance.
  • On the fly volume expansion
  • Can use any block-based shared storage system, iSCSI or Fibre Channel
  • You can set up a task, e.g. expand disk, and review it before committing the transaction.
  • Windows ACLs are integrated into the interface to control volume access rights.

I’ve got to say, the SAN mirroring is pretty amazing technology.  Note that performance will equal that of the slowest SAN.  It can take cheap storage solutions that might not even have controller/path fault tolerance and give them really high fault tolerance via redundant arrays and mirrored storage, with an imperceptible performance hit because the mirroring is done by simultaneous writes on two independent controller paths.

Melio

  • This is a 64-bit symmetrical cluster file system.
  • There is no coordinator node, management server, metadata controller, etc., that manages the overall system.  So there’s no redirected I/O mode *cheers from Hyper-V admins everywhere*
  • Metadata is stored on the file system and every node in the cluster has equal access to it.  This is in contrast to the CSV coordinator in W2008 R2 failover clustering.
  • QoS (quality of service) allows per-process or per-file/folder file system bandwidth guarantees.  This allows granular management of SAN traffic for the controlled resources.  In the Hyper-V context, you can guarantee certain VHDs a percentage of the file system bandwidth.  You can also use wildcards, e.g. *.VHD.  This is another very nice feature.
  • There is a VSS provider.  This is similar to how SAN VSS providers would work.  Unlike CSV, there is no need for redirected I/O mode when you snap/backup the LUN. 
  • There is a bundled product called SILM that allows you to copy (via VSS) new/modified files to a specified LUN on a scheduled basis.
  • Backup solutions like BackupExec that recognise the Melio VSS provider can use it to directly back up VMs on the Melio file system.
  • MS supports this system, i.e. with Failover Clustering and VMM 2008 R2.  For example, Live Migration uses the file system.  You won’t see CSV or the storage in the Failover Clustering console.  The Melio file system appears as a normal lettered drive on each node in the cluster.
  • By using advanced exclusive lock detection mechanisms that CSV doesn’t have, Melio can give near raw-disk performance to VHDs.  They claim 57% faster VHD performance than CSV!
  • You can provide iSCSI-accessed Melio file systems to VMs.  You can license the product per host, which gives you four free VM licenses.
  • Melio isn’t restricted to just Hyper-V: web servers, SQL, file servers, etc.
  • Issues seen with things like AV on CSV aren’t likely here because there is no coordinator node.  All metadata is available to all nodes through the file system.  You do need to be aware of scheduled scans: don’t have all nodes in the cluster doing redundant tasks.  The tip here: set a high percentage guarantee for *.VHD and the AV impact is kept under control.

It’s got to be said that you cannot think of this as some messy bolt-on.  Sanbolic has a tight relationship with Microsoft.  That’s why you see their Melio file system listed as a supported feature in VMM 2008 R2.  And that can only happen if it’s supported by Failover Clustering – VMM is pretty intolerant of unsupported configurations.

Overall, I’ve got to say that this is a solution I find quite interesting.  I’d have to give it serious consideration if I was designing a cluster from scratch and the mirroring option raises some new design alternatives.

My $64,000,000 question has probably been heard by the guys a bunch of times but it got a laugh: “when will Microsoft buy Sanbolic and have you invested a lot in the company share scheme?”.  Seriously though, you’d think this would be a quick and superb solution to get a powerful cluster file system that is way ahead of VMFS and more than “just” a virtualisation file system.

Thanks to the kind folks at Sanbolic for the demo.  It’s much appreciated!

Fujitsu “My Very First Hyper-V”

Fujitsu has launched a bundle for SMEs (small/medium enterprises) that want to do Hyper-V virtualisation for the very first time.  They’ve called it “My Very First Hyper-V”.  It includes servers, external storage, Windows Server 2008 R2 and System Center Virtual Machine Manager 2008 R2 Workgroup Edition.  A flyer can be found here.

I wonder if they’ll replace the VMM installation with System Center Essentials 2010 when it is released.  That would make sense to me seeing as it’s aimed at this market and it gives software management, health & performance monitoring and VMM functionality.

Elastic Virtualisation With System Center

Today I was working with a customer who needed to grow their hosted presence with us due to performance and scaling requirements.  OpsMgr PRO Tips alerts made us aware of certain issues, and that got the customer and us working.  A VMM library template machine was quickly deployed to meet the sudden requirements.  That got me thinking about how OpsMgr and VMM could be used in a large virtualised (and even physical) application environment to scale out and in as required.  All of this is just ideas.  I’m sure it’s possible; I just haven’t taken things to this extreme.

[Diagram: load-balanced web servers with shared content servers, plus a queuing server and transaction processing tier]

Let’s take the above crude example.  There are a number of web servers.  They’re all set up as dumb appliances with no content.  All the content and web configurations are on a pair of fault-tolerant content servers.  The web servers are load balanced, maybe using appliances or maybe by reverse proxies.  It’s possible to quickly deploy these web servers from VM templates.  That’s because the deployed machines all have DHCP addresses and they store no content or website configuration data.

The next tier in the application is typically the application server.  This design is also built to be able to scale out or in.  There is a transaction queuing server.  It receives a job and then dispatches that job to some processing servers.  These transaction servers are all pretty dumb.  They have an application and know to receive workloads from the queuing server.  Again, they’re built from an image and have DHCP addresses.

All VM templates are stored in the VMM library.

All of this is monitored using Operations Manager.  Custom management packs have been written and distributed application monitoring is configured.  For example, average CPU and memory utilisation is  monitored across the web farm.  An alert will be triggered if this gets too high.  A low water mark is also configured to detect when demand is low.

The web site is monitored using a captured web/user perspective transaction.  Response times are monitored and this causes alerts if they exceed pre-agreed thresholds. 

The Queuing server’s queue is also monitored.  It should never exceed a certain level, i.e. there is more work than there are transaction servers to process it.  A low water mark is also configured, e.g. there is less work than there are transaction servers.

So now OpsMgr knows when we have more work than resources, and when we have more resources than we have work for.  This means we only need a mechanism to add VMs when required and to remove VMs when required.  And don’t forget those hosts!  You’ll need to be able to deploy hosts.  I’ll come back to that one later.
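As a thought experiment, the water mark logic could be sketched as a simple decision function.  Everything here is hypothetical – the thresholds, the metric and the minimum farm size are made up for illustration, not taken from OpsMgr:

```python
# Hypothetical sketch of the high/low water mark trigger logic.
# Thresholds and the minimum farm size are illustrative only.
HIGH_WATER = 80.0  # average farm CPU % that triggers a scale-out
LOW_WATER = 20.0   # average farm CPU % that triggers a scale-in

def scaling_action(avg_cpu: float, vm_count: int, min_vms: int = 2) -> str:
    """Decide what the alert's recovery task should do."""
    if avg_cpu > HIGH_WATER:
        return "deploy"  # run the 'deploy a new VM' script
    if avg_cpu < LOW_WATER and vm_count > min_vms:
        return "remove"  # run the 'remove the oldest VM' script
    return "none"        # inside the band: leave the farm alone

print(scaling_action(92.5, 4))  # deploy
```

The gap between the two water marks matters: it stops the system from flapping between deploying and removing VMs when load hovers around a single threshold.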

Deploying VMs can be automated.  We know that we can save the PowerShell script for a job into the library when we create a VM.  Do that and you have a repeatable deployment.  You can even use the GUIRunOnce option to append customisation scripts, e.g. naming of servers, installation of updates/software, etc.  Now you just need a trigger.  We have one.

When OpsMgr fires an alert it is possible to associate a recovery task with the alert.  For example, the average CPU/memory across the web farm is too high.  Or maybe the response time across the farm is too slow.  Simple – the associated response is to run a PowerShell script to deploy a new web server.  Ten minutes later the web server is operational.  We already know it’s set to use DHCP, so that’s networking sorted.  The configuration and the web content are stored off the web server, so that’s that sorted.  The load balancing needs to be updated – I’d guess some amendment to the end of the PowerShell script could take care of that.

The same goes for the queuing server.  Once the workloads exceed the processing power, a new VM can be deployed within a few minutes and start taking on tasks.  They’re just dumb VMs.  Again, the script would need to authorise the VM with the queuing process.

That’s the high water mark.  We know every business has highs and lows.  Do we want to waste Hyper-V host resources on idle VMs?  Nope!  So when those low water marks are hit we need to remove VMs.  That one’s a little more complex.  The PowerShell script here will probably need to be aware of the right VM to remove.  I’d think about this idea: the deployment script would update a file or a database table somewhere.  Think of it like a buffer.  The oldest VM should then be the first one removed.  Why?  Because we Windows admins prefer newly built machines – they tend to be less faulty than ones that have been around a while.
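The buffer idea might look something like this – a minimal sketch where the class and VM names are invented, and a real implementation would persist to that file or database table rather than memory:

```python
from collections import deque

# Hypothetical sketch of the deployment "buffer": every auto-deployed
# VM is recorded, and scale-in always removes the oldest one first.
class DeploymentBuffer:
    def __init__(self):
        self._vms = deque()  # left end holds the oldest VM

    def record_deployment(self, vm_name):
        self._vms.append(vm_name)

    def pick_for_removal(self):
        # Oldest first: the freshest builds tend to be the least faulty.
        return self._vms.popleft()

buffer = DeploymentBuffer()
for name in ("WEB01", "WEB02", "WEB03"):
    buffer.record_deployment(name)
print(buffer.pick_for_removal())  # WEB01
```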

With all that in place you can deploy VMs to meet demands and remove VMs when they are redundant, freeing up physical resources for other applications.

What about when you run out of Hyper-V server resources?  The most basic thing you need to do here is know that you need to buy hardware.  Few of us have it sitting around; we run on budgets and on JIT (just in time) principles.  Again, you’d need to do some clever management pack authoring (way beyond me, to be honest) to detect how full your Hyper-V cluster was.  When you get to a trigger point, e.g. starting to work on your second-last host, you get an alert.  The resolution is to buy a server and rack it.  You can then use whatever build mechanism you want to deploy the host.  The next bit might be an option if you do have servers sitting around and can trigger it using Wake-on-LAN.

ConfigMgr will run a job to deploy an operating system to the idle server.  It’s just a plain Windows Server installation image.  Thanks to task sequences and some basic Server Manager PowerShell cmdlets, you can install the Hyper-V role and the Failover Clustering feature after the image deployment.  A few reboots happen.  You can then add it to the Hyper-V cluster.  You can approach this one from other angles, e.g. add the host into VMM which triggers a Hyper-V installation.

Now that is optimisation and dynamic IT!  All that’s left is for the robots to rise – there’s barely a human to be seen in the process once it’s all implemented.  I guess your role would be to work on the next generation of betas and release candidates so you can upgrade all of this when the time comes.

I’ve not read much about Opalis (recently acquired by Microsoft) but I reckon it could play a big role in this sort of deployment.  Microsoft customers who are licensed with the Server Management Suite (SMSE/SMSD) will be able to use Opalis.  Integration packs for the other System Center products are on the way in Q3.

TechNet Wiki – Hyper-V

I talked about the TechNet Wiki recently, which was announced by Keith Combs.  I won’t hold it against him for being a Cowboys fan ;-)  Ben Armstrong just blogged about the Hyper-V part of the wiki and you can see what he’s said there.  So I guess that means it’s “live” in some way, shape or form.  If you feel like you can document some facet of Hyper-V better than what has been done previously, or if you know of some tricks/workarounds, then please add them.

You can find the wiki here.  I’m not a big fan of the landing page because I’ve not really found a way to get into the wiki from it.  Maybe I’m dumb 🙂

Webcast: Hyper-V for the VMware Administrator

Microsoft did a webcast on March 1st aimed at VMware administrators/engineers/consultants who are interested in, or will be working with, Hyper-V.

The fan-boys will be thinking negative thoughts and wishing me ill will now 🙂

Realistically, you need to start thinking of hardware virtualisation as being like hardware.  Some companies like HP, some like Dell, and some like Fujitsu – who really likes IBM?  I’m kidding; I don’t really care who likes IBM hardware.

This means that although a company may have a preference, they will have variations depending on circumstances.  For example, we’re told that VMware has a presence in every single Fortune 100 company in the USA.  But do you think none of them are either using or considering Hyper-V as well?  There may be features that ESX offers that they use, but Hyper-V offers virtualisation at a much lower price.  Bundle in System Center and you have a complete management solution rather than a point one.  With VMM you can manage both ESX (and ESXi) and Hyper-V.  Only the biggest of fan-boys will rule out Hyper-V making its way into some VMware sites to work alongside it, just like you find a mix of server vendors in some computer rooms.

The services industry is another interesting one.  This time last year, I could only really think of one, maybe two, services companies in Ireland that I would call if I was in need of Hyper-V consulting skills.  Lots of them went to events, but they were all sticking to their VMware guns.  It was probably a combination of internal evaluations and customer decision making that drove this.  But since last summer, things have shifted slightly.  Hyper-V is mentioned more as a skills requirement.  And thanks to the HP/Microsoft virtualisation alliance, HP resellers are starting to gather skills.  One of the major players in the Irish enterprise hardware space was laughing at Hyper-V a year ago.  Then they started to lose big virtualisation bids to the few companies going in with Hyper-V solutions.  CSV and Live Migration changed everything.  Customers are now happy to get the core features at a fraction of the price.

If you are a VMware person, give the webcast a watch.  Most of the criticisms of Hyper-V by fan-boys are based on a lack of knowledge, e.g. the famous “9 things” post that was widely slammed for being ill-informed.

VMM 2008 R2: Host Needs Attention After KB978560

I saw this one last night for myself and I’ve just seen a week-old post by Mike Briggs on the subject.  When you deploy KB978560 to your VMM 2008 R2 server, it will require an update to the agents.  You’ll see a yellow exclamation mark icon appear on your hosts.  When you check their status you’ll see that you must take manual action to resolve the issue.  Simply right-click on the managed hosts, update the agent, and provide any required credentials.  It takes a minute or two, then you’ll get your “issue” resolved. 

Be sure to put the hosts in maintenance mode in OpsMgr if you’re using it.  Otherwise you’ll get a bunch of alerts for every host you upgrade.


Some HP and Hyper-V Links

Patrick Lownds, a fellow virtualisation MVP over in the UK, has provided a couple of useful links if you are running Hyper-V on HP equipment.  The first is a post on best practice guidance if you are running Hyper-V on an HP EVA SAN.  There is a whitepaper that goes through HP’s recommendations on this.  It was interesting to see that fixed VHDs got 7% more IOPS at 7% less latency than dynamic VHDs.

The PRO Tips for HP are also available.  They’re not easy to find but Patrick provided me with a link.  The idea here is that HP SIM agents (which you should be installing, even if you don’t use HP’s or other management software) detect hardware issues.  OpsMgr then picks up the alert and notifies VMM using the HP PRO Tips.  VMM can then take action, e.g. migrating VMs from one host to another in the cluster.


Windows Server 2008 R2 Hyper-V VHD Performance White Paper

Microsoft has published a whitepaper on VHD performance.  It talks about raw disk, pass-through, fixed and dynamic disks.  It’s a must-read if you’re in a Hyper-V engineering/design role.

To be honest, it is more than just a Hyper-V document.  It talks about VHDs in general, and Windows Server 2008 is also covered.

Exchange 2010 Support For Virtualisation

I hadn’t really read up on this one because I don’t deal with Exchange very often.  But it came up on the Minasi Forum over the last few days and Jetze Mellema (Exchange MVP) posted a link to the official support article.

Does Exchange 2010 support virtualisation?  Yes … barely.  There are so many notes associated with the support statement from the Exchange team that you really want to sit back and go hmmm!

Obviously it supports Hyper-V and other hardware virtualisation solutions in the Windows Server Virtualization Validation Program.

Microsoft goes on to say:

  • The Unified Messaging server role is not supported in VMs.
  • Virtual disks that dynamically expand aren’t supported by Exchange.
  • Virtual disks that use differencing or delta mechanisms (such as Hyper-V’s differencing VHDs or snapshots) aren’t supported.

Other notes from this site are:

  • You cannot run a DAG on a clustered host, e.g. a VMware cluster with VMotion or a Hyper-V cluster with Live/Quick Migration.
  • Snapshots of the VM are not supported (the same restriction applies to SQL Server).
  • The Exchange team supports no more than 2 virtual processors per logical processor on the host.  For example, you cannot have more than 16 virtual processors on a dual-socket, quad-core host (8 logical processors).  Normally, Hyper-V has a maximum ratio of 8:1.
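The processor ratio arithmetic above can be sanity-checked with a throwaway sketch.  It assumes hyperthreading is disabled, so logical processors equal physical cores:

```python
# Sketch of the Exchange 2:1 virtual-to-logical processor rule.
# Assumes no hyperthreading, so logical processors == physical cores.
def max_exchange_vcpus(sockets, cores_per_socket, ratio=2):
    logical_processors = sockets * cores_per_socket
    return logical_processors * ratio

print(max_exchange_vcpus(2, 4))  # 16, matching the dual-socket quad-core example
```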

Note that these restrictions don’t just apply to Hyper-V.  They apply to all virtualisation solutions.