Slide Deck – Private Cloud Academy: Managing Hyper-V

Here is the presentation that I gave a few months ago at the Microsoft Ireland/System Dynamics Private Cloud Academy event in Dublin.  It focused on how to manage Hyper-V using System Center Virtual Machine Manager (SCVMM/VMM) 2008 R2 and System Center Operations Manager (SCOM/OpsMgr) 2007.

Recent KB Articles Affecting Hyper-V, Etc

Here are a few KB articles that Microsoft released recently that affect Hyper-V farms.

KB2004712: Unable to backup Live Virtual Machines in Server 2008 R2 Hyper-V

“When backing up online Virtual Machines (VMs) using Windows Server Backup or Data Protection Manager 2007 SP1, the backup of the individual Virtual Machine may fail with the following error in the hyperv_vmms Event Log:

No snapshots to revert were found for virtual machine ‘VMName’. (Virtual machine ID 1CA5637E-6922-44F7-B17A-B8772D87B4CF)”.

VM with GPT pass-through disk on a Hyper-V cluster with a SAS-based storage array will cause the VM to report “Unsupported Cluster Configuration.”

“When you attach a GPT pass-through disk provided from SAS storage (Serial attached SCSI) array to a highly available virtual machine by using the Hyper-V Manager or Failover Cluster Management Microsoft Management Console (MMC) snap-in, the System Center Virtual Machine Manager 2008 Admin Console lists the status of the virtual machine as "Unsupported Cluster Configuration."

Details in the High Availability section of the VM’s properties in SCVMM are:

Highly available virtual machine <Machinename> is not supported by VMM because the VM uses non-clustered storage. Ensure that all of the files and pass-through disks belonging to the VM reside on highly available storage”.

On a computer with more than 64 Logical processors, you may experience random crashes or hangs

“On a computer which has more than 64 logical processors, you may experience random memory corruption during boot processing. This may result in system instability such as random crashes or hangs.

This problem occurs due to a code defect in the NDIS driver (ndis.sys).

Microsoft is currently investigating this problem, and will post more details when a fix is available.

To work around this issue, reduce the number of processors so that the system has no more than 64 logical processors. For example, disable hyper-threading on the processors”.

The network connection of a running Hyper-V virtual machine may be lost under heavy outgoing network traffic on a computer that is running Windows Server 2008 R2 SP1

“Consider the following scenario:

  • You install the Hyper-V role on a computer that is running Windows Server 2008 R2 Service Pack 1 (SP1).
  • You run a virtual machine on the computer.
  • You use a network adapter on the virtual machine to access a network.
  • You establish many concurrent network connections. Or, there is heavy outgoing network traffic.

In this scenario, the network connection on the virtual machine may be lost. Additionally, the network adapter may be disabled”.

A hotfix is available to let you configure a cluster node that does not have quorum votes in Windows Server 2008 and in Windows Server 2008 R2

“Windows Server Failover Clustering (WSFC) uses a majority of votes to establish a quorum for determining cluster membership. Votes are assigned to nodes in the cluster or to a witness that is either a disk or a file share witness. You can use the Configure Cluster Quorum Wizard to configure the cluster’s quorum model. When you configure a Node Majority, Node and Disk Majority, or Node and File Share Majority quorum model, all nodes in the cluster are each assigned one vote. WSFC does not let you select the cluster nodes that vote for determining quorum.

After you apply the following hotfix, you can select which nodes vote. This functionality improves multi-site clusters.  For example, you may want one site to have more votes than other sites in a disaster recovery scenario. Without the following hotfix, you have to plan the number of physical servers that are deployed to distribute the number of votes that you want for each site.”
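To make the weighted-vote idea concrete, here’s a minimal sketch of a majority-of-configured-votes check. The function name, data shapes, and node names are mine for illustration; this is not WSFC’s actual algorithm, just the arithmetic the hotfix enables (setting a node’s vote weight to 0 so a DR site can never seize quorum on its own).

```python
# Sketch of the weighted-vote quorum model described above.
# has_quorum() and the node names are illustrative, not WSFC internals.
def has_quorum(node_votes, online):
    """node_votes: dict of node/witness name -> vote weight.
    Setting a weight to 0 is what the hotfix enables.
    online: set of names currently up and reachable."""
    total = sum(node_votes.values())
    live = sum(v for name, v in node_votes.items() if name in online)
    # Quorum requires strictly more than half of all configured votes.
    return live > total / 2

# Two-site cluster: the primary site keeps the votes, the DR site
# nodes get 0, so a partitioned DR site cannot form quorum alone.
votes = {"PRI1": 1, "PRI2": 1, "PRI3": 1, "DR1": 0, "DR2": 0}
print(has_quorum(votes, {"PRI1", "PRI2"}))  # primary survives a node loss
print(has_quorum(votes, {"DR1", "DR2"}))    # DR site alone: no quorum
```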

Interesting Survey Results on Behalf of Veeam

I’ve just read these stats on TechCentral.ie in an article called “IT departments lack visibility in virtualisation”.  They are from a survey “carried out by Vanson Bourne on behalf of VMware management solutions provider Veeam Software”.

  1. Nearly half (49%) of firms that use virtualisation say they have delays in resolving IT problems because of a lack of visibility into their whole IT infrastructure
  2. Forty five per cent of respondents also said that the lack of visibility is slowing down their organisation’s adoption of virtualisation
  3. Eighty per cent of respondents currently use specialist tools but would prefer to use traditional enterprise-wide management tools
  4. Seventy-one per cent said they had difficulty managing the VMware vSphere and Microsoft Hyper-V hypervisors from a single console …
  5. … while 68% wanted a single dashboard for managing them both

The stats quoted are from TechCentral.ie, so please check out their site for IT news.

Let’s quickly deal with the stats one by one:

  1. Want visibility into your infrastructure?  I’m curious to see how VMware will accomplish that.  They make great virtualisation software but that’s where they stop.  On the other hand, Microsoft System Center will audit and report on your infrastructure hardware and software (Configuration Manager) and monitor your hardware and applications (Operations Manager).
  2. Use the Microsoft Assessment and Planning Toolkit, ideally combined with System Center, and you have the tools to (a) figure out what you have, (b) figure out what your virtualisation infrastructure will be, and (c) do the conversion process.
  3. See System Center.  vSphere will give you great VMware virtualisation management.  But as anyone who really knows what private cloud computing is will tell you, the business doesn’t care about the infrastructure – they care about the business application that lives on top of it.  You need complete end-to-end and top-to-bottom management, including deployment, configuration, auditing, policy management, virtualisation, monitoring (traditional and client perspective), backup/recovery, and maybe even other things, covering everything from the network/hardware to the web app/database running on top of everything.
  4. Understandable.  VMware are adding Hyper-V support.  VMM 2008 R2 manages vSphere (but not all features).  VMM 2012 will add more vSphere support in addition to Xen.  But vSphere 5 isn’t far away.  I’ll be honest, I don’t think any management solution will have 100% feature management completeness of all virtualisation platforms, but maybe we can get close to it.
  5. See #4

Some Downloads For You To Consider: iSCSI and SCVMM

Microsoft was busy yesterday and released a bunch of downloads that you might be interested in.

Microsoft iSCSI Software Target 3.3

One of the challenges of trying out things like Hyper-V and clustering in a lab is the storage.  SANs are expensive.  There are solutions like Windows Storage Server (sold as OEM on storage appliances) and StarWind (an economical and highly regarded iSCSI target to install on Windows Server).

Now, if you want a simple iSCSI target that you can download and install, you have one.  Jose Barreto blogged about (with instructions) the Microsoft iSCSI 3.3 target being made available to the general public.  This was previously only available as a part of Storage Server.  Now you can download it and install it on a Windows Server 2008 R2 machine to create a simple iSCSI storage solution.  So if you want a quick and cheap “SAN” to try out clustering … you got it!

This isn’t limited to the lab either.  The iSCSI target is supported in production usage.  So if you need a cheap shared storage solution for a cluster, this is one way you can go.  Sure, it won’t match a SAN appliance for functionality or performance, and the likes of StarWind and Datacore offer other features, but this opens up some opportunities at the lower end of the market.

SCVMM 2012 MpsRpt Beta Tool

A lot of people are trying out the beta for System Center Virtual Machine Manager 2012.  I keep telling people that the virtualisation folks in Microsoft are serious about gathering and acting on feedback.  This is evidence of that.  This tool will enable support for collecting trace logs in SCVMM 2012 Beta.  Documentation is available here.

System Center Virtual Machine Manager 2008, 2008 R2, and 2008 R2 SP1 Configuration Analyzer

This tool has been updated to add support for SCVMM 2008 R2 SP1.

“The VMMCA is a diagnostic tool you can use to evaluate important configuration settings for computers that either are serving or might serve VMM roles or other VMM functions. The VMMCA scans the hardware and software configurations of the computers you specify, evaluates them against a set of predefined rules, and then provides you with error messages and warnings for any configurations that are not optimal for the VMM role or other VMM function that you have specified for the computer.

Note: The VMMCA does not duplicate or replace the prerequisite checks performed during the setup of VMM 2008, VMM 2008 R2, or 2008 R2 SP1 components.”

I’ll be on Talk TechNet

I’m going to be a guest on Microsoft’s Talk TechNet webcast on April 27th at 9am PST.

“Talk TechNet is all about discussing topics and trends in the world of IT Professionals.  In this show we’ll have guest Aidan Finn. Call in and join us for what promises to be a lively 60 minute session.  Get some burning questions answered on Virtualization.

Presenters: Keith Combs, Sr. Program Manager, Microsoft Corporation, Matt Hester, Sr. IT Pro Evangelist, Microsoft Corporation, and Aidan Finn, Microsoft Virtualization MVP”

It should be interesting … hopefully you’ll be able to tune in!

Deploying VMM 2008 R2 Now?

You need to be aware of a few things if you are deploying System Center Virtual Machine Manager (SCVMM) 2008 R2 at the moment.

First thing is that VMM 2012 is just around the corner.  The public beta launched yesterday.  It brings about some big changes.  If you are buying that license then I recommend that you tack on some Software Assurance to get the upgrade to SCVMM 2012 when it is released as RTM.

Next up is SQL Server support.  SQL Express has been supported up to now.  That limits you to an on-board 4GB database.  That’s not been an issue for most Hyper-V deployments.  The free license (as opposed to SQL Server Standard edition) was a real money saver and an “obvious” decision – one which I have made myself.

VMM 2012 will not support SQL Express.  You will need SQL Server 2008 R2 Standard (or higher edition).  Yup; you will have to spend that little bit more.  If you are doing the upgrade (after VMM 2012 RTM) then you can probably install SQL Server 2008 R2, detach the database from Express, and reattach the database in SQL Server 2008 R2 (to be verified).

An interesting change is that VMM 2012 can be made highly available.  Some have deployed VMM as a HA VM (which I strongly dislike) to get this effect.  HA VMM will require a clustered file share (for the library) and HA SQL (for the VMM database).

So keep all that in mind if you are deploying VMM 2008 R2 now.

VMM 2012 Public Beta is Launched

The public beta for System Center Virtual Machine Manager 2012 was launched today at MMS 2011.  You can download it now.

This one is a game changer for Hyper-V administrators.  Cloud, service templates, host/cluster deployment, network/storage integration, XenServer support … VMM is getting as big as ConfigMgr!

Don’t expect it to be like going from VMM 2008 to VMM 2008 R2.  It’s a very different tool.  You’ll need to do some reading to get to know it – but it’s worth it!

Nice Feedback is Soup for the Soul

I think I’ve mentioned before that writing a book is hard work.  To be honest, when you’re going through the 3rd and 4th edit, you sometimes start to wonder if it’s all worth it or not. 

But then when you get positive feedback, sometimes by email or by Twitter, it can perk you up quite a bit.  Here’s a little sample of that for Mastering Hyper-V Deployment:

“… thank you for your awesome Hyper-V blog- it has really helped me get moving on Hyper-V. I purchased your book, Mastering Hyper-V Deployment earlier this week and found that to be even more valuable” – Paul

“… read it for the book review and I must say it is great” – Carsten

“…Great book” – Michael

“Handing out 16 copies of Aidan Finn’s Mastering Hyper-V Deployment book http://amzn.to/aKCQXj to the students of my #hyperv course” – @hvredevoort

Then there is the feedback on Amazon where Mastering Hyper-V Deployment is averaging 5 stars:

“Just got the book and reading half way through. A well written book with a lot of good explanation and diagram to assist user to understand the hyper v deployment. Keep up the good work” – Lai Yoong Seng

“The book has proven to be a big timesaver because it (1) cuts through the bureaucracy of the Microsoft-provided documentation and the hours researching product information on the web and (2) it covers details that will help me avoid problems later.  This is one of the few network admin books I have read cover-to-cover.” – S. Tsukuda

“I found this book to be a very easy read and overall it had a great flow. Being an IT professional, I have read a lot of technical books and most are tough to read cover to cover. I had no issues reading through Mastering Hyper-V Deployment because Aidan’s style of writing is natural and he writes at a technical level that can translated by anyone, not just a Hyper-V expert. I highly recommend purchasing this book if you are planning to deploy Hyper-V R2 or have already done so.” – A. Bolt

“Best of all, you’ll get almost all the answers to the questions you’ve been thinking about. It’s all about details, but it’s always easy to get into it. You’ve been asking to yourself whether you should use snapshot on a VM running SQL ? the answers found from different sources on internet may be confusing you. In this book you’ll learn why not to use it or when you should use it and how to avoid any problem doing it among many other details to be aware of.” – Thomas Lally

“Appropriate for all Hyper-V users from the beginner to the expert, it goes beyond deployment and is definitely the administrator’s aid and if using guidance here your Hyper-V solution should remain in good shape.” – Virtualfat

“This is an excellent introduction to Hyper-V which is Microsoft’s Enterprise Software Solution. I particularly like the way the book is laid out, it is similar to a project plan to assist you if you were deploying your own Hyper-V project.  There is lots of very good information contained and this book is an asset to anyone who is planning a Hyper-V Deployment.” – Mr. J. Kane

One of the more interesting comments reported to me (from two independent sources) came from the Microsoft European HQ in Reading, UK.  Some of the Microsoft consultants there have stated that they thought Mastering Hyper-V Deployment was the best Hyper-V book they’ve read, including those from MS Press.  It would be an understatement to say that put a smile on my face!

Credit for the quality of Mastering Hyper-V Deployment must also be shared with the editors from Sybex, Hans Vredevoort (technical editor), and Patrick Lownds (co-author).

Last year was tough.  I was getting pretty tired of the editing process as we circled the end of Mastering Windows 7 Deployment.  I pushed through and eventually it was released a few weeks ago.  Today I got this nice message on Twitter from @miamizues

“Your co authored book on windows 7 deployment is our departments new bible, thank you”.

I was just a part of a big team of people who wrote, edited, and reviewed that book, but that was especially nice to hear.

Thank you to those concerned for taking the time to pass on or share the nice words.

And there are also plenty of online and in-person friends/colleagues who’ve said some nice things and supported me.  You know who you are and thank you!

Private Cloud Computing: Designing in the Dark

I joined the tail end of a webcast about private cloud computing to be greeted by a demonstration of the Microsoft Assessment and Planning Toolkit in a virtualisation conversion scenario.  That got me to thinking, raised some questions, and brought back some memories.

Way back when I started working in hosting/virtualisation (and it was VMware 3.x, BTW) I started a thread on a forum with a question.  It was something about storage sizing or planning, but I forget exactly what.  A VMware consultant (and a respected expert) responded by saying that I should have done an assessment of the existing environment before designing anything.

And there’s the problem.  In a hosting environment, you have zero idea of what your sales people are going to sell, what your customers are going to do with their VMs, and what the application loads are going to be.  And that’s because the sales people and customers have no idea of those variables either.  You start out with a small cluster of hosts/storage, and a deployment/management system, and you grow the host/storage capacity as required.  There is nothing to assess or convert.  You build capacity, and the business consumes it as it requires it, usually without any input from you. 

And after designing/deploying my first private cloud (as small as it is for our internal usage) I’ve realised how similar the private cloud experience is to the hosting (public cloud or think VPS) experience.  I’ve built host/storage capacity, I’ve given BI consultants/developers the ability to deploy their own VMs, and I have no idea what they will install, what they will use them for, or what the loads will be on CPU, storage, or network.  They will deploy VMs into the private cloud as they need them, they are empowered to install software as they require, and they’ll test/develop as they see fit, thus consuming resources in an unpredictable manner.  I have nothing to assess or convert.  MAP, or any other assessment tool for that matter, is useless to me.

So there I saw a webcast where MAP was being presented, maybe for 5-10 minutes, at the end of a session on private cloud computing.  One of the actions was to get assessing.  LOL, in a true private cloud, the manager of that cloud hasn’t a clue what’s to come.

And here’s a scary bit: you cannot plan for application-supported CPU ratios.  Things like SharePoint (1:1) and SQL (2:1) have certain vCPU:pCPU ratios (virtual CPU:physical core) that are recommended/supported (search on TechNet or see Mastering Hyper-V Deployment).

So what do you do, if you have nothing to assess?  How do you size your hosts and storage?  That is a very tough question and I think the answer will be different for everyone.  Here’s something to start with and you can modify it for yourself.

 

  1. Try to figure out how big your infrastructure might get in the medium/long term.  That will define how big your storage will need to be able to scale out to.
  2. Size your hosts.  Take purchase cost, operating costs (rack space, power, network, etc), licensing, and Hyper-V host sizing (384 VMs max per host, 1,000 VMs max per cluster, 12:1 vCPU:pCPU ratio) into account.  Find the sweet spot between many small hosts and fewer gigantic hosts.
  3. Try to figure out the sweet spot for SQL licensing.  Are you going per-CPU on the host (maybe requiring a dedicated SQL VM Hyper-V cluster), per CPU in the VM, or server/CAL?  Remember, if your “users” can install SQL for themselves then you lose a lot of control and may have to license per CPU on every host.
  4. Buy new models of equipment that are early in their availability windows.  It might not be a requirement to have 100% identical hardware across a Hyper-V cluster, but it sure doesn’t hurt when it comes to standardisation for support and performance.  Buying last year’s model (e.g. HP G6) because it’s a little cheaper than this year’s (e.g. HP G7) is foolish; that G6 will probably only be manufactured for 18 months before stocks disappear, and you probably bought it at the tail end of its life.
  5. Start with something small (a bit of storage with 2-3 hosts) to meet immediate demand and have capacity for growth.  You can add hosts, disks, and disk trays as required.  This is why I recommended buying the latest; now you can add new machines to the compute cluster or storage capacity that is identical to previously purchased equipment – well … you’ve increased the odds of it to be honest.
  6. Smaller environments might be ok with 1 Gbps networking.  Larger environments may need to consider fault tolerant 10 Gbps networking, allowing for later demand.
  7. You may find yourself revisiting step 1 when you’ve gone through the cycle because some new fact pops up that alters your decision making process.
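To make the host-sizing arithmetic in step 2 concrete, here’s a minimal sketch using the Hyper-V R2 SP1 limits mentioned above (384 VMs max per host, 1,000 VMs max per cluster, 12:1 vCPU:pCPU). The function name and parameters are mine; real sizing also has to weigh RAM, storage IO, and a failover reserve, so treat this as a ceiling, not a plan.

```python
# Rough upper bound on VMs per Hyper-V cluster from the limits above.
# Illustrative only: ignores RAM, storage, and N-1 failover reserve.
def cluster_vm_ceiling(hosts, cores_per_host, vcpus_per_vm=1,
                       ratio=12, host_cap=384, cluster_cap=1000):
    # How many VMs the CPU ratio allows on one host...
    per_host_by_cpu = (cores_per_host * ratio) // vcpus_per_vm
    # ...capped by the supported per-host maximum...
    per_host = min(per_host_by_cpu, host_cap)
    # ...then capped by the supported per-cluster maximum.
    return min(per_host * hosts, cluster_cap)

# e.g. four 16-core hosts running 2-vCPU VMs: CPU-bound, not cap-bound
print(cluster_vm_ceiling(4, 16, vcpus_per_vm=2))
```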

To be honest, you aren’t sizing; You’re providing access to elastic capacity that the business can (and will) consume.  It’s like building a baseball field in Iowa.  You build it, and they will come.  And then you need to build another field, and another, and another.  The exception is that you know there are 9 active players per team in baseball.  You’ve no idea if your users will be deploying 10 * 10 GB RAM lightly used VMs or 100 * 1 GB RAM heavily used VMs on a host.

I worked in hosting with virtualisation for 3 years.  The not knowing wrecks your head.  The only way I really got to grips with things was to have in-depth monitoring.  System Center Operations Manager gave me that.  Using PRO Tips for VMM integration, I also got my dynamic load balancing.  Now I at least knew how things behaved, and I also had a trigger for buying new hardware.

Finally comes the bit that really will vex the IT Pro:  Cross-charging.  How the hell do you cross-charge for this stuff?  Using third party solutions, you can measure things like CPU usage, memory usage, storage usage, and bill for them.  Those are all very messy things to cost – you’d need a team of accountants for that.  SCVMM SSP 2.0 gives a simple cross-charging system based on GB of RAM/storage that are reserved or used, as well as a charge for templates deployed (license).  Figuring out the cost of a GB of RAM/storage and the cost of a license is easy.

However, figuring out the cost of installed software (like SharePoint) is not; who’s to say if the user puts the VM into your directory or not, and if a ConfigMgr agent (or whatever) gets to audit it.  Sometimes you just gotta trust that they’re honest and their business unit takes care of things.
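The simple SSP 2.0-style model described above can be sketched in a few lines: flat rates per GB of RAM/storage reserved plus a per-template (license) charge. The function and the rates are made-up examples of mine, not real SSP 2.0 figures; the point is that this is easy arithmetic, unlike costing installed software.

```python
# Sketch of a simple reserved-capacity chargeback model.
# Rates are illustrative placeholders, not real SSP 2.0 pricing.
def monthly_charge(ram_gb, storage_gb, templates,
                   ram_rate=5.00, storage_rate=0.10, template_rate=20.00):
    """Charge = reserved RAM + reserved storage + deployed templates."""
    return (ram_gb * ram_rate
            + storage_gb * storage_rate
            + templates * template_rate)

# e.g. a business unit reserving 8 GB RAM and 100 GB storage
# for one templated VM
print(monthly_charge(8, 100, 1))
```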

EDIT:

I want to send you over to a post on Working Hard in IT.  There you will read a completely valid argument about the need to plan and size.  I 100% agree with it … when there’s something to measure and convert.  So please do read that post if you are doing a traditional virtualisation deployment to convert your infrastructure.  If you read Mastering Hyper-V Deployment, you’ll see how much I stress that stuff too.  And it scares me that there are consultants who refuse to assess, often using the wet finger in the wind approach to design/sizing.

Deploying New Hyper-V Integration Components

Imagine this: you are running a pretty big Hyper-V environment, Microsoft releases a service pack that adds a great new feature like Dynamic Memory (DM), legacy OS’s will require the new ICs, and you really want to get DM up and running.  Just how will you get those ICs installed in all those VMs?

First you need to check your requirements for Dynamic Memory.  The good news is that any Windows Server 2008 R2 with SP1 VM will have the ICs.  But odds are that if you have a large farm then things aren’t all that simple for you.  Check out the Dynamic Memory Configuration Guide to see the guest requirements for each supported OS version and edition. 

OK, let’s have a look at a few options:

By Hand

Log into each VM, install the ICs, and reboot.  Yuk!  That’s only good in the smallest of environments or if you’re just testing out DM on one or two VMs.

VMM

VMM has the ability to install integration components into VMs.  The process goes like this:

  1. Shut down a number of VMs
  2. Select the now shut down VMs (CTRL + select)
  3. Right-click and select the option to install new integration components
  4. Power up the VMs

You’ll see the VMs power up and power down during the installation process.  Now you’re done.

WSUS

Here’s an unsupported option that will be fine in a large lab.  You can use the System Center Updates Publisher to inject updates into a WSUS server.  Grab the updates from a W2008 R2 SP1 Hyper-V server and inject them into the WSUS server.  Now you let Windows Update take care of your IC upgrade.

Configuration Manager

This is the one I like the most.  ConfigMgr is the IT megalomaniac’s dream come true.  It is a lot of things, but at its heart is the ability to discover machines and distribute software to collections of machines that meet some criteria.  So, for example, you can discover if a Windows machine is a Hyper-V VM and put it in a collection.  You can even categorise them.

You may notice that Windows Server 2008 with SP2 Web and Standard editions require a prerequisite update to get DM working.

So, you can advertise the ICs to a collection of W2008 with SP2 Standard and Web editions, making that update a requirement.  The update gets installed, and then the ICs get installed.  For all other OS’s it’s just an update.  And of course, you just need to install SP1 on your W2008 R2 VMs.  As you may have noticed, I’m not promoting the use of the updates function of ConfigMgr; I’m talking about the ability to distribute software.

I’ll be honest – I don’t know if the ConfigMgr method is supported or not (like the WSUS option) but it’s pretty tidy, and surely must be the most attractive of all in a large managed environment.  And because it’s a simple software distribution, I can’t see what the problem might be.