MMS 2010 Keynote: Server Management

I’m tuning into the live webcast of today’s Microsoft Management Summit 2010 keynote (there’s another tomorrow), which focuses on server management.  I’ll be doing my best to blog about new stuff as it happens.

System Center Service Manager 2010 is announced as RTM.  Sorry dudes!  YEARS of work (and rework) and I thought you’d get more of a launch than that.

Jeez, an hour later and I’ve not got much more to report.  There’s a lot of talk about cloud (nothing new) and a lot of talk about old concepts (using System Center to do more, and more engineering rather than operations).

EDIT: Someone on Twitter counted the number of times “cloud” was mentioned.  The final count was 83.  Cloud OD.

The next generation of the System Center data center products is based on lessons from Azure and Bing.  Edwin Yuen hits the stage.  Now we’re cooking!

VMM v.Next

It looks quite different!  It has the cleaner v.Next interface rather than the Outlook 2007 one we are used to.  Server application virtualization, SQL models, and MSDeploy (IIS) packages live in the library.  The template model has evolved into a service template spanning multiple servers or tiers.  We see a demo of a 3 tier application.  You can drop in the OS templates that we know, plus “Server App-V” and MSDeploy packages.  You can specify that you want X number of servers in a tier of the model, and you can tier your storage as standard or high performance.  So you’ve got X variations of servers made from a few Server App-V images and OS templates.

Seriously – I could use this right now.  I have recurring deployments that I could model like this.

You can integrate with WSUS and run a patching compliance report against a VHD in the library!  You can then remediate that image in the library.  Now VMM knows which of its managed VMs need to be updated!  You don’t need to patch the OS of the running VM: an update service operation replaces the running OS while keeping the Server App-V package.

Operations Manager & Azure

Next up: how you can monitor Azure and on-premises systems together for seamless application monitoring using OpsMgr 2007 R2.  We see a distributed application containing traditional monitored items (including databases and web watchers) and an Azure presence.  OpsMgr integrates with Azure using a soon-to-be-released (“later this year sometime”) management pack to gather performance information.  A task is there to add new web role instances in Azure.  Nice and simple!

Deployment of additional Azure instances is based on real measured performance data (synthetic transaction monitoring).  Expansion (or withdrawal) of instances can be done easily through the same on-premises monitoring interface.

That’s the end.  There was really only good content in the last 22 minutes of an 82 minute keynote.  Quite a short post compared to what I would write for an MS Ireland event of the same length (see last week’s coverage of a 3 hour session).

SCE 2010 and DPM 2010 RTM

Data Protection Manager 2010 and System Center Essentials 2010 were both announced as being released to manufacturing today.

DPM is MS’s backup solution and the one with the ability to back up a Hyper-V CSV.  The catch is that it puts the CSV into redirected I/O mode, so the preference is to use a storage solution with a supported VSS provider.  That allows you to safely back up running VMs and maintain database consistency when recovered – VSS runs all the way through the stack.  You can even recover single files!

SCE 2010 is the all-in-one package with the best of ConfigMgr and OpsMgr, and now with VMM included so you can manage W2008 R2 Hyper-V.  This makes it the ideal systems management solution for small-to-medium companies.

KB2022557: Selecting RedHat in VMM Fails

Microsoft has posted a fix to enable you to select RedHat as the OS of a VM in VMM 2008 R2, 2008 and 2007.  Without the fix you get this error:

Error (10637)

The virtualization software on host <server> does not support the Red Hat Enterprise Linux 5 operating system.

The problem is that the VMM database needs a tiny adjustment.  You can do this easily enough using SQL Management Studio or SQL Management Studio Express.  First, back up the database (don’t come crying to me if you didn’t!).
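If you want to script that backup, here’s a quick sketch from a PowerShell prompt.  It assumes the sqlcmd tools are on the box, plus the default VirtualManagerDB database name and MICROSOFT$VMM$ SQL Express instance – adjust all of that to match your installation:

# Back up the VMM database before touching it.  The database, instance
# and backup path here assume a default install - adjust as needed.
sqlcmd -S '.\MICROSOFT$VMM$' -E -Q "BACKUP DATABASE VirtualManagerDB TO DISK = N'C:\Backup\VirtualManagerDB.bak' WITH INIT"

With the backup done, create a new query with the following: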

update tbl_IL_OS
set OSFlags = 0x14
where Name like 'Red Hat Enterprise Linux 5%'

Run the query and you should be sorted.


Some Useful Hyper-V Posts From the Last While

I’ve been going through my unread feeds from the last month or so (it’s been busy) and I’m posting links to the interesting ones here:

So … What Exactly Am I Writing?

You can tell I’m pretty busy because my usual high rate of blogging has dropped significantly in the last month.  Apologies for that.  The blogging has become writing: I am involved in two book projects.  I’ve just seen on Twitter that details of one of those have gone public – the tweet appeared seconds after I sent off a chapter I’d finished.

Earlier this year I proposed an idea for a Windows Server 2008 R2 virtualization book to Wiley Publishing/Sybex.  It took quite a bit of work to tune the proposal.  It requires an understanding of the subject matter, the audience, and ideas on how the book can be marketed.  You might think that a brief overview of the subject matter would be enough.  But no, the publisher needs much more detail.  You pretty much have to provide a detailed project plan for every heading (3 levels deep), plus page estimates and time estimates.  The proposal evolved over the weeks and eventually went through a couple of reviews.  Then I got the news: an ISBN was assigned and contracts were on the way – I was going to be a lead author on my own book for the very first time!!!!  I did get drunk that night – I think.

The deadlines are very tight, so I was considering seeking help.  My contact at Sybex advised that I outsource some of the chapters to a co-author.  I knew the person I wanted to bring in.  Wilbour Craddock is a technical specialist on the partner team with Microsoft Ireland.  Will (Irish folks will know him as the crazy Canadian who is always wearing shorts) is also a former SBS MVP.  His job has him spending a lot of time working with Hyper-V and Microsoft System Center, making him a perfect co-author for this project.  Thankfully, Will agreed to hop on board the crazy train of book writing.

Another MVP (I won’t say who yet because I don’t have permission to name him) is the technical editor, employed by Sybex.  He’s an ace at this stuff and will make sure everything we do is up to scratch.

The book is called Mastering Hyper-V Deployment.  I won’t go into the details of it yet, but you can bet that it is based on our collective experience and knowledge of the product set involved in a Hyper-V deployment.  I saw a gap in the market and figured I could probably write the book (or a good chunk of it) to fill that gap.  The estimated release is November 19th of this year, which means we need to finish writing in July.  It has started to appear on some sites for pre-order.

I’m two chapters in at the moment.  I’m really pushing my hardware at home to its limits and am “this close” to buying more.  Will is ahead of schedule and has one chapter nearly done.

I am also working on another book project as a co-author for a friend’s book.  It’s another on-subject book that is turning out to be a good experience.  I’ve one chapter done on that and am 50% through the other.  I’ll talk more about that when the time is right.

As you may have read in my previous posts about my chapters in Mastering Windows Server 2008 R2, the original draft is just the very start of the process.  There are numerous technical, language, layout and copy edits for each and every chapter.  It’s a lot of work but it’s a great experience.  And I can’t wait for the buzz of seeing my name as the lead author of a book in a book shop.  I had to really try to contain myself when I saw Mastering Windows Server 2008 R2 in Barnes & Noble over in Bellevue, WA back in February.

MS Ireland Virtualization Summit

Yesterday, MS Ireland held the local instance of the Virtualisation Summit that MS is running in many cities around the world.  It was keynoted by Ian Carlson, a senior program manager from Redmond (nice guy too).

The usual slide decks were presented, probably the first time many of the attendees (around 140 I think, standing room only) had seen them.  For those of us “on the inside” this can be a bit tiresome, but that’s what happens when you attend every MS event going to get your free cup of coffee and pastry for brekkie!  The end of the morning session featured Gerry from Lakeland Dairies, an interesting case study because they make the most of System Center and use the Compellent SAN to replicate their VMs across their campus for DR.  They are also a fine example of a company that had a plan and knew their requirements going into the project, allowing them to make good decisions.

After the break there was a split into desktop virtualization and server virtualisation.  *I must stop using Z’s in the American way – too much writing for Sybex*  Ronnie Dockery from MS and Citrix ran a breakout on desktop virtualisation and VDI.  Wilbour Craddock, a techie on the MS Ireland partner team, ran the server virtualisation breakout and went through a number of best practices and tips for a successful solution.  Maybe 60% went into the desktop room.

I did the last 15 or so minutes in the server room, talking about our Hyper-V, OpsMgr, VMM and HP deployment at C Infinity.  I talked through the relevant bits of the infrastructure, using a snazzy animated slide deck to show how HP SIM, OpsMgr, VMM and highly available Hyper-V VMs allowed for no interruption of service back in January: we detected a degraded memory board (via the HP SIM agent and OpsMgr management pack), got the alert, used Live Migration to move VMs off the host, HP (via RedStone) replaced the affected board within the 4 hour support response window, and we continued on without missing a beat.  Some talk of PRO was also in there.  I also stressed how Hyper-V with System Center makes this a solution for applications, which is what the business really cares about – not NICs and memory boards.

I haven’t posted the slide deck – animations don’t work on Slideshare, and to be honest, my slides are nothing but cue cards for me to rattle on until someone rings a bell to shut me up.

I talked to a few people afterwards and the response to the morning was positive.  I think a lot of people either got a fresh view on hearing about the complete solution (it’s more than “just” hardware virtualisation) or were happier after hearing the experiences of two Irish customers using the suites – not just the usual “Here’s XYZ Giganto Corporation from the USA or Germany” that Irish customers cannot relate to.  MS Ireland does a great job on that.

Hyper-V, ML370 and Wireless

The after-work project I’m working on right now requires as many VMs as I can throw at it.  I’ve got my previously mentioned Latitude E6500 laptop running W2008 R2 Hyper-V.  It’s also my domain controller and my VMM 2008 R2 server/library.  Not best practice, but it’s fine for a domestic lab.

I need even more VMs than I can run on there, so I’ve got an HP ML370 G5 that was spare from work.  It’s got as much memory as I could scrape together, and I put Windows Server 2008 R2 on it.  One problem: I do not have a wired house, and I do not want to work beside the noisy server.  I’ll be using Office on my laptop for documentation, and I can sit with that in my sitting room.  The server will stay upstairs in my office.  Just how will they communicate?

That’s easy.  I have an old Belkin 11G wifi NIC which I put into the ML370.  Windows detected it as a Broadcom.  That ain’t right, but it works!  I’m going to set the server up as a member of my laptop’s domain.  That will allow me to put a VMM agent on there for remote management.

My VM templates are small enough (dynamic VHDs), but I probably don’t want to copy them over wifi.  I might just configure the wired NICs with another subnet range and connect the machines with a hub/switch when I need to deploy stuff.  Or maybe I’ll copy the templates over to the server using a USB disk and set up a library share on the server for a faster local copy.  That might just work!
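For what it’s worth, the VMM side of that plan is only a few cmdlets.  A rough sketch of it, once the ML370 is in the domain (the server, laptop and share names here are invented for illustration):

# Connect to the VMM server (running on the laptop in this lab).
Get-VMMServer -ComputerName "laptop.lab.local" | Out-Null

$cred = Get-Credential    # domain admin credentials for the new host

# Add the ML370 as a managed Hyper-V host so it gets a VMM agent.
Add-VMHost -ComputerName "ml370.lab.local" -Credential $cred

# Make the ML370 a library server and add a share on it, so template
# copies to that host stay local instead of crossing the wifi.
Add-LibraryServer -ComputerName "ml370.lab.local" -Credential $cred
Add-LibraryShare -SharePath "\\ml370.lab.local\VMMLibrary"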

The Sanbolic Melio FS (File System) With Hyper-V

I was contacted last month by Eva Helen at Sanbolic to see if I’d be interested in learning more about their Melio FS product.  I knew little about it and was keen to learn, so Eva got David Dupuis to give me a demo.  Dave just ran an hour-long demo on LiveMeeting with me and I learned a lot.  I gotta say, I’m impressed with this solution.

There were two Sanbolic products that Dave focused on:

  • La Scala = Cluster Volume Manager used instead of disk manager.  It’s a shared volume manager.  It is aware of what nodes are attached to it. 
  • Melio = Cluster File System. 

La Scala

  • La Scala can mirror volumes across 2 SANs, allowing for total SAN failure.  Each server has two controllers or a dual channel HBA, one path going to each SAN.  One write is converted to two writes on two paths.  In theory, there’s no noticeable performance hit for amazing fault tolerance.
  • On-the-fly volume expansion.
  • Can use any block based shared storage system, iSCSI or fibre channel.
  • You can set up a task, e.g. expand disk, and review it before committing the transaction.
  • Windows ACLs are integrated in the interface to control volume access rights.

I’ve got to say, the SAN mirroring is pretty amazing technology.  Note that performance will equal that of the slowest SAN.  It can take cheap storage solutions that might not even have controller/path fault tolerance and give them really high fault tolerance via redundant arrays and mirrored storage, with an imperceptible performance hit because the mirroring is done as simultaneous writes over 2 independent controller paths.

Melio

  • This is a 64-bit symmetrical cluster file system.
  • There is no coordinator node, management server, metadata controller, etc., that manages the overall system.  So there’s no redirected I/O mode *cheers from Hyper-V admins everywhere*
  • Metadata is stored on the file system and every node in the cluster has equal access to it.  This is contrary to the CSV coordinator in W2008 R2 failover clustering.
  • QoS (quality of service) allows per-process or per-file/folder file system bandwidth guarantees.  This allows granular management of SAN traffic for the controlled resources.  In the Hyper-V context, you can guarantee certain VHDs a percentage of the file system bandwidth.  You can also use wildcards, e.g. *.VHD.  This is another very nice feature.
  • There is a VSS provider, similar to how SAN VSS providers work.  Unlike CSV, there is no need for redirected I/O mode when you snap/back up the LUN.
  • There is a bundled product called SILM that allows you to copy (via VSS) new/modified files to a specified LUN on a scheduled basis.
  • Backup solutions like BackupExec that recognise the Melio VSS provider can use it to directly back up VMs on the Melio file system.
  • MS supports this system in Failover Clustering and VMM 2008 R2.  For example, Live Migration uses the file system.  You’ll see no CSV or storage in Failover Clustering; the Melio file system appears as a normal lettered drive on each node in the cluster.
  • By using advanced exclusive lock detection mechanisms that CSV doesn’t have, Melio can give near raw disk performance to VHDs.  They say they have 57% faster VHD performance than CSV!
  • You can present iSCSI-accessed Melio file systems to VMs.  You can license the product by host, which gives you 4 free VM licenses.
  • Melio isn’t restricted to just Hyper-V: web servers, SQL, file servers, etc.
  • Issues seen with things like AV on CSV aren’t likely here because there is no coordinator node; all metadata is available to all nodes through the file system.  You do need to be aware of scheduled scans: don’t have all nodes in the cluster doing redundant tasks.  The tip here: set a high percentage bandwidth guarantee for *.VHD and the AV is controlled.

It’s got to be said that you cannot think of this as some messy bolt-on.  Sanbolic has a tight relationship with Microsoft.  That’s why you see their Melio file system listed as a supported feature in VMM 2008 R2.  And that can only happen if it’s supported by Failover Clustering – VMM is pretty intolerant of unsupported configurations.

Overall, I’ve got to say that this is a solution I find quite interesting.  I’d have to give it serious consideration if I were designing a cluster from scratch, and the mirroring option raises some new design alternatives.

My $64,000,000 question has probably been heard by the guys a bunch of times, but it got a laugh: “When will Microsoft buy Sanbolic, and have you invested a lot in the company share scheme?”  Seriously though, you’d think this would be a quick and superb way to get a powerful cluster file system that is way ahead of VMFS and more than “just” a virtualisation file system.

Thanks to the kind folks at Sanbolic for the demo.  It’s much appreciated!

Fujitsu “My Very First Hyper-V”

Fujitsu has launched a bundle for SMEs (small/medium enterprises) that want to do Hyper-V virtualisation for the very first time.  They’ve called it “My Very First Hyper-V”.  It includes servers, external storage, Windows Server 2008 R2 and System Center Virtual Machine Manager 2008 R2 Workgroup Edition.  A flyer can be found here.

I wonder if they’ll replace the VMM installation with System Center Essentials 2010 when it is released.  That would make sense to me seeing as it’s aimed at this market and it gives software management, health & performance monitoring and VMM functionality.

Elastic Virtualisation With System Center

Today I was working with a customer who needed to grow their hosted presence with us due to performance and scaling requirements.  OpsMgr PRO Tips alerts made us aware of certain things that got the customer and us working.  A template machine from the VMM library was quickly deployed to meet the sudden requirements.  That got me thinking about how OpsMgr and VMM could be used in a large virtualised (and even physical) application environment to scale out and in as required.  All of this is just ideas.  I’m sure it’s possible; I just haven’t taken things to this extreme.

[Diagram: a crude example – load balanced web servers, a pair of content servers, a transaction queuing server, and transaction processing servers]

Let’s take the above crude example.  There are a number of web servers, all set up as dumb appliances with no content.  All the content and web configurations are on a pair of fault tolerant content servers.  The web servers are load balanced, maybe using appliances or maybe by reverse proxies.  It’s possible to quickly deploy these web servers from VM templates, because the deployed machines all have DHCP addresses and store no content or website configuration data.

The next tier in the application is typically the application server.  This design is also built to be able to scale out or in.  There is a transaction queuing server: it receives a job and then dispatches that job to a number of processing servers.  These transaction servers are all pretty dumb.  They have an application and know to receive workloads from the queuing server.  Again, they’re built from an image and have DHCP addresses.

All VM templates are stored in the VMM library.

All of this is monitored using Operations Manager.  Custom management packs have been written and distributed application monitoring is configured.  For example, average CPU and memory utilisation is monitored across the web farm, and an alert will be triggered if this gets too high.  A low water mark is also configured to detect when demand is low.

The web site is monitored using a captured web/user-perspective transaction.  Response times are monitored, and alerts fire if they exceed pre-agreed thresholds.

The queuing server’s queue is also monitored.  It should never exceed a certain level, i.e. a state where there is more work than there are transaction servers to process it.  A low water mark is also configured to detect when there is less work than there are transaction servers.
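To make the watermark idea concrete, here’s a toy PowerShell sketch of the two-threshold logic.  Get-QueueDepth and the threshold values are hypothetical – in the real design this logic lives in OpsMgr monitors, not in a polling script:

# Hypothetical watermark check; in the real design these are OpsMgr
# monitors with associated alerts, not a polling script.
$highWaterMark = 500    # more work queued than the farm can process
$lowWaterMark  = 50     # the farm is mostly idle

$queueDepth = Get-QueueDepth -Server "queue01"    # hypothetical cmdlet

if ($queueDepth -gt $highWaterMark) {
    Write-Host "Scale out: deploy another transaction server VM"
}
elseif ($queueDepth -lt $lowWaterMark) {
    Write-Host "Scale in: retire the oldest transaction server VM"
}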

So now OpsMgr knows when we have more work than resources, and when we have more resources than we have work for.  This means we only need a mechanism to add VMs when required and to remove VMs when required.  And don’t forget those hosts!  You’ll need to be able to deploy hosts too.  I’ll come back to that one later.

Deploying VMs can be automated.  We know that we can save a PowerShell job into the library when we create a VM.  Do that and you have your deployment script.  You can even use the GUIRunOnce option to append customisation scripts, e.g. naming of servers, installation of updates/software, and so on.  Now you just need a trigger.  We have one.

When OpsMgr fires an alert, it is possible to associate a recovery task with the alert.  For example, the average CPU/memory across the web farm is too high, or maybe the response time across the farm is too slow.  Simple – the associated response is to run a PowerShell script to deploy a new web server.  10 minutes later and the web server is operational.  We already know it’s set to use DHCP, so that’s networking sorted.  The configuration and the web content are stored off of the web server, so that’s that sorted.  The load balancing needs to be updated – I’d guess some amendment to the end of the PowerShell script could take care of that.
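Here’s a rough sketch of what such a recovery task script could look like with the VMM 2008 R2 cmdlets.  The VMM server, template, path, and buffer file names are all invented for illustration, and the load balancer step is left as a comment:

# Deploy a new web server VM from a library template.
Get-VMMServer -ComputerName "vmm01.lab.local" | Out-Null

$template = Get-Template | Where-Object { $_.Name -eq "WebServerTemplate" }

# Crude placement: pick the host with the most available memory.
# VMM's own host ratings would be a smarter choice.
$vmHost = Get-VMHost | Sort-Object AvailableMemory -Descending | Select-Object -First 1

# A time-stamped name keeps things unique.
$vmName = "WEB{0:yyyyMMddHHmmss}" -f (Get-Date)

New-VM -Template $template -Name $vmName -VMHost $vmHost -Path "C:\VMs" | Out-Null

# Record the deployment in a "buffer" file so the scale-in script
# (below) knows which VM is the oldest.
"$vmName,$(Get-Date -Format o)" | Add-Content "\\vmm01\Scripts\webfarm-buffer.csv"

# TODO: add the new server to the load balancer here.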

The same goes for the queuing server.  Once the workloads exceed the processing power, a new VM can be deployed within a few minutes and start taking on tasks.  They’re just dumb VMs.  Again, the script would need to authorise the VM with the queuing process.

That’s the high water mark.  We know every business has highs and lows.  Do we want to waste Hyper-V host resources on idle VMs?  Nope!  So when those low water marks are hit, we need to remove VMs.  That one’s a little more complex: the PowerShell script here will probably need to be aware of the right VM to remove.  I’d think about this idea: the deploy script would update a file or a database table somewhere.  Think of it like a buffer.  The oldest VM should then be the first one removed.  Why?  Because we Windows admins prefer newly built machines – they tend to be less faulty than ones that have been around a while.
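Continuing the sketch from above, the scale-in script reads the buffer file and retires the oldest entry.  Again, all names are invented, and draining the server from the load balancer or queuing process is left as a comment:

# Retire the oldest web server VM, as recorded by the deploy script.
Get-VMMServer -ComputerName "vmm01.lab.local" | Out-Null

$bufferFile = "\\vmm01\Scripts\webfarm-buffer.csv"
$entries = @(Get-Content $bufferFile)

# The first line in the buffer is the oldest deployment.
$oldestName = ($entries[0] -split ",")[0]

# TODO: drain the VM from the load balancer / queuing process first.

$vm = Get-VM -Name $oldestName
Shutdown-VM -VM $vm | Out-Null          # clean guest OS shutdown
Remove-VM -VM $vm -Confirm:$false | Out-Null

# Rewrite the buffer without the retired VM.
$entries | Select-Object -Skip 1 | Set-Content $bufferFile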

With all that in place, you can deploy VMs to meet demand and remove VMs when they are redundant, freeing up physical resources for other applications.

What about when you run out of Hyper-V server resources?  The most basic thing you need to do here is know that you need to buy hardware.  Few of us have it sitting around; we run on budgets and on JIT (just in time) principles.  Again, you’d need to do some clever management pack authoring (way beyond me, to be honest) to detect how full your Hyper-V cluster is.  When you get to a trigger point, e.g. starting to work on your second last host, you get an alert.  The resolution is to buy a server and rack it.  You can then use whatever build mechanism you want to deploy the host.  The next bit might be an option if you do have servers sitting around and can trigger them using Wake-On-LAN.
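Waking an idle-but-racked server doesn’t need anything exotic: a Wake-On-LAN magic packet is just 6 bytes of 0xFF followed by the target’s MAC address repeated 16 times, broadcast over UDP.  A quick PowerShell sketch (the MAC address is invented):

# Build and broadcast a Wake-On-LAN magic packet.
$macString = "00-1B-2C-3D-4E-5F"                      # target server's MAC
$mac = $macString -split "-" | ForEach-Object { [byte]("0x" + $_) }
$packet = [byte[]]((@(0xFF) * 6) + ($mac * 16))       # 6 x 0xFF, then MAC x 16

$udp = New-Object System.Net.Sockets.UdpClient
$udp.Connect([System.Net.IPAddress]::Broadcast, 9)    # UDP port 9 ("discard")
[void]$udp.Send($packet, $packet.Length)
$udp.Close()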

ConfigMgr will run a job to deploy an operating system to the idle server.  It’s just a plain Windows Server installation image.  Thanks to task sequences and some basic Server Manager PowerShell cmdlets, you can install the Hyper-V role and the Failover Clustering feature after the image deployment.  A few reboots happen, and you can then add the machine to the Hyper-V cluster.  You can approach this one from other angles too, e.g. adding the host into VMM, which triggers a Hyper-V installation.
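The Server Manager piece of that task sequence is only a couple of lines on W2008 R2.  A sketch of what the post-image step would run:

# Post-deployment step: install the Hyper-V role and the
# Failover Clustering feature, then reboot to finish up.
Import-Module ServerManager
Add-WindowsFeature Hyper-V, Failover-Clustering -Restart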

Now that is optimisation and dynamic IT!  All that’s left is for the robots to rise – there’s barely a human to be seen in the process once it’s all implemented.  I guess your role would be to work on the next generation of betas and release candidates so you can upgrade all of this when the time comes.

I’ve not read much about Opalis (recently acquired by Microsoft) but I reckon it could play a big role in this sort of deployment.  Microsoft customers who are using System Management Suite CALs (SMSE/SMSD) will be able to use Opalis.  Integration packs for the other System Center products are on the way in Q3.