Confusion Over Hyper-V and “Snapshots”

After presenting on the topic of Hyper-V to over 250 people (including some VMM) over the last 3 weeks, I’ve become aware that the term “Snapshot” confuses people.  There is an unfortunate amount of confusion created by many different but similar solutions/features:

Hyper-V Snapshot

This is the ability (just like in VMware) to capture a virtual machine’s state (memory, CPU, system state, and disk contents) at a point in time.  You can do some work and then revert back to that snapshot, returning the VM to where it was and undoing all the changes made since.  You can have lots of snapshots, tiered and branched.

Hyper-V snapshots are supported in production.  But they are not supported by many of the applications you’d install in a VM, e.g. SQL Server, Exchange, etc.  I’m not a fan of snapshots in production; in fact, I hate them because of the problems that people create for themselves (long story, where people assume all sorts of silly things that are convenient for them at the time).  But I do use Hyper-V snapshots in lab environments to reset tests or demos.


SCVMM Checkpoint

This is what System Center Virtual Machine Manager (SCVMM) calls a snapshot.  Yup, it’s confusing.

EDIT: Microsoft listened to feedback and renamed the Hyper-V snapshot to checkpoint in WS2012 R2. Now it matches SCVMM and shouldn’t be confused with other kinds of snapshot.

Volume Shadow Copy Service (VSS) Snapshot

This kind of snapshot is an NTFS volume snapshot that allows Windows to back up hot files that are in use (e.g. virtual machines) or databases with data/log consistency (e.g. SQL Server or Exchange).

In the Hyper-V world, you can back up VMs (even running ones) using Hyper-V VSS-compatible backup products such as DPM 2010 or Altaro.  VSS creates a snapshot of the NTFS volume that contains the running VMs’ files, and then the backup can hit the snapshot.

This snapshot is a VSS snapshot, not a Hyper-V Snapshot.  You won’t see it in Hyper-V Manager or in SCVMM.  It exists purely as a hot file backup mechanism.

Interestingly, whereas Hyper-V snapshots may not be supported by many applications, this kind of backup can be, e.g. SQL Server and Exchange.  However, some services, such as Domain Controllers, do not support restoring this kind of backup (in AD it causes USN rollback).

When I’m asked for advice, I tell people to use this kind of backup to “snapshot” a VM instead of Hyper-V snapshots.  There isn’t the pain/mess of mismanaging VHDs, AVHDs and merges, and it is supported by almost every app you’ll need in a VM.

SAN LUN Snapshot

In a SAN, you can create a snapshot of a LUN.  This duplicates the LUN.  How the duplication works depends on the SAN.

The VSS snapshot mechanism can leverage this to speed up backup by using a SAN manufacturer-provided hardware VSS provider.  Instead of doing a software-based VSS snapshot, it will create a SAN snapshot of the relevant LUN, and that can then be used by the VSS-enabled backup product.  It’s especially useful for Hyper-V clusters with CSV where you want to minimise the amount of Redirected I/O (Redirected Mode or Redirected Access).

This week I heard that some are telling customers to use a manually created SAN LUN snapshot as a form of backup/restore on an hourly basis.  That’s painful, and it’s probably consuming expensive disk – they’d be better off using an efficient backup solution that writes to more economical disk.

Fixing the Confusion

As you can imagine, all this overuse of the term “snapshot” doesn’t help.  It’s one thing when hardware vendors and Microsoft differ, but it’s another thing when Hyper-V, SCVMM, and Windows VSS themselves cause the confusion.  If I had one suggestion then it would be this:

Change the term “Snapshot” in Hyper-V to “Checkpoint”.  VSS isn’t going to change, and you’re not going to get the SAN vendors to change.  Doing this would also increase consistency in Windows Server 8.

Me Being Interviewed About CSV & Backup Design by Carsten Rachfahl

I was at the E2E Virtualisation Conference over the weekend, and had a good time chatting with lots of folks including Ronnie Isherwood (@virtualfat), Jeff Wouters (@JeffWouters), Didier van Hoye (@workinghardinit), and Carsten Rachfahl (@hypervserver). 

Carsten was awarded MVP status in the Virtual Machine expertise (like myself) by Microsoft earlier this year.  He’s a big contributor to the German-speaking (and English too) community, tweeting, blogging, podcasting, and creating videos.

After my second session on CSV and backup design, Carsten asked if I would be willing to shoot a video interview on the subject.  Absolutely, and it just so happened we had a cool background with the London docklands at sunset – being an amateur photographer, I was willing to shiver a little for nice light.


The video was posted this morning by Carsten.  He was a busy man; more videos were shot over the weekend with some of the others, and we even did a roundtable video where we talked about our favourite features of Windows 8.  Those videos will be posted in the coming weeks.

Just Because You Can Do Something, It Doesn’t Mean You Should

I get it; money is tight and people need to be creative.  But I also know that you shouldn’t do something just because you can.

Take backup of Hyper-V for example.  Several times, I’ve been challenged on “support statements” for Hyper-V.  People want to, and are, installing backup software (the management product, not just the agent) on the parent partition of Hyper-V hosts.

Microsoft are quite clear on this: it is not supported.  The only things that are supported are management agents, e.g. for anti-malware, monitoring, backup, and so on.  I don’t care what the backup software vendor says.  If you have a problem with that host when it breaks, you had better hope that Company X knows how to fix Hyper-V, because Microsoft support will tell you that you did something that wasn’t supported.

Like I said earlier – I’ve been challenged on this during presentations.  OK, I’m quick on my feet when I’m presenting.  I gave the people in question a simple analogy.  I can hold a loaded gun to my head and pull the trigger.  There is absolutely nothing in the architecture of my rib cage, shoulder, arm, hand, neck, head, the gun, or the bullet that prevents that.  However, it turns out that the manufacturer doesn’t support that, and there’s a real risk that my brain will fail to function (although some might claim that happened quite a while ago).  Just because you can do something, that isn’t a reason that you should.

Creative engineering is good.  I’ll be among the first to applaud a cool design.  But doing stuff to save €100 here and there, while not understanding the tech, while deliberately contravening the manufacturers support statement, and putting your customer (internal or external) at risk is just plain dumb.  In fact, I’ll have to be stronger about that; knowingly contravening manufacturer support statements is negligent.


Whitepaper – Planning Hyper-V CSV and Backup

I have just published a whitepaper discussing the subjects of Hyper-V Cluster Shared Volume (CSV) design and backup.

Windows Server 2008 R2 introduced many new features for those of us who use Hyper-V. One of the big ones was something called Cluster Shared Volume (CSV). This allowed us to do something that VMware users take for granted and that we could not do before this release of Windows Server: store many virtual machines, which are running on many hosts in a cluster, on a single storage volume. The benefit of CSV is that it simplifies administration, reduces the possibility of human engineering error, and even makes the private cloud a possibility.

A structure depends on the foundation that it is built upon. The same is true of a virtualisation infrastructure. The foundation of Hyper-V (or XenServer or vSphere) is the storage design and implementation. What appears to be not very well understood is that backup design is intrinsically linked to your storage architecture. One must be considered hand-in-hand with the other in the Hyper-V world. Get that wrong and you’ll have unhappy users, an unhappy boss, and maybe even an unhappy bank manager when you are no longer employed. When you get to grips with the basics you’ll be empowered to implement that ideal virtualisation platform with optimised backup.

This document will cover:

  • What is CSV and how does it work?
  • How backup works with CSV
  • Designing CSV for your compute cluster
  • Disaster recovery with multi-site clusters
  • “Planning” for the private cloud


Thanks to Altaro for sponsoring this document.

Altaro Launches Hyper-V Backup … And How To Win an Unlimited Copy!

“Hyper-V Backup in 5 clicks – Hyper Easy, Hyper Speed, Hyper Effective” … that’s the tag line for a new Hyper-V backup solution from Altaro, called Hyper-V Backup, that launched today.  Features include:

  • Hot backups with VSS integration
  • Restore to a different host
  • File level restore
  • Different backup schedules for different VMs
  • Supports Hyper-V Server
  • Restore a VM to the same host but with a different name (cloning)
  • Reverse Delta Incremental Backup
  • Hyper-V cluster aware
  • Restore from older backups if you want
  • Plan for disasters
  • Backup Hyper-V snapshots


At this point, I would also like to welcome Altaro to my blog as a sponsor.



Want a free copy of Unlimited Edition of Altaro Hyper-V Backup?  Then here is what you need to do.

  • Step 1: Follow me on Twitter
  • Step 2: Add the word Altaro to your Twitter profile

I will be choosing 1 winner for this software on Monday morning at 9am (Irish time).


I’ve not had a chance to play with Hyper-V Backup yet but I’m looking forward to getting a chance to give it a try.  In the meantime, you can read more about it over with my friends on


Hyper-V Replica DR Strategy Musings VS What We Can Do Now

See my more recent post which talks in great detail about how Hyper-V Replica works and how to use it.

At WPC11, Microsoft introduced (at a very high level) a new feature of Windows 8 (2012?) Server called Hyper-V Replica.  This came up in conversation in meetings yesterday and I immediately thought that customers in the SMB space, and even those in the corporate branch/regional office would want to jump all over this – and need the upgrade rights.

Let’s look at the DR options that you can use right now.

Backup Replication

One of the cheapest options around, and great for the SMB, is replication by System Center Data Protection Manager (DPM) 2010.  With this solution you are leveraging the disk-to-disk functionality of your backup solution.  The primary site DPM server backs up your virtual machines.  The DR site DPM server replicates the backed-up data and its metadata to the DR site.  During the invocation of the DR plan, virtual machines can be restored to an alternative (and completely different) Hyper-V host or cluster.


Using DPM is cost effective and, thanks to throttling, is light on bandwidth and has none of the latency (distance) concerns of higher-end replication solutions.  It is a bit more time consuming at invocation.

This is a nice economic way for an SMB or a branch/regional office to do DR.  It does require some work during invocation: that’s the price you pay for a budget friendly solution that kills two marketing people with one stone – Hey; I like birds but I don’t like marke …Moving on …

Third-Party Software Based Replication

The next solution up the ladder is a 3rd party software replication solution.  At a high level there are a few types:

  • Host based solution: 1 host replicates to another host.  These are often non-clustered hosts.  This works out being quite expensive.
  • Simulated cluster solution: This is where 1 host replicates to another.  It can integrate with Windows Failover Clustering, or it may use its own high availability solution.  Again, this can be expensive, and solutions that feature their own high availability mechanism can be flaky, maybe even subject to split-brain active-active failures when the WAN link fails.
  • Software based iSCSI storage: Some companies produce an iSCSI storage solution that you can install on a storage server.  This gives you a budget SAN for clustering.  Some of these solutions can include synchronous or asynchronous replication to a DR site.  This can be much cheaper than a (hardware) SAN with the same features.  Beware of using storage level backup with these … you need to know if VSS will create the volume snapshot within the volume that’s being replicated.  If it does, then you’ll have your WAN link flooded with unnecessary snapshot replication to the DR site every time you run that backup job.


This solution gives you live replication from the production to the DR site.  In theory, all you need to do to recover from a site failure is to power up the VMs in the DR site.  Some solutions may do this automatically (beware of split brain active-active if the WAN link and heartbeat fails).  You only need to touch backup during this invocation if the disaster introduced some corruption.

Your WAN requirements can also be quite flexible with these solutions:

  • Bandwidth: You will need at least 1 Gbps for Live Migration between sites.  100 Mbps will suffice for Quick Migration (it still has a use!).  Beyond that, you need enough bandwidth to handle the data throughput for replication, and that depends on the rate of change to your VMs/replicated storage.  Your backup logs may help with that analysis.
  • Latency: Synchronous replication will require very low latency, e.g. less than 2 ms.  Check with the vendor.  Asynchronous replication is much better at handling long-distance and high-latency connections.  You may lose a few seconds of data during the disaster, but it’ll cost you a lot less to maintain.
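If it helps to put rough numbers on the bandwidth question, here is a minimal back-of-the-envelope sketch.  The churn figure would come from your backup logs as suggested above; the 50 GB/day and 10-hour window are invented numbers for illustration, not recommendations.

```python
# Hypothetical sketch: back-of-the-envelope WAN bandwidth estimate for
# asynchronous replication.  The daily-change figure would come from your
# backup logs; the numbers below are made up for illustration.

def required_mbps(daily_change_gb, replication_window_hours):
    """Average Mbps needed to ship the daily changed data within the window."""
    bits = daily_change_gb * 8 * 1000 ** 3          # GB -> bits (decimal units)
    seconds = replication_window_hours * 3600       # window length in seconds
    return bits / seconds / 1_000_000               # bits/s -> Mbps

# e.g. 50 GB of changed VM data per day, replicated over a 10-hour window:
print(round(required_mbps(50, 10), 1))  # ~11.1 Mbps average
```

Remember this is an average; real links need headroom for bursts and for everything else sharing the pipe.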

I am not a fan of this type of solution.  I’ve been burned by this type of software with file/SQL server replication in the past.  I’ve also seen it used with Hyper-V where compromises on backup had to be made.

SAN Replication

This is the most expensive solution, and is where the SAN does the replication at the physical storage layer.  It is probably the simplest to invoke in an emergency and, depending on the solution, can allow you to create multi-site clusters, sometimes with CSVs that span the sites (and you need to plan very carefully if doing that).  For this type of solution you need:

  • Quite an expensive SAN.  That expense varies wildly.  Some SANs include replication, and some really high end SANs require an additional replication license(s) to be purchased.
  • Lots of high quality, and probably ultra low latency, WAN pipe.  Synchronous replication will need a lot of bandwidth and very low latency connections.  The benefit is (in theory) zero data loss during an invocation.  When a write happens in site A on the SAN, then it happens in site B.  Check with the manufacturer and/or an expert in this technology (not honest Bob, the PC salesman, or even honest Janet, the person you buy your servers from).


This is the Maybach of DR solutions for virtualisation, and is priced as such.  It is therefore well outside the reach of the SMB.  The latency limitations with some solutions can eliminate some of the benefits.  And it does require identical storage in both sites.  That can be an issue with branch/regional office to head office replication strategies, or using hosting company rental solutions.

Now let’s consider what 2012 may bring us, based purely on the couple of minutes of Hyper-V Replica presentation at WPC11.

Hyper-V Replica Solution

I previously blogged about the little bit of technology that was on show at WPC 2011, with a couple of screenshots that revealed functionality.

Hyper-V Replica appears (in the demonstrated pre-beta build and things are subject to change) to offer:

  • Scheduled replication, which can be based on VSS to maintain application/database consistency (SQL, Exchange, etc).  You can schedule the replication for outside core hours, minimizing the impact on your Internet link on normal business operations.
  • Asynchronous replication.  This is perfect for the SMB or the distant/small regional/branch office because it allows the use of lower priced connections, and allows replication over longer distances, e.g. cross-continent.
  • You appear to be able to maintain several snapshots at the destination site.  This could possibly cover you in the corruption scenario.
  • The choice of authentication between replicating hosts appeared to allow Kerberos (in the same forest) and X.509 certificates.  Maybe this would allow replication to a different forest: in other words a service provider where equipment or space would be rented?

What Hyper-V Replica will give us is the ability to replicate VMs (and all their contents) from one site to another in a reliable and economic manner.  It is asynchronous and that won’t suit everyone … but those few who really need synchronous replication (NASDAQ and the like) don’t have an issue buying two or three Hitachi SANs, or similar, at a time.


I reckon DPM and DPM replication still have a role in the Hyper-V Replica (or any replication) scenario.  If we do have the ability to keep snapshots, we’ll only have a few of them.  What do you do if you invoke your DR after losing the primary site (flood, fire, etc) and someone needs to restore a production database, or a file with important decision/contract data?  Are you going to call in your tapes from last week?  Hah!  I bet that courier is getting themselves and their family to safety, stuck in traffic (see post-9/11 bridge closures or the state of the roads in the New Orleans floods), busy handling lots of similar requests, or worse (it was a disaster).  Replicating your backups to the secondary site will allow you to restore data (that is still on the disk store) where required, without relying on external services.

Some people actually send their tapes to be stored at their DR site as their offsite archival.  That would also help.  However, remember you are invoking a DR plan because of an unexpected emergency or disaster.  Things will not be going smoothly.  Expect it to be the worst day of your career.  I bet you’ve had a few bad ones where things don’t go well.  Are you going to rely entirely on tape during this time frame?  Your day will only get worse if you do: tapes are notoriously unreliable, especially when you need them most.  Tapes are slow, and you may find a director impatiently mouth-breathing behind you as the tape catalogues on the backup server.  And how often do you use that tape library in the DR site?

To me, it seems like the best backup solution, in addition to Hyper-V Replica (a normal feature of the new version of Hyper-V that I cannot wait to start selling), is to combine quick/reliable disk-disk-disk backup/replication for short term backup along with tape for archival.

That’s my thinking now, after seeing just a few minutes of a pre-beta demo on a webcast.  As I said, it’s subject to change.  We’ll learn more at/after Build in September and as we progress from beta-RC-RTM.  Until then, these are musings, and not something to start strategising on.

Another Hyper-V Implementation Mistake – Too Many CSVs

In the PowerPoint that I posted yesterday, I mentioned that you should not go overboard with creating CSVs (Cluster Shared Volumes).  In the last two weeks, I’ve heard of several people who have.  I’m not going to play blame game.  Let’s dig into the technical side of things and figure out what should be done.

In Windows Server 2008 Hyper-V clustering, we did not have a shared disk mechanism like CSV.  Every disk in the cluster had a single owner/operator host.  Realistically (and as required by VMM 2008), we had to have 1 LUN/cluster disk for each VM.

That went away with CSV in Windows Server 2008 R2.  We can size our storage (IOPS from MAP) and plan our storage (DR replication, backup policy, fault tolerance) accordingly.  The result is you can have lots of VMs and virtual hard disks (VHDs) on a single LUN.  But for some reason, some people are still putting 1 VM, and even 1 VHD, on a CSV.

An example: someone is worried about disk performance and they spread the VHDs of a single VM across 3 CSVs on the SAN.  What does that gain them?  In reality: nothing.  It actually is a negative.  Let’s look at the first issue:

SAN Disk Grouping is not like Your Daddy’s Server Storage

If you read some of the product guidance on a big software publisher’s support site, you can tell that there is still some confusion out there.  I’m going to use HP EVA lingo because it’s what I know.

If I had a server with internal disks, and wanted to create three RAID 10 LUNs, then I would need 6 disks.


The first pair would be grouped together to make LUN1 at a desired RAID level.  The second pair would be grouped together to make the second LUN, and so on.  This means that LUN1 is on a completely separate set of spindles to LUN2 and LUN3.  They may or may not share a storage controller.

A lot of software documentation assumes that this is the sort of storage that you’ll be using.  But that’s not the case with a cluster with a hardware SAN. You need to use the storage it provides, and it’s usually nothing like the storage in a server.

By the way, I’m really happy that Hans Vredevoort is away on vacation and will probably miss this post.  He’d pick it to shreds.

Things are kind of reversed.  You start off by creating a disk group (HP lingo!).  This is a set of disks that will work as a team, and there is often a minimum number required.


From there you will create a virtual disk (not a VHD – it’s HP lingo for a LUN in this type of environment).  This is the LUN that you want to create your CSV volume on.  The interesting thing is that each virtual disk in the disk group spans every disk in the disk group.  How that spanning is done depends on the desired RAID level.  RAID 10 will stripe using pairs of disks, and RAID 5 will stripe using all of the disks.  That gives you the usual expected performance hits/benefits of those RAID levels and the expected usable capacity.

In the below, you can see that two virtual disks (LUNs) have been created in the disk group.  The benefit of this approach is that the virtual disks can benefit from having many more spindles to use.  The sales pitch is that you are getting much better performance than with the alternative of server-internal storage.  Compare LUN1 from above (2 spindles) with vDisk1 below (6 spindles).  More spindles = more speed.

I did say it was a sales pitch.  You’ve got other factors like SAN latency, controller cache/latency, vDisks competing for disk I/O, etc.  But most often, the sales pitch holds fairly true.


If you think about it, a CSV spread across a lot of disk spindles will have a lot of horsepower.  It should provide excellent storage performance for a VM with multiple VHDs.
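To put hedged numbers on the “more spindles = more speed” pitch, here is a tiny sketch.  The IOPS-per-spindle figure is an assumed round number for illustration; real numbers depend on disk type, RAID write penalty, controller cache, and everything else competing for the disk group.

```python
# Hypothetical sketch: why a LUN spanning a disk group outruns a LUN on a
# dedicated pair.  IOPS_PER_SPINDLE is a made-up round number, not a
# vendor figure.

IOPS_PER_SPINDLE = 150  # assumed figure, roughly a 10K SAS disk

def lun_iops(spindles):
    """Rough aggregate read IOPS available to a LUN across its spindles."""
    return spindles * IOPS_PER_SPINDLE

# Server-internal style: each RAID 10 LUN gets its own dedicated pair.
print(lun_iops(2))   # LUN1 on 2 spindles

# Disk-group style: every vDisk (LUN) spans all 6 spindles in the group.
print(lun_iops(6))   # vDisk1 on 6 spindles
```

Same six disks either way; the disk-group layout just lets every LUN borrow all of them.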

A MAP assessment is critical.  I’ve also pointed out in that PowerPoint that customers/implementers are not doing this.  This is the only true way to plan storage and decide between VHD or passthrough disk.  Gut feeling, “experience”, “knowledge of your network” are a bunch of BS.  If I hear someone saying “I just know I need multiple physical disks or passthrough disks” then my BS-ometer starts sending alerts to OpsMgr – can anyone write that management pack for me?

Long story short: a CSV on a SAN with this type of storage offers a lot of I/O horsepower.  Don’t think old school because that’s how you’ve always thought.  Run a MAP assessment to figure out what you really need.

Persistent Reservations

Windows Server 2008 and 2008 R2 Failover Clustering use SCSI-3 persistent reservations (PRs) to access storage.  Each SAN solution has a limit on how many PRs it can support.  You can roughly calculate what you need using:

PRs = Number of Hosts * Number of Storage Channels per Host * Number of CSVs

Let’s do an example.  We have 2 hosts, with 2 iSCSI connections each, with 4 CSVs.  That works out as:

2 [hosts] * 2 [channels] * 4 [CSVs] = 16 PRs

OK; things get more complicated with some storage solutions, especially modular ones.  Here you really need to consult an expert (and I don’t mean Honest Bob, who once sold you a couple of PCs at a nice price).  The key piece may end up being the number of storage channels.  For example, each host may have 2 iSCSI channels, but it maintains connections to each module in the SAN.

Here’s another example.  There is an iSCSI SAN with 2 storage modules.  Once again, we have 2 hosts, with 2 iSCSI connections each, with 4 CSVs.  This now works out as:

2 [hosts] * 4 [channels –> 2 modules * 2 iSCSI connections] * 4 [CSVs] = 32 PRs

Add 2 more storage modules and double the number of CSVs to 8 and suddenly:

2 [hosts] * 8 [channels –> 4 modules * 2 iSCSI connections] * 8 [CSVs] = 128 PRs

Your storage solution may actually calculate PRs using a formula with higher demands.  But the question is: how many PRs can your storage solution handle?  Deploy too many CSVs and/or storage modules and you may find that you have disks disappearing from your cluster.  And that leads to very bad circumstances.
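The rough PR arithmetic above can be wrapped in a one-line function, handy for sanity-checking a design against the limit in your SAN’s documentation.  The function name and the way modules multiply channels are my own framing of the worked examples, not a vendor formula.

```python
# Hypothetical sketch of the rough PR formula from the text:
# PRs = hosts * storage channels per host * CSVs, where channels per host
# = iSCSI connections per host * storage modules in a modular SAN.

def estimate_prs(hosts, iscsi_connections_per_host, storage_modules, csvs):
    """Rough lower bound on SCSI-3 persistent reservations a design needs."""
    channels_per_host = iscsi_connections_per_host * storage_modules
    return hosts * channels_per_host * csvs

# The three examples from the text:
print(estimate_prs(2, 2, 1, 4))   # 2 hosts, 2 connections, 1 module, 4 CSVs
print(estimate_prs(2, 2, 2, 4))   # add a second storage module
print(estimate_prs(2, 2, 4, 8))   # 4 modules and 8 CSVs
```

As the text warns, your SAN may use a more demanding formula, so treat the result as a floor and check it against the manufacturer’s published PR limit.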

You may find that a storage firmware update increases the number of required PRs.  But eventually you reach a limit that is set by the storage manufacturer.  They obviously cripple the firmware to create a reason to buy the next higher up model.  But that’s not something you want to hear after spending €50K or €100K on a new SAN.

The way to limit your PR requirement is to deploy only the CSVs you need.

Undoing The Damage

If you find yourself in the situation with way too many CSVs then you can use SCVMM Quick Storage Migration to move VMs onto fewer, larger CSVs, and then remove the empty CSVs.


Slow down to hurry up.  You MUST run an assessment of your pre-virtualisation environment to understand what storage to buy.  You also use this data as a factor for planning CSV design and virtual machine/VHD placement.  Like my old woodwork teacher used to say: “measure twice and cut once”.

Take that performance requirement information and combine it with backup policy (1 CSV backup policy = 1 or more CSVs, 2 CSV backup policies = 2 or more CSVs, etc), fault tolerance (place clustered or load balanced VMs on different CSVs), and DR policy (different storage level VM replication policies requires different CSVs).
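As a hedged illustration of that planning rule, a lower bound on the CSV count could be sketched like this.  The policy tuples and the max() combination are my own simplification for illustration; a real design also has to factor in capacity, IOPS, and PR limits.

```python
# Hypothetical sketch of the CSV planning rule above: each distinct
# (backup policy, DR replication policy) treatment needs at least one CSV,
# and members of the largest guest cluster should sit on different CSVs.
# This is a simplification for illustration only.

def min_csvs(storage_policies, largest_guest_cluster=1):
    """Lower bound on the number of CSVs a design needs."""
    distinct_treatments = len(set(storage_policies))
    return max(distinct_treatments, largest_guest_cluster)

# e.g. three distinct backup/DR treatments and a 2-node guest cluster:
policies = [("daily-backup", "replicated"),
            ("daily-backup", "not-replicated"),
            ("weekly-backup", "not-replicated")]
print(min_csvs(policies, largest_guest_cluster=2))  # -> 3
```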

Carbonite on my Windows Home Server

When I set up my Windows Home Server, I configured the normal Windows Server Backup task to back up the server folders to a USB disk.  That’s nice for normal backup/recovery.  But it doesn’t protect my data (documents, books, whitepapers, and thousands of photos) against fire and theft.  Sure, I could probably swap disks and store them offsite.  But I know how poor my discipline with doing that was in the past.  I need something automated for off-site backup.

So I decided to try Carbonite.  It’s one of the few online personal backup solutions that will work on WHS.  There’s a 15-day free trial, so I signed up for that, and I added the offer code from the TWiT Security Now podcast – that gives you an extra 2 months free in addition to your 12-month subscription (unlimited storage for less than $60/year!).

The install was easy.  The configuration wizard walks you through the few steps.  You’re warned that files like video will not be backed up.  I’m OK with that – I have no personal/holiday videos because I’m a still photo man.  Targeting a folder is easy – use Windows Explorer, right-click, and select the add to backup option.  I had two schedule choices: constantly backup changes or schedule.  I went for the first option.

OK, the flaw: I have a 20 GB per month limit and I’m on ADSL.  It’s going to take a very long time to get all of my photo collection backed up to the cloud.  I’ve been incrementally adding folders, starting with My Documents, and then I added some of my older photo folders to test.  All worked well.  I’ll continue testing, and then decide next week if I’ll pay for the service.
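A quick sketch of why that cap hurts: at 20 GB per month, seeding even a modest photo collection takes the best part of a year.  The 200 GB collection size is an invented figure for illustration.

```python
import math

# Rough sketch: months needed to seed a cloud backup through a monthly
# upload cap.  The 20 GB cap is from the post; the collection size is an
# invented example figure.

def months_to_seed(collection_gb, monthly_cap_gb=20):
    """Whole months needed to push the collection through the cap."""
    return math.ceil(collection_gb / monthly_cap_gb)

# e.g. a 200 GB photo collection at 20 GB per month:
print(months_to_seed(200))  # -> 10 months
```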


Veeam Backup & Replication to Support Hyper-V

The conspiracy theories started a few weeks ago when Veeam started to advertise on my (mainly MS infrastructure, featuring MS Hyper-V) blog.  Then we saw a countdown clock for a big announcement on the first day of TechEd USA 2011.  1 hour into the keynote, Veeam made their announcement:

“Veeam Software, innovative provider of VMware data protection, disaster recovery and VMware management solutions for virtual datacenter environments, today announced at Tech·Ed North America that it is adding support for Windows Server Hyper-V and Microsoft Hyper-V Server to Veeam Backup & Replication, the leading data protection solution for virtual environments used with more than 1.5 million virtual machines (VMs) worldwide”.

Veeam Backup & Replication offers:

  • 2-in-1 backup and replication for Hyper-V: Veeam’s solution includes replication, which provides near-continuous data protection (near-CDP) and enables the best possible recovery time and recovery point objectives (RTOs and RPOs).
  • Changed block tracking for Hyper-V: Veeam’s new hypervisor support includes technology for changed block tracking to enable fast, frequent and efficient backup and replication of all VMs, including those running on Cluster Shared Volumes (CSV).
  • Built-in deduplication and compression: Included at no extra charge, these capabilities minimize consumption of network bandwidth and backup storage.

Veeam is a name that is almost synonymous with VMware.  Many would consider that if you buy VMware then you buy Veeam.  With this new offering for Hyper-V, and with cluster support, you have to think that more than a few Hyper-V architects are considering the wider set of options that are now available to them.


Event: Private Cloud Academy – DPM 2010

The next Private Cloud Academy event, co-sponsored by Microsoft and System Dynamics, is on next Friday 25th March, 2011.  At this free session, you’ll learn all about using System Center Data Protection Manager (DPM) 2010 to backup your Hyper-V compute cluster and the applications that run on it.  Once again, I am the presenter.

I’m going to spend maybe a 1/3 of the session talking about Hyper-V cluster design, focusing particularly on the storage.  Cluster Shared Volume (CSV) storage-level backups are convenient, but there are things you need to be aware of when you design the compute cluster … or face the prospect of poor performance, blue screens of death, and a P45 (pink slip, aka getting fired).  This affects Hyper-V when being backed up by anything, not just DPM 2010.

With that out of the way, I’ll move on to very demo-centric DPM content – I’m spending most of next week building the demo lab.  I’ll talk about backing up VMs and their applications, and the different approaches that you can take.  I’ll also be looking at how you can replicate DPM backup content to a secondary (DR) site, and how you can take advantage of this to get a relatively cheap DR replication solution.

Expect this session to last the usual 3-3.5 hours, starting at 09:30 sharp.  Note that the location has changed; we’ll be in the Auditorium in Building 3 in Sandyford.  You can register here.