Windows Server and System Center 2012 R2 Previews Are Available

It’s all over social media this morning: you can download WSSC 2012 R2 (that’s Windows Server 2012 R2 and System Center 2012 R2) from TechNet and MSDN right now.  The previews for the following are available now:

  • Hyper-V Server 2012 R2
  • Windows Server 2012 R2 Essentials
  • Windows Server 2012 R2 Datacenter
  • System Center 2012 R2 Virtual Machine Manager (x86 and x64)
  • System Center 2012 R2 Service Manager (x86 and x64)
  • System Center 2012 R2 Data Protection Manager (x86 and x64)
  • System Center 2012 R2 App Controller (x86 and x64)
  • System Center 2012 R2 Configuration Manager (x86 and x64)
  • System Center 2012 R2 Orchestrator (x86 and x64)
  • System Center 2012 R2 Operations Manager (x86 and x64)
  • Windows Server 2012 R2 Virtual Machine
  • Windows Server 2012 R2 Virtual Machine Core

SQL Server 2014 CTP1 is also up there for you to test.


Remember that these are preview releases – that’s like a beta (the product is not finished and has no support unless you are in a Microsoft-supervised TAP program) but without the feedback mechanism of a beta.  Do not use these preview releases in production!

I have the bits downloading now.  I’m on a customer site today so I don’t know if I’ll be deploying the bits or not until tomorrow.

Update Rollup 2 For System Center 2012 SP1 Is Released

Microsoft has released UR2 for System Center 2012 SP1 via Windows Update.  That means you’ll auto download and deploy (pending manual/auto approval on your part) this update via WSUS, etc.  You can also manually download the updates to each product. 

Note that VMM is not included this time around and OpsMgr has quite a few updates.

Please test and then update your or your customers’ sites to improve the performance and stability of your System Center deployments.  For consultants, this is an opportunity for you to do a little *ahem* sales, and see if there are some further deployments/customisations that you can do for your clients.

App Controller (KB2815569)

  • Issue 1: You cannot change the virtual machine network of deployed virtual machines.
  • Issue 2: The network connection is set to None after you view the network properties of a deployed virtual machine.
  • Issue 3: You cannot view the virtual networks for a virtual machine.
  • Issue 4: When you change the virtual network in App Controller, you receive an error message.
  • Issue 5: You cannot copy VMs that have multiple processors or large amounts of memory from VMM to Windows Azure.
  • Issue 6: App Controller requires Microsoft Silverlight 5 but links to the download page for Silverlight 4.
  • Issue 7: An argument null exception may occur if network connectivity is interrupted.

App Controller Setup (KB2823452)

  • Issue 1: App Controller cannot be installed if the Microsoft SQL Server database server name starts with a number.
  • Issue 2: Setup incorrectly reports that the SQL Server database has insufficient disk space.
  • Issue 3: Setup is unsuccessful when it tries to enable Internet Information Services (IIS).

Operations Manager (KB2826664)

  • Issue 1: The Web Console performance is very poor when a view is opened for the first time.
  • Issue 2: The alert links do not open in the Web Console after Service Pack 1 is applied for Operations Manager.
  • Issue 3: The Distributed Applications (DA) health state is incorrect in Diagram View.
  • Issue 4: The Details Widget does not display data when it is viewed by using the SharePoint webpart.
  • Issue 5: The renaming of the SCOM group in Group View will not work if the user language setting is not "English (United States)."
  • Issue 6: An alert description that includes multibyte UTF-8 characters is not displayed correctly in the Alert Properties view.
  • Issue 7: The Chinese (Taiwan) Web Console displays an incorrect message.
  • Issue 8: The APM to IntelliTrace conversion is broken when alerts are generated from dynamic module events.
  • Issue 9: Connectivity issues to System Center services are fixed.
  • Issue 10: High CPU problems are experienced in Operations Manager UI.
  • Issue 11: Query processor runs out of internal resources and cannot produce a query plan when you open Dashboard views.
  • Issue 12: Path details are missing for "Objects by Performance."

Operations Manager – UNIX and Linux Monitoring (Management Pack Update) (KB2828653)

  • Issue 1: The Solaris agent could run out of file descriptors when many multi-version file systems (MVFS) are mounted.
  • Issue 2: Logical and physical disks are not discoverable on AIX-based computers when a disk device file is contained in a subdirectory.
  • Issue 3: Rules and monitors that were created by using the UNIX/Linux Shell Command templates do not contain some parameters.
  • Issue 4: Process monitors that were created by the UNIX/Linux Process Monitoring template cannot be saved in an existing management pack.
  • Issue 5: The Linux agent cannot install on a CentOS or Oracle Linux host by using the FIPS version of OpenSSL 0.9.8.

Service Manager (KB2828618)

  • Issue 1: If the number of "Manual Activities" displayed in the Service Manager Portal exceeds a certain limit, page loads may time out.
  • Issue 2: Incorrect cleanup of a custom related type causes grooming on the EntityChangeLog table to stall.
  • Issue 3: Service requests complete unexpectedly because of a race condition between workflows.
  • Issue 4: The console crashes when you double-click a parent incident link on an extended incident class.
  • Issue 5: PowerShell tasks that were created by using the authoring tool do not run because of an incorrect reference.
  • Issue 6: The Exchange management pack is stuck in a Pending state after management pack synchronization.

Orchestrator (KB2828616)

  • Issue 1: The Monitor SNMP Trap activity publishes incorrect values for strings when a Microsoft SNMP Trap Service connection is used.
  • Issue 2: Inconsistent results when you use Orchestrator to query an Oracle database.

Data Protection Manager (KB2822782)

  • Issue 1: An express full backup job in SC 2012 SP1 may stop responding on a Hyper-V cluster that has 600 or more VMs.
  • Issue 2: When a SC 2012 SP1 item-level restore operation is performed on SharePoint, the restore is unsuccessful.
  • Issue 3: When you open DPM on a computer that is running SC 2012 SP1, the Welcome screen does not indicate the correct version of SP1.
  • Issue 4: When you perform a disconnected installation of the DPM 2012 SP1 agent, you receive an error message.
  • Issue 5: When you use DPM 2012 SP1 for tape backup, a checksum error may occur when the WriteMBC workflow is run.
  • Issue 6: Backups of CSV volumes may be unsuccessful with metadata file corruption in DPM 2012 SP1.
  • Issue 7: The DPM console may require more time to open than expected when many client systems are being protected.

System Center Data Protection Manager CSV Serialization Tool

I recently blogged about the big changes in WS2012 Cluster Shared Volumes (CSV).  The biggest changes are related to backup:

  • Single coordinated VSS snapshot
  • No more redirected IO

In Windows Server 2008 R2 CSV backup, we tried to use a hardware VSS provider to reduce the impacts of redirected IO.  But as it turns out, the multiple-snapshot-per-backup process of the past could cause problems for the hardware VSS provider and the SAN snapshot functionality.  In extreme cases, those problems could even lead to a CSV LUN “disappearing”.

If you had these problems and couldn’t get a better hardware VSS provider then you would switch to using the system VSS provider (using the VSS functionality that is built into Windows Server and does not use SAN snapshot features).  You’d be forced to use the system VSS provider if your SAN did not have support or licensing for a hardware (physical SAN) or software (software SAN) VSS provider.

If you were using the system VSS provider to back up W2008 R2 CSV then Microsoft recommended that you do something called serialization of your CSV backup (see here for DPM 2010 instructions).  This process creates (using PowerShell) and uses an XML file that is read by DPM.  Nice and simple if you have one DPM server for every W2008 R2 Hyper-V cluster.  But what if you had lots of clusters backed up by a single DPM server?  It meant you had to manually merge the XML files, and that would be a nightmare in a cloud where there is nothing but change.

Microsoft has released the System Center Data Protection Manager CSV Serialization Tool to help you in this scenario.  This tool is intended to be used when backing up Windows Server 2008 R2 Hyper-V clusters with one or more CSVs using DPM 2010 with QFE 3 and above or DPM 2012.

You do not need to use this tool with WS2012 CSV.

The downloads include the PS1 PowerShell script to create an XML file for each cluster and a tool to consolidate those XML files for DPM to use. 

Why release this tool?  Lots of people will have W2008 R2 clusters and won’t be in a position to upgrade them now or ever:

  • Change to production systems can be restricted, e.g. pharmaceuticals.
  • They might have licensed their hosts without Software Assurance and can’t upgrade until there is licensing budget.
  • They might build new clusters/hosts using WS2012 and have to leave existing VMs where they are until there is a suitable maintenance window.  For a public cloud, this could have to be scheduled well in advance.

This free tool will allow those sorts of environments to reduce DPM administrative effort.

Online Backup to Windows Azure Using System Center 2012 SP1 – Data Protection Manager

I blogged about Windows Azure Online Backup in March of this year.  What was announced then was a way to get an offsite backup of files and folders (only) into Windows Azure directly from Windows Server 2012 (including the Essentials edition).

The online backup market is pretty crowded and competitive.  You need to offer something that is different and, preferably, integrated with what the customer already has for onsite backups, so that the customer does not have to manage two backup systems.

Being a cloud service, Windows Azure Online Backup (WAOB) is something that can be tweaked and extended relatively rapidly.  And Microsoft has extended it.  WAOB will support protecting backup data from SysCtr 2012 SP1 DPM to the cloud.

With the System Center 2012 SP1 release, the Data Protection Manager (DPM) component enables cloud-based backup of datacenter server data to Windows Azure storage.  System Center 2012 SP1 administrators use the downloadable Windows Azure Online Backup agent to leverage their existing protection, recovery and monitoring workflows to seamlessly integrate cloud-based backups alongside their disk/tape-based backups.  DPM’s short-term, local backup continues to offer quicker disk-based point recoveries when business demands it, while the Windows Azure backup provides the peace of mind and reduction in TCO that comes with offsite backups.  In addition to files and folders, DPM also enables virtual machine backups to be stored in the cloud.

What this means is that you can:

  • Continue to reap the rewards of your investment in DPM for on-premises backups to disk and/or tape
  • Extend this functionality to back up to the cloud from the storage pools in DPM


With WAOB you will be able to:

… transparently recover files, folders and VMs from the cloud

There will be block level incremental backups to reduce the length of backup jobs and reduce the amount of data transfer.  Data is compressed and encrypted before it leaves your network.  And critically important for you to note:

The encryption passphrase is in your control only.  Once the data is encrypted, it stays that way in Microsoft’s storage.  They have no way to decrypt your data without your passphrase.  So choose a good one, and document/store it somewhere safe, e.g. with a lawyer or in a deposit box.

There is throttling for bandwidth control.  You can verify data integrity in the cloud without restoring it (but test restores are a good thing).  You can also configure retention policies – you balance regulatory requirements, business needs, and online storage costs.

To go with this, the Windows Azure Online Backup portal was launched last week.  You can sign up for a free preview with 300 GB of storage space.

It’s still beta so we don’t know:

  • Pricing
  • RTM date
  • How it will be sold, e.g. via partner channel which is critically important (see Office 365).

MMS 2012: Automating Data Protection And Recovery With DPM and System Center 2012

Speakers: Orin Thomas and Mike Ressler

Replication is not the same as backup.  Lose it in site A = lose it in site B.  Backup is still required.  And backup provisioning in the private cloud is a challenge because admins don’t know what’s being deployed.

DPM is a part of System Center, a part of a holistic integrated solution.  That makes it perfect for provisioning in the private cloud.

How Will The Agent Get Deployed?

  • Make it part of image
  • GPO for an OU
  • Scripting or manually
  • Use Configuration Manager
  • And probably lots more options, e.g. a runbook fired off from Service Manager

Their solution: the user goes to Service Manager and creates a request, and Orchestrator runs a runbook.  There is a DPM Integration Pack.  It’s a confusing IP apparently. 

  1. Initialize Data: Add parameters – ServerName, DatabaseName, and Type (3 types of protection group in DPM such as gold, silver, and bronze for recovery points, retention, etc).
  2. Get Data Source (renamed as Get Protection Group): Data Source Location set as protection group and select Type
  3. Get Data Source (get server ID) – choose protection server and select ServerName
  4. Get Data Source (renamed as Get Data Source ID) – DPM, Get protection server name and filter to DatabaseName to protect a single DB, could have said type = SQL to protect all DBs.
  5. Protect Data Source: Protection Group = Get Protection Group
  6. Create Recovery – Something.

Yup, it’s confusing.  Go look at the videos when the guys tweet the link.

Keep the self-service simple.  If there’s more than a few questions, the user won’t do it and they’ll blame you when data isn’t protected and it’s lost.

There’s a bunch of Service Manager stuff after this.

CTP of SP1 for System Center 2012

Following my post on information for VMM 2012 SP1 CTP, Microsoft released the CTP downloads.  This includes the VMM download and a download for DPM 2012 SP1 CTP. 

The CTP enables the Data Protection Manager component’s repository and agents to run on Windows Server “8” as well as providing protection in Windows Server “8” environments. The CTP also adds protection for new features in Windows Server “8”:

  • Hyper-V Virtual Machines on Cluster Shared Volumes 2.0 (CSV2.0)
  • Hyper-V Virtual Machines on remote SMB share
  • Files on De-Duplicated Volumes

The supported operating systems for DPM 2012 SP1 CTP are:

  • Windows 2008
  • Windows 2008 R2
  • Windows "8" Beta

Windows Server 2012 Hyper-V Replica … In Detail

If you asked me to pick the killer feature of WS2012 Hyper-V, then Replica would be high if not at the top of my list (64 TB VHDX is right up there in the competition).  In Ireland, and we’re probably not all that different from everywhere else, the majority of companies are in the small/medium enterprise (SME) space and the vast majority of my customers work exclusively in this space.  I’ve seen how DR is a challenge to enterprises and to the SMEs alike.  It is expensive and it is difficult.  Those are challenges an enterprise can overcome by spending, but that’s not the case for the SME.

Virtualisation should help.  Hardware consolidation reduces the cost, but the cost of replication is still there.  SANs often need licenses to replicate.  SANs are normally outside of the reach of the SME and even the corporate regional/branch office.  Software replication that is aimed at this space is not cheap either, and to be honest, some of those products are more risky than the threat of disaster.  And let’s not forget the bandwidth that these two types of solution can require.

Isn’t DR Just An Enterprise Thing?

So if virtualisation mobility and the encapsulation of a machine as a bunch of files can help, what can be done to make DR replication a possibility for the SME?

Enter Replica (Hyper-V Replica), a built-in software based asynchronous replication mechanism that has been designed to solve these problems.  This is what Microsoft envisioned for Replica:

  • If you need to replicate dozens or hundreds of VMs then you should be using a SAN and SAN replication.  Replica is not aimed at those medium/enterprise sites.
  • Smaller branch offices or regional offices that need to replicate to local or central (head office or HQ data centre) DR sites.
  • SMEs who want to replicate to another office.
  • Microsoft partners or hosting companies that want to offer a service where SMEs could configure important Windows Server 2012 Hyper-V VMs to replicate to their data centre – basically a hosted DR service for SMEs.  The requirements here are that it must have Internet-friendly authentication (not Kerberos) and it must be hardware independent, i.e. the production site storage can be nothing like the replica storage.
  • Most crucially of all: limited bandwidth.  Replica is designed to be used on commercially available broadband without impacting normal email or browsing activity – Microsoft does also want to sell them Office 365, after all.  How much bandwidth will you need?  How long is a piece of string?  Your best bet is to measure how much change there is to your customers’ VMs every 5 minutes and that’ll give you an idea of what bandwidth you’ll need.

Figure 1  Replicate virtual machines

In short, Replica is designed and aimed at the ordinary business that makes up 95% of the market, and it’s designed to be easy to set up and invoke.

What Hyper-V Replica Is Not Intended To Do

I know some people are thinking of this next scenario, and the Hyper-V product group anticipated this too.  Some people will look at Hyper-V Replica and see it as a way to provide an alternative to clustered Hyper-V hosts in a single site.  Although Hyper-V Replica could do this, it is not intended for this purpose.

The replication is designed for low bandwidth, high latency networks that the SME is likely to use in inter-site replication.  As you’ll see later, there will be a delay between data being written on host/cluster A and being replicated to host/cluster B.

You can use Hyper-V Replica within a site for DR, but that’s all it is: DR.  It is not a cluster where you fail stuff back and forth for maintenance windows.  You probably could shut down VMs for an hour before flipping over – maybe – but then it would be quicker to put them in a saved state on the original host, do the work, and reboot without failing over to the replica.

How It Works

I describe Hyper-V Replica as being a storage log based asynchronous disaster recovery replication mechanism.  That’s all you need to know …

But let’s get deeper.

How Replication Works

Once Replica is enabled, the source host starts to maintain a HRL (Hyper-V Replica Log file) for the VHDs.  Every 1 write by the VM = 1 write to VHD and 1 write to the HRL.  Ideally, and this depends on bandwidth availability, this log file is replayed to the replica VHD on the replica host every 5 minutes.  This is not configurable.  Some people are going to see the VSS snapshot (more later) timings and get confused by this, but the HRL replay should happen every 5 minutes, no matter what.

The HRL replay mechanism is actually quite clever; it replays the log file in reverse order, and this allows it to send only the latest write to each location.  In other words, it is asynchronous (able to deal with long distances and high latency by writing in site A and later writing in site B) and it replicates just the changes.

Note: I love stuff like this.  Simple, but clever, techniques that simplify and improve otherwise complex tasks.  I guess that’s why Microsoft allegedly ask job candidates why manhole covers are circular.

As I said, replication or replay of the HRL will normally take place every 5 minutes.  That means if a source site goes offline then you’ll lose anywhere from 1 second to nearly 10 minutes of data.

I did say “normally take place every 5 minutes”.  Sometimes the bandwidth won’t be there.  Hyper-V Replica can tolerate this.  After 5 minutes, if the replay hasn’t happened then you get an alert.  The HRL replay will have another 25 minutes (up to 30 minutes in total, including the original 5) to complete before going into a failed state where human intervention will be required.  This means that, with replication working, a business could lose between 1 second and nearly 1 hour of data.

Most organisations would actually be very happy with this.  Novices to DR will proclaim that they want 0 data loss.  OK; that is achievable with €100,000 SANs and dark fibre networks over short distances.  Once the budget face smack has been dealt, Hyper-V Replica becomes very, very attractive.

That’s the Recovery Point Objective (RPO – the amount of time/data lost) dealt with.  What about the Recovery Time Objective (RTO – how long it takes to recover)?  Hyper-V Replica does not have a heartbeat.  There is no automatic failover.  There’s a good reason for this.  Replica is designed for commercially available broadband that is used by SMEs.  This is often phone network based and these networks have brief outages.  The last thing an SME needs is for their VMs to automatically come online in the DR site during one of these 10 minute outages.  Enterprises avoid this split brain by using witness sites and an independent triangle of WAN connections.  Fantastic, but well out of the reach of the SME.

Therefore, Replica will require manual failover of VMs in the DR site, either by the SME’s employees or by a NOC engineer in the hosting company.  You could simplify/orchestrate this using PowerShell or System Center Orchestrator.  The RTO will be short but it has implementation-specific variables: how long does it take to start up your VMs and for their guest operating systems/applications to start?  How long will it take for you to get your VDI/RDS session hosts (for remote access to applications) up, running and accepting user connections?  I’d reckon this should be very quick, and much better than the 4-24 hours that many enterprises aim for.  I’m chuckling as I type this; the Hyper-V group is giving SMEs a better DR solution than most of the Fortune 1000’s can realistically achieve with oodles of money to spend on networks and storage replication, regardless of virtualisation products.
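That manual failover can be scripted.  Here’s a hedged sketch of the planned-failover flavour (where the primary is still running), using the WS2012 Hyper-V cmdlets – the VM and the hosts you run each half on are my own example assumptions:

```powershell
# Planned failover.  On the PRIMARY host: stop the VM and prepare the failover
# (this sends any remaining changes across, so there is zero data loss).
Stop-VM -Name "FS01"
Start-VMFailover -VMName "FS01" -Prepare

# On the REPLICA host: fail over, reverse the replication policy so changes
# now flow back to the (offline) primary copy, and start the VM.
Start-VMFailover -VMName "FS01"
Set-VMReplication -VMName "FS01" -Reverse
Start-VM -Name "FS01"
```

Wrap something like this in a runbook or script per service and the NOC engineer’s job becomes a single button press.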

A common question I expect: is there a Hyper-V integration component for Replica?  There is not.  This mechanism works at the storage level, where Hyper-V is intercepting and logging storage activity.

Replica and Hyper-V Clusters

Hyper-V Replica works with clusters.  In fact you can do the following replications:

  • Standalone host to cluster
  • Cluster to cluster
  • Cluster to standalone host

The tricky thing is the configuration replication and smooth delegation of replication (even with Live Migration and failover) of HA VMs on a cluster.  How can this be done?  You can enable a HA role called the Hyper-V Replica Broker on a cluster (once per cluster).  This is where you can configure replication, authentication, etc., and the Broker replicates this data out to the cluster nodes.  Replica settings for VMs will travel with them, and the Broker ensures smooth replication from that point on.
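For reference, the Broker can be created from PowerShell instead of Failover Cluster Manager.  A rough sketch, where the role name and static IP address are my own assumptions (verify the resource type name against your build):

```powershell
# Create a HA role/group to host the Hyper-V Replica Broker
Add-ClusterServerRole -Name "HVR-Broker" -StaticAddress 192.168.1.50

# Add the Broker resource to that group and tie it to the role's network name
Add-ClusterResource -Name "Virtual Machine Replication Broker" `
    -Type "Virtual Machine Replication Broker" -Group "HVR-Broker"
Add-ClusterResourceDependency "Virtual Machine Replication Broker" "HVR-Broker"

# Bring the Broker online
Start-ClusterGroup "HVR-Broker"
```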

Configuring Hyper-V Replica

I don’t have my lab up and running yet, but there are already many step-by-step posts out there.  I wanted to focus on the how it works and why to use it.  But here are the fundamentals:

On the replica host/cluster, you need to enable Hyper-V Replica.  Here you can control which hosts (or all) can replicate to this host/cluster.  You can do things like have one storage path for all replicas, or create individual policies based on source FQDN, such as storage paths or enabling/pausing/disabling replication.
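As a sketch of what that looks like in PowerShell (the host name and storage paths are made-up examples; run this on the replica host):

```powershell
# Enable this host as a replica server, using Kerberos (HTTP) within the forest,
# with one storage path for all incoming replicas
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"

# Alternatively, lock it down with per-source policies based on the source FQDN
New-VMReplicationAuthorizationEntry -AllowedPrimaryServer "host1.demo.internal" `
    -ReplicaStorageLocation "D:\Replicas\Host1" -TrustGroup "Default"

# Allow the replication traffic in through the Windows Firewall
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
```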

You do not need to enable Hyper-V Replica on the source host.  Instead, you configure replication for each required VM.  This includes things like:

  • Authentication: HTTP (Kerberos) within the AD forest, or HTTPS (destination provided SSL certificate) for inter-forest (or hosted) replication.
  • Select VHDs to replicate
  • Destination
  • Compressing data transfer: with a CPU cost for the source host.
  • Enable VSS once per hour: for apps requiring consistency – not normally required because of the logging nature of Replica, and it does cause additional load on the source host.
  • Configure the number of replicas to retain on the destination host/cluster: Hyper-V Replica will automatically retain X historical copies of a VM on the destination site.  These are actually Hyper-V snapshots on the destination copy of the VM that are automatically created/merged (remember we have hot-merge of the AVHD in Windows 8) with the obvious cost of storage.  There is some question here regarding application support of Hyper-V snapshots and this feature.
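The bullets above map fairly directly onto one cmdlet.  A hedged example (the VM and replica server names are assumptions, not a recommendation):

```powershell
# Enable replication for one VM: Kerberos/HTTP in-forest, compressed transfer,
# an hourly app-consistent VSS snapshot, and 4 historical recovery points
Enable-VMReplication -VMName "FS01" `
    -ReplicaServerName "replica.demo.internal" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -CompressionEnabled $true `
    -VSSSnapshotFrequencyHour 1 `
    -RecoveryHistory 4
```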

Initial Replication Method

I’ve worked in the online backup business before and know how difficult the first copy over the wire is.  The SME may have small changes to replicate but might have TBs of data to copy on the first synchronisation.  How do you get that data over the wire?

  • Over-the-wire copy: fine for a LAN, if you have lots of bandwidth to burn, or if you like being screamed at by the boss/customer.  You can schedule this to start at a certain time.
  • Offline media: You can copy the source VMs to some offline media, and import it to the replica site.  Please remember to encrypt this media in case it is stolen/lost (BitLocker-To-Go), and then erase (not format) it afterwards (DBAN).  There might be scope for an R2/Windows 9 release to include this as part of a process wizard.  I see this being the primary method that will be used.  Be careful: there is no time out for this option.  The HRL on the source site will grow and grow until the process is completed (at the destination site by importing the offline copy).  You can delete the HRLs without losing data – it is not like a Hyper-V snapshot (checkpoint) AVHD.
  • Use a seed VM on the destination site: Be very, very careful with this option.  I really see it as being a great one for causing calls to MSFT product support.  This is intended for when you can restore a copy of the VM in the DR site, and it will be used in a differencing mechanism where the differences will be merged to create the synch.  This is not to be used with a template or similar VMs.  It is meant to be used with a restored copy of the same VM with the same VM ID.  You have been warned.
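The three methods above correspond to different flavours of one cmdlet.  A hedged sketch (VM name, date and paths are illustrative only):

```powershell
# Option 1: over the wire, scheduled to start out of hours
Start-VMInitialReplication -VMName "FS01" -InitialReplicationStartTime "21:00"

# Option 2: export to offline media (BitLocker it!), then import at the replica site
Start-VMInitialReplication -VMName "FS01" -DestinationPath "E:\ReplicaExport"

# Option 3: seed from a restored copy of the SAME VM (same VM ID) on the replica
Start-VMInitialReplication -VMName "FS01" -UseBackup
```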

And that’s it.  Check out the social media and you’ll see how easy people are saying Hyper-V Replica is to set up and use.  All you need to do now is check out the status of Hyper-V Replica in the Hyper-V Management Console, Event Viewer (Hyper-V Replica log data using the Microsoft-Windows-Hyper-V-VMMS Admin log), and maybe even monitor it when there’s an updated management pack for System Center Operations Manager.
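There’s also PowerShell for the status check.  A quick sketch (again, the VM name is a made-up example):

```powershell
# Replication state and health for every VM on the host
Get-VMReplication

# Detail for one VM: health, last replication time, pending replication size, etc.
Measure-VMReplication -VMName "FS01" | Format-List *
```

Measure-VMReplication is handy for scripting alerts before the management pack arrives.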


I said earlier that failover is manual.  There are two scenarios:

  • Planned: You are either testing the invocation process or the original site is running but unavailable.  In this case, the VMs start in the DR site, there is guaranteed zero data loss, and the replication policy is reversed so that changes in the DR site are replicated to the now offline VMs in the primary site.
  • Unplanned: The primary site is assumed offline.  The VMs start in the DR site and replication is not reversed.  In fact, the policy is broken.  To get back to the primary site, you will have to reconfigure replication.

Can I Dispense With Backup?

No, and I’m not saying that as the employee of a distributor that sells two competing backup products for this market.  Replication is just that: replication.  Even with the historical copies (Hyper-V snapshots) that can be retained on the destination site, we do not have a backup with any replication mechanism.  You must still do a backup, as I previously blogged, and you should have offsite storage of the backup.

Many will continue to do off-site storage of tapes or USB disks.  If your disaster affects the area, e.g. a flood, then how exactly will that tape or USB disk get to your DR site if you need to restore data?  I’d suggest you look at backup replication, such as what you can get from DPM:


The Big Question: How Much Bandwidth Do I Need?

Ah, if I knew the answer to that question for every implementation then I’d know many answers to many such questions and be a very rich man, travelling the world in First Class.  But I am not.

There’s a sizing process that you will have to do.  Remember that once the initial synchronisation is done, only changes are replayed across the wire.  In fact, it’s only the final resultant changes of the last 5 minutes that are replayed.  We can guestimate what this amount will be using approaches such as these:

  • Set up a proof of concept with a temporary Hyper-V host in the client site and monitor the link between the source and replica: There’s some cost to this but it will be very accurate if monitored over a typical week.
  • Do some work with incremental backups: Incremental backups, taken over a day, show how much change is done to a VM in a day.
  • Maybe use some differencing tool: but this could have negative impacts.

Some traps to watch out for on the bandwidth side:

  • Asymmetric broadband (ADSL): The customer claims to have an 8 Mbps line but in reality it is 7 Mbps down and 300 kbps up.  It’s the uplink that is the bottleneck because you are sending data up the wire.  Most SMEs aren’t going to need all that much.  My experience with online backup verifies that, especially if compression is turned on (it will consume source host CPU).
  • How much bandwidth is actually available: Monitor the customer’s line to tell how much of the bandwidth is being consumed by existing services.  Just because they have a functional 500 kbps upload, it doesn’t mean that they aren’t already using it.
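To make the guesstimate concrete, here’s a back-of-envelope calculation you can adapt.  Every figure is an illustrative assumption – measure your own churn with the incremental backup trick above:

```powershell
# Illustrative numbers only - substitute your own measurements
$dailyChangeGB = 2      # churn measured from a day of incremental backups
$activeHours   = 10     # the churn mostly happens during the working day
$efficiency    = 0.5    # protocol overhead + share of the uplink you'll dedicate

# Convert GB/day of churn into a sustained uplink requirement in kbps
$bitsPerDay = $dailyChangeGB * 1GB * 8
$kbps = [math]::Round($bitsPerDay / ($activeHours * 3600) / $efficiency / 1000)
"Roughly $kbps kbps of uplink needed"
```

For these numbers that works out at roughly 950 kbps of uplink – which is exactly why the 300 kbps ADSL uplink in the trap above matters.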

Very Useful Suggestion

Think about your servers for a moment.  What’s the one file that has the most write activity?  It is probably the paging file.  Do you really want to replicate it from site A to site B, needlessly hammering the wire?

Hyper-V Replica works by intercepting writes to VHDs.  It has no idea of what’s inside the files.  You can’t just filter out the paging file.  So the excellent suggestion from the Hyper-V product group is to place the paging file of each VM onto a different VHD, e.g. a SCSI-attached D: drive.  Do not select this drive for replication.  When the VMs are failed over, they’ll still function without the paging file, just not as well.  You can always add one afterwards if the disaster is sustained.  The benefit is that you won’t needlessly replicate paging file changes from the primary site to the DR site.
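A hedged sketch of that suggestion in PowerShell (all paths, sizes and names are my own hypothetical examples):

```powershell
# Give the VM a dedicated VHDX for its guest paging file
New-VHD -Path "D:\VMs\FS01\PageFile.vhdx" -SizeBytes 10GB -Dynamic
Add-VMHardDiskDrive -VMName "FS01" -ControllerType SCSI `
    -Path "D:\VMs\FS01\PageFile.vhdx"

# Exclude that VHDX when enabling replication for the VM
Enable-VMReplication -VMName "FS01" `
    -ReplicaServerName "replica.demo.internal" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ExcludedVhdPath "D:\VMs\FS01\PageFile.vhdx"
```

Then move the page file onto that drive inside the guest OS.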


I love this feature because it solves a real problem that the majority of businesses face.  It is further proof that Hyper-V is the best value virtualisation solution out there.  I really do think it could give many Microsoft Partners a way to offer a new multi-tenant business offering to further reduce the costs of DR.

I have since posted a demo video of Hyper-V Replica in action, and I have written a guest post on Mary Jo Foley’s blog.

I have written around 45 pages of text (in Word format) on the subject of Hyper-V Replica for a chapter in the Windows Server 2012 Hyper-V Installation and Configuration Guide book.  It goes into great depth and has lots of examples.  The book should be out Feb/March of 2013 and you can pre-order it now:


    Licensing DPM 2010

    Two of the generally available System Center products have unusual licensing, and System Center Data Protection Manager 2010 is one of them.

    Typically for an installation you will buy:

    • A server license: For example System Center Operations Manager, optionally with SQL Server – and don’t forget the Windows to run it on.
    • Management licenses: for each machine that will be managed by the management server(s)

    DPM 2010 doesn’t follow that model.  Instead, you actually get the DPM server license for free if you buy one or more management licenses.

    Note that you still have to buy the Windows Server license that the DPM server will be installed on.  You also must buy a copy of SQL Server 2008 Standard/Enterprise/Datacenter (and install SP1). 

    “For the DPM database, DPM 2010 requires a dedicated instance of the 64-bit or 32-bit version of SQL Server 2008, Enterprise or Standard Edition, with Service Pack 1 (SP1). During setup, you can select either to have DPM Setup install SQL Server 2008 SP1 on the DPM server, or you can specify that DPM use a remote instance of SQL Server.

    If you decide to have DPM Setup install SQL Server 2008 SP1 on the DPM server, you are not required to provide a SQL Server 2008 license. But, if you decide to preinstall SQL Server 2008 on a remote computer or on the same computer where DPM 2010 will be installed, you must provide a SQL Server 2008 product key. You can preinstall SQL Server 2008 Standard or Enterprise Edition”.

    DPM 2010 comes with a copy of SQL Server that doesn’t have a product key.  If you install this SQL Server, you can enter a purchased product key, or you can leave it blank to use the evaluation license, which will expire.

    “If you do not have a licensed version of SQL Server 2008, you can install an evaluation version from the DPM 2010 DVD. To install the evaluation version, do not provide the product key when you are prompted by DPM Setup. However, you must buy a license for SQL Server if you want to continue to use it after the evaluation period”.

    There are a bunch of ways to purchase management licenses (agents) for DPM:

    • System Center Server Management Suite Standard: For bulk managing a server with more than one System Center product
    • System Center Server Management Suite Enterprise: For a small virtualisation host (max 4 VMs)
    • System Center Server Management Suite Datacenter: For a virtualisation host with more than 4 VMs
    • System Center Client Management Suite: for bulk management of PCs
    • System Center Data Protection Manager 2010 Standard: For a server with basic backup (more later on this)
    • System Center Data Protection Manager 2010 Enterprise: For a server with advanced backup (more later on this)
    • System Center Data Protection Manager 2010 client management licenses: For backing up a PC

    Most backup products have complex agent licensing:

    • Basic backup agent
    • Open file backup
    • SQL backup
    • Exchange backup
    • Direct to disk backup … and so on

    DPM is much simpler in comparison.  There are two levels of agent for backing up a server: Standard and Enterprise.  The following, taken from the licensing table, describes how to choose between them:

    Functionality or Workload: Basic file backup and recovery, by instances of the server software, of:

    • operating system components
    • utilities
    • service workloads running in the licensed OSE
    • these security workloads: firewall, proxy, intrusion detection and prevention, anti-virus management, application security gateway, content filtering (including URL filtering and spam), network forensics, security information management, and vulnerability assessment, in order to safeguard the network and host

    Required Server Management Licenses:

    • System Center Data Protection Manager 2010 Standard Server Management License, or
    • System Center Server Management Suite Standard

    In other words, a Standard management license is required to do basic file backup.

    Functionality or Workload: Backup and recovery (including basic file backup and recovery), by instances of the server software, of:

    • the server system state
    • all operating system components
    • all utilities
    • all server workloads
    • any applications running in the licensed OSE

    Required Server Management Licenses:

    • System Center Data Protection Manager 2010 Enterprise Server Management License, or
    • System Center Server Management Suite Enterprise, or
    • System Center Server Management Suite Datacenter

    In other words, an Enterprise management license is required to back up the system state and application workloads.
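    The two rules above boil down to one simple decision.  Here is a hypothetical sketch of that logic (the function name and parameters are my own, purely for illustration):

```python
def required_dpm_ml(backs_up_system_state=False, backs_up_applications=False):
    """Which DPM 2010 management license tier a protected server needs.

    Standard covers basic file backup only; system state or application
    workloads push you up to Enterprise (or a Server Management Suite
    edition that includes the equivalent rights).
    """
    if backs_up_system_state or backs_up_applications:
        return "Enterprise"  # e.g. SQL Server, Exchange, domain controllers
    return "Standard"        # basic file backup and recovery only

print(required_dpm_ml())                            # plain file server -> Standard
print(required_dpm_ml(backs_up_applications=True))  # SQL server -> Enterprise
```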


    You can read more about this, and about licensing for all of the Microsoft products, in the Product Use Rights (PUR) document.  Note that this stuff changes from time to time and the PUR is the only official source.

    So let’s look at two examples:

    Example 1

    I want to back up the following:

    • Files only from a file server
    • SQL database server
    • Domain controller and System State

    I would need to buy a server to install DPM on.  This will require SQL Server Standard (or higher) and a copy of Windows Server.

    For the file server (files only) backup I can get 1 Standard DPM ML (management license).  For the other 2 machines, I will need 1 Enterprise DPM ML each.  Buying DPM MLs entitles me to a DPM server license.  I can even do DPM2DPM4DR replication to a DPM server in another site and get a free DPM server license for that too.

    Example 2

    I have a virtualisation cluster (Hyper-V/VMware/Xen) with 30 VMs. There are 2 hosts, each with 2 CPUs.  I can buy 30 DPM MLs … but if my reseller is doing their homework (like we do!), they’ll have noticed that buying the System Center Server Management Suite Datacenter edition (1 per CPU, minimum 2 per host) might work out cheaper.  As a customer, I get management licenses for all System Center products for my hosts and all current and future VMs on the hosts … and for less than just buying backup licenses.  If I’m a consulting company selling the solution, I know that there’s more work and solutions in that licensing that I can provide to my customer at a later point.
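    A quick sketch of the arithmetic in Example 2.  The prices below are made up purely for illustration; check real pricing with your reseller or licensing agreement:

```python
def dpm_ml_cost(vm_count, price_per_enterprise_ml):
    """Cost of buying one Enterprise DPM ML per protected VM."""
    return vm_count * price_per_enterprise_ml

def smsd_cost(hosts, cpus_per_host, price_per_smsd):
    """Cost of SMSD licensing: 1 per CPU, with a minimum of 2 per host."""
    return hosts * max(cpus_per_host, 2) * price_per_smsd

per_ml, per_smsd = 430, 1300      # hypothetical street prices
print(dpm_ml_cost(30, per_ml))    # 30 Enterprise MLs for the 30 VMs
print(smsd_cost(2, 2, per_smsd))  # 4 SMSD licenses cover all current and future VMs
```

    With numbers like these, 4 SMSD licenses come in well under 30 per-VM MLs, and the gap only grows as you add VMs to the hosts.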

    And once again, we’ll need a DPM server … buy the hardware, buy/put Windows Server and SQL Server on it, and install the free DPM server license.


    Results & Report on The Great Big Hyper-V Survey of 2011


    I am pleased to present the results and a report on The Great Big Hyper-V Survey of 2011, which was conducted by myself, Hans Vredevoort, and Damian Flynn.  We conducted this survey over the last few weeks, asking people from around the world to answer 80 questions on:

    • Their Hyper-V project
    • Their Hyper-V installations
    • Systems management
    • Private cloud
    • Their future plans

    Note that this survey had no outside influences.  Microsoft found out about it by reading blog or Twitter posts at the same time as the respondents.  I have deliberately chosen not to try to get a sponsor for my report, to further illustrate its independence.

    Some of the results were as expected, and some of them were quite an education.  Thank you to all who completed the survey, and to all who helped to spread the word.  And now, here’s what you have been waiting for:

    • Here is a report that I have written over the last 2 days.  I dig into each of the 80 questions, analysing the results of each and every question that we asked.
    • For those of you who want to dig a little deeper, here is a zip file with all of the raw data from the survey.  You will find reports and spreadsheets with different views and selections of data.  I also created an additional spreadsheet that was used to create the report.

    Whether you are a sales person, a Hyper-V customer, a potential customer, or an enthusiast, I think there is something here for you.

    Now the conversations and debates can begin.  Have a read of the report and then go over to see what Hans Vredevoort and Damian Flynn thought of the data.  We have deliberately not shared our opinions with each other; this means we can all have unique viewpoints, and possibly see something that others don’t.  For example, I work in the software sales channel with a background in consulting and engineering, Damian is an enterprise systems administrator/engineer, and Hans is an enterprise consultant.  We each have a different view of the IT world.  And after you read their opinions, it’ll be your turn: we want to hear what you think.  Post comments, tweet (#GBHVS2011), blog, or whatever.

    Great Big Hyper-V Survey 2011 Is Now Closed

    I closed the Great Big Hyper-V Survey of 2011 this morning at 10:05 (Dublin time, 11:05 CET, 05:05 EST).  Thank you to all who completed the survey.  Damian Flynn (another Hyper-V MVP), Hans Vredevoort (Failover Clustering MVP), and I will be sharing the results this Wednesday (7th September 2011) at 10:00 Dublin time, 11:00 Amsterdam time (05:00 EST, 19:00 Sydney).