Licensing DPM 2010

Two of the generally available System Center products have unusual licensing, and System Center Data Protection Manager 2010 is one of them.

Typically for an installation you will buy:

  • A server license: For example System Center Operations Manager, optionally with SQL Server – and don’t forget the Windows to run it on.
  • Management licenses: for each machine that will be managed by the management server(s)

DPM 2010 doesn’t follow that model.  Instead, you actually get the DPM server license for free if you buy one or more management licenses.

Note that you still have to buy the Windows Server license that the DPM server will be installed on.  You also must buy a copy of SQL Server 2008 Standard/Enterprise/Datacenter (and install SP1). 

“For the DPM database, DPM 2010 requires a dedicated instance of the 64-bit or 32-bit version of SQL Server 2008, Enterprise or Standard Edition, with Service Pack 1 (SP1). During setup, you can select either to have DPM Setup install SQL Server 2008 SP1 on the DPM server, or you can specify that DPM use a remote instance of SQL Server.

If you decide to have DPM Setup install SQL Server 2008 SP1 on the DPM server, you are not required to provide a SQL Server 2008 license. But, if you decide to preinstall SQL Server 2008 on a remote computer or on the same computer where DPM 2010 will be installed, you must provide a SQL Server 2008 product key. You can preinstall SQL Server 2008 Standard or Enterprise Edition”.

DPM 2010 comes with a copy of SQL that doesn’t have a product key.  If you install this SQL you can put in a purchased product key, or you can leave it blank to use the evaluation license which will expire.

“If you do not have a licensed version of SQL Server 2008, you can install an evaluation version from the DPM 2010 DVD. To install the evaluation version, do not provide the product key when you are prompted by DPM Setup. However, you must buy a license for SQL Server if you want to continue to use it after the evaluation period”.

There are a bunch of ways to purchase management licenses (agents) for DPM:

  • System Center Server Management Suite Standard: For bulk managing a server with more than one System Center product
  • System Center Server Management Suite Enterprise: For a small virtualisation host (max 4 VMs)
  • System Center Server Management Suite Datacenter: For a virtualisation host with more than 4 VMs
  • System Center Client Management Suite: for bulk management of PCs
  • System Center Data Protection Manager 2010 Standard: For a server with basic backup (more later on this)
  • System Center Data Protection Manager 2010 Enterprise: For a server with advanced backup (more later on this)
  • System Center Data Protection Manager 2010 client management licenses: For backing up a PC

Most backup products have complex agent licensing:

  • Basic backup agent
  • Open file backup
  • SQL backup
  • Exchange backup
  • Direct to disk backup … and so on

DPM is much simpler in comparison.  There are two basic levels of agent for backing up a server: Standard and Advanced.  The following table describes how to choose between them:

Functionality or Workload: Basic file backup and recovery management by instances of the server software of:

  • operating system components
  • utilities
  • service workloads running in the licensed OSE
  • these security workloads: Firewall, Proxy, Intrusion detection and prevention, Anti-virus management, Application security gateway, Content filtering (which includes URL filtering and Spam), Network forensics, Security information management, and Vulnerability assessment in order to safeguard the network and host.

Required Server Management Licenses (either of):

  • System Center Data Protection Manager 2010 Standard Server Management License, or
  • System Center Server Management Suite Standard

In other words, a Standard management license is required to do basic file backup.

Functionality or Workload: Backup and recovery, including basic file backup and recovery, by instances of the server software of:

  • the server system state
  • all operating system components
  • all utilities
  • all server workloads
  • any applications running in the licensed OSE

Required Server Management Licenses (any one of):

  • System Center Data Protection Manager 2010 Enterprise Server Management License, or
  • System Center Server Management Suite Enterprise, or
  • System Center Server Management Suite Datacenter

In other words, an Enterprise management license is required to back up system state and application workloads.
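If it helps to see that decision as code, here’s a trivial sketch; the workload names below are my own simplifications of the PUR wording, not official licensing categories:

```python
# A minimal sketch of the Standard vs Enterprise decision rule above.
# The workload names are my own simplifications of the PUR wording,
# not official licensing terms.
STANDARD_COVERS = {"files", "os_components", "utilities",
                   "service_workloads", "security_workloads"}

def required_dpm_ml(workloads):
    """Return the DPM 2010 management license level needed for one server."""
    if set(workloads) <= STANDARD_COVERS:
        return "Standard"   # basic file backup and recovery only
    return "Enterprise"     # system state and/or application workloads

print(required_dpm_ml({"files"}))                  # Standard
print(required_dpm_ml({"files", "system_state"}))  # Enterprise
print(required_dpm_ml({"sql_server"}))             # Enterprise
```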

 

You can read more about this, and licensing for all of the Microsoft products, in the Product Use Rights (PUR) document.  Note that this stuff changes from time to time and the PUR is the only official source.

So let’s look at two examples:

Example 1

I want to back up the following:

  • Files only from a file server
  • SQL database server
  • Domain controller and System State

I would need to buy a server to install DPM on.  This will require SQL Server Standard (or higher) and a copy of Windows Server.

For the file server (files only) backup I can get 1 Standard DPM ML (management license).  For the other 2 machines, I will need 1 Enterprise DPM ML each.  Buying DPM MLs entitles me to a DPM server license.  I can even do DPM2DPM4DR replication to a DPM server in another site and get a free DPM server license for that too.
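As a sketch, the shopping list for this example falls out of a simple tally (the machine names are hypothetical; the classification follows the Standard/Enterprise table above):

```python
from collections import Counter

# Example 1 as a tally. Machine names are hypothetical; the classification
# follows the Standard/Enterprise table above.
machines = {
    "FILE01": "Standard",    # files only
    "SQL01":  "Enterprise",  # application workload (SQL Server)
    "DC01":   "Enterprise",  # system state
}

print(Counter(machines.values()))
# Counter({'Enterprise': 2, 'Standard': 1})
```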

Example 2

I have a virtualisation cluster (Hyper-V/VMware/Xen) with 30 VMs. There are 2 hosts, each with 2 CPUs.  I can buy 30 DPM MLs … but if my reseller is doing their homework (like we do!) they’ll have noticed that buying System Center Server Management Suite Datacenter (1 per CPU, minimum 2 per host) might work out cheaper.  As a customer, I get management licenses for all System Center products for my hosts and all current and future VMs on the hosts … and for less than just buying backup licenses.  If I’m a consulting company selling the solution, I know that there’s more work and solutions in that licensing that I can provide to my customer at a later point.
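Here’s that comparison as a back-of-the-envelope script. The prices are invented placeholders, not real Microsoft pricing, so check your own volume licensing figures; the structure of the calculation is the point:

```python
# Back-of-the-envelope comparison for Example 2. The prices are invented
# placeholders -- check your own volume licensing pricing for real figures.
PRICE_DPM_ENTERPRISE_ML = 430.0   # assumed price per DPM Enterprise ML
PRICE_SMSD_PER_CPU = 1300.0       # assumed price per SMSD CPU license

vms = 30
hosts = 2
cpus_per_host = 2

# Option A: one DPM Enterprise ML per VM being backed up.
option_a = vms * PRICE_DPM_ENTERPRISE_ML

# Option B: SMSD, licensed per CPU with a minimum of 2 per host,
# covering all current and future VMs on the licensed hosts.
option_b = hosts * max(cpus_per_host, 2) * PRICE_SMSD_PER_CPU

print(f"Per-VM DPM MLs: {option_a:,.2f}")
print(f"SMSD per CPU:   {option_b:,.2f}")
```

With these placeholder prices, SMSD wins once the VM count is high enough, and it keeps winning as you add VMs to the same hosts.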

And once again, we’ll need a DPM server … buy the hardware, buy/put Windows Server and SQL Server on it, and install the free DPM server license.


I Hope You Patch Adobe Products Like All The Others

Yesterday I quoted a Microsoft security report based on information they gather from numerous sources:

“Detections of exploits targeting Adobe Flash, although uncommon in comparison to some other types of exploits, increased in 2Q11 to more than 40 times the volume seen in 1Q11 … Two vulnerabilities accounted for the bulk of zero-day exploit activity … Both vulnerabilities affect Adobe Flash Player”.

In other words, hackers have found a new sweet spot.  Most (not all) companies have copped on when it comes to patching Microsoft products.  But:

  1. Other companies make software
  2. Pretty much all software has vulnerabilities
  3. Hackers aren’t stupid.  I’m reading a book called Kingpin and it illustrates how hackers will move from one attack vector to another to find a soft underbelly.  Adobe is that new point of attack.

And there is a high profile example of that.  The Inquirer website reports (though there is no hard evidence, because RSA have not publicly documented this):

“Criminals used a zero-day vulnerability in Adobe Flash player to penetrate RSA defences through an embedded Flash file in an Excel email attachment. A spear phishing attack, it targeted regular employees of RSA Security disguised as a recruitment form. It breached the RSA systems, even though it first went to Microsoft Outlook’s spam folder”.

OK, it was a zero day attack.  We know this was a very organised attack, possibly sponsored by a nation.  They found a hole in Flash (allegedly) that wasn’t yet reported and crafted an email attachment to attack it, knowing that the recipient would get stung by it, thus allowing the hacker to 0wn the PC.  Unlucky. 

But even if it hadn’t been a zero day attack, would they have patched Adobe?  (We learned that less than 1% of attacks are zero day.)  I bet the answer is no.  Most companies focus just on Microsoft software.  Adobe products do automatically prompt for upgrades, but those prompts are seriously click heavy and frequent, so most people probably disable the auto-check for upgrades, and the PCs probably go years without updating.  And that leaves those PCs vulnerable to:

  • Drive by attacks, where a user navigates to an innocent website that has either been hacked (malware uploaded) or carries a compromised advert that is hosted elsewhere.
  • Crafted attachments, where a user opens a document/email attachment built to attack an Adobe product vulnerability.

In other words, patch Adobe products too, and not just Microsoft ones.  Unfortunately, that isn’t too easy (or supported) in WSUS.  However, you can do it using System Center Essentials (that’s what we use in our office) or System Center Configuration Manager.

Windows 7 Overtakes Windows XP

There’s a story on Pingdom that says, based on web analysis:

“Windows 7 launched in October of 2009, then…

  • Within three months, it overtook Mac OS X.
  • Within 10 months, it overtook Windows Vista.
  • Now, two years after its launch, it’s finally overtaken Windows XP”.

They’ve got more info and some nice graphs that tell us Windows 7 is 40.39% of web traffic (XP is 38.34% and Vista is 11.3%).


Interesting Analysis on Patching and Attacks

Microsoft produces a document called the Security Intelligence Report on a regular basis.  Some of it hit the news today so I decided to take a peek.  The report doesn’t restrict itself to exploits of Microsoft products and is based on data that they gather from a number of sources.

“In this supplemental analysis, zero-day exploitation accounted for about 0.12 percent of all exploit activity in 1H11, reaching a peak of 0.37 percent in June”.

OK, so that tells us that the vast majority of exploits take advantage of old vulnerabilities that have had patches available previously.

“Of the attacks attributed to exploits in the 1H11 MSRT data, less than half of them targeted vulnerabilities disclosed within the previous year, and none targeted vulnerabilities that were zero-day during the first half of 2011”.

People aren’t patching like they should be. That explains this:

Conficker is still (STILL!!!!) the leading infection on domain joined computers. Seriously!?!?!? It is not in the top 10 of non-domain joined PCs.

And it explains this:

“Exploits that target CVE-2010-2568, a vulnerability in Windows Shell, increased significantly in 2Q11, and were responsible for the entire 2Q11 increase in operating system exploits. The vulnerability was first discovered being used by the family Win32/Stuxnet in mid-2010”.

This report covers up to mid-2011 (2Q11) and MS10-046 is still being exploited because people haven’t deployed the patch.

“Detections of exploits targeting Adobe Flash, although uncommon in comparison to some other types of exploits, increased in 2Q11 to more than 40 times the volume seen in 1Q11 … Two vulnerabilities accounted for the bulk of zero-day exploit activity … Both vulnerabilities affect Adobe Flash Player”.

Adobe Flash is one of those products that is constantly badgering me to get updated at home.  I leave this turned on because Flash is a real target for attackers. 

“The most commonly observed types of exploits in 1H11 were those targeting vulnerabilities in the Oracle (formerly Sun) Java Runtime Environment (JRE), Java Virtual Machine (JVM), and Java SE in the Java Development Kit (JDK). Java exploits were responsible for between one-third and one-half of all exploits observed in each of the four most recent quarters”.

Other products like Java and Adobe Reader are nice targets too because they have vulnerabilities and are rarely patched.  At work, we patch the Adobe products via System Center Essentials.  You can also use ConfigMgr 2007 to do this.

“As in previous periods, infection rates for more recently released operating systems and service packs are consistently lower than earlier ones, for both client and server platforms. Windows 7 and Windows Server 2008 R2, the most recently released Windows client and server versions, respectively, have the lowest infection rates”.

A) Newer products always do more under the hood to protect themselves.  B) Newer home PCs will have current AV.  C) Newer business deployments will have had a fresh installation of patching/security systems (e.g. WSUS) that some more mature environments have lacked.

Interestingly, in the regional analysis, Italy appears to lead the pack at minimising most malware infections (congrats!) but is second worst when it comes to adware infections (boo!). 

Don’t be so quick to blame Microsoft: 44.8% of exploits are because of the weakness that is found between the keyboard and the chair, where the user is handing over some piece of information or OK-ing something bad. 

Drive by attack download sites (innocent web sites that have been compromised, e.g. adspace that was sold and contains a Flash exploit) are on the rise.

There’s a lot of good info in the Security Intelligence Report.  You should give it a read if you’re considering the security of your business.

The Windows Server 8 Drinking Game

Bob Muglia has gone and left MSFT so we need a new drinking game.  The rules of the Windows Server 8 (2012) Drinking Game are simple enough.  You must drink one shot every time you hear any of the following:

  • Optimised for the cloud
  • Scalable
  • Continuously available
  • We’re all in on the cloud

The winner is the last person standing who can successfully describe an asymmetric Windows Server 8 Hyper-V/File Share cluster.

The rules will be updated over the coming 12 months.


Windows Server 8 Host Maximums

Microsoft posted the host maximums for Windows Server 8 (2012) early this morning in a blog entry that includes a bunch of other cloud information.

  • 160 logical processors per host
  • 2 TB of memory per host
  • 32 virtual processors per virtual machine
  • 512 GB of memory per virtual machine

Maximum numbers usually aren’t a big worry for 99.99% of us, except for those who get caught up in pointless “Top Gear Trumps” verbal spats with VMware fans.  Let’s just assume that both platforms have sufficient capacities for what almost everyone needs.

The number that probably means the most, though, is 32 vCPUs per VM.  There are times when there’s a CPU-heavy physical machine that you want to virtualise (for reasons other than just consolidation), and this will enable that.

Can I Mix LAN and DMZ/Internet VMs On A Hyper-V Host/Cluster?

The question of mixing internal and edge network virtual machines on a single Hyper-V host or cluster has popped up a number of times over the past few years.  Businesses are under pressure to reduce costs, but there is that old issue of security.  It’s something I’ve given consideration to over the past few weeks and I have a few answers.

I’ll start with the simplest answer: Yes, you can, and you can do it securely.

Firstly, the Hyper-V virtual switch, without third party network add-ins (like NIC teaming), is secure.  You can’t bounce from one VLAN to another.  In the below example, we have a simple scenario where VLAN 101 is in the LAN and VLAN 102 is an edge network.  The physical network firewall isolates the two VMs from each other and they cannot eavesdrop on each other. 

[Image: a LAN VM on VLAN 101 and a DMZ VM on VLAN 102 sharing one virtual switch, isolated from each other by the physical firewall]

NIC teaming can change things quite a bit if you have 2 pNICs for virtual switch traffic on the host (read the OEM’s guidance).  In the case of the HP Network Configuration Utility, you need to do something like this to maintain security:

[Image: the HP Network Configuration Utility teaming configuration required to maintain that isolation]

Both of those deal with traditional firewall and network isolation.  But is that enough?  The virtualisation guidance for Forefront Threat Management Gateway (TMG – Microsoft’s firewall solution) indicates that we have more thinking to do.  Firewall and network isolation is not enough.

A distributed denial of service (DDOS) attack aims to disrupt or bring down an online service by flooding it with traffic of some kind.  I’ve seen one in action (against a small company in Ireland).  They really are more common than you would think; small companies do get targeted (not just the big guys/government), and you rarely hear about them. 

The one I saw succeeded in bringing down the edge network devices, first one, and then the next in line as the defence/attack were adjusted.  That attack brought down dedicated network appliances.  What if the appliances hadn’t gone down?  What was next in line?  With the above two designs the next network device is either the pNICs in the host or the virtual switch in the host.  The pNICs share traffic for internal (LAN) VMs and external (DMZ) VMs.  If a NIC fails, everything loses communication, and therefore the DDOS hits not just the online presence but the LAN VMs too.  If the virtual switch is hit then we’re looking at the CPU and RAM of the parent partition being stressed, and DMZ and LAN traffic/VMs experiencing downtime.  We need physical isolation of LAN and DMZ in some fashion.

The cheapest solution would be to have dedicated NICs in the hosts: one for LAN traffic and one for DMZ traffic.  This would allow a single host/cluster to still run internal and external VMs but to isolate the impact of traffic at the NIC level (as below).  At least now, if the online presence is hit by a DDOS attack then we’ve limited the impact of the damage.  In the below example, pNIC2 is the last physical device that can fail or be flooded.  The VMs on pNIC1 are physically isolated from the DMZ and should be unaffected … of course that assumes that the virtual switch for the DMZ (on pNIC2) doesn’t spike the CPU/RAM of the parent partition – I actually have no idea what would happen in this case to be honest – my guess is that an edge network or the WAN connection would suffer first but I really do not know.

[Image: a single host with dedicated pNICs, the LAN virtual switch on pNIC1 and the DMZ virtual switch on pNIC2]

If your web presence is large enough, then maybe you can justify a dedicated Hyper-V host/cluster for the edge network.  The design would be something like below.  This design is a take-no-chances solution that completely isolates everything.  If the online presence in the DMZ is hit by a DDOS attack then there is not a single physical connection to the LAN Hyper-V hosts that should impact their normal operations within the LAN.

[Image: completely separate Hyper-V hosts/clusters for the LAN and for the edge network]

There is another benefit to this design approach too.  The handful of security fixes for Hyper-V have been related to DDOS attacks from within a compromised VM on a host.  In other words, if a VM is compromised (for example, a hacker gains admin rights on a VM via a SQL injection attack or a WordPress website compromise), they can use their local log on in the VM to DDOS attack the host that the VM is on if the relevant Hyper-V security fixes (as shared by MSFT via Windows Update) have not been applied.  If you aren’t quick about your updates you might get hit by a zero day attack if you have the really bad luck of (a) not having the update deployed and (b) a hacker gaining logon rights on a VM.  If that’s the case – you know at least that all that the hacker can DDOS attack are the DMZ VMs that are on that particular DMZ host.  And hopefully you’ve been good with your network isolation, password rules, etc, to slow down the hacker, and maybe you have an IDS to detect their attempts to break out from that VM via the network.

Anyway, there’s a few thoughts to keep you thinking.

In The Year 2000: VDI Will Replace The PC

Ok, I meant in the year 2009 … err … 2010 … err … 2011 … hmm … maybe 2012?  Some of you might remember back to 1998 or thereabouts when the PC was doomed by the return to mainframe style computing based on WinFrame/MetaFrame (aka Presentation Server, XenApp).  Somehow or other, that didn’t happen.  Instead, a few companies did go for this style of server based computing based on Remote Desktop Services Session Hosts, formerly Terminal Services – seriously Microsoft, can we just call them Terminal Servers once again?

A decade later, virtual desktop infrastructure was to call time on the PC in the business.  Endless new year forecasts, and a heap of VMware marketing, promised us this would happen in 2010.  Then it would happen in 2011.  I’m betting that come December 2012 the forecasts will once again proclaim that the coming year will be The Year Of The Virtual Desktop.  Pfft!  Just like this year was, and the year before.

Like I’ve been saying for a couple of years, VDI is a false economy.  Costs go up when you move from the desk to the data centre; you’re just moving the management problem from one place to another, and adding more management and lock-down to the end user desktop experience – and that’s the last thing the customer (we in IT are in a service business and the business/user is our customer *throws up just a little bit*) wants right now (see Consumerisation of IT).

An article in Network World sums this “year of virtual desktop” claptrap up nicely.  My opinion, and what the local MSFT folks here have been saying too, is that VDI is part of the overall solution, just like *breath in* Remote Desktop Services Session Hosts *gasp for air – I’m sure glad I don’t have emphysema*.

So the next time you see a forecast telling you that the coming year will be the year of VDI (I think they’ve pretty much moved to The Year Of The Cloud these days) or you hear a VMware sales/marketing person telling you that the PC is dead, you know it’s time to either move on, or walk outside to the free coffee in the hall outside.


Sizing Tool for Hyper-V Assessment Using MAP

You typically don’t need to be too concerned about the SQL requirements of the Microsoft Assessment and Planning toolkit for the typical Hyper-V deployment.  You can use SQL Server Express, and the 2008 limit of a 4 GB database or the 2008 R2 limit of a 10 GB database won’t be an issue.  Network traffic is usually not a big concern either, but it might be if you decide to assess branch office machines across the WAN, or assess hundreds of machines at once.  Anyone dealing with a larger farm may want to plan for disk space, using SQL Server Standard/Enterprise/Datacenter, and the potential impact on the LAN/WAN.

I decided to put together a spreadsheet based on the details that are shared by Microsoft.  I’ve taken the details, put in the formulas, and all you need to do is enter 3 figures:

  • Number of machines to discover
  • Number of machines to assess (do the performance monitoring)
  • Length of time in hours to run the assessment

The spreadsheet will give you four figures:

  • The size of the resulting SQL database, allowing you to decide between the SQL Express that is included with MAP, and other editions of SQL Server.
  • The network impact of the discovery process (maximum – because this is variable depending on the amount of WMI data returned)
  • The network impact of starting an assessment
  • The continuing network impact of the assessment every 5 minutes
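If you’d rather script it than open Excel, here’s a sketch of the same arithmetic.  Every constant below is a placeholder assumption, not Microsoft’s figure (the real per-machine numbers come from the Microsoft details that the spreadsheet is built on):

```python
# A sketch of the spreadsheet's arithmetic. Every constant below is a
# placeholder assumption -- the real per-machine figures come from the
# Microsoft details that the spreadsheet is built on.
DB_MB_PER_DISCOVERED = 0.5       # DB growth per discovered machine (assumed)
DB_MB_PER_ASSESSED_HOUR = 1.0    # DB growth per assessed machine per hour (assumed)
NET_KB_DISCOVERY = 500.0         # max WMI traffic per discovered machine (assumed)
NET_KB_ASSESS_START = 200.0      # traffic to start assessing one machine (assumed)
NET_KB_PER_5MIN_SAMPLE = 50.0    # traffic per assessed machine per sample (assumed)

def map_sizing(discovered, assessed, hours):
    """Return the four figures the spreadsheet produces."""
    return {
        "db_size_mb": discovered * DB_MB_PER_DISCOVERED
                      + assessed * hours * DB_MB_PER_ASSESSED_HOUR,
        "discovery_traffic_kb_max": discovered * NET_KB_DISCOVERY,
        "assessment_start_kb": assessed * NET_KB_ASSESS_START,
        "traffic_per_5min_kb": assessed * NET_KB_PER_5MIN_SAMPLE,
    }

# e.g. discover 500 machines, assess 200 of them for a week (168 hours)
print(map_sizing(discovered=500, assessed=200, hours=168))
```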

You can download this spreadsheet from here.


Comparing 3 CPU Types in Hyper-V Assessment Hardware Sizing

Measure twice and cut once.

I’m assisting with a very large Hyper-V sizing process at the moment.  It’s a rare one where CPU appears to be the bottleneck instead of RAM.  As such, I’m spending some time comparing the traits and sizing of different CPUs.  Before the real assessment starts, I’ve fired up a small lab just to do a few comparisons between:

  • 2 * AMD Opteron 6180 12 core CPUs
  • 2 * Intel Xeon X5690 6 core CPUs
  • 2 * Intel Xeon E7-4870 10 core CPUs

The positives for AMD: more cores (logical processors) at a lower price.  The positives for Intel: 2 threads of execution per core, but that does come at a higher cost.  Who wins?  I’ll let MAP 6.0 decide that:

I came up with 3 server specifications, each using one of the above processor configurations.  I assessed 4 virtual machines and then ran the MAP 6.0 Server Consolidation Wizard to see how much of the host hardware would be utilised by the VMs.  The results were:

2 * AMD Opteron 6180 12 core CPUs

[Image: MAP 6.0 Server Consolidation Wizard results for the 2 * AMD Opteron 6180 host]

2 * Intel Xeon X5690 6 core CPUs

Not surprisingly, the 12 core AMD CPU beats the Intel 6 core CPU.  But the margin is very small.  Those 2 threads of execution per core give Intel more BHP per core.

[Image: MAP 6.0 Server Consolidation Wizard results for the 2 * Intel Xeon X5690 host]

2 * Intel Xeon E7-4870 10 core CPUs

This is Intel’s latest CPU.  With it, the VMs are using 2.25% less of the CPU than the AMD 12 core CPU, and 2.36% less than the Intel 6 core CPU.

[Image: MAP 6.0 Server Consolidation Wizard results for the 2 * Intel Xeon E7-4870 host]

I’m wondering if this CPU is going to have the same hardware microcode issues that were associated with Nehalem and Westmere CPUs when running Hyper-V.

Conclusions

I’m not recommending a CPU based on this tiny virtual lab.  What I actually aimed to illustrate was that the sizing feature of the assessment can be used with different hardware profiles to find the right host specification for your environment.  In my real world example, I’ll be doing a week-long performance gathering during what the customer believes will be a busy period, followed by sizing with multiple different host specifications, combined with application support statements (from the discovery) to rule out invalid candidates, and maybe even breaking Hyper-V up into several clusters with different hardware specs.

What you can learn from this post is that you shouldn’t assume anything.  When you assume, then assume that you are wrong. 

And remember, this is a software tool.  It will give us an estimation of physical host utilisation based on what is measured.  It won’t be perfect, but it’s better than the usual “we know your/our requirements”, “here’s the usual spec for this size site” or “wet finger in the air” because it is scientific.  These other approaches are no better than waiting to see if a rodent sees its own shadow when it comes out of a hole.

Remember to add some spare host capacity:

  • Host fault tolerance
  • Future growth & free space for spikes