New VMware Licensing – Really? Are they Mental or What?

You may just have noticed a slight pro-Hyper-V bias to this blog :)  Yeah, I prefer it because I think it does what I need and there is more focus from Microsoft on what the business cares about: business applications.  But from time to time I’ve said that VMware have an excellent server virtualisation product.  Recently I’ve been heard to say that I think VMware got a huge leap on Microsoft by virtually stealing the term Private Cloud in their marketing efforts.  A few of us geeks know what Microsoft are up to.  VMware have been doing huge road shows to reach a much wider audience to say “we are the private, public, and hybrid cloud”.  That might be about to change.

VMware announced their new pricing structure.  It is moving away from a predictable per host model to a model that charges for processors and assigned memory. 

The vRAM entitlement per license, by SKU:

  • vSphere 5 Essentials Kit: 24 GB
  • vSphere 5 Essentials Plus Kit: 24 GB
  • vSphere 5 Standard: 24 GB
  • vSphere 5 Enterprise: 32 GB
  • vSphere 5 Enterprise Plus: 48 GB

By the way, the free ESXi 5 hypervisor entitles you to a not-so-massive 8 GB of vRAM.  As an example, take a typical DL380 or R710 host with 2 CPUs and 196 GB RAM.  To license it you will need 4 * vSphere 5 Enterprise Plus licenses at $3,495 retail each.  So virtualisation (and only virtualisation) on that host will cost $13,980.
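If you want to play with the maths yourself, here’s a minimal sketch of that licensing arithmetic (a hypothetical helper I’ve written for illustration, not anything from VMware’s tooling), assuming roughly 192 GB of vRAM is assigned to VMs on that host; assign the full 196 GB and you’d be buying a fifth license:

```python
import math

def vsphere5_licenses(cpus, vram_gb, entitlement_gb, price_each):
    """Rough license count under the vSphere 5 model described above: at least
    one license per physical CPU, plus enough licenses so the pooled vRAM
    entitlement covers the vRAM you plan to assign to running VMs."""
    count = max(cpus, math.ceil(vram_gb / entitlement_gb))
    return count, count * price_each

# The DL380/R710 example: 2 CPUs, ~192 GB of vRAM assigned to VMs,
# Enterprise Plus at 48 GB per license and $3,495 retail.
print(vsphere5_licenses(2, 192, 48, 3495))  # -> (4, 13980)
```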

Rather confusingly, cloud deployments have a different licensing model for vCloud Director, etc.  They are sold on a VM-bundle basis.  vCloud Director costs $3,750 for 25 VMs.  Not cheap, not at all!  vCenter Operations is more money, and the much ballyhooed SRM is seriously mad money.

VMware customers are expressing their dissatisfaction all over the net.  Many are reporting that this vTax (as Microsoft cleverly calls it) is going to increase their virtualisation costs significantly.  And don’t forget, this gives you your virtualisation licensing and nothing else.

Let’s saunter over to the Microsoft alternative.  If you license your Windows VMs correctly (on any virtualisation platform) then you’re probably licensing per host, using Datacenter edition.  That licenses the host itself (if required) and an unlimited number of VMs on that host.  The retail (and no one pays retail!) price is $2,999 per processor.  That 2-CPU DL380 or R710 will be licensed for unlimited Windows Server VMs for $5,998.

By the way, you can install that Windows Server Datacenter on the host (you’re entitled to do that) and enable Hyper-V instead of ESXi.  All of the features of Hyper-V are included at no hidden or extra cost.  Clustering, Live Migration, and Dynamic Memory are all there.  Hyper-V Replica is on the way in Windows Server 8 (announced this week at WPC) to replicate VM workloads from host to host, site to site.  No need for VMware.

But aren’t VMware the private cloud?  Bollox!  If you want private cloud then look at the service-centric (the service being the business application) System Center Virtual Machine Manager 2012.  You can get that as part of a bundle from Microsoft called the System Center Management Suite.  You can license a 2 CPU host (and all VMs and applications on that host) for all of Microsoft’s systems management products for $5,240 (retail).  That’s private cloud, virtualisation management, enterprise monitoring, service/helpdesk management, backup, configuration management, and runbook automation.  In other words, you can manage the entire service stack – the stuff that the business cares about.

Let’s compare the two vendors on a single 2U server with 2 CPUs and 196 GB RAM (my hardware sweet spot by the way).  We’ll also assume that there are 50 VMs on this host:

  • Virtualisation: Microsoft – free (Hyper-V is included in Windows licensing); VMware – 4 * vSphere 5 Enterprise Plus, $13,980.  The Microsoft option is already $13,980 ahead.
  • Windows for unlimited VMs: Microsoft – 2 * Windows Server Datacenter, $5,998; VMware – 2 * Windows Server Datacenter, $5,998.
  • Monitoring/management: Microsoft – System Center Management Suite Datacenter, $5,240; VMware – 2 * vCenter Operations (25 VM pack), $7,564.  Not a like-for-like comparison: the MSFT option includes licensing to use all of Microsoft’s System Center products and it’s still around 1/3 cheaper!
  • Total: Microsoft – $11,238; VMware – $27,542.  MSFT is $16,304 (59%) cheaper, doesn’t limit your RAM assignment to VMs, and includes all of their management products.
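If you want to sanity check those totals, here’s the same comparison as a quick back-of-the-envelope script (the figures are just the retail prices quoted above; real-world pricing will obviously vary):

```python
microsoft = {
    "Hyper-V (included in Windows licensing)": 0,
    "2 x Windows Server Datacenter": 5998,
    "System Center Management Suite Datacenter": 5240,
}
vmware = {
    "4 x vSphere 5 Enterprise Plus": 13980,
    "2 x Windows Server Datacenter": 5998,
    "2 x vCenter Operations (25 VM pack)": 7564,
}

ms_total = sum(microsoft.values())   # 11238
vmw_total = sum(vmware.values())     # 27542
saving = vmw_total - ms_total        # 16304
print(ms_total, vmw_total, saving, round(100 * saving / vmw_total))  # the last figure is 59(%)
```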

What is a private cloud?  It’s a mechanism where end users will freely deploy VMs as and when they need them, with no restrictions placed on them by IT.  We can measure and optionally cross-charge.  But do we really want to get into the whole “we can’t use that much RAM because it’ll add another $4K tax on our virtualisation; sorry, the business will need to do without!” conversation?  Not good.

If I’m a customer, I have to seriously revisit the Microsoft option.  It’s 59% cheaper, does way more across the entire application stack, and the focus is on the business application in the private cloud, not on the irrelevant (yeah I said it) hypervisor layer that can probably fit on a tiny disk.  And with all those cash savings, I can refocus my budget on taking advantage of all those management systems.

If I’m a consulting company, I look at what I make margin on.  You’re lucky to make 10% margin on software.  Services are where the money really is.  If you’re selling VMware to your customer then you’re getting them to spend 59% more on software that you’ll make 10% margin on.  If you sold the MSFT alternative then you know that customer has 59% extra budget that can be spent on services.  They’ll have all that System Center licensing goodness that you can revisit to deploy and engineer.  That’s 70%+ margin on human effort.  Which sounds better and more profitable?  And you know what: more of your competition are taking advantage of this.  Why aren’t you?

SCVMMSSP 2.0 SP1 RTM

To be honest, I thought Microsoft would have killed this project because it causes confusion.  Microsoft started talking about their private cloud when System Center Virtual Machine Manager Self Service Portal (SCVMMSSP) 2.0 was released last year.  With no other road map information, we were left to assume that it was the long term strategy – a shell in front of SCVMM.

Then along came the SCVMM 2012 beta and we find that it is a self-contained private cloud solution.  What about hybrid cloud integration?  Project Concero takes care of that.  It seems like SCVMMSSP is a one-time-only solution for those on SCVMM 2008 R2.  Any effort you put into engineering it in your site will have short-term value if you do upgrade to 2012, because SCVMMSSP 2.0 is irrelevant there.  I wish the messaging from MSFT had been clearer last year.  I bet you a good few customers deployed the original SCVMMSSP 2.0 cloud solution only to find it had a short life, and would have to be ripped out and replaced by SCVMM 2012 with no migration path.

So, you can download this SP1 release of SCVMMSSP 2.0 now.  I’m not going to bother copying/pasting any more information.  This product is a total cul-de-sac, and a bad road to take in my opinion (now that we know the real strategy).

Users Bypassing IT to Adopt Cloud Services

Another interesting read on TechCentral.ie this morning: IT departments struggle to control cloud adoption.  In it, it says that:

  1. 20% of those responding said they had gone around their IT department to provision cloud services
  2. 61% said it was easier to provision the services themselves
  3. 50% said it takes too long to go through IT
  4. 60% reported that, although they have corporate policies in place that prohibit such actions, those policies aren’t real deterrents

I cannot say that I am surprised.  I know that in the past I have hosted and managed the infrastructure for users in some very large organisations.  They found it too difficult to get what they needed from their IT departments/divisions and they looked to someone who could give them what they needed, when they needed it.

This problem is typically in the medium to large organisation.  The smaller organisation usually only has a couple of IT people who pretty much do everything IT for the company.  The larger organisation features divisions, branch offices, and departments that purchase and deploy applications and typically rely on some central IT to deploy the infrastructure that they need.  Sometimes it’s a case of going to a central applications/MIS department or division to get an application that they need.  And we all know this: the bigger the organisation, the longer it takes for simple things to happen.  For example, I know of a bank with its headquarters in Munich, where former employees claimed that it could take up to 6 weeks for the helpdesk to respond to a non-critical ticket.  How hopeless is that?

IT is a service.  It has a customer and that customer is the user *I’m choking just a little bit as I type this*.  Any business person, even those on The Apprentice, will tell you that if you are slow to respond to a customer’s service request then you lose the customer.  In this case, every IT employee’s worst nightmare is coming true: their reason for being employed (the users) is going to an external service provider because IT is too slow.

One of the big complaints you’ll get from users about IT is that anything IT does come up with isn’t quite what was asked for, and it isn’t flexible.  That’s why the consumerisation of IT started: users are buying devices and apps independently of IT because, for example, their iPad is better for consuming information on the move than a laptop might be.  Public cloud computing is similar.  All the user needs is a credit card and some idea of what they need.  They’ll demo a few things, find something they like, and cough up the money to get it active.  They may well get approval from some departmental or divisional budget to cover the costs, completely independently of the IT budget.  Uh-oh, now the organisation has a reason to start reassessing (downwards) the IT budget, not to mention the headcount.

So IT is bypassed.  That causes hurt feelings and threatens their jobs.  The user is happy because they finally got the services they wanted in a timely manner.  End of story?  Let’s think bigger for a moment.

What about compliance?  Say this is a European company storing sensitive personal information: Does the user know anything about the Data Protection Act or the Patriot Act?  What if they are handling online payments?  Have they assessed a data centre or SaaS solution for ISO 27001 and PCI compliance?  I bet you these things don’t even cross their minds!  Things such as governance and regulatory compliance rarely do.  Businesses can put in policies to ban the independent adoption of public cloud computing services, but we all know that budget holders will do whatever they think will alleviate their pain.  One only has to look at the news to see how rules are regularly tossed aside with no repercussions in the business world.  Anyone who has worked in the corporate world knows that there’s one set of rules for “everyone”, and a different set of rules for certain people.

You cannot stop the user from seeking out alternatives.  Already over 60% of UK CIOs reckon that consumer devices have become essential tools in the business.  Banning those has worked real nice, eh?  The solution isn’t to ban things, it’s to adapt and provide internal & managed services that have the traits of those alternatives that the users have started to turn to.

The private cloud brings that elasticity, self-service provisioning, rapid deployment, and flexibility from the external service provider into the internal network.  From the compliance and governance perspective, this brings back a lot of control.  From the IT worker’s perspective, it saves their job.  From the user’s perspective, this can be much better than the public cloud.  Think of the public cloud as being like a phone company: there’s a remote service desk, and how many of us like the “customer care” that we get from our phone service providers?  When your service is deployed internally, then you at least have some leverage to apply pressure when things inevitably go wrong.

You now control where your data is, and who can see it.  IT switches from a deployment role to a role where templates are shared, services are monitored, workloads are backed up, and data is secured.  Users help themselves as consumers of this service that IT provides.  In other words, the “user is the customer” relationship is reinforced.

Server Power & Your Private Cloud

Let’s pull a Doctor Who and travel back in time to 2003.  Odds are that when you bought a server, and you were taking the usual precautions for uptime/reliability, you specified that the server should have dual power supplies.  The benefit of this is that a PSU could fail (it used to be #3 in my failure charts) but the redundant PSU would keep things running along.

Furthermore, an electrician would provide two independent power circuits to the server racks.  PSU A in each server would go into power circuit A, and PSU B in each server would go into power circuit B.  The benefit of this is that a single power circuit could be brought down/fail but every server would stay running, because the redundant PSU would be powered by the alternative power circuit.

[Diagram: a two-node cluster where each host has two PSUs – PSU A on power circuit A and PSU B on power circuit B]

Applying this design now is still the norm, and is probably what you plan when designing a private cloud compute cluster.  If power circuit A goes down, there is no downtime for VMs on either host.  They keep on chugging away.

Nothing is free in the computer room/data centre.  In fact, everything behind those secured doors costs much more than out on the office floors.  Electrician skills, power distribution networks, PSUs for servers, the electricity itself (thanks to things like the UPS), not to mention the air conditioning that’s required to keep the place cool.  My experience in the server hosting industry taught me that the cost of greatest concern was electricity.  Every decision we made had to consider electricity consumption.

It’s not a secret that data centres are doing everything that they can to eliminate costs.  Companies in the public cloud (hosting) industry are trimming costs because they are in a cutthroat business where the sticker price is often the biggest decision-making factor for the customers when they choose a service provider.  We’ve heard of data centres running at 30°C instead of the traditional 18-21°C … I won’t miss having to wear a coat when physically touching servers in the middle of summer.  Some are locating their data centres in cool-moderate countries (Ireland & Iceland come to mind) because they can use native air without having to cool it (and avoiding the associated electrical costs).  There are now data centres that take the hot air from the “hot aisle” and use that to heat offices or water for the staff in the building.  Some are building their own power supplies, e.g. solar panel farms in California or wind turbines in Sweden.  It doesn’t have to stop there; you can do things at the micro level.

You can choose equipment that consumes less power.  Browsing around on the HP website quickly finds you various options for memory boards.  Some consume less electricity.  You can be selective about networking appliances.  I remember buying a slightly higher spec model of switch than we needed because it consumed 40% less electricity than the lesser model.  And get this: some companies are deliberately (after much planning) choosing lower capacity processors based on a few factors:

  • They know that they can get away with providing less CPU muscle.
  • They are deliberately choosing to put fewer VMs on a host than is possible because their “sweet spot” cost calculations took CPU power consumption and heat generation costs into account.
  • Having more medium capacity hosts works out cheaper for them than having fewer larger hosts over X years, because of the lower power costs (taking everything else into account).

Let’s bring it back to our computer room/data centre where we’re building a private cloud.  What do we do?  Do we do “the usual” and build our virtualisation hosts just like we have always built servers: each host gets dual PSUs on independent power circuits just as above?  Or do we think about the real costs of servers?  I’ve previously mentioned that the true cost of a server is not just the purchase cost.  It’s much more than that, including purchase cost, software licensing, and electricity.  A safe rule of thumb is that if a server costs €2,000 then it’s going to cost at least €2,000 to power it over its 3-year lifetime.
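Here’s a rough sketch of that rule of thumb; the power draw and the tariff are illustrative numbers that I’ve picked for the example, not figures from any vendor, and the sums ignore cooling and UPS overheads, which push the real figure even higher:

```python
def three_year_power_cost(avg_draw_watts, eur_per_kwh=0.15, years=3):
    """Very rough electricity cost of a server over its service life."""
    hours = years * 365 * 24
    return avg_draw_watts / 1000 * hours * eur_per_kwh

# A host drawing ~500 W on average comes to roughly €1,971 over 3 years,
# which is in the same ballpark as "a €2,000 server costs €2,000 to power".
print(round(three_year_power_cost(500)))
```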

So this is when some companies compare the cost of running fully specced and internally redundant (PSUs, etc.) servers versus the risk of having brief windows of downtime.  Taking this into account, they’ll approach building clusters in alternative ways.

In the first diagram (above) we had a 2 node Hyper-V cluster, with the usual server build including 2 PSUs.  Now we’re simplifying the hosts.  They’ve each got one PSU.  To provide power circuit fault tolerance, we’ve doubled the number of hosts.  In theory, this should reduce our power requirements and costs.  It does double the rack space, license, and server purchase costs, but for some companies, this is negated by reduced power costs; the magic is in the assessment.

But we need more hosts.  We can’t do an N+1 cluster.  This is because half of the hosts are on power circuit A.  If that circuit goes down then we lose half of the cluster.  Maybe we need an N+N cluster?  In other words, if we have 2 active hosts, then we have 2 passive hosts.  Or maybe we extend this out again, to N+N+N with power circuits A, B, and C.  That way we would lose only 1/3 of the cluster if a power circuit goes.
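A quick way to think about the maths (a simplification that assumes identical hosts spread evenly across the circuits):

```python
def max_safe_utilisation(power_circuits):
    """With single-PSU hosts split evenly across N power circuits, losing one
    circuit takes out 1/N of the hosts.  If every VM must be able to fail over
    and keep running, the cluster can only be loaded to (N-1)/N of capacity."""
    return (power_circuits - 1) / power_circuits

for n in (2, 3, 4):
    print(f"{n} circuits: load the cluster to no more than {max_safe_utilisation(n):.0%}")
```

So an N+N design caps you at 50% utilisation, while N+N+N lets you run the cluster a bit harder at the cost of a third power circuit.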

Increasing the number of hosts to give us power fault tolerance gives us the opportunity to spread the virtual machine loads.  That in turn means you need less CPU and memory in each host, which reduces the total power requirements of those hosts and the cost impact of buying more server chassis.

[Diagram: a four-node cluster of single-PSU hosts – Servers 1 and 2 on power circuit A, Servers 3 and 4 on power circuit B]

The downside of this approach is that if you lose power to PSU A in Server 1, its VMs will stop executing and fail over to Servers 3 or 4.

I’m not saying this is the right way for everyone.  It’s just an option to consider, and run through Excel with all the costs to hand.  You will have to consider that there will be a brief amount of downtime for VMs (they will fail over and boot up on another host) if you lose a power circuit.  That wouldn’t happen if each host has 2 PSUs, each on different power circuits.  But maybe the reduced cost (if it’s really there) would be worth the risk of a few minutes of downtime?

Do I Need a Private Cloud?

With this post I am going to stay technology agnostic.  I’m also going to stay clear of marketing terms.

Before we answer the central question of the blog post, let’s get something clear.  A private cloud does not equal server virtualisation.  A private cloud is an extension of server virtualisation.  It provides a complex self-service mechanism where non infrastructure administrators can deploy services.  In this context and using the ITIL view of things, a service is a business application comprised of things like IIS/Apache, SQL/MySQL, virtual machines with operating systems, application components (Perl/.NET, database schemas, and web content), and additional fabric configurations like load balancers and storage.  In other words, a person from the department that manages business applications can deploy the virtual infrastructure that they need to meet a business need without any effort/time required from the IT department that manages the infrastructure.

This accomplishes a bunch of things that the business will care about.  But the key piece here is that non infrastructure people are doing the deployment.

Server virtualisation is a subset of the private cloud.  You can do server virtualisation without deploying a private cloud.  My bet is that you already have – years ago.  But you cannot do private cloud without server virtualisation.

Taking everything into account (up to now, and this might change) I have one rule to answer the central question of this blog post.

Question: Do I need a private cloud?

Consultant’s Answer: Who deploys and manages your applications?

I know, I know.  I’ve answered a question with a question.  Go back and read how I briefly described a private cloud.  The thing you should have noticed was that the infrastructure administrators were delegating deployment tasks to people who manage applications.  That’s the crux.  Do those people exist in your organisation?

In a small and some medium organisations, there are a few IT infrastructure administrators who do everything.  They manage the firewalls, they run the domain, they do server virtualisation, they run the CRM application (I’m picking on CRM today!), they manage the SQL databases, and so on.  There is no one to delegate service deployment tasks to.  So what is the point in deploying all the additional infrastructure of a private cloud?  There is no valid business reason that I can envision (at the moment).  All that small team really needs is their virtualisation management tools, preferably joined by a set of systems management tools (no brands – I said I’d be agnostic).

On the other hand some medium and large organisations do have various different departments that manage various aspects of the business application portfolio.  There will also be branch offices where servers have been centralised in a virtual farm.  Here there absolutely is a reason to deploy a private cloud.  The central IT infrastructure department could employ people to deploy VMs and install things like IIS/Apache or SQL/MySQL all day long.  And that still wouldn’t meet the deadlines of their internal customers.  Deploying a private cloud would allow those internal customers, who are IT savvy, to deploy their own services in a timely and controlled manner, using policies and quotas that are defined centrally by the business. 

My rule of thumb here (at the moment) is that:

  • If the IT infrastructure team is doing all application deployment/management then there should not be a private cloud.
  • If there are other departments or teams that are doing application deployment/management then there should be a private cloud.

That’s my view on the “Should I deploy a private cloud?” question.  I’ll be interested in other opinions.  This is early days for this stuff and I figure many of the questions and answers for the private cloud will evolve over the coming years.

VMM 2012 Public Beta is Launched

The public beta for System Center Virtual Machine Manager 2012 was launched today at MMS 2011.  You can download it now.

This one is a game changer for Hyper-V administrators.  Cloud, service templates, host/cluster deployment, network/storage integration, XenServer support … VMM is getting as big as ConfigMgr!

Don’t expect it to be like going from VMM 2008 to VMM 2008 R2.  It’s a very different tool.  You’ll need to do some reading to get to know it – but it’s worth it!

Private Cloud Computing: Designing in the Dark

I joined the tail end of a webcast about private cloud computing to be greeted by a demonstration of the Microsoft Assessment and Planning Toolkit in a virtualisation conversion scenario.  That got me thinking, raised some questions, and brought back some memories.

Way back when I started working in hosting/virtualisation (and it was VMware 3.x BTW), I had started a thread on a forum with some question.  It was something about storage sizing or planning, but I forget exactly what.  A VMware consultant (and a respected expert) responded by saying that I should have done an assessment of the existing environment before designing anything.

And there’s the problem.  In a hosting environment, you have zero idea of what your sales people are going to sell, what your customers are going to do with their VMs, and what the application loads are going to be.  And that’s because the sales people and customers have no idea of those variables either.  You start out with a small cluster of hosts/storage, and a deployment/management system, and you grow the host/storage capacity as required.  There is nothing to assess or convert.  You build capacity, and the business consumes it as it requires it, usually without any input from you. 

And after designing/deploying my first private cloud (as small as it is, for our internal usage) I’ve realised how similar the private cloud experience is to the hosting (public cloud, or think VPS) experience.  I’ve built host/storage capacity, I’ve given BI consultants/developers the ability to deploy their own VMs, and I have no idea what they will install, use them for, or what loads there will be on CPU, storage, or network.  They will deploy VMs into the private cloud as they need them, they are empowered to install software as they require, and they’ll test/develop as they see fit, thus consuming resources in an unpredictable manner.  I have nothing to assess or convert.  MAP, or any other assessment tool for that matter, is useless to me.

So there I saw a webcast where MAP was being presented, maybe for 5-10 minutes, at the end of a session on private cloud computing.  One of the actions was to get assessing.  LOL, in a true private cloud, the manager of that cloud hasn’t a clue what’s to come.

And here’s a scary bit: you cannot plan for application-supported CPU ratios.  Things like SharePoint (1:1) and SQL (2:1) have certain vCPU:pCPU ratios (virtual CPU:physical core) that are recommended/supported (search on TechNet or see Mastering Hyper-V Deployment).
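Just to make those ratios concrete, here’s what they mean for a host’s supported vCPU ceiling (the ratios are the ones quoted above; the core count is simply an example I’ve picked):

```python
def max_supported_vcpus(physical_cores, vcpus_per_core):
    """Supported vCPU ceiling for a given vCPU:pCPU (virtual CPU:physical core) ratio."""
    return physical_cores * vcpus_per_core

cores = 12  # e.g. a 2-socket host with 6 cores per socket
print("SharePoint (1:1):", max_supported_vcpus(cores, 1), "vCPUs")  # 12
print("SQL (2:1):", max_supported_vcpus(cores, 2), "vCPUs")         # 24
```

The problem in a private cloud is that you’ve no idea how many of the VMs your users deploy will turn out to be SharePoint or SQL.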

So what do you do, if you have nothing to assess?  How do you size your hosts and storage?  That is a very tough question and I think the answer will be different for everyone.  Here’s something to start with and you can modify it for yourself.

  1. Try to figure out how big your infrastructure might get in the medium/long term.  That will define how big your storage will need to be able to scale out to.
  2. Size your hosts.  Take purchase cost, operating costs (rack space, power, network, etc), licensing, and Hyper-V host sizing (384 VMs max per host, 1,000 VMs max per cluster, 12:1 vCPU:pCPU ratio) into account.  Find the sweet spot between many small hosts and fewer gigantic hosts.  There’s a rough sizing sketch after this list.
  3. Try to figure out the sweet spot for SQL licensing.  Are you going per-CPU on the host (maybe requiring a dedicated SQL VM Hyper-V cluster), per CPU in the VM, or server/CAL?  Remember, if your “users” can install SQL for themselves then you lose a lot of control and may have to license per CPU on every host.
  4. Buy new models of equipment that are early in their availability windows.  It might not be a requirement to have 100% identical hardware across a Hyper-V cluster but it sure doesn’t hurt when it comes to standardisation for support and performance.  Buying last year’s model (e.g. HP G6) because it’s a little cheaper than this year’s (e.g. HP G7) is foolish; that G6 will probably only be manufactured for 18 months before stocks disappear, and you probably bought it at the tail end of its life.
  5. Start with something small (a bit of storage with 2-3 hosts) to meet immediate demand and have capacity for growth.  You can add hosts, disks, and disk trays as required.  This is why I recommended buying the latest; now you can add new machines to the compute cluster or storage capacity that is identical to previously purchased equipment – well … you’ve increased the odds of it to be honest.
  6. Smaller environments might be ok with 1 Gbps networking.  Larger environments may need to consider fault tolerant 10 Gbps networking, allowing for later demand.
  7. You may find yourself revisiting step 1 when you’ve gone through the cycle because some new fact pops up that alters your decision making process.
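For step 2, this is the sort of rough sizing sketch I have in mind.  The VM profile numbers are guesses you’d have to make yourself; the 384 VMs per host and 12:1 ratio are the Hyper-V figures mentioned in the step, and the function itself is just my illustration:

```python
import math

def hosts_needed(total_vms, vcpus_per_vm, ram_gb_per_vm,
                 cores_per_host, ram_gb_per_host,
                 vcpu_ratio=12, max_vms_per_host=384):
    """Minimum host count for a guessed VM population, sized by CPU ratio,
    RAM, and VM density.  Add at least one more host for N+1 failover."""
    by_cpu = math.ceil(total_vms * vcpus_per_vm / (cores_per_host * vcpu_ratio))
    by_ram = math.ceil(total_vms * ram_gb_per_vm / ram_gb_per_host)
    by_density = math.ceil(total_vms / max_vms_per_host)
    return max(by_cpu, by_ram, by_density)

# e.g. 100 VMs at 2 vCPUs / 4 GB each on 12-core, 192 GB hosts -> 3 hosts (plus 1 for N+1)
print(hosts_needed(100, 2, 4, 12, 192))
```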

To be honest, you aren’t sizing; you’re providing access to elastic capacity that the business can (and will) consume.  It’s like building a baseball field in Iowa.  You build it, and they will come.  And then you need to build another field, and another, and another.  The difference is that in baseball you know there are 9 active players per team.  You’ve no idea if your users will be deploying 10 * 10 GB RAM lightly used VMs or 100 * 1 GB RAM heavily used VMs on a host.

I worked in hosting with virtualisation for 3 years.  The not knowing wrecks your head.  The only way I really got to grips with things was to have in depth monitoring.  System Center Operations Manager gave me that.  Using PRO Tips for VMM integration, I also got my dynamic load balancing.  Now I at least knew how things behaved and I also had a trigger for buying new hardware.

Finally comes the bit that really will vex the IT pro: cross-charging.  How the hell do you cross-charge for this stuff?  Using third party solutions, you can measure things like CPU usage, memory usage, and storage usage, and bill for them.  Those are all very messy things to cost – you’d need a team of accountants for that.  SCVMM SSP 2.0 gives a simple cross-charging system based on GB of RAM/storage that are reserved or used, as well as a charge for templates deployed (license).  Figuring out the cost of a GB of RAM/storage and the cost of a license is easy.

However, figuring out the cost of installed software (like SharePoint) is not; who’s to say whether the user joins the VM to your directory or not, or whether a ConfigMgr agent (or whatever) gets to audit it?  Sometimes you just gotta trust that they’re honest and that their business unit takes care of things.
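To put some (entirely made-up) numbers on that simple model, here’s a minimal sketch of reserved-GB cross-charging; the rates are placeholders and this is not how SSP 2.0 itself calculates anything:

```python
def monthly_charge(ram_gb_reserved, storage_gb_reserved, templates_deployed,
                   ram_rate=5.0, storage_rate=0.5, template_fee=20.0):
    """A simple reserved-capacity charging model: a flat rate per GB of RAM
    and storage reserved, plus a per-template (license) fee."""
    return (ram_gb_reserved * ram_rate
            + storage_gb_reserved * storage_rate
            + templates_deployed * template_fee)

# e.g. a business unit reserving 16 GB RAM and 200 GB storage, with 2 templates deployed
print(monthly_charge(16, 200, 2))  # 220.0
```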

EDIT:

I want to send you over to a post on Working Hard in IT.  There you will read a completely valid argument about the need to plan and size.  I 100% agree with it … when there’s something to measure and convert.  So please do read that post if you are doing a traditional virtualisation deployment to convert your infrastructure.  If you read Mastering Hyper-V Deployment, you’ll see how much I stress that stuff too.  And it scares me that there are consultants who refuse to assess, often using the wet finger in the wind approach to design/sizing.

Event: Private Cloud Academy – DPM 2010

The next Private Cloud Academy event, co-sponsored by Microsoft and System Dynamics, is on next Friday, 25th March 2011.  At this free session, you’ll learn all about using System Center Data Protection Manager (DPM) 2010 to back up your Hyper-V compute cluster and the applications that run on it.  Once again, I am the presenter.

I’m going to spend maybe a third of the session talking about Hyper-V cluster design, focusing particularly on the storage.  Cluster Shared Volume (CSV) storage-level backups are convenient, but there are things you need to be aware of when you design the compute cluster … or face the prospect of poor performance, blue screens of death, and a P45 (pink slip, aka getting fired).  This affects Hyper-V when being backed up by anything, not just DPM 2010.

With that out of the way, I’ll move on to very demo-centric DPM content – I’m spending most of next week building the demo lab.  I’ll talk about backing up VMs and their applications, and the different approaches that you can take.  I’ll also be looking at how you can replicate DPM backup content to a secondary (DR) site, and how you can take advantage of this to get a relatively cheap DR replication solution.

Expect this session to last the usual 3-3.5 hours, starting at 09:30 sharp.  Note that the location has changed; we’ll be in the Auditorium in Building 3 in Sandyford.  You can register here.

Community Event: From The Desktop to the Cloud: Let’s Manage, Monitor and Deploy

We’ve just announced the details of the latest user group event in Dublin … it’s a biggie!  I’ll be presenting two of the deployment sessions, on MAP and MDT.

Join us at the Guinness Store House on February 24th at 09:00 for a full day of action packed sessions covering everything from the desktop to The Cloud, and maybe even a pint of Guinness afterwards.

We have a fantastic range of speakers, ranging from MVPs to Microsoft staff and leading industry specialists, to deliver our sessions, ensuring a truly unique experience.  During this day, you will have your choice of sessions, covering topics such as Windows 7/Office 2010 deployment, management using System Center, and cloud computing for the IT pro (no developer content – we promise!).


We promised bigger and better and we meant it.  This event will feature 3 tracks, each with four sessions.  The tracks are:

  1. The Cloud: Managed by Microsoft Ireland
  2. Windows 7/Office 2010 Deployment: Managed by the Windows User Group
  3. Systems Management: Managed by the System Center User Group

You can learn more about the event, tracks, sessions, and speakers on the Windows User Group site.

You can register here.  Please only register if you seriously intend to go; spaces are limited and we want to make sure that as many people as possible can attend.

The Twitter tag for the event is #ugfeb24.

What the Heck is the Microsoft Private Cloud?

There have been lots of terms thrown around by Microsoft over the past 5 years.  The Dynamic Systems Initiative (DSI) was one.  It focused on using System Center and Active Directory to manage an optimized infrastructure, or an IT infrastructure that was centrally managed with as much automation as possible.  A few years ago the term Dynamic Datacenter started to appear in the form of the Dynamic Datacenter Toolkit.  That was a beta product aimed at the normal internal network.  A hosting variant of that was also in the works.  Eventually the term was split, giving us:

  • The Private Cloud and
  • The Dynamic Datacenter

Confused yet?  Yes?  That’s to be expected.  Some theorise that there is a team of people in a basement in Redmond that is paid in two ways:

  • Cause as much confusion as possible: the best are headhunted to rename products in Citrix.
  • Paid by the letter: you’ll see what I mean by that in a few minutes.

The private cloud is a variation on the public cloud – makes sense right?  The public cloud is what you’ve always called the cloud.  In other words, the public cloud is something you subscribe to on the Internet like Salesforce, Google Apps, Office365, and so on.  It could be an application, it could be an application platform, or it could be a set of virtual machines.  You don’t care about the underlying infrastructure, you just want instant access with no delays caused by the service provider.  You pay, you get, you activate your service. Simple.

The private cloud takes those concepts and applies them to your internal network.  Why the hell would you want to do that?  Well, maybe you do, and maybe you don’t.  But your business might very well want to.  And here’s why:

The business does not give a damn about servers, SANs, network cards, virtualisation, or any of the other stuff that we IT pros are concerned with.  They are only concerned with applications and information.  Applications allow business to happen and information allows decisions to be made.  Compare the salary of a Windows admin with that of an equal grade .NET dev.  The developer will be driving the nicer car and living in the nicer house.  That’s your proof.

Here’s how the business sees us.  They go and buy some new LOB application or the MIS department develops something.  They come to us to deploy it.  We’re busy.  We want to go through various processes to control what’s deployed.  From their point of view, we are slowing things down.  What they think should happen in a matter of hours may end up taking weeks.  That really happens – I’ve heard of helpdesk calls taking 6 weeks in one corporation in Munich.  And I’ve met countless developers who think we IT pros are out to sabotage their every effort (OK, who told them?  The first rule of being an IT pro is we don’t tell developers that we are out to get them.  The second rule of being an IT pro is we don’t tell developers that we are out to get them).

So something has to give.  That’s where the private cloud comes in.  It shifts the power of deploying servers from the IT pro to the business (typically application admins, faculty admins, developers, and so on … not the end user).  This is all made possible by hardware virtualisation.  Let’s face it: we don’t want to open up physical access to the computer room or data centre to just anyone who says they need it.

The Microsoft private cloud is made possible by the System Center Virtual Machine Manager Self-Service Portal (SCVMM SSP – lots of letters there, eh?) 2.0.  That’s a free download that sits in front of SCVMM.  It has its own SQL database and allows for a layer of abstraction above the virtualisation management tool.

Now the role of the IT pro changes.  You now take care of the infrastructure.  You manage the Hyper-V compute cluster.  That’s the set of virtualisation hosts that VMM manages.  You manage VMM: preparing and patching (VMST 3.0) templates, loading ISOs, and so on.  You monitor systems using OpsMgr (and enable delegated operator access/notifications for application owners).  You manage backups.  You do not deploy virtual machines anymore. 

In SCVMM SSP 2.0, you will add the ability for people to get access to SANs and network load balancer appliances, and to gain access to VM templates in the VMM library.  You will also define networks.  This allows you to optionally define static IP ranges that can be automatically assigned to VMs that are placed in those networks.

The business user (the dev, etc) can access the private cloud by logging into the SCVMM SSP 2.0 web portal.  There they can create a business unit (requiring admin approval).  That allows you to verify this super cloud user (super as in they are the overall admins of their business unit, which will contain virtual machines).  You might also have a cross-charging process and set up a process via Accounts.  The business unit owner now creates one or more services.  A service is an application architecture.  Each service has service roles.  The best way to describe a service role is to think of it as a tier in an n-tier application.  For example, a web application may have web servers (one service role with an associated network), application servers (a second service role with a different network), and database servers (a third service role with a third network).  Any VM created for a service role will be automatically placed in the appropriate network and have TCP/IP configured as required.  Nice!  No admin work to do!  Fewer helpdesk calls!  The admin gets the chance to review the service architecture before approving/denying/modifying it.
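To picture the hierarchy, here’s the web application example above sketched as plain data (purely illustrative – this is not the SSP 2.0 schema or API, and the names are made up):

```python
service = {
    "business_unit": "Finance",
    "name": "Expenses web application",
    "service_roles": [  # one role per tier of the n-tier application
        {"role": "web servers",         "network": "Web-Tier", "vms": 2},
        {"role": "application servers", "network": "App-Tier", "vms": 2},
        {"role": "database servers",    "network": "DB-Tier",  "vms": 1},
    ],
}

for role in service["service_roles"]:
    print(f'{role["vms"]} x {role["role"]} placed on network {role["network"]}')
```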

Once the service is approved, the business unit owner, or any delegated admins (chosen from existing AD users), can create VMs and manage them.  They get remote console access via the portal.  They can log in and install software as required.  No administrator (you) involvement is required to deploy, delete, shut down, install, etc.  You’re off monitoring, backing up (there’s a place in the service creation request where they can tell you what they need backed up), and adding hosts to the cluster.

Things will evolve more with SCVMM 2012 … but there’ll be more on that later.

From the business’s point of view, they feel empowered.  The blocker (us) is removed from the process and they get the cloud experience they’ve had from the public cloud with the associated instant gratification.  From your point of view, you are less stressed, able to spend more time on systems management, and don’t have pestering emails looking for new servers.  Sounds like a win-win situation to me.

Except …

For you folks in the SME (small/medium enterprise) market there will be no change.  Who manages those applications in your environment?  You do.  There are no application admins.  It makes little sense to implement this cloud layer on top of SCVMM because it’s just more process for you.  Sorry!

OK, what about this dynamic datacenter thingy?  Well, that’s the addition of System Center to manage everything in the compute cluster (hardware, virtualisation, and operating systems): monitor everything from hardware to applications, backup, deploy, patch, automate processes, manage compliance, and so on.  In other words, build automated expertise and process into the network.  But that’s a whole other story.