The Pros and Cons of Virtual Desktop Infrastructure

This article weighing the pros and cons of VDI is making the rounds on the blogosphere right now.  Here’s my quick take.

The idea with VDI is that instead of users running applications on a Terminal Server or their PC, they run an RDP or ICA session that connects to a virtual machine running Vista or Windows 7 on a server in the data centre.  The server runs many desktop virtual machines and users connect to them via a broker service.  The broker deploys the VMs on behalf of the users/administrators from one or more golden, sysprepped images.  There can be a pool of shared and/or dedicated per-user VMs, depending on policy.
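The broker logic described here — pooled versus dedicated VMs handed out on connect — can be sketched roughly like this (a toy model in Python; all class and method names are invented, not any vendor’s API):

```python
# Toy model of a VDI connection broker handing out pooled or dedicated VMs.
# All names are invented for illustration; real brokers also handle the
# golden-image cloning, RDP/ICA redirection, licensing, etc.

class Broker:
    def __init__(self, pooled_vms):
        self.free_pool = list(pooled_vms)  # shared VMs cloned from the golden image
        self.dedicated = {}                # user -> personally assigned VM
        self.sessions = {}                 # user -> VM currently in use

    def connect(self, user, policy="pooled"):
        """Assign a VM; the client then reaches it over RDP/ICA."""
        if user in self.sessions:              # reconnect to the existing session
            return self.sessions[user]
        if policy == "dedicated":
            vm = self.dedicated.setdefault(user, f"vm-{user}")
        elif self.free_pool:
            vm = self.free_pool.pop()
        else:
            raise RuntimeError("pool exhausted - deploy more VMs from the image")
        self.sessions[user] = vm
        return vm

    def disconnect(self, user):
        vm = self.sessions.pop(user)
        if user not in self.dedicated:         # pooled VMs go back for reuse
            self.free_pool.append(vm)

broker = Broker(["vm-01", "vm-02"])
print(broker.connect("alice"))             # a pooled VM
print(broker.connect("bob", "dedicated"))  # bob's personal VM
```

The policy decision (pooled vs dedicated) is exactly what drives the cost and storage trade-offs discussed below.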

The PC?

The power of the PC is that the end user computing environment is on the user’s desk.  The weakness of the PC is that the end user computing environment is on the user’s desk.  Someone has to build it, patch it, put updates on it, install software on it, secure it, etc.  We can automate lots of that, if not all of it.  But most organisations have failed to do that well despite the tools available.  The biggest problem is that the data the user is working on is often far away from the user and their PC.

Back in the 90’s we learned about Thin Client computing.  Why change the PC every 3 years?  There’s an OS, the hardware, etc, to repurchase and deploy.  There’s helpdesk staff to send out to take care of lots of stuff.  Policies regarding security and data storage locations must be enforced somehow.  We were told that transferring the computing environment to the data centre via Terminal Services (or Citrix) would solve these problems.

Why Not Terminal Services?

  • Very prone to issues, e.g. applications are designed for single-user machines.  Low-level failures can crash the server for everyone.  Memory- or processor-inefficient applications can reduce the user count per server.
  • Change is slow, e.g. fixing a simple thing for a help desk engineer can become a change control issue taking weeks to complete.  Users used to PCs won’t accept that.
  • Incompatible applications can create application silos.  App-V can resolve that, but it massively increases licensing costs, e.g. Software Assurance and your per-user MDOP purchase.

I’m not totally slating Terminal Services.  Certainly not.  For bog-standard or clean applications, it’s a fine solution, especially now with EasyPrint, TS Gateway and RemoteApp.

Why VDI?

VDI takes the best of the PC and combines it with the best of Terminal Services.  Each user gets their own computing environment running a desktop OS.  That’s a familiar computing environment for the user, and most applications should be OK with it.  The desktop OS runs on a server running a virtualisation solution in the data centre.  That centralises the user computing environment beside their server applications and data.  It also reduces the footprint out on the floor, so it should reduce helpdesk movement out there.

Sound like it’s going to reduce costs?  I’m not so sure.

Why Not VDI?

Let’s dispel some myths.  Most helpdesk calls are not PC hardware related; those are vastly in the minority.  It’s usually applications, printers, phones and Q&A stuff that keeps the phones buzzing.  With PCs, we can minimise foot traffic by using things like Active Directory Group Policy, Remote Assistance, etc.  Moving the user to a virtual machine in the data centre doesn’t remove these calls.  In fact, the same solutions will be used to fix the issues.  Heck, we’ve added further complexity, such as bandwidth and the broker, to break.

What about VDI being a cheaper solution?  A 2GB RAM HP PC with a copy of Vista Business costs €362 in Ireland.  It’s got 320GB of disk, but we really only need 100GB of that for the desktop OS and applications.  All user-specific data will be stored on the network using redirected folders.

What about VDI?  I’m going low end here.  We’ll run a standalone host to keep costs down; a clustered solution would require a SAN of some kind.  We’ll go with a DL385 G6 costing €4880.  We’ll have to add some equipment to it to maximise its potential.  We’ll put in 2 RAID 1 drives for the host OS and 14 300GB drives for the VMs.  The OS drives total €500.  The VM drives total €6650, giving us 3900GB of storage.  That’ll round out as 35 VMs.  For virtualisation, let’s use a free one with RAM oversubscription.  We’ll need another 16GB of RAM to give us a total of 32GB, costing €520.  The second processor works out at €1000.

So here’s the cost comparison for hardware:

  • 35 PCs = €12,670
  • 35 VMs = €13,550

I’ve not accounted for monitors, keyboards and mice in this – you need them in both solutions.
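For what it’s worth, the sums behind those two figures check out:

```python
# Reproducing the hardware cost comparison above (all prices in EUR,
# as quoted in the article).
pc_unit = 362                  # HP PC, 2GB RAM, Vista Business
pc_total = 35 * pc_unit        # 35 physical desktops

vdi_host = 4880                # DL385 G6 base server
os_disks = 500                 # 2 x RAID 1 drives for the host OS
vm_disks = 6650                # 14 x 300GB drives for the VMs
extra_ram = 520                # +16GB to reach 32GB total
second_cpu = 1000
vdi_total = vdi_host + os_disks + vm_disks + extra_ram + second_cpu

print(f"35 PCs:   EUR {pc_total:,}")    # EUR 12,670
print(f"VDI host: EUR {vdi_total:,}")   # EUR 13,550
```

And note that the €13,550 is before clustering, SAN storage, VECD licensing or the broker.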

We went low end with the VDI host.  If it experiences a hardware issue then 35 users will be unable to work.  You’re likely going to need to build a cluster to protect against that.  That means buying SAN storage and an additional server.  We’re probably doubling the cost of the solution, if not more, by doing that.  You’ve also completely centralised the user computing environment, so there will be greater reliance on networking, requiring an upgrade there.

I also forgot about the OS costs.  Most businesses in Ireland use the OEM OS so there are no extra costs there.  But VDI requires leasing a VECD license (to be renamed) every month from MS, so there are additional licensing costs.  You’re also likely to purchase a broker solution from someone like Citrix or Provision Networks.

But what about all the management savings?  If you need to deploy software, updates, patches, AV, etc. to a PC then you need to do the same with a VM.  Sure, it’s centralised, but you still need all those management applications to do the work.  And you’re still likely to need 1 helpdesk admin for every 50 users to take care of all that support.

Oh – you’ll still need something for VDI on the end user’s desk.  I already discounted monitors from the equation.  But you still need a terminal of some kind on the desktop.  That could be a recycled PC.  Or it could be a thin client terminal, costing from €203 per machine.  They also need to be managed and upgraded in some way too.

Which Way To Go?

VDI does have a place but I’d be more likely to go with Terminal Services for a centralised solution.  For larger deployments I’d look at Citrix’s offering, even though you’re doubling those CAL costs.  But more often than not, a PC deployment is the way I’d still go.  You still need all the same management solutions and mechanisms no matter which way you go.  So I say go the way that’s cheapest, most trusted and most fault tolerant.

The basic problem is that server hardware is more expensive than PC hardware.  The only hope is that power savings would write off the hardware purchase, but I’ve no data for that.  I do know that PCs are more efficient than ever.  We can control power usage using GPO and use Wake-on-LAN (WOL) to power them up during the night to do updates.  And don’t forget there will always be a terminal on the desk (often an old, power-inefficient recycled PC) for Terminal Services and VDI.
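The WOL part is genuinely simple: a magic packet is just 6 bytes of 0xFF followed by the target’s MAC address repeated 16 times, broadcast over UDP (port 9 by convention).  A minimal sketch in Python (the GPO power-settings side obviously can’t be shown here):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a 102-byte WOL magic packet: 6 x 0xFF then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; the NIC must have WOL enabled in the BIOS."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("00:1a:2b:3c:4d:5e")
print(len(pkt))   # 102
```

In practice you’d schedule something like this (or an equivalent tool) to fire before the overnight patch window.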

Virtualising Windows Essential Business Server

Wilbour Craddock, a Partner Technology Specialist at Microsoft Ireland, has done an interesting presentation discussing the virtualisation of EBS and SBS using Hyper-V.  You can use Hyper-V Server 2008/2008 R2, but you could also buy the Enterprise SKUs.  Use that extra Standard Edition license as the host OS.  Standard gives you 1 free license for a VM, so that gives you back a license for an additional VM, e.g. SQL or Terminal Services.

Microsoft has published a document on virtualising EBS.

“Server virtualization enables multiple operating systems to run on a single physical server as virtual machines. With server virtualization, you can consolidate the workloads of multiple servers onto a smaller number of fully utilized servers. Fewer servers can reduce hardware, energy, and management costs. By using the Microsoft® Hyper-V™ technology in the Windows Server® 2008 operating system, you can run a virtualized instance of Windows® Essential Business Server on a single server or several servers.”

Hypervisor Functional Specification v2.0

This is probably only going to be of interest to a handful of developers but MS published a detailed doc on Hyper-V.

“This document is the top-level functional specification (TLFS) of the second-generation Microsoft hypervisor. It specifies the externally visible behaviour of the Microsoft hypervisor, a component of Microsoft Windows Server 2008 R2 Windows Server virtualization. The document assumes familiarity with the goals of the project and the high-level hypervisor architecture. This specification is provided under the Microsoft Open Specification Promise. For further details on the Microsoft Open Specification Promise, please refer to: http://www.microsoft.com/interop/osp/default.mspx. The specifications can be used to understand the functions of the hypervisor and implement a compatible solution.

Specification Outline

The following is the outline of the information contained in the complete Hypervisor Functional Specification:

  • Introduction
  • Basic Data Types, Concepts and Notation
  • Feature and Interface Discovery
  • Hypercall Interface
  • Partition Management
  • Physical Hardware Management
  • Resource Management
  • Guest Physical Address Spaces
  • Intercepts
  • Virtual Processor Management
  • Virtual Processor Execution
  • Virtual MMU and Caching
  • Virtual Interrupt Control
  • Inter-Partition Communication
  • Timers
  • Message Formats
  • Partition Save and Restore
  • Scheduler
  • Event Logging
  • Guest Debugging Support
  • Statistics
  • Booting
  • System Properties
  • Appendix”

OpsMgr 2007 Management Pack for Windows Server 2008 Hyper-V

Last week, MS released a management pack for monitoring W2008 Hyper-V using OpsMgr 2007.  It offers “monitoring of Windows Server Hyper-V systems. This includes monitoring coverage of Hyper-V host servers, including critical services and disks, and Hyper-V virtual machines, including virtual components and virtual hardware.

Feature Summary
This management pack provides the following functionality:

  • Management of critical Hyper-V services that affect virtual machines and host server functionality
  • Management of host server logical disks that affect virtual machine health
  • Full representation of virtualization in a single Hyper-V host server, including virtual networks, virtual machines, and guest computers
  • Monitoring of virtual machine hardware components that affect availability”

You don’t need VMM 2008 to use this management pack.  Prior to this MP, you could monitor Hyper-V using an MP that was included with VMM.

Microsoft Assessment and Planning Toolkit 4.0 Beta

Microsoft has released a beta for release 4 of the MAP toolkit.  Previous content has been extended:

  • Windows Vista
  • Windows Server 2008
  • Hyper-V for Windows Server 2008
  • SQL Server 2008
  • 2007 Microsoft Office
  • Microsoft Application Virtualization
  • Microsoft Online Services (e.g. Exchange Online)
  • Forefront Client Security and more

The new content is:

  • Windows 7
  • Windows Server 2008 R2
  • Hyper-V for Windows Server 2008 R2

Using the MAP toolkit you can assess your current infrastructure and plan the deployment of new technologies.  The new features are:

  • Windows 7 Hardware Assessment
  • Windows Server 2008 R2 Hardware Assessment
  • Virtualization ROI Tool Integration
  • VMware Virtual Machine Discovery
  • Hyper-V 2008 R2 Virtualization Planning
  • Proposal Customization for Microsoft Partners
  • Performance Enhancements
Windows Server 2008 R2 Hyper-V Snapshot Changes

Ben Armstrong blogged last night about some of the changes that affect snapshots in Hyper-V running on Windows Server 2008 R2 and Hyper-V Server 2008 R2.

Before we mention them, it should be noted that you need to merge or delete your snapshots on the current release of Hyper-V before doing an upgrade to 2008 R2.

  • You will be able to export a snapshot.  In the background, a merge takes place to a new VHD file.  You can then mount this or set it up as a new VM.  It will appear as a new VM with no snapshots.
  • AVHD files (your snapshots) will be created in the same folder as the VM’s VHD files.  That makes them easier to find.
  • Opening the properties of a snapshot shows you the related VHD files that are used to create the snapshot.
  • You can edit an AVHD file to manually merge a snapshot.
  • You will be able to manually attach snapshots to a VM, e.g. say you have a template VHD and you’ve built up some lab machines using snapshots.  You can recreate that lab by copying the AVHDs and attaching them to new VMs.

I really don’t see snapshots as being something that should be used in production.  For me, they’re limited to labs.  The AVHD file is a kind of differencing disk and the performance isn’t great.  You also need to be careful about AVHD bloat.  If you need snapshot-like behaviour in production then use a Hyper-V VSS enabled backup solution on the parent partition, e.g. DPM 2007 SP1.
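The AVHD’s differencing-disk behaviour — and why long chains of them hurt read performance — boils down to copy-on-write.  A toy sketch (conceptual only, not the real VHD/AVHD on-disk format):

```python
# Toy copy-on-write disk pair: a "snapshot" child stores only blocks written
# after the snapshot and falls through to its parent for everything else.
# Conceptual only - not the real VHD/AVHD format.

class BaseDisk:
    def __init__(self, blocks):
        self.blocks = dict(blocks)

    def read(self, n):
        return self.blocks.get(n, b"\x00")

class DiffDisk:
    def __init__(self, parent):
        self.parent = parent
        self.changed = {}               # only blocks written since the snapshot

    def write(self, n, data):
        self.changed[n] = data          # never touches the parent

    def read(self, n):
        if n in self.changed:           # changed block: served locally
            return self.changed[n]
        return self.parent.read(n)      # unchanged: extra hop per chain level

    def merge(self):
        """What deleting a snapshot does: fold the changes into the parent."""
        self.parent.blocks.update(self.changed)
        self.changed.clear()

base = BaseDisk({0: b"boot", 1: b"data"})
snap = DiffDisk(base)                   # take a "snapshot"
snap.write(1, b"new!")
print(snap.read(0), snap.read(1))       # b'boot' b'new!'
```

Every unchanged block costs a lookup in each layer of the chain, which is the performance tax, and every changed block lives in the AVHD, which is the bloat.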

HSE And Bord Gáis Still Not Encrypting Laptops

This is beyond stupid and irresponsible now.  I’m tired of seeing these stories.  A few days ago we heard that 15 laptops were stolen from the HSE, 2 of which were unencrypted and contained personal information of patients.  Now we hear that 4 unencrypted laptops with 75,000 customers’ banking details were stolen from Bord Gáis.

What the hell is new about encrypting laptops anymore?  It should be a matter of practice: buy/build a laptop and encrypt it.  But oh no, these lazy organisations don’t want to do that or some inept managers just don’t care.

Brendan Drumm of the HSE should be sacked (without his massive pay rise) anyway.  But we were promised all laptops would be encrypted by September of last year.  Was that done?  No.  Who would think a government agency would lie or fail like that?  SACK HIM!

We need some new laws:

1. It should be mandatory by law to encrypt all business laptops.  Trying to cover just those with personal data hasn’t worked.  Data movement is too fluid.
2. There should be employment law protection for whistleblowers; that’s needed anyway, e.g. in the financial system.
3. It should be mandatory for the Data Protection Commissioner to prosecute the directors of companies where unencrypted laptops are stolen.  There should be a fixed, non-negotiable punishment.  That’ll get ‘em worried.
4. Failure to prosecute should itself be a prosecutable offence, regardless of whether the Data Protection Commissioner is still in office.  Prosecution will be mandatory, as will the punishment.  That’ll take care of the cronyism that’s rife in our country.

Organisations like the HSE probably have MS Software Assurance.  If so, they can deploy MS’s Windows Enterprise edition and enable BitLocker.  If not, go have a look at a 3rd party solution.

What’s The Big Deal With Hyper-V and System Center?

Microsoft’s big differentiator from the competition is management.  Most people have never experienced System Center so they’ve no idea what I mean by management.  They’ve seen things like HP SIM, IBM Director or VMware Virtual Center.  For me, those are incomplete point solutions, but they’re better than some of the freeware or “cheapware” solutions I’ve seen on some sites.  When I say management I mean knowing what is where, how it’s performing, automation of deployment & configuration, and backup/recovery from cradle to grave and from hardware to application, inclusive of virtualisation.  Sounds like science fiction?  Nope, it’s a reality for some of us who’ve gone down the System Center route.  Even back in the early days, I had this sort of thing running in 2005.  Me and my team of 2 others managed 173 worldwide servers and were 3rd line support for the desktops.  That included doing all the AD management, PC image builds, patching and software deployment.  Sounds like we must’ve worked 24 hours a day?  Nope; outside of project/development work, we did around 3 hours a day between us.

This was all thanks to the automation provided by Microsoft System Center.  It wasn’t even called that back then … or the term had just been coined.  We had SMS 2003 (now known as Configuration Manager 2007 R2).  That allowed us to audit systems, generate license deployment reports, measure software utilisation and deploy software automatically.  It could have done OS deployment and patch deployment too, but those features were pretty crude prior to the current release of ConfigMgr.  Instead we used WSUS and Remote Installation Services (now replaced by Windows Deployment Services).  Microsoft Operations Manager 2005 (now Operations Manager 2007 R2) gave us centralised monitoring of health and performance for our Windows servers.  This included HP hardware, the operating system, Microsoft applications/services and Citrix MetaFrame at the time.  Combined with Active Directory and a carefully designed GPO and delegation model, we had complete control of everything, always knowing what was happening and being able to proactively respond to issues in the network.  We had a frequently changing business, so being able to respond quickly was essential.  We had that.

Let’s have a look at what MS has to offer now.


Let’s start with Hyper-V in Server 2008 R2.  That’s Microsoft’s enterprise virtualisation platform.  You have the version built into Windows Server 2008 R2 and the free Hyper-V Server 2008 R2.  You can run standalone machines with no hardware fault tolerance.  Or you can create a cluster.  That means virtual machines can move from one host to another with only 10 milliseconds of an outage during the move; 10 milliseconds is virtually nothing and no network application will notice.  This is thanks to Live Migration.  The R2 version simplifies storage by using Cluster Shared Volumes (CSV).  You can store many VMs on one large volume, reducing the amount of time you need to spend talking to that pesky SAN administrator 😉

System Center Virtual Machine Manager 2008 R2, or VMM, allows you to manage your Hyper-V hosts, the placement of the VMs and the configuration of the VMs.  It can also manage ESX, ESXi and many VMware Virtual Center installations.  VMM is based on PowerShell, so it gives you a central place for scripting.  There’s a library where you can save those scripts, plus ISOs, VM configurations, VHDs (virtual hard disks), etc.  There’s a self-service console that allows you to delegate the deployment and management of VMs.  You can control delegated VM deployment using quotas.  This really works now because CSV has already done the storage work; storage complications made self service a non-player in the pre-R2 release in the real world.

That’s the virtualisation-dedicated stuff done with.

System Center Operations Manager 2007 R2 allows you to monitor the health and performance of Windows, Linux, UNIX, Microsoft services/applications, distributed applications and synthetic transactions out of the box.  You can add in support for things like ESX, MySQL, Oracle, Cisco, Juniper, etc. using purchased 3rd party management packs.  HP, Dell and IBM provide free management packs for monitoring their hardware.  The new *NIX support is perfect because we can monitor those SLES VMs we have now, or the Red Hat VMs that MS will probably start supporting before the end of the year.

VMM integrates tightly with OpsMgr using PRO Tips, e.g. OpsMgr can detect a performance issue with a VM.  It then notifies VMM to move that VM to another host with more available resources.  Hardware vendors are adding to this, e.g. Brocade has a management pack where their HBA can report heavy utilisation of a fibre channel link by a VM.  OpsMgr reports this to VMM, and VMM responds by moving VMs about on the cluster.  Live Migration means these VM moves have no impact on the applications they host or the clients they service.
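That loop is, at its core, a placement decision: spot an overloaded host, pick a VM, and find a better home for it.  A hypothetical sketch (the real OpsMgr/VMM logic weighs far more than RAM, and all names here are invented):

```python
# Hypothetical sketch of the monitor -> recommend -> migrate loop behind PRO
# tips: find an overloaded host, pick its hungriest VM, and target the host
# with the most free RAM. Real VMM placement weighs CPU, disk, network, etc.

def pick_migration(hosts, threshold=0.9):
    """hosts maps name -> {"ram_total": GB, "vms": {vm_name: GB used}}.
    Returns (vm, source, target) or None if nothing is overloaded."""
    def used(h):
        return sum(h["vms"].values())
    for name, h in hosts.items():
        if used(h) / h["ram_total"] > threshold:       # OpsMgr-style alert
            vm = max(h["vms"], key=h["vms"].get)       # hungriest VM
            target = max(                              # most free RAM elsewhere
                (o for o in hosts if o != name),
                key=lambda o: hosts[o]["ram_total"] - used(hosts[o]),
            )
            return vm, name, target                    # VMM would live-migrate
    return None

hosts = {
    "host1": {"ram_total": 32, "vms": {"sql": 16, "web": 14}},
    "host2": {"ram_total": 32, "vms": {"file": 8}},
}
print(pick_migration(hosts))   # ('sql', 'host1', 'host2')
```

Because Live Migration makes the move itself non-disruptive, a loop like this can run continuously rather than waiting for a maintenance window.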

System Center Data Protection Manager 2007 SP1 is Microsoft’s backup solution.  Using the Volume Shadow Copy Service (VSS) writer for Hyper-V, it can snapshot a VM without it being brought offline.  That’s done using an agent on the Hyper-V host or parent partition.  You could also install an agent in the VM for a more traditional backup.  OpsMgr will monitor that DPM installation for you.  And you can more easily test your backup recoveries now: snapshot a VM and restore it to an alternate location, attach it to a private lab network, then do some tests like database or SharePoint recoveries.

System Center Configuration Manager 2007 R2 is a huge product now.  I’m probably going to do it a disservice here.  It can do your OS deployment, software deployment, security update deployment, custom update deployment, license usage reporting, hardware auditing, desired configuration auditing … to be honest, it can do anything you can do in a script or from a command prompt, on a controlled and scheduled basis.  With this mad ability to deploy VMs at a moment’s notice, ConfigMgr gives you options for deploying the OS.  Maybe you use a sysprepped VHD template.  Maybe you use ConfigMgr to deploy an OS image over the network.  A VM deployed using the self-service console can be immediately configured by ConfigMgr with settings and security updates that are mandated by policy.  Network configuration policy can be enforced by checking that customised VMs are up to scratch using desired configuration management.

That’s a quick view of what’s on offer.  As you can see, it’s pretty damned powerful.  It also allows you to automate so much.  You can focus on future developments, maybe even get onto that Windows Server 2010/2011 Server beta 😉