I’ll be on Talk TechNet

I’m going to be a guest on Microsoft’s Talk TechNet webcast on April 27th at 9am PST

“Talk TechNet is all about discussing topics and trends in the world of IT Professionals.  In this show we’ll have guest Aidan Finn. Call in and join us for what promises to be a lively 60 minute session.  Get some burning questions answered on Virtualization.

Presenters: Keith Combs, Sr. Program Manager, Microsoft Corporation, Matt Hester, Sr. IT Pro Evangelist, Microsoft Corporation, and Aidan Finn, Microsoft Virtualization MVP”

It should be interesting … hopefully you’ll be able to tune in!

Will Hardware Scalability Change Microsoft Virtualisation Licensing?

One of the nice things about virtualising Microsoft software (on any platform) is that you can save money.  Licensing options like Windows Server Datacenter, SMSD, SQL Server Datacenter, or ECI give you various bundles that license the host and all running virtual machines on that host.

Two years ago, you might have said that you’d save a tidy sum on licensing over a 2 or 3 year contract.  Now we have servers where the sweet spot is 16 processor cores and 192 GB of RAM.  Think about that: that’s up to 32 vCPUs of SQL Server VMs (pending assessment, using the 2:1 ratio for SQL).  Licensing just two pCPUs could cover all of those VMs with per-processor licensing, dispensing with the need to count CALs!

And it’s just getting crazier.  The HP DL980 G7 has 64 pCPU cores.  That’s up to 128 vCPUs that you could license for SQL (using the 2:1 ratio).  And I just read about an SGI machine with 2048 pCPU cores and 16 TB of RAM.  That sort of hardware scalability is surely just around the corner for ordinary business computing.

And let’s not forget that CPUs are growing in core counts.  AMD have led the way with 12 core pCPUs.  Each of those gives you up to 24 SQL vCPUs.  Surely we’re going to see 16 core or 24 core CPUs in the near future.
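The arithmetic above is easy to sketch.  Here’s a back-of-envelope calculator for it (illustrative only – the socket counts I’ve assumed, like 2 × 8 cores for the “sweet spot” box and 8 × 8 for the DL980 G7, are my own guesses, and real licensing terms always need checking):

```python
# Back-of-envelope maths for per-processor SQL Server licensing on a
# virtualisation host.  Illustrative only; real licensing terms vary.

def licensable_vcpus(pcpu_sockets: int, cores_per_socket: int,
                     sql_vcpu_ratio: float = 2.0) -> int:
    """vCPUs of SQL Server VMs a host could carry under the 2:1 ratio,
    while per-processor licences are counted per socket."""
    return int(pcpu_sockets * cores_per_socket * sql_vcpu_ratio)

# The "sweet spot" server: assume 2 sockets x 8 cores = 16 cores.
print(licensable_vcpus(2, 8))    # 32 vCPUs from two per-proc licences

# HP DL980 G7 with 64 cores: assume 8 sockets x 8 cores.
print(licensable_vcpus(8, 8))    # 128 vCPUs

# A single 12-core AMD pCPU on its own.
print(licensable_vcpus(1, 12))   # 24 vCPUs
```

The point the numbers make: the licence count tracks sockets while the workload capacity tracks cores, so the gap grows with every new CPU generation.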

Will Microsoft continue to license their software based on sockets while others (IBM and Oracle) count cores?  Microsoft will lose money as CPUs grow in capacity.  That’s for certain.  I was told last week that VMware have shifted their licensing model away from per-host licensing, acknowledging that hosts can carry huge workloads.  They’re allegedly moving to a per-VM pricing structure.  Will Microsoft be far behind?

I have no idea what the future holds.  But some things seem certain to me.  Microsoft licensing never stays still for very long.  Microsoft licensing is a maze of complexity that even the experts argue over.  Microsoft will lose revenue as host/CPU capacities continue to grow unless they make a change.  And Microsoft is not in the business of losing money.

RemoteFX Deployment Guides

Microsoft has published guides for deploying RemoteFX.  RemoteFX is a new Windows Server 2008 R2 feature that is added with Service Pack 1 (currently a pre-RTM RC release).  It allows a Windows Server 2008 R2 server to virtualise a graphics card (GPU).  That means that Remote Desktop Services (VDI and Session Hosts aka Terminal Servers) can use a host server’s GPU to process high quality graphics, and stream them down to a “dumb” terminal.  Citrix is also including support for this in their Dazzle.

Network Security in the Hypervisor

I just read an interesting article that follows up on some presentations at VMworld.  It discusses the topic of security in the hypervisor (ESX in this case) – the author focuses solely on network security.  Other aspects such as policy, updating, etc., are not discussed.

The author poses four assertions:

Q) Security is too complicated, and takes too many separate devices to configure/control.
A) Yes – and I agree, sort of.

Security should be simple.  It isn’t.  It requires too many disparate point solutions.  Let me step back a moment.  Why do I like Windows, AD, System Center, Hyper-V, etc?  It’s because they are all integrated.  I can have one tidy solution with AD being the beating heart of it all.  And that even includes security systems like WSUS/ConfigMgr (update management), NAP (policy enforcement), BitLocker/BitLocker To Go, device lock downs on personal computers, remote access (DirectAccess or VPN via RADIUS/IAS) etc.

Things start to fall apart for network security.  Sure, you can use whatever ISA Server is called these days (sorry Forefront; you are the red-headed stepchild in Redmond, locked away where no one knows you exist).  Network security means firewall appliances, IDS systems, VPN appliances, VPN clients that make every living moment (for users and admins) a painful existence, etc.  None of these systems integrate.

To VMware’s credit, they have added vShield into their hypervisor to bring firewall functionality.  That would be fine for a 100% virtual or cloud environment.  That’s the sort of role I had for 3 years (on ESX and Hyper-V).  I relied on Cisco admins to do all the firewall work in ASA clusters.  That was way out of my scope, and it meant deployments took longer and cost more.  It slowed down changes.  It added more systems and more cost.  A hypervisor-based firewall would have been most welcome.  But I was in the “cloud” business.

In the real world, we virtualization experts know that not everything can be virtualized.  Sometimes there are performance, scalability, licensing, and/or support issues that prevent the installation of an application in a virtual machine.  Having only a hypervisor based firewall is pretty pointless then.  You’d need a firewall in the physical and the virtual world.

Ugh!  More complications and more systems!  Here’s what I would love to see (I’m brainstorming here) …

  • A physical firewall that has integration in some way to a hypervisor based firewall.  That will allow a centralized point of management, possibly by using a central policy server.
  • The hypervisor firewall should be a module that can be installed or enabled.  This would allow third parties to develop a solution.  So, if I run Hyper-V, I’d like to have the option of a Checkpoint hypervisor module, a Microsoft one, a Cisco one, etc, to match and integrate with my physical systems.  That simplifies network administration and engineering.
  • There should be a way to do some form of delegation for management of the hypervisor firewall.  In the real world, network admins are reluctant to share access to their appliances.  They also might not want to manage a virtual environment which is rapidly changing.  This means that they’ll need to delegate some form of administrative rights and limit those rights.
  • Speaking of a rapidly changing virtual environment: A policy mechanism would be needed to allow limited access to critical VLANs, ports, etc.  VMs should also default to some secure VLAN with security system access.
  • All of this should integrate with AD to reuse users and groups.

I reckon that, with much more time, this could be expanded.  But that’s my brain emptied after thinking about it for a couple of minutes, early in the morning, without a good cup of coffee to wake me up.

Q) Security now belongs in the hypervisor layer.
A) Undecided – I would say it should reside there but not solely there.

As I said above, I think it needs to exist in the hypervisor (for public cloud, and for scenarios where complicated secure networks must be engineered, and to simplify admin) and in the physical world because there is a need to secure physical machines.

Q) Workloads in VMs are more secure than workloads on physical systems.
A) Undecided – I agree with the author.

I just don’t know that VMs are more secure.  From a network point of view, I don’t see any difference at all.  How is a hypervisor-based firewall more secure than a physical firewall?  I don’t see the winning point for that argument.

Q) Customers using vShield can cut security costs by 5x compared to today’s current state-of-the-art, while improving overall security.
A) Undecided – I disagree with VMware on this one.

A physical security environment is still required to protect physical infrastructure.  That cost is going nowhere.

This is all well and good, but it forgets that security is a 3D thing, not just the single dimension of firewall security.  All those other systems need to be run, ideally in an integrated management and authentication/authorisation environment such as AD.

VMware’s Paul Maritz’s Keynote Comments

I just guffawed out loud.  My co-workers are giving me funny looks.  It’s kinda the same reaction I had when Steve Ballmer proclaimed in his TechEd NA 2010 keynote that MS wanted nothing to do with you if you weren’t into cloud services (shareholders everywhere probably spat up their coffee).

OK: I have a pro-Microsoft stance on a lot of things, and you may have noticed I criticise them too (see above).  You can see what my favoured virtualisation solution is without too much difficulty.  But I do recognise there is a place for VMware (which I have used), Citrix (who have done some cool stuff), and Linux as an OS.

I’ve just read “VMware CEO Paul Maritz has questioned the relevance of the operating system … is a clear indication that operating systems such as Windows and Linux are no longer as important as they once were”.  The general gist of the pitch is that the OS is dead.

Huh!  OK … just what exactly is installed in all those VMs that are running in your Fortune 1000 customer sites?  It is fair to say that Platform-as-a-Service has given developers more options.  The problem with PaaS is that it’s a lock-in mechanism.  Marketing people call it stickiness.  The idea is that your customer cannot leave because it’s too difficult.

Cloud computing such as Infrastructure-as-a-Service (Amazon EC2, etc.) is based on virtualisation, such as VMware ESX, MS Hyper-V, and (primarily) Xen variants.  That’s traditional VM technology that requires an OS.  That OS needs to be installed, secured, managed, updated … all the stuff you do with physical machines.  That workload goes nowhere.  The OS is critical.

If cloud computing takes off (and I really don’t think it’s going to have the full impact that evangelists are proclaiming – see terminal services/Citrix in the late 90’s) then it’s going to be a combined solution…. PaaS and IaaS.  But the OS and associated management are going nowhere.

By the way, I’ve seen some tweets where VMware are pitching View as a complete VDI solution.  OK, so you have some machines that users will log into (more operating systems!!!).  You’ll note that Citrix takes a different view: those operating systems require some form of management and automation.  It just so happens that these are usually the same mechanisms that are available for managing physical PCs.  Without that … can you imagine a PC network of hundreds or thousands of machines with no management?  No centralised patching of the OS.  No application deployment.  No software/license auditing.  And so on.  That’ll lead to a real happy user base 😉

My advice: don’t burn your Windows/Linux books just yet.

HP CloudStart

HP has launched a new “cloud” bundle.  It appears to be based on ProLiant and Integrity blade servers to allow you to build a private cloud.  It comes with the usual options such as Virtual Connect with Flex-10.  The bundle can include VMware or Hyper-V for ProLiants.  HP’s own system will be used for Integrity servers.  So far, it just sounds like any old server/storage kit.  Where’s the cloud?  That comes with a software product called Cloud Service Automation.  There’s little info on it that I can find quickly.  I guess it’s some virtualisation-agnostic job engine for automating the deployment of resources, etc.

The suite is available in Asia and will slowly be made available around the rest of the world by the end of the year.

Citrix XenClient

Citrix has announced/released a type 1 (bare metal) hypervisor for the PC, called XenClient.  This is a product with a very small set of supported hardware.  It must have one of a few Intel processors (AMD is not supported) and the machine must have drivers included in XenClient … that’s because it is a monolithic hypervisor like ESX, instead of a microkernelized hypervisor like XenServer or Hyper-V, which rely on drivers in a parent partition.

The obvious benefit for the user over hosted solutions such as VMware Workstation is performance.  I saw a quick demo of it at PubForum 2010 Frankfurt.  It did look cool but the very limited set of supported hardware was offputting.  Speaking of which:

General Requirements

  • CPU: Intel Core 2 Duo, Intel Core i5, Intel Core i7
  • Graphics: Intel integrated graphics GMA 4500, Intel HD Graphics
  • Memory: 4GB RAM recommended
  • Disk: 160GB recommended
  • Wireless Lan: Intel WiFi Link 5100/5300, Intel Centrino Advanced-N 6200, Intel Centrino Ultimate-N 6300
  • Intel vPro: Highly Recommended
      

Hardware Compatibility List (HCL)  

  • HP EliteBook 6930p, 2530p, 8440p*
  • Dell Latitude E4300, E6400, E4310*, E6410*, E6500, E6510*
  • Dell Optiplex 780
  • Lenovo ThinkPad X200, T400, T500

* Standby and external monitors are not fully supported on these systems. This will be addressed in a near term update.

Heh – My Dell Latitude is supported.  I’m tempted!

Some nice features when combined with a server infrastructure:

  • To enable local virtual machine desktops for laptop users, download and install XenClient and Citrix Receiver™ for XenClient. To enable centralized management of XenClient devices, download and install the Synchronizer for XenClient virtual appliance.
  • Create multiple desktops locally by installing each operating system into a new local virtual machine. Connect to the Synchronizer to download predefined virtual machines hosted by IT.
  • Use XenClient to switch instantaneously between multiple secure desktops, run high-performance graphics, and access business and personal applications. Whether for business or personal use XenClient delivers flexibility and mobility for users with control and security for IT.

There will be two versions.  “XenClient is going to be available in an Express freebie edition that individuals or companies can download and put on as many as ten machines … a fully supported XenClient hypervisor for PCs will only be available in the XenDesktop 4 Enterprise and Platinum Editions … The freebie XenClient Express will have support only through the Web forums at Citrix”.

Now all we need is a client hypervisor from Microsoft that we can optionally synchronise with Hyper-V hosted VDI machines.  That would be cool.  Manage the sucker with ConfigMgr, deploy the VHD’s with WDS, control and secure using BitLocker, AD and Group Policy, etc.

Credit: The Register

Oracle’s Virtualization Package Goes Together Like Bourbon Creams and Baked Beans

Another story from the Register: Oracle is claiming that they have the best unified enterprise server virtualisation solution on the market.  It is comprised of:

  • Oracle VM for x64 (Oracle’s Xen)
  • Oracle VM for Sparc (Sun’s LDoms)
  • OpsCenter (from Sun)
  • Oracle Enterprise Manager

That’s a lot of stuff that’s thrown together.  If I want a unified virtualization solution that is part of a greater systems management solution (flogging a dead horse here) then I go:

  • Hyper-V
  • System Center (VMM, OpsMgr, and maybe DPM and/or ConfigMgr).

That’s one solution that gives me virtualization (for servers and desktops [via XenDesktop]) and enterprise management for the entire IT infrastructure and applications.

Oracle also pushed the Sun (purple) blade package.  Hmm, I think not!  I’ve seen how much purple hardware costs.  I used to be able to buy several fully kitted servers for the price of a single 4GB stick of reconditioned purple RAM.  I giggled a bit when the Oracle marketing pitch made it sound like 10Gbps networking was something that only they could do.

One big gotcha: if you run Oracle software then you need to know that it is not supported on a non-Oracle virtualisation platform.  That means no running Oracle software on Amazon EC2, on Hyper-V, on Citrix XenServer, or on VMware.  But there are stories out there where Oracle customers have threatened to switch to the MS stack and have gotten bespoke support for running the software on non-Oracle virtualisation.

Project Kensho OVF

One of the reasons I love virtual machines is that they are mobile.  Most of them (except those with RDM or passthrough disks) are just files, making them easy to migrate, copy, export, and import.  But this is limited to the same virtualisation platform.  Changing virtualisation platforms requires a tricky V2V process that vendors have made one-way.

Citrix has unveiled a solution with the codename Project Kensho.  It leverages the Open Virtualisation Format (OVF) standard (developed by Citrix, VMware, Dell, HP, IBM and Microsoft) to allow the movement of virtual machines from one virtualisation platform to another.  You can think of OVF as playing the same role as XML in business integration solutions: it’s a stepping stone.

Citrix expects to ship the solution in September.

What does this mean to you?  OVF gives us a standard way to V2V virtual machines between many virtualisation platforms, depending on the support offered by those platforms for OVF.

According to Wikipedia:

“VirtualBox supports OVF since version 2.2.0 (April 2009).  AbiCloud Cloud Computing Platform Appliance Manager component now supports OVF since version 0.7.0 (June 2009).  Red Hat Enterprise Virtualization supports OVF since version 2.2.  VMware supports OVF in ESX 3.5, vSphere 4, Workstation 6.5, and other products using their OVF tool.  OVF version 1.1 is supported in Workstation 7.1 and VMware Player 3.1 (May 2010).  IBM supports OVF format VM images on the POWER server platform for the AIX operating system and Linux, and for Linux on z/VM on the mainframe using IBM Systems Director (via the VMControl Enterprise Edition plug-in, a cross-platform VM manager)”.
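What makes all of that interoperability possible is that an OVF package is just an XML descriptor plus disk files, so any platform that understands the schema can read another platform’s exported VM metadata.  Here’s a stripped-down sketch of the envelope structure, parsed with Python’s standard library (the VM name and file name are made up for illustration, and this is not a complete, schema-valid OVF descriptor):

```python
# A minimal, illustrative OVF-style envelope.  Real descriptors carry
# much more (hardware sections, checksums, ovf:-qualified attributes);
# this sketch only shows the basic shape of the XML.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

sample_ovf = f"""<?xml version="1.0"?>
<Envelope xmlns="{OVF_NS}">
  <References>
    <File href="web01-disk1.vhd"/>
  </References>
  <VirtualSystem>
    <Name>web01</Name>
  </VirtualSystem>
</Envelope>"""

root = ET.fromstring(sample_ovf)
# Every element lives in the OVF envelope namespace, so lookups must
# be namespace-qualified.
name = root.find(f"{{{OVF_NS}}}VirtualSystem/{{{OVF_NS}}}Name").text
disks = [f.get("href") for f in root.iter(f"{{{OVF_NS}}}File")]
print(name, disks)   # web01 ['web01-disk1.vhd']
```

That plain-XML nature is exactly why I compared OVF to XML in business integration above: the descriptor is the neutral stepping stone between platforms, while the disk files still need conversion.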

MS executives have confirmed in the past that VMM v.Next will include added support for XenServer management.  We know MS and Citrix are very tight.  MS staff recommend XenDesktop as a VDI solution and Citrix are currently recommending Hyper-V for virtualisation.  It won’t surprise me to see OVF turning up in Hyper-V v.Next and VMM v.Next.  This would offer huge flexibility:

  • Private cloud made up of many platforms (as found in medium/large organisations)
  • Switching seamlessly between public cloud (would require some form of broker application – there’s a startup opportunity!)
  • Migrating VMs seamlessly between any virtualisation platform in public/private clouds, e.g. develop in-house on Hyper-V with Visual Studio 2010 Lab Management and upload the final VM via OVF to the cloud service provider of choice, no matter what virtualisation solution they use.

It sounds like Nirvana!  I’m sure that there will be niggling things that will cause problems:

  • Licensing: moving a VM with a MSDN license key up to a cloud environment that requires SPLA provided by the hoster will be a mess.
  • Technical: Build a VM with 8 vCPUs on VMware and migrate it to Hyper-V and you’ll lose 4 vCPUs.
  • Technical: VM additions or integration components are virtualisation platform specific.  Something will need to be done to add/remove them seamlessly.

It’s going to take a while, and it might even be impossible for business reasons, to get to an automated, seamless solution.  But OVF will give us something where, with a tiny amount of admin work (product key and addition removal), we will have a format to make virtual machines even more mobile.

Multi-Site Clustering and Virtualisation

I’ve been doing a little research on this topic lately.  Sure, it was from the Hyper-V perspective, but watching a video on VMware’s site about SRM confirmed the facts are similar with ESX.  If you want to create a multi-site cluster for DR/business continuity then you have two big and expensive considerations:

Storage

The replication will probably be performed by your storage system.  These SANs do not come cheap.  The one that looks most interesting to me is the HP LeftHand.  I can’t see any comparable features on the Dell site.  Dell does suck when it comes to educating the market about their products – they provide private briefings only.  Strangely, higher-end systems like EVA/EMC don’t have CSV (and probably VMFS) support yet in this scenario because the LUN can only be active in one site at a time (preventing granular VM migration across the WAN).  NetApp appear to use a snapshot feature for replication, and this could complicate a multi-site cluster design IMO.  Admittedly, I have failed to follow up on opportunities to hook up with them to learn more about their system – apologies if I have things wrong there.

The WAN

The good news is that you will get a very nice lunch/dinner/weekend away from your WAN service provider because of the need for huge bandwidth.  From the Hyper-V perspective, you need 2 × 1 Gbps lines between the primary and secondary sites for Live Migration between them.  You may also need less than 2 ms latency on the line for the synchronous storage replication that supports this.  You can do Quick Migration (it’s still there, luckily) for DR invocation across 100 Mbps lines.  Quick Migration is fine for that emergency scenario.  It’s not ideal, but this is a bandwidth thing – VM memory needs to transfer quickly.
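The bandwidth requirement is easy to sanity-check with rough arithmetic.  This sketch ignores protocol overhead and the memory-page re-copying that Live Migration actually does, and the 8 GB VM is just my example figure, but it shows the order of magnitude:

```python
# Rough arithmetic behind the inter-site bandwidth advice.
# Illustrative only: ignores TCP/protocol overhead and the iterative
# memory-page re-copying a real Live Migration performs.

def transfer_seconds(memory_gb: float, link_mbps: float) -> float:
    """Seconds to push a VM's memory across a link at full wire speed."""
    bits = memory_gb * 8 * 1000**3          # GB -> bits (decimal units)
    return bits / (link_mbps * 1000**2)     # Mbps -> bits per second

# An 8 GB VM over the 2 x 1 Gbps inter-site links (2000 Mbps aggregate):
print(round(transfer_seconds(8, 2000)))    # 32 seconds

# The same VM over a 100 Mbps DR line -- over ten minutes, which is why
# Quick Migration is the fallback there rather than Live Migration:
print(round(transfer_seconds(8, 100)))     # 640 seconds
```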

Software Solutions

A number of them are out there.  Some are point solutions for creating failover clusters (using the Windows feature) between two hosts in different sites (SteelEye).  Some simulate the processes and controls of a Windows Failover Cluster without using the Windows feature (Double-Take).  I’ve seen one solution (I can’t remember the product name) that installs a service on servers with disk and creates an iSCSI SAN with features similar to those of an HP LeftHand.

Advice

Get your accountants ready to sign some big cheques.  No matter what you do, you’re going to need to put in some big bandwidth and that’s going to be a big recurring cost.  The benefit is simple: a single fault tolerant solution for disaster recovery that will work when the company is under the stress of a disaster.

The specifics of your design will be totally dependent on the hardware and software you use.  Make sure you work with a vendor who really knows this stuff.  Look for references.  Don’t just use Honest Bob’s PC Sales because the IT manager is having it off with Bob (I’ve seen that one happen and it ended badly).