Virtualization shoot-out: Citrix, Microsoft, Red Hat, and VMware

This 7-page article on InfoWorld makes for an interesting read.  And it appears to me that the author was doing his best to be fair when comparing Hyper-V, XenServer, vSphere, and Red Hat.  In the end, he appears to favour vSphere slightly more than Hyper-V for two reasons:

  1. Simplicity of set up
  2. Management

I will concede on point #1.  I’ve done vSphere and, as you may not have noticed, I am a wee bit of a Hyper-V fan.  When it comes to setup, vSphere is easier, mainly because it is a virtualisation platform and nothing else.

On the management side, if you look 100% at the virtualisation slice of the pie, then you might concede that vSphere has the tiniest of edges.  The author picked on Microsoft for adding complexity to the management setup by using several tools.

Let me ask you a question: Why do businesses have IT?  Is it so they can own servers, switches, routers, disks, and firewalls?  Or is it because they want applications to enable the business to carry out operations and make profit?  Hopefully it is the latter … otherwise you work for a soon-to-be dot.bomb.

Microsoft have observed why businesses have IT and have developed their management stack to cater for the entire computing stack, not just virtualisation.  I’ve bleated on about that over and over so I’ll leave it there.

As an MS partner, I like Hyper-V because it brings the possibility of selling other licenses and services such as enterprise monitoring, backup, automation, and so on.  My relationship with the customer does not end after I sell some servers/storage and some virtualisation licenses.

Give the report a read for yourself.  Interestingly, he seems to reckon all the solutions are excellent.

Interesting Survey Results on Behalf of Veeam

I’ve just read these stats on TechCentral.ie in an article called “IT departments lack visibility in virtualisation”.  They are from a survey “carried out by Vanson Bourne on behalf of VMware management solutions provider Veeam Software”.

  1. Nearly half (49%) of firms that use virtualisation say they have delays in resolving IT problems because of a lack of visibility into their whole IT infrastructure
  2. Forty five per cent of respondents also said that the lack of visibility is slowing down their organisation’s adoption of virtualisation
  3. Eighty per cent of respondents currently use specialist tools but would prefer to use traditional enterprise-wide management tools
  4. Seventy-one per cent said they had difficulty managing the VMware vSphere and Microsoft Hyper-V hypervisors from a single console …
  5. … while 68% wanted a single dashboard for managing them both

The stats quoted are from TechCentral.ie so please check out their site for IT news.

Let’s quickly deal with the stats one by one:

  1. Want visibility into your infrastructure?  I’m curious to see how VMware will accomplish that.  They make great virtualisation software but that’s where they stop.  On the other hand, Microsoft System Center will audit and report on your infrastructure hardware and software (Configuration Manager) and monitor your hardware and applications (Operations Manager).
  2. Use the Microsoft Assessment and Planning Toolkit, ideally combined with System Center, and you have (a) the tools to figure out what you have, (b) a way to figure out what your virtualisation infrastructure will be, and (c) a way to do the conversion process.
  3. See System Center.  vSphere will give you great VMware virtualisation management.  But as anyone who really knows what private cloud computing is will tell you, the business doesn’t care about the infrastructure – they care about the business application that lives on top of it.  You need complete end-to-end and top-to-bottom management, including deployment, configuration, auditing, policy management, virtualisation, monitoring (traditional and client perspective), backup/recovery, and maybe even other things, covering everything from the network/hardware to the web app/database running on top of everything.
  4. Understandable.  VMware are adding Hyper-V support.  VMM 2008 R2 manages vSphere (but not all features).  VMM 2012 will add more vSphere support in addition to Xen.  But vSphere 5 isn’t far away.  I’ll be honest, I don’t think any management solution will have 100% feature management completeness of all virtualisation platforms, but maybe we can get close to it.
  5. See #4

MS Partner Event: Server Licensing in a Virtual Environment

I’m at an MS partner briefing day in Dublin.  The focus is on licensing in a virtualised environment.  I’ve spent most of the last 3 years in a hosting environment with SPLA licensing.  This will give me an opportunity to start getting back in touch with volume licensing.

  • Good News: we got key-shaped 8GB USB sticks with the Hyper-V logo :)
  • Bad News: sales and marketing are coming in to talk to us :(  I guess we have to take the bad with the good ;)

Ideal Process

  1. Technical expert assesses the infrastructure.
  2. Technical expert designs the virtualisation solution.
  3. Licensing specialist prices the requirements and chooses the best licensing.

Definitions

  • Virtual Machine: encapsulated operating environment
  • Instance of software: Installed software, ready to execute.  On a physical hard disk or VHD.  On a server or copied to a SAN.
  • Processor: Socket, physical processor
  • Core: logical processor contained within a physical processor.  For example, 4 cores in a quad core processor.
  • OSE: Operating System Environment.
  • POSE: Physical operating system environment, installed on a physical server.
  • VOSE: Virtual operating system environment.

Licensing

  • You only have to license running instances.  Powered down VMs do not need to be licensed.
  • This guy is saying that OEM licensing with Software Assurance is not tied to the hardware.  I guess I’ll have to take his word for that … but I’d be sure to verify with a LAR beforehand!
  • Live migration: you can move a VM between hosts as long as the host is adequately licensed.  Exception: application mobility on server farms.  Otherwise, licenses may only be moved after 90+ days (no details given).
  • CALs need to be bought for VOSEs.  Usually don’t need CALs for the POSE unless the POSE is providing direct services to users, e.g. you are silly and make your Hyper-V host into a file server.

Licensing Applications Per CPU

In the standard editions, you license the CPUs of the OSE.  For example, in a VOSE you count the vCPUs.  In a POSE, you count the pCPUs.

In the Enterprise/Datacenter editions, you license the host pCPUs.  There are benefits that cover more than one VOSE.  Enterprise usually covers 4 VOSEs (e.g. SQL), and Datacenter (if all pCPUs are licensed, with a minimum of 2) covers all VOSEs.

Simplicity vs Flexibility

We want simple licensing.  MS is claiming that the dynamic nature of virtualisation requires flexibility, and this is an opposing force to simplicity.

Predictable:

  • Standard: least flexible
  • Enterprise: flexible but limited
  • Datacenter: flexible and unlimited

SQL Licensing

God only knows!  The MS folks in the room cannot agree.  Ask your LAR and your local MS office licensing specialists.  The topic of 2008 rights (Enterprise covered all VOSEs) vs 2008 R2 rights (Enterprise covers 4 VOSEs) is debated.  One side says that 2008 rights ended with the release of 2008 R2.  The other side says they remain as long as you licensed SQL 2008 prior to the 2008 R2 release with per processor licensing, or you bought instances with maintained Software Assurance.  There’s no firm answer so we break for lunch.

OK, there is a discount process.  You can license per processor based on virtual CPUs or physical CPUs.  For example, if you have 1 vCPU in a VM on a host with quad core processors, then you can buy 1 per processor license for that vCPU.  If you have 4 vCPUs in a VM on a host with quad core processors, then that VM runs on 1 pCPU, so you can buy 1 per processor license for the pCPU.  If you have 2 VMs, each with 4 vCPUs, on a host with a single quad core processor, then you buy 2 per processor licenses: each VM runs on a single pCPU and you must license each installation (1 pCPU * 2 VMs = 2 per processor licenses).
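To make that arithmetic concrete, here is a rough sketch in Python of the discount rule as I understood it from the session.  The function name and the rounding behaviour are my own illustration, not official licensing guidance – always verify with your LAR.

```python
import math

def per_proc_licenses(vcpus: int, cores_per_pcpu: int) -> int:
    """Sketch of the per-processor discount as described above: a VM's
    vCPUs map onto physical processors, so license ceil(vCPUs / cores
    per pCPU) processors, with a minimum of 1. Not official guidance."""
    return max(1, math.ceil(vcpus / cores_per_pcpu))

# 1 vCPU VM on a quad-core host: 1 per processor license
print(per_proc_licenses(1, 4))      # 1
# 4 vCPU VM on a quad-core host: the VM fits on 1 pCPU, so 1 license
print(per_proc_licenses(4, 4))      # 1
# Two 4-vCPU VMs: each installation is licensed, so 2 licenses in total
print(2 * per_proc_licenses(4, 4))  # 2
```

The point of the sketch is simply that the licensing counter is the VM, not the host, when you use the vCPU discount.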

If licensing per POSE (host) then you must license each possible host that may run your SQL VMs.  So, you could use Failover Clustering’s preferred hosts option for your SQL VMs, set up a few preferred hosts in a cluster, and license those hosts.  And remember to take advantage of the CPU discount process.

Server

You can freely reassign a license within a server farm.  Microsoft defines a server farm in terms of time zones, e.g. 3 hours for North America, and 5.5 hours for Europe and the Middle East.

I’m not doing the std, ent, datacenter stuff because it’s done to death.

Most Common Mistakes

  • Virtualising more than 4 VMs when using Enterprise Server edition
  • Under-licensing when using Live Migration or VMotion
  • Under-licensing of server application versions, e.g. SQL Standard instead of SQL Enterprise, for hosts when using Live Migration or VMotion
  • Selling OEM/FPP to customers who want live migration … they either need volume licensing (with/without Software Assurance) or they should have OEM licensing with Software Assurance.

This is where the speaker warns us to never trust someone who claims to fully understand MS licensing rules.  Always qualify the answer by saying that you need to verify it.

VDI

If you have non-SA, legacy or thin clients, then you can use the VDA license for VDI.  If you have SA then your Enterprise licensing entitles you to 4 VMs per licensed desktop machine, and you can place those VMs on a virtualisation host.

The VDI standard suite includes a bunch of management systems (SCVMM, SCOM, SCCM, and MDOP) and an RDS license for delivering user access to the VMs.  The VDI enterprise suite extends this by offering unrestricted RDS licensing to allow the user to access both VDI and terminal servers.  You also get App-V for RDS.

Scenarios

If you are running things like SQL, then you may need to consider live migration or VMotion.  There was a real-world example based on VMware: 24 possible hosts (4 CPUs each), 295 VMs, and 36 of those running SQL.  How do you license?  For Windows Server, the best scenario is to buy 96 Datacenter edition licenses.  For SQL, the actual solution (MS, customer, lawyers, etc. involved) was to create a cluster of 4 hosts.  That SQL cluster of 4 hosts was licensed with SQL Datacenter edition.  That limited costs and maximised compliance.
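The arithmetic behind that example is simple enough to sketch.  The host counts are from the session; the per-pCPU treatment of Datacenter edition is as described earlier in my notes, so treat this as an illustration rather than a quote-ready answer.

```python
hosts, cpus_per_host = 24, 4

# Windows Server Datacenter is licensed per pCPU and covers all VOSEs on a
# host, so licensing every CPU of every possible host covers the 295 VMs
# wherever Live Migration/VMotion places them.
windows_datacenter_licenses = hosts * cpus_per_host
print(windows_datacenter_licenses)  # 96

# Restricting the 36 SQL VMs to a 4-host sub-cluster shrinks the SQL
# licensing scope to just that sub-cluster's pCPUs.
sql_hosts = 4
sql_pcpus_to_license = sql_hosts * cpus_per_host
print(sql_pcpus_to_license)  # 16
```

Licensing 16 pCPUs for SQL instead of all 96 is where the cost containment in that scenario came from.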

Summary

That was an informative session.  The presenter did a good job.  He was accepting of being challenged and seemed to enjoy the 2-way conversation that we had going on.  If you are a partner and get an invite for this type of session, register and go in.  I think you’ll learn something.  For me, the day flew by, and that’s always a good sign.  I can’t say I understood everything and will retain it all.  I think that’s just the nature of this EU treaty-like complexity.

It seems to me that MS licensing for virtualised environments conflicts directly with the concepts of a dynamic data centre or private cloud computing.  For example, SCVMM 2012 gives us elasticity.  SCVMM SSP 2.0 gives us complete self-service.  System Center makes it possible to automatically deploy VMs based on user demand.  IT lose control of licensing that’s deployed in the private cloud because we’re handing over a lot of that control to the business.  What’s to stop the owner of a dozen VMs from deploying SQL, BizTalk, and so on, especially if we are doing cross charging which assumes they have an IT budget to spend?

Microsoft licensing rules assume complete control and oversight.  We don’t have that!  That was tough in the physical world; it’s impossible in the virtual world.  We might deploy VMs onto the “non-SQL” Hyper-V or vSphere cluster but the owners of those VMs can easily go and install SQL or something else on there that requires per-host licensing (for cost savings).  This pushes you back to per-VM licensing and you lose those cost savings.

I think MS licensing needs to think long and hard about this.  The private cloud is about to take off.  We need things to be simplified, which they are not.  On the contrary, I think virtualised licensing (on any of the hypervisors) is more complicated than ever, considering the dynamic nature of the data centre which is made possible by the great tools made by the likes of Microsoft, VMware, and Citrix.

On the positive side, if you understand this stuff, and put it to work, you can really save a lot of money in a virtualised environment.  The challenge is that you have to maintain some very tight controls.  It’s made me reconsider how I would look at designing Hyper-V/vSphere clusters.

What to Expect From SCVMM 2012

Microsoft released more details about System Center Virtual Machine Manager 2012 at TechEd Europe 2010 last week.  The release is scheduled for H2 2011.  In the meantime, the next VMM release will be Service Pack 1 for VMM 2008 R2, probably 30 days after the release of SP1 for Windows Server 2008 R2 (to give us Dynamic Memory support).  The server SP is estimated to be RTM in March 2011.


Interestingly, and very powerfully, we’re told that VMM 2012 will have the ability to build Hyper-V hosts and host clusters.  Storage and network (VLAN tags and IP ranges) can also be provisioned!  Wow – VMM will become the first thing you need (or really, really want) to install in a Hyper-V deployment!

VMM is moving to being a private cloud product (management and provisioning) rather than just a virtualisation management solution.  Provisioning is more than just pushing out VMs.  It involves deploying services, as well as configuring storage and networking.  Service templates are at the heart of that.  We’ve seen the demos before; you define an application architecture (web servers, database servers, network, etc), define how to scale (server elasticity), and then deploy that service template to deploy the servers and roles.  The elasticity gives you dynamic growth, a key component of cloud computing.

You can deploy three types of service to VMs in a service template:

  • MSDeploy: web apps
  • Server App-V: virtualised services
  • Database apps

Application deployment improvements include custom scripting support.  You can also specify roles/features to enable in Windows Server in the hardware template.

Let’s not knock management.  Long time readers know I’m an IT megalomaniac.  I want complete control and knowledge over my systems.  MS aren’t stupid.  They know that medium and large companies will have a mix of hypervisors.  And that’s why the 2012 release includes additional support for XenServer.

Virtualisation is the foundation of new IT infrastructures, and hence the line-of-business applications, and even the business!  And that’s why the VMM service needs to be made highly available.  That’s not possible now.  We can cluster file services (library) and database (service data and library metadata) but not the service.  The 2012 release changes that.

The delegation model is expanded:

  1. VMM Administrator: manage everything
  2. Delegated Administrator: manage delegated infrastructure
  3. Cloud Manager: manage a delegated cloud and provision it into sub-clouds
  4. Self-Service User: deploy and manage virtual machines in sub-clouds

The outlook is cloudy.  Everything refers to clouds in the interface.  Get over the new ribbon interface and you’ll see that the navigation bar in the VMs and Services view has the traditional Host Groups and a new Clouds section.

A cloud is made up of other clouds, VMware resource pools, or host groups.  You will add one or more networks to a cloud.  You can add load balancer templates to clouds.  Different kinds of storage (high or low performance, for example) can be specified.  Ah – a change I want: now you can specify read-only and read-write library shares.  This has been an all-or-nothing thing up to now.  Maybe we don’t want to allow self-service users to store VMs in the library.  Storage is not cheap!  We can specify quotas for the number of virtual machines, vCPUs, RAM, and storage.  We can also specify whether VMs can be made highly available or not (on a cluster).

I am looking forward to the beta and testing the new functionality out.

Hyper-V Cloud Fast Track

Microsoft has announced a partnership program to deliver Hyper-V based private cloud solutions called Hyper-V Cloud Fast Track Partners.  It sounds like a marketing thing for buying a bundle of hardware and software to me.  Nothing exclusive is included.

There is some documentation available on how to build a private cloud (using Hyper-V, VMM and SCVMM SSP 2.0).  You’ll find that to be of a bit more interest.

VMware Pricing Not Helping Them Renew Contracts

If you follow any Microsoft feeds related to virtualisation then you’re bound to read this story in the next couple of days.  It describes how many UK organisations are choosing to dump VMware in favour of XenServer or Hyper-V because of VMware’s pricing.

VMware are trying to say that the costs are the same … but they aren’t.  Hyper-V is effectively free.  VMM and OpsMgr are additional costs but they are less than vSphere, and they give you a complete stack management system rather than just a virtualisation layer.  Buy the SCMS bundle and you get backup in the form of DPM, and the other System Center products too!

Timing is everything.  It’s around 3 years since virtualisation really took off.  I’m told that VMware’s contracts are for 3 years.  That means that there’s an opportunity to make that move … stick in a Hyper-V cluster with VMM, run V2V conversions, and pretty soon afterwards you’ll have a fully operational cluster at a fraction of the cost.

Mastering Hyper-V Deployment Excerpts

Sybex, the publisher of Mastering Hyper-V Deployment, have posted some excerpts from the book.  One of them is from Chapter 1, written by the excellent Patrick Lownds (Virtual Machine MVP from the UK).  As you’ll see from the table of contents, this book is laid out kind of like a Hyper-V project plan, going from the proposal (Chapter 1), all the way through steps like assessment, Hyper-V deployment, System Center deployment, and so on:

Part I: Overview.

  • Chapter 1: Proposing Virtualization: How to propose Hyper-V and virtualisation to your boss or customer.
  • Chapter 2: The Architecture of Hyper-V: Understand how Hyper-V works, including Dynamic Memory (SP1 beta).

Part II: Planning.

  • Chapter 3: The Project Plan: This is a project with lots of change and it needs a plan.
  • Chapter 4: Assessing the Existing Infrastructure: You need to understand what you are converting into virtual machines.
  • Chapter 5: Planning the Hardware Deployment: Size the infrastructure, license it, and purchase it.

Part III: Deploying Core Virtualization Technologies.

  • Chapter 6: Deploying Hyper-V: Install Hyper-V.
  • Chapter 7: Virtual Machine Manager 2008 R2: Get VMM running, stock your library, enable self-service provisioning.  Manage VMware and Virtual Server 2005 R2 SP1.
  • Chapter 8: Virtualization Scenarios: How to design virtual machines for various roles and scales in a supported manner.

Part IV: Advanced Management.

  • Chapter 9: Operations Manager 2007 R2: Get PRO configured, make use of it, alerting and reporting.
  • Chapter 10: Data Protection Manager 2010: Back up your infrastructure in new, exciting ways.
  • Chapter 11: System Center Essentials 2010: More than just SCE: Hyper-V, SBS 2008 and SCE 2010 for small and medium businesses.

Part V: Additional Operations.

  • Chapter 12: Security: Patching, antivirus, and where to put your Hyper-V hosts on the network.
  • Chapter 13: Business Continuity: A perk of virtualisation – replicate virtual machines instead of data for more reliable DR.

Survey on How Irish Companies Would Spend IT Budget

TechCentral.ie did a small survey on how Irish organisations would spend their IT budget.  The question they were asked was “If you had 50% of your total IT budget to spend on one area alone, what would it be?”

The results were:

  • Infrastructure: 61%
  • Virtualisation/(Public/Private)Cloud Computing: 24%
  • Applications: 15%

I was somewhat surprised by the results, and not at the same time.  Here’s why.

Everything we’ve been hearing since the recession started in 2008 (the slide really started in August 2007) is that businesses could optimise their operations by implementing business intelligence applications to improve their decision making.  These are big projects costing hundreds of thousands and even millions of Euros.  But this survey tells us that Irish IT would spend only 15% of their budget on this area.  This surprised me.

Cloud computing/virtualisation still brings in a quarter of the budget.  One would expect that everyone should have something done on the virtualisation front by now.  It’s clear that even a small virtualisation project can save an organisation a lot of money on hardware support contracts and power consumption (remember that we were recently ranked as the second most expensive country in Europe to buy electricity in, and we have an additional 5% Green Party tax coming for power).  Getting 10:1 consolidation ratios will drive that bill down.  Those on an EA or similar subscription licensing can even see similar consolidation of their MS licensing, especially with Hyper-V or XenServer.  Putting that argument to a financial controller in a simple 1-page document will normally get a quick approval.
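As a back-of-envelope example of that 1-page argument (every number below is an assumption for illustration, not a figure from the survey):

```python
physical_servers = 100   # assumed size of the existing physical estate
ratio = 10               # the 10:1 consolidation ratio mentioned above
hosts_needed = physical_servers // ratio   # 10 virtualisation hosts

watts_per_server = 400   # assumed average draw of each retired server
hours_per_year = 24 * 365
kwh_saved = (physical_servers - hosts_needed) * watts_per_server * hours_per_year / 1000
price_per_kwh = 0.15     # assumed electricity price in EUR/kWh
print(f"~{kwh_saved:,.0f} kWh/year, ~EUR {kwh_saved * price_per_kwh:,.0f}/year saved")
# ~315,360 kWh/year, ~EUR 47,304/year saved
```

Even before you count hardware support contracts or licensing consolidation, that sort of figure is what gets a financial controller’s attention.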

But I’m finding that many have either not done any virtualisation at all yet, or have literally just dipped their toes in the water by deploying one or two standalone hosts as point solutions, a minor part of a mainly physical server infrastructure.  There is still a lot of virtualisation work out there.  And as regular readers will know, I see a virtualisation project as being much more than just Hyper-V, XenServer, or ESX.

61% of respondents said they would spend 50% of their budget on infrastructure.  That could mean anything to be honest.  I expect that most servers out there are reaching their end-of-life points.  Server sales have been pretty low since 2007.  We’re in the planning stages for 2011.  Three-year-old hardware is entering the final phases of support from its manufacturers.  Those with independent servicing contracts will see prices rise significantly because replacement components will become more expensive and harder to find, thus driving up costs and risks for the support service providers.

I was at a HP event in 2008 where we were told that the future in hardware was storage.  I absolutely agree.  Everyone I seem to talk to has one form of storage challenge or another.  Enterprise storage is expensive and it’s gone as soon as it is installed.  Virtualisation requires better storage than standalone servers, especially if you cluster the hosts and use some kind of shared storage.

DR is still a hot topic.  The events of 2001 in New York and the later London bombings did not have the same effect here as they did in those cities or countries.  People are still struggling.  Virtualisation is making DR easier (it’s easier to replicate storage or VHD/VMDK files than to replicate an N-tier physical application installation) but there is a huge technical and budget challenge when it comes to bandwidth.  Our electricity is expensive but that’s nothing compared to our bandwidth.  For example, an (up to) 3Mb domestic broadband package (with phone rental) is €52/month in Ireland, where available.

The thing that I believe is missing is systems management.  I recently wrote in a document that an IT infrastructure is like a lawn.  If you manage it then it is tidy and under control.  If you don’t, then it becomes full of weeds and out of control.  Eventually it reaches a point where it is easier to rip out the lawn completely and reseed it, taking up time and money.

Before virtualisation was a hot topic, and while I was still contracting before going into the cloud/hosting business, most organisations here were clueless when it came to systems management.  Many considered a continuous ping to be monitoring.  Others would waste money and effort on dodgy point solutions to do things like push out software or audit infrastructure.  Those who bought System Center failed to hire people who knew what to do with it, e.g. I twice trained junior helpdesk contractors in a bank (that I now indirectly own shares in because I’m a tax payer) to use SMS 2003 R2 to deploy software.  They were clueless at the start and remained that way because they were too junior.

Maybe those organisations realise what mistakes they’ve made and realise that they need to take control.  Many virtualisation solutions will be mature by now.  That means people have done the VMware ESX thing and had VM sprawl.  They’ve also learned that vSphere, just like Microsoft’s VMM by itself, is not management for a complete infrastructure.  You need to manage everything, including the network, servers, storage, virtualisation, operating systems, services, and applications.

EDIT:

I think there’s also a growing desire to deal with the desktop, for much the same reasons as I mentioned with the server.  Desktops right now are running possibly 5-year-old XP images.  A lot of desktop hardware out there is very old.  There are business reasons to deploy a newer operating system like Windows 7.  Solutions like session virtualisation, application virtualisation, desktop virtualisation, and client virtualisation are all opening up new opportunities for CIOs to tackle technical and business issues.  The problem for them is that all of this is new technology and they don’t have the know-how.

There is a lot of potential out there if you’re in the services industry.  But maybe all of this is moot.  We’re assuming people have a budget.  Heck, Ireland might not even have an economy after this week!

Network Security in the Hypervisor

I just read an interesting article that follows up some presentations at VMworld.  It discusses the topic of security in the hypervisor (ESX in this case) – the author actually focuses solely on network security.  Other aspects such as policy, updating, etc. are not discussed.

The author asks 4 questions:

Q) Security is too complicated, and takes too many separate devices to configure/control.
A) Yes – and I agree, sort of.

Security should be simple.  It isn’t.  It requires too many disparate point solutions.  Let me step back a moment.  Why do I like Windows, AD, System Center, Hyper-V, etc?  It’s because they are all integrated.  I can have one tidy solution with AD being the beating heart of it all.  And that even includes security systems like WSUS/ConfigMgr (update management), NAP (policy enforcement), BitLocker/BitLocker To Go, device lock downs on personal computers, remote access (DirectAccess or VPN via RADIUS/IAS) etc.

Things start to fall apart for network security.  Sure, you can use whatever ISA Server is called these days (sorry Forefront; you are the red-headed stepchild in Redmond, locked away where no one knows you exist).  Network security means firewall appliances, IDS systems, VPN appliances, VPN clients that make every living moment (for users and admins) a painful existence, etc.  None of these systems integrate.

To VMware’s credit, they have added vShield into their hypervisor to bring firewall functionality.  That would be fine for a 100% virtual or cloud environment.  That’s the sort of role I had for 3 years (on ESX and Hyper-V).  I relied on Cisco admins to do all the firewall work in ASA clusters.  That was way out of my scope and it meant deployments took longer and cost more.  It slowed down changes.  It added more systems and more cost.  A hypervisor-based firewall would have been most welcome.  But I was in the “cloud” business.

In the real world, we virtualization experts know that not everything can be virtualized.  Sometimes there are performance, scalability, licensing, and/or support issues that prevent the installation of an application in a virtual machine.  Having only a hypervisor based firewall is pretty pointless then.  You’d need a firewall in the physical and the virtual world.

Ugh!  More complications and more systems!  Here’s what I would love to see (I’m having a brainfart) …

  • A physical firewall that has integration in some way to a hypervisor based firewall.  That will allow a centralized point of management, possibly by using a central policy server.
  • The hypervisor firewall should be a module that can be installed or enabled.  This would allow third parties to develop a solution.  So, if I run Hyper-V, I’d like to have the option of a Checkpoint hypervisor module, a Microsoft one, a Cisco one, etc, to match and integrate with my physical systems.  That simplifies network administration and engineering.
  • There should be a way to do some form of delegation for management of the hypervisor firewall.  In the real world, network admins are reluctant to share access to their appliances.  They also might not want to manage a virtual environment which is rapidly changing.  This means that they’ll need to delegate some form of administrative rights and limit those rights.
  • Speaking of a rapidly changing virtual environment: A policy mechanism would be needed to allow limited access to critical VLANs, ports, etc.  VMs should also default to some secure VLAN with security system access.
  • All of this should integrate with AD to reuse users and groups.

I reckon that, with much more time, this could be expanded.  But that’s my brain emptied after thinking about it for a couple of minutes, early in the morning, without a good cup of coffee to wake me up.

Q) Security now belongs in the hypervisor layer.
A) Undecided – I would say it should reside there but not solely there.

As I said above, I think it needs to exist in the hypervisor (for public cloud, and for scenarios where complicated secure networks must be engineered, and to simplify admin) and in the physical world because there is a need to secure physical machines.

Q) Workloads in VMs are more secure than workloads on physical systems.
A) Undecided – I agree with the author.

I just don’t know that VMs are more secure.  From a network point of view, I don’t see any difference at all.  How is a hypervisor-based firewall more secure than a physical firewall?  I don’t see the winning point for that argument.

Q) Customers using vShield can cut security costs by 5x compared to today’s current state-of-the-art, while improving overall security.
A) Undecided – I disagree with VMware on this one.

A physical security environment is still required to protect physical infrastructure.  That cost is going nowhere.

This is all well and good, but it all forgets about security being a 3D thing, not just the single dimension of firewall security.  All those other systems need to be run, ideally in an integrated management and authentication/authorisation environment such as AD.