Virtualisation Memory Over Commitment

Working in the server hosting business I'm used to "VPS" terms like over commit, burstable, etc.  What they mean is that although your virtual machine is granted 4GB RAM (for example), it is only ever given whatever it is actually using.  The idea is that the hosting provider might have 29GB RAM available for VMs but could possibly sell 40GB on that host machine.  You can see how this would be attractive to anyone.  Let's face it, we tend to spec servers based on peak requirements, not average ones.  A web server might have 2GB RAM but it probably only uses 1GB of that 95% of the time.  Wouldn't this be appealing in testing labs, development farms and enterprise virtualisation deployments?  But what happens if the VM with 4GB of RAM can't burst to 4GB when it needs it?  What if too many VMs are bursting at once, or the hosting company abuses over commitment?  The best-case scenario is that the host machine starts to page like crazy.  The worst-case scenario is that VMs start to blue screen when the RAM they believe to be available cannot be accessed.  At work, our virtualisation solution (Hyper-V) doesn't have this feature, and even if it did, I'd be very conservative about using it.
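To make the numbers concrete, here's a toy model of the scenario above – 29GB of physical RAM sold as 40GB of VM allocations. The VM counts and per-VM demand figures are made up for illustration; real hypervisors obviously do far more than sum a list, but the failure mode is exactly this arithmetic.

```python
def overcommit_ratio(allocated_gb, physical_gb):
    """Ratio of RAM promised to VMs versus RAM actually installed."""
    return allocated_gb / physical_gb

def demand_fits(demands_gb, physical_gb):
    """True if the RAM the VMs are actually using fits in physical RAM."""
    return sum(demands_gb) <= physical_gb

# 40GB sold on a 29GB host: oversold by roughly 1.38x.
ratio = overcommit_ratio(40, 29)

# Average case: ten 4GB VMs each using about half their allocation.
calm = [2.0] * 10

# Burst case: every VM demands its full 4GB at once.
burst = [4.0] * 10

print(demand_fits(calm, 29))   # True  - the host is fine
print(demand_fits(burst, 29))  # False - the host pages, or worse
```

The gap between the two cases is the whole bet: overcommitment is free money while demand averages out, and a paging (or blue-screening) host the moment it doesn't.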

That’s why I read this article with interest.  Let me preface this by saying that I’ve found this blogger, in my opinion (i.e. not fact), to have a slanted viewpoint.

The blogger talks about the Burton Group and how they compare/measure virtualisation solutions for the enterprise.  They have 27 requirements and a number of preferred standards.  Yes, they rate VMware above Hyper-V.  Fair enough.  I'd agree that VMware have been in this market longer and have a more mature solution.  It might not be the right solution for me right now, but it has been around longer and has had more time to develop.  VMware do have more features.  For example, VMware has memory over commitment of sorts.  Hyper-V does not.  MS did try to add it to W2008 R2 but had to pull it very late (pre-beta) for whatever reason.  I suspect they didn't feel they had time to get it perfect before the release date.  Instead of releasing a nearly perfect solution, they waited to ensure something critical like this would be right.

One of the really cool things VMware does is power management: consolidating VMs onto fewer hosts with VMotion and then putting the idle hosts to sleep.  It's like Core Parking across host servers.

The blogger says that one of the preferred features, memory over commitment, should be a requirement.  Oh really?  Let's just analyse this for a second.  Would it save companies money?  Absolutely.  With server costs exploding in the last 12 months, the fewer of them we have to buy, the better.  Is memory over commitment supported in production?  Oh – no it isn't, at least not by VMware.  I guess that puts a dampener on that.

Would I like to see memory over commitment supported in production?  Yes.  I’d love it.  But it isn’t right now so I guess it shouldn’t be a requirement for any measure of virtualisation suitability for the enterprise.
