Power Utilisation Comparison Of Rack vs Blade Servers

This blog post by Data Center Strategies reports on a publication by HP.  HP compared the power usage of DL rack-mounted servers and BL blade servers.  It was … interesting.  When idle, the blades used significantly less power.  But when busy, there was little difference between them.

So …

  • If you are building a small to medium power-intensive server farm you might be tempted to go with rack servers instead of blades.  There’s a big cost saving to be made.  Server prices have increased over the last year to compensate for the drop in sales … we need fewer physical boxes because we are virtualising.  Server capacity is up, though.
  • Blades do have some nice features.  There’s a lot less cabling and hardware virtualisation enables boot from SAN that turns your physical servers into anonymous replaceable appliances.  All the intelligence is in the chassis and all the OS/data is on the SAN.
  • As committed to blades/SAN as we are at work, there are still times when we’ve found DL rack servers to be more appropriate, both functionally and cost-wise.

I’ve not looked at the cost of the C3000 “Shorty”.  There’s some cool stuff you can now do with their Flex-10 10Gb networking that enables you to use the C3000 for virtualisation.  The C3000 has 8 slots for server, tape and storage blades.  The problem with the Shorty blades is that they only take one mezzanine card.  That means you can’t do complex virtualisation clusters that could require 6 NICs or more per server.  With Flex-10 you get 10Gb networking in the backplane.  You can divide that up and create virtual NICs on your blades.  Potentially (don’t ask me about support for this because I don’t know) you could have 8 NICs per blade for virtualisation … 2 for the parent partition, 2 for the heartbeat, 2 for VMotion/Live Migration and 2 for the virtual switches.  This could be fine in small deployments, e.g. a branch office.  AFAIK, you could then use iSCSI to mount the shared storage for VMFS/CSV.
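Just to make the carve-up above concrete, here’s a rough sketch of how that 10Gb backplane link might be split across 8 virtual NICs.  The role names and the even bandwidth split are purely illustrative assumptions on my part, not an HP-supported Flex-10 layout:

```python
# Hypothetical sketch: carving a Flex-10 10Gb backplane link into
# virtual NICs for a virtualisation cluster blade.  Role names and
# the even split are illustrative assumptions, not a vendor layout.

FLEX10_BANDWIDTH_GBPS = 10.0

# Eight vNICs: two each for the roles mentioned in the post.
vnic_roles = {
    "parent-partition": 2,
    "cluster-heartbeat": 2,
    "live-migration": 2,
    "virtual-switch": 2,
}

def allocate_bandwidth(total_gbps, roles):
    """Split the link evenly across all vNICs, returning Gb/s per role."""
    nic_count = sum(roles.values())
    per_nic = total_gbps / nic_count
    return {role: count * per_nic for role, count in roles.items()}

shares = allocate_bandwidth(FLEX10_BANDWIDTH_GBPS, vnic_roles)
for role, gbps in shares.items():
    print(f"{role}: {gbps:.2f} Gb/s across {vnic_roles[role]} vNICs")
```

In practice you’d probably weight the split (e.g. more bandwidth to live migration than to heartbeat) rather than divide it evenly, but the point is the same: one physical 10Gb pipe presented to the OS as several NICs.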

But you know, if I were building a virtual server farm now with a traditional known growth limit (unlike in hosting, where the growth is hopefully endless) then I’d go with normal rack servers.  There’s a big investment in a blade chassis that is hard to justify now.  On the HP storage side the LeftHand iSCSI stuff looks very tempting for DR implementations.  It is pricey but it would make DR very easy.

EDIT #1

As expected, HP’s marketing was not very happy with this report.  Some investigations were done and it turns out the rack server configurations weren’t on par with the blade comparisons.  The rack servers had only one power supply and had redundant NICs disabled.  Anything that could be done to reduce power consumption had been done.
