Big shocker! Server sales are down thanks to virtualisation! Whoda thunk it! What is actually news is that the type and design of servers that manufacturers are trying to sell has changed.
That Network World article tells us that blade servers with integrated storage (I/O) are the way to go. You can get more blades into a rack than you can with 1U servers. For example, a 42U rack can take 64 * HP BL460c blades or 42 * DL360 rack servers.
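The density arithmetic works out as a quick back-of-the-envelope calculation (this sketch assumes HP's c7000 enclosure, which is 10U tall and holds 16 half-height BL460c blades; the DL360 is a 1U server):

```python
# Rack density comparison: BL460c blades in c7000 enclosures vs DL360 1U servers.
RACK_U = 42

# Blade option: each c7000 enclosure is 10U and takes 16 half-height blades.
ENCLOSURE_U = 10
BLADES_PER_ENCLOSURE = 16
enclosures = RACK_U // ENCLOSURE_U           # 4 enclosures fit in a 42U rack
blades = enclosures * BLADES_PER_ENCLOSURE   # 4 * 16 = 64 blades

# Rack-server option: a DL360 occupies 1U.
rack_servers = RACK_U // 1                   # 42 servers

print(blades, rack_servers)                  # 64 42
```

So the blade option wins on density even though 2U of the rack is left unused.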
I noted something interesting. Manufacturers “are still fighting the perception that blade servers–which make up only 15 percent of the total market–are more expensive than other servers and that consolidated infrastructure products would be more expensive still”. The reality is that blades are more expensive than rack server installations unless you are doing massive installs. A blade server by itself is quite a cheap unit. But add on the mezzanine cards, enclosure, switches (one per card/socket), enclosure admin module, remote management, power supplies … well, you get the picture.
For me, the server decision-making process starts with CPU/RAM capacity and my ability to monitor the machines with OpsMgr. I’ve preached about the latter enough. Once I have the basic requirements, iron is iron. I’d prefer blade servers with integrated networking because I hate cabling and I’m a disaster at it. But to be honest, storage is more important. That affects performance, scalability, provisioning automation, backup (via a VSS provider), and disaster recovery design.
And that’s what hardware salesmen who anticipated the changes brought by virtualisation have been focusing on. Server sales have gone down. The individual units are more powerful, but revenue has dropped. Storage requirements, on the other hand, have become larger and more complex. Virtualisation with good performance requires more disk rather than less.
3 thoughts on “Selling Servers Is Changing”
They are down to us, that’s for sure. But how many are they shipping in the “Container” or similar form factor? I was told in 2008 that 50% of all servers worldwide are bought by only a handful of players in the cloud business …
The remarks you mention on blades are spot on. This is the reason why we don’t use them a lot. We buy 1U, 2U (sometimes 4U), cluster them, and the payload is virtualised highly available on a SAN. Beats blade pricing for smaller shops. Also you can get a lot more NIC ports, and no, the 10 Gb switch port cost isn’t an issue when you can buy 24-port layer 2 10Gbps switches at 1.500 Euro VAT included. Power consumption, heat and space used by network equipment? I’ll show you the SANs … they downright dwarf all the rest. Most setups we do don’t have the volume to make blades the better deal, and you said it very correctly: virtualisation is making this even more visible. The cable management, I concur … blades rock. But the 1U, 2U servers actually have more redundant NIC ports and HBAs with separate cabling, so they can survive complete cable, HBA and NIC failures. What happens if that blade chassis goes down? Yes, that happens!
That’s a good point about chassis failure. That’s a single point of failure for many blades. I’ve not seen it personally. The HP blade chassis is pretty dumb – which is a good thing. I think the backplane, which is just a bunch of connectors, is the only point of failure as long as you purchase pairs of Virtual Connects (switches) and 2 admin modules (€$£). I hope Hans sees that one and responds.
The largest Belgian hospital is giving HP blades their last chance to measure up … “@reinoudreynders Discussion with my team leaders about our HP blades. We will give them a second (a third?!?) chance. I love paper and pencil. 99.99999 % upt” & reinoudreynders: Postmortem of blade chassis failure: a passive backplane can’t fail! Yeah right! It took 1 week to figure that out! What excuse will they use …