Linux Integration Components V2 RC2

The second release candidate of version 2 of the Linux ICs has been released on Connect.  This test version supports SUSE Linux Enterprise Server 10 SP2 (x86/x64), SUSE Linux Enterprise Server 11 (x86/x64) and Red Hat Enterprise Linux 5.2/5.3 (x86/x64).  Using the ICs gives you better-performing synthetic drivers, i.e. an enlightened operating system.

VMware vSphere 4 Distributed Power Management

This post covers VMware’s Distributed Power Management (DPM), a cool feature of the vSphere suite.  When host server utilisation is low, vSphere will VMotion VMs onto fewer hosts.  This is possible thanks to VMFS, VMotion and RAM oversubscription.  Idle hosts with no VMs are then powered off, which could potentially save a fortune in electricity costs.  When resource utilisation rises again, those powered-off hosts are brought back up automatically as required and VMs are relocated to spread the load.
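
To make the consolidation idea concrete, here’s a minimal sketch of a DPM-style decision in Python.  It is not VMware’s actual algorithm; the host names, VM sizes and the 60% packing threshold are all my own illustrative assumptions.

    import copy

    HOST_CAPACITY_GB = 64       # usable RAM per host (hypothetical)
    TARGET_UTILISATION = 0.60   # never pack a host beyond 60% of capacity

    hosts = {                   # which VMs run where (hypothetical farm)
        "host1": ["vm1", "vm2"],
        "host2": ["vm3"],
        "host3": ["vm4"],
    }
    vm_ram_gb = {"vm1": 8, "vm2": 12, "vm3": 6, "vm4": 4}

    def load(placement, host):
        """RAM demand on a host as a fraction of its capacity."""
        return sum(vm_ram_gb[vm] for vm in placement[host]) / HOST_CAPACITY_GB

    def try_drain(placement):
        """Move every VM off the least-loaded host if they all fit
        elsewhere; return the new placement and the drained host."""
        powered_on = [h for h in placement if placement[h]]
        if len(powered_on) < 2:
            return None
        donor = min(powered_on, key=lambda h: load(placement, h))
        trial = copy.deepcopy(placement)
        for vm in list(trial[donor]):
            target = min((h for h in powered_on if h != donor),
                         key=lambda h: load(trial, h))
            new_load = load(trial, target) + vm_ram_gb[vm] / HOST_CAPACITY_GB
            if new_load > TARGET_UTILISATION:
                return None     # the farm is too busy to shrink further
            trial[donor].remove(vm)
            trial[target].append(vm)
        return trial, donor

    placement = hosts
    while (result := try_drain(placement)) is not None:
        placement, drained = result
        print(f"{drained} drained - VMotion complete, power it off")
    print("Final placement:", placement)

When utilisation rises again, the real product just reverses the process: power a host back on and let DRS rebalance VMs onto it.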

Windows Server 2008 and Windows Server 2008 R2 Hyper-V do not have this, though I can see how it could be done using OpsMgr and VMM 2008 R2.  Windows Server 2008 R2 has Live Migration (the equivalent of VMotion) and Cluster Shared Volumes (CSV, the equivalent of VMFS).  However, Hyper-V does not have RAM oversubscription.  It was on the early feature list for 2008 R2 last year, but MS had to pull it because it wasn’t ready.  I know from talking to others that this is a highly desired feature, so it won’t surprise me to see it in Windows Server 8.

Windows Server 2008 R2 Hyper-V approaches power savings from another direction, by consolidating resource usage within the server rather than across the server farm.  Honestly, I’d like both.  2008 R2 includes Core Parking: when the host OS notices idle CPU cores, they get powered down and the workload is consolidated onto fewer cores.  Powering down cores/CPUs cuts CPU power draw, reduces heat generation, and in turn reduces fan and air-conditioning power utilisation.
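
As a rough illustration of the decision Core Parking makes, here’s a toy calculation in Python; the 80% per-core ceiling is an assumption of mine, not Windows’ documented power policy.

    import math

    def cores_to_keep(total_demand_pct, core_count, max_per_core=80):
        """How many cores must stay unparked to serve the current load?
        total_demand_pct is the summed utilisation across all cores,
        e.g. 120 means 1.2 cores' worth of work."""
        needed = math.ceil(total_demand_pct / max_per_core)
        return max(1, min(core_count, needed))

    # 16 cores with 1.2 cores' worth of demand: keep 2 awake, park 14.
    keep = cores_to_keep(120, 16)
    print(f"keep {keep} cores unparked, park {16 - keep}")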

Power Utilisation Comparison Of Rack vs Blade Servers

This blog post by Data Center Strategies reports on a publication by HP.  HP compared the power usage of DL rack-mounted servers and BL blade servers.  It was … interesting.  When idle, the blades used significantly less power; when busy, there was little difference between the two.

So …

  • If you are building a small to medium power-intensive server farm you might be tempted to go with rack servers instead of blades.  There’s a big cost saving to be made.  Server prices have increased over the last year to compensate for falling sales … we need fewer physical boxes because we are virtualising.  Server capacity is up, though.
  • Blades do have some nice features.  There’s a lot less cabling, and hardware virtualisation enables boot-from-SAN, which turns your physical servers into anonymous, replaceable appliances.  All the intelligence is in the chassis and all the OS/data is on the SAN.
  • As committed to blades/SAN as we are at work, there are still times when we’ve found DL rack servers to be more appropriate, both functionally and cost-wise.

I’ve not looked at the cost of the C3000 “Shorty”.  There’s some cool stuff you can now do with HP’s Flex-10 10Gb networking that makes the C3000 usable for virtualisation.  The C3000 has 8 slots for server, tape and storage blades.  The problem with the Shorty blades is that they only take one mezzanine card, which means you can’t build complex virtualisation clusters that might require 6 NICs or more per server.  With Flex-10 you get 10Gb networking in the backplane, which you can divide up to create virtual NICs on your blades.  Potentially (don’t ask me about support for this because I don’t know) you could have 8 NICs per blade for virtualisation … 2 for the parent partition, 2 for the heartbeat, 2 for VMotion/Live Migration and 2 for the virtual switches.  This could be fine in small deployments, e.g. a branch office.  AFAIK, you could then use iSCSI to mount the shared storage for VMFS/CSV.
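
As a back-of-the-envelope illustration of that carve-up, here’s a Python sketch dividing two onboard 10Gb Flex-10 ports into four FlexNICs each (one per role per port, giving the 8 NICs above).  The bandwidth weights are my own assumptions, not HP guidance.

    PORT_BANDWIDTH_GB = 10      # each Flex-10 port carries 10Gb
    roles = [                   # (role, Gb) - one FlexNIC per role per port
        ("parent partition", 1),
        ("cluster heartbeat", 1),
        ("VMotion/Live Migration", 4),
        ("virtual switches", 4),
    ]

    for port in ("LOM port 1", "LOM port 2"):
        carved = sum(gb for _, gb in roles)
        assert carved <= PORT_BANDWIDTH_GB, "port over-subscribed"
        print(f"{port}: {carved}Gb of {PORT_BANDWIDTH_GB}Gb allocated")
        for role, gb in roles:
            print(f"  {gb}Gb FlexNIC -> {role}")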

But you know, if I was building a virtual server farm now with a traditional, known growth limit (unlike hosting, where the growth is hopefully endless) then I’d go with normal rack servers.  There’s a big up-front investment in a blade chassis that is hard to justify at the moment.  On the HP storage side, the LeftHand iSCSI kit looks very tempting for DR implementations.  It is pricey, but it would make DR very easy.

EDIT #1

As expected, HP’s marketing was not very happy with this report.  Some investigation was done, and it turns out the rack server configurations weren’t on a par with the blade configurations.  The rack servers had only one power supply and had their redundant NICs disabled; anything that could be done to reduce their power consumption had been done.

Start a User Group

I run the Windows User Group in Ireland.  It’s been a great resource, and I’ve learned a lot about a variety of technologies in the Windows world.  We haven’t stuck to just Windows; our typical Windows engineer needs to know a lot more, e.g. how to manage the network and provide business solutions.  So we’ve run events on virtualisation, Exchange, OCS, etc.  Our next event is on OpsMgr 2007 R2.
 
Talking to other user group leads from other countries, we in Ireland have been lucky.  The DPE team in MS Ireland has been very involved with our local user groups: helping to find speakers, funding events, providing meeting locations.
 
You don’t need to keep everything local, either.  Ireland may be small, but travel is difficult thanks to an 18th-century transport system.  So we went online with a LiveMeeting account that we were given by Culminis, which allows us to run combined physical/virtual conferences.  Some user groups are 100% online – there’s a successful online PowerShell group, for example.
 
This video talks about starting a user group and getting some help:
  

Windows 7 Upgrade Paths

This document details the supported upgrade paths to Windows 7.  Best advice: don’t do in-place upgrades.  Migrate user data (if it’s stored on the PC – it should be on a server), rebuild using WDS, MDT or ConfigMgr, and then restore the user data.

EDIT #1:

And just as I published this post, they also released a document on Windows Server 2008 R2 upgrade paths.  OK, remember that this is an x64-only OS.  There are no in-place upgrades from x86, no in-place upgrades from Full to Core installations or vice versa, and you can’t change languages during an upgrade.
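
Those rules are simple enough to express as a checklist.  Here’s a toy eligibility check in Python; the data model is mine, not Microsoft’s published upgrade matrix.

    def can_upgrade_in_place(src, dst):
        """src/dst: dicts with 'arch', 'install' and 'language' keys."""
        if dst["arch"] != "x64":
            return False, "Windows Server 2008 R2 is x64-only"
        if src["arch"] != "x64":
            return False, "no in-place upgrade from x86"
        if src["install"] != dst["install"]:
            return False, "no Full <-> Core upgrades in either direction"
        if src["language"] != dst["language"]:
            return False, "can't change language during an upgrade"
        return True, "supported - but migration is still the better option"

    print(can_upgrade_in_place(
        {"arch": "x86", "install": "Full", "language": "en-US"},
        {"arch": "x64", "install": "Full", "language": "en-US"}))
    # -> (False, 'no in-place upgrade from x86')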

MS strongly recommends against in-place server upgrades.  The only acceptable scenario is when a server is stable and has 100% MS components, and even then, all the advice is to migrate, not upgrade.  I’ve got a chapter on this subject in one of the Mastering Windows Server 2008 R2 books.  It was 65 pages when I submitted it to the editors/reviewers last week … before screenshots.

Increased Official Support For Linux On Hyper-V

This was just posted on the Windows virtualisation blog:

“With the release of WS08 R2 version of the ICs, we’ll also add support for SLES 11 and RHEL 5.2 and 5.3”.

“Official support” is a different matter to the integration components.  As I reported earlier today, MS released the ICs for Linux under the GPLv2 license so that enlightened drivers would be built into more Linux distributions for enhanced performance.  Official support means that MS has tested things completely and PSS will take calls.  The addition of Red Hat and SLES 11 is superb news – something we knew was coming for ages, but we didn’t know when.  We can already manage Red Hat and SLES using Operations Manager 2007 R2.

Microsoft Writes GPL Drivers For Linux Guests On Hyper-V

Microsoft has published drivers for running Linux virtual machines on Hyper-V. 

“Microsoft released 20,000 lines of device driver code to the Linux community. The code, which includes three Linux device drivers, has been submitted to the Linux kernel community for inclusion in the Linux tree. The drivers will be available to the Linux community and customers alike, and will enhance the performance of the Linux operating system when virtualized on Windows Server 2008 Hyper-V or Windows Server 2008 R2 Hyper-V”.

This means that potentially any Linux operating system could run with the maximum performance provided by the integration components.  There would be no messy IC installation (it’s actually quite easy – even for a penguin-phobe like me).  You can run any Xen-enabled Linux on Hyper-V using emulated devices, but they don’t perform as well as enlightened OSs that use integration components to talk to the underlying hypervisor’s synthetic devices.  Right now we have support for SUSE SLES, and Red Hat support is on the way.  But this announcement will open the floodgates for things like Ubuntu (expected to dominate in the developing world) and CentOS (which dominates the hosting world).

I suspect Microsoft will still have support statements.  For example, MS has a mutual support arrangement with Novell for SLES on Hyper-V: you can call MS PSS with an issue and they can forward it to Novell.  They won’t have that for CentOS.

Well done Hyper-V team!

EDIT #1:

This is the announcement video.  It’s interesting to see how committed MS is to open source.  I was familiar with most of it but never stopped to consider the sum of their efforts.

Eircom Confesses DNS Cache Poisoning Attack

Those Internet “experts”, Eircom, finally confessed that they were victims of a DNS attack called cache poisoning.  When your client asks your ISP’s DNS server to convert http://www.honestjoescars.com into an IP address, that server might need to query another DNS server to do the job.  It will get the result, cache it, and then pass the result back to you.  It might hold that cached lookup for 24 hours, and all subsequent client requests for http://www.honestjoescars.com will get the cached result.
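
Here’s a minimal sketch of that caching behaviour using only Python’s standard library.  Real resolvers honour each record’s TTL; the fixed 24-hour lifetime below just mirrors the example above.

    import socket
    import time

    CACHE_SECONDS = 24 * 60 * 60      # the 24-hour example above
    _cache = {}                       # name -> (ip, expiry time)

    def resolve(name):
        entry = _cache.get(name)
        if entry and entry[1] > time.time():
            return entry[0]           # served straight from the cache
        ip = socket.gethostbyname(name)   # ask the wider DNS system
        _cache[name] = (ip, time.time() + CACHE_SECONDS)
        return ip

    print(resolve("www.example.com"))  # first call: a real lookup
    print(resolve("www.example.com"))  # second call: the cached answer

The value of that cache is also its weakness: one bad entry gets served to every client until it expires.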

Someone planted poisoned entries in the caches of the Eircom DNS servers.  When Eircom’s customers tried to browse certain sites they were sent elsewhere … to the sort of site you might not want to go to.  Eircom fervently denied there was an attack; everything was fine.  But customers who changed their TCP/IP settings to use the OpenDNS servers had no issues.  Strange, eh?

Siliconrepublic posted the news.