Virtualization shoot-out: Citrix, Microsoft, Red Hat, and VMware

This seven-page article on InfoWorld makes for an interesting read.  And it appears to me that the author was doing his best to be fair when comparing Hyper-V, XenServer, vSphere, and Red Hat.  In the end, he appears to favour vSphere slightly over Hyper-V for two reasons:

  1. Simplicity of set up
  2. Management

I will concede on point #1.  I’ve done vSphere and, in case you haven’t noticed, I am a wee bit of a Hyper-V fan.  When it comes to setup, vSphere is the easier of the two, mainly because it is a virtualisation platform and nothing else.

On the management side, if you look purely at the virtualisation slice of the pie, then you might concede that vSphere has the tiniest of edges.  The author criticised Microsoft for adding complexity to the management setup by requiring several tools.

Let me ask you a question: Why do businesses have IT?  Is it so they can own servers, switches, routers, disks, and firewalls?  Or is it because they want applications to enable the business to carry out operations and make profit?  Hopefully it is the latter … otherwise you work for a soon-to-be dot.bomb.

Microsoft have observed why businesses have IT and have developed their management stack to cater for the entire computing stack, not just virtualisation.  I’ve bleated on about that over and over, so I’ll leave it there.

As an MS partner, I like Hyper-V because it brings the possibility of selling other licenses and services, such as enterprise monitoring, backup, automation, and so on.  My relationship with the customer does not end after I sell some servers/storage and some virtualisation licenses.

Give the report a read for yourself.  Interestingly, he seems to reckon all the solutions are excellent.

Performance Issue with W2008 R2 Hyper-V on Intel CPUs

This KB article (KB2517329) for Windows Server 2008 R2 (including SP1) hosts with Intel Westmere or Sandy Bridge processors just popped up in my feeds.

“Consider the following scenario:

  • You have a Windows Server 2008 R2-based computer that has a large amount of physical memory and that has Intel Westmere or Sandy Bridge processors.
    For example, you have a computer that has Intel Xeon 5600 series processors and that has 48 gigabytes (GB) physical memory.
  • You install the Hyper-V role on the computer.

In this scenario, the performance of the computer may decrease.
For example, the following performance issues may be encountered:

  • The CPU usage is high and the server responds slowly when you copy large files on the computer. For example, you copy a 10-GB file.
  • The disk I/O performance of the virtual machines (VMs) is slow.
  • Windows takes a long time to start.

This issue occurs because the hypervisor supports only eight variable range Memory Type Range Registers (MTRRs). Additionally, the hypervisor cannot access the additional variable MTRRs that are introduced on recent Intel processors. Therefore, some regions of system memory are set to the default Uncacheable memory type, and the performance of the computer significantly decreases.

Notes

  • MTRRs are processor model-specific registers (MSRs) that control the default caching for ranges of physical memory.
  • Intel Westmere and Sandy Bridge processors introduce additional variable MTRRs to enable systems to use a large amount of memory”.
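
To picture what’s going on there: each variable MTRR maps a range of physical memory to a memory type, and any address not covered by a range falls back to the default type – Uncacheable in this scenario.  Here’s a minimal sketch of that lookup, with made-up ranges (this is an illustration of the concept, not how the hypervisor actually reads the MSRs):

```python
# Minimal sketch of variable-MTRR classification, for illustration only.
# Real MTRRs are base/mask MSR pairs; here each range is (base, size, type).

UNCACHEABLE = "UC"  # default type when no variable MTRR covers an address

def memory_type(address, mtrrs):
    """Return the memory type for a physical address.

    mtrrs: list of (base, size, mem_type) tuples; addresses outside
    every range fall back to the default Uncacheable type.
    """
    for base, size, mem_type in mtrrs:
        if base <= address < base + size:
            return mem_type
    return UNCACHEABLE

# Hypothetical example: if only eight variable MTRRs are honoured, a 48 GB
# host can end up with RAM above the covered ranges left Uncacheable.
GB = 1024 ** 3
mtrrs = [(0, 32 * GB, "WB")]            # write-back for the first 32 GB
print(memory_type(16 * GB, mtrrs))      # WB - cached, fast
print(memory_type(40 * GB, mtrrs))      # UC - uncached, slow
```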

There is a link to a hotfix on the page.  If it is applicable, test it and then deploy it.  Then I’d recommend (assuming your test results are OK) making it part of your standard host build.
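
If you want to verify the fix across your hosts before trusting the build process, something like this sketch can help.  I’m assuming the hotfix registers as a normal QFE entry under this article’s KB number – check what your installed hotfix actually reports before relying on it:

```python
# Sketch: check whether a given hotfix (QFE) is installed on this host.
# Assumes the fix shows up as a standard QFE entry; run on each host.
import subprocess

def hotfix_installed(kb_number):
    """Return True if the KB appears in the Quick Fix Engineering list."""
    output = subprocess.check_output(
        ["wmic", "qfe", "get", "HotFixID"], text=True
    )
    return kb_number in output

if __name__ == "__main__":
    kb = "KB2517329"  # the hotfix from this article
    print(f"{kb} installed: {hotfix_installed(kb)}")
```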

Some News Isn’t News; It’s Marketing

I am a cynic and the world made me that way.  I’ve gotten used to not believing the news when it comes to politics, the economy, and so on.

Now I take my technology news with a large lump of salt.  I recently saw just about every Irish tech news source publish a press release about “Ireland’s largest Hyper-V cluster”.  The first thing that struck me was that not a single Microsoft rep was quoted.  The second thing was that I knew it was definitely not the largest Hyper-V cluster in Ireland.

It was just marketing by the organisation that sold the solution discussed in the press release.  I’ve been on the dark side of press releases, and there is a certain level of “fact stretching” that goes on to get the editors to publish them.

With a little knowledge you can start to pick stories apart, finding what is news and fact, and what is marketing fluff.  I’ve fallen victim to this in the past and I will again in the future, but I’m getting to the point where I can identify more and more fluff.  I try not to mindlessly retweet or blog it because I’m not someone’s paid-for marketing person and I make zero cents from licensing or hardware sales – though that might change soon ;)

So when you do read some announcement, try to read between the lines.  Don’t take it at face value.  Does that number of 3,000,000 really mean what people are interpreting it to mean?  Does “largest” really mean that, or is it hyperbole?  Has the editor just printed the press release as a story, or has the blogger just retweeted it?

System Center Opalis: First Impressions

I tried out System Center Opalis 6.3 a little while ago.  I had waited until Microsoft said that it supported Windows Server 2008 R2.  There was no point in looking at it until then.

What did I think of it?

Not much.  Wait; that’s not true.  I’ll be honest.  I thought it stunk.

So far, I think this blog post might be longer than the deployment documentation that was available for it at the time.  That wasn’t good.  I also couldn’t get agents to deploy onto the W2008 R2 machines that I wanted to orchestrate actions on.  The error box gave no useful information.  I couldn’t find any logs.  My lab network was completely open.  I went searching for help and only found others who had the same problem but never found a solution.  Not good.  So I abandoned it, thinking I’d wait until the next major release.

Yes, there are lots of blog posts and videos from Microsoft staff showing how wonderful it is.  But they do have access to internal-only distribution lists (DLs) where they can get troubleshooting information that is not in the public domain.  From my outsider perspective, these articles are just marketing without the substance I need to make the product work.  That may sound harsh to some, but it is my opinion, having tried the product and genuinely wanted to get it to work.

Back in 2004, I ran the MS infrastructure for a finance company.  It was a “grey field” deployment … a bit of a rip-and-replace that we did the year before.  You can’t do everything at once, so we added to it as we went along.  We were looking for a monitoring solution.  Our MS account manager suggested that we consider MOM 2005.  I didn’t know of Microsoft Operations Manager, so I looked it up.  The reviews for MOM 2000 were awful.  Microsoft had only recently acquired it, rebadged it, and started selling it while working on a new version.  They did the same with Visio, and the same with Antigen (later Forefront for Exchange).  I gave the MOM 2005 beta a shot and, soon enough, the beta was monitoring key machines in globally located branch offices.

Opalis is in that “tweener” stage right now, caught between pre-Microsoft and fully Microsoft.  The idea is great but, in my opinion, the recently acquired product still isn’t up to the standard set by other, more mature System Center products.  We know (from MMS 2011) that a new version is on the way and that it is being renamed System Center Orchestrator.  Maybe then it’ll work better and be better documented.  Until then, I’m not going to show Opalis too much interest.

More Microsoft Downloads to Consider

Windows Server 2008: Planning for Active Directory Forest Recovery

“This guide contains best-practice recommendations for recovering an Active Directory forest, if forest-wide failure has rendered all domain controllers in the forest incapable of functioning normally”.

iSCSI Initiator Users Guide for Windows 7 and Windows Server 2008 R2

“Users Guide for the iSCSI Initiator”.

Holistic Approach to Energy Efficiency in Datacenters

“The Datacenter Efficiency whitepaper discusses Microsoft’s holistic approach”.

RD Virtualization Host Capacity Planning in Windows Server 2008 R2

“This white paper is intended as a guide for capacity planning of RD Virtualization Host in Windows Server 2008 R2”.

Microsoft Application Request Routing Version 2.5 for IIS 7 X86 & X64

“Microsoft Application Request Routing (ARR) for IIS7 is a proxy based routing module that forwards HTTP requests to application servers based on HTTP headers and server variables, and load balance algorithms. ARR Version 2.5 improves the performance and scalability of disk caching features in ARR”.
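
To picture what that quoted description means: a proxy-based router sits in front of your application servers and picks a backend for each request.  Here is a toy model of the two ideas in that sentence – routing on an HTTP header, falling back to a simple round-robin load-balancing algorithm.  The server names and the header are my own inventions, not ARR configuration:

```python
# Toy model of what a proxy-based router like ARR does: choose a backend
# server per request, using an HTTP header first and round-robin otherwise.
# Server names and the routing header are invented for illustration.
import itertools

BACKENDS = ["app01:8080", "app02:8080", "app03:8080"]
_round_robin = itertools.cycle(BACKENDS)

STICKY_ROUTES = {"beta": "app03:8080"}  # e.g. send beta testers to one box

def choose_backend(headers):
    """Pick a backend from a request's headers, else use round-robin."""
    route = headers.get("X-Route-Group")
    if route in STICKY_ROUTES:
        return STICKY_ROUTES[route]
    return next(_round_robin)

print(choose_backend({"X-Route-Group": "beta"}))  # app03:8080
print(choose_backend({}))                         # app01:8080
print(choose_backend({}))                         # app02:8080
```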

Start Learning About Configuration Manager 2012

I first became an MVP with a Configuration Manager expertise.  The timing was kind of odd; I had done quite a bit of writing and blogging on it, but not for a while – I’d actually moved on to Hyper-V by then!  That was because I was working in the hosting space, where there were no desktops to manage, and, well, hosters do everything on the “cheap” because it’s a dog-eat-dog world out there!

But I love ConfigMgr.  Sure it’s big, and yes, you can sometimes find your head swimming with all the options it has.  But if you’re an IT megalomaniac like I am, then you’ll love having the ability to know everything about your infrastructure and to effect change whenever you want.  You can even do mad things like creating a recurring advertisement to play the sound of a nuclear explosion, or to kill OUTLOOK.EXE/NLNOTES.EXE on the PC of some user who has annoyed you … not that I would do that myself or recommend that you do it either!  It will leave an audit trail; you’re better off using Task Scheduler or Remote Desktop Services Manager for that sort of thing.

Anyway …

ConfigMgr 2012 will be out later this year, and Jeff Wettlaufer has been recording videos to demonstrate its functionality, including how the new user-centric features work.

Once you’re done there, head on over to Windows-Noob to see what Niall C. Brady (ConfigMgr MVP) has been writing on ConfigMgr 2012.  He’s been at it since beta 1.


Average Age of a Work Computer in the UK is Five Years and Two Months

This comes from another survey and news story, this time from Silicon Republic.  The story reports on a survey done for Mozy (the EMC-owned online [cloud/SaaS] backup service).

In it they report:

“… the average age of a work computer in the UK is five years and two months … In fact, the average PC is now more than a year and a half past the date it was planned to be scrapped”.

I have had the “pleasure” of working in environments where the PC was 5-7 years old.  Damn, I still need therapy from that.

The report makes some very valid points.  PCs are not only old, but they are costing the business in many indirect ways.  Operating systems aren’t refreshed.  Applications are unmanaged – often old, unpatched, and of various different versions.  Features are breaking – try assessing such an old PC infrastructure using the WMI-powered MAP toolkit and you’ll soon see what I mean.  End users have to deal with slow boot/login times, freezes, and blue screens.  IT have a lot of fire fighting to do that could otherwise be avoided.  And let’s not forget the time and money required to go fishing for spare parts when a PC breaks and the business expects it to be repaired instead of replaced.

What the business has is a tool (the PC) that is not fit for purpose because it is costing the user (the revenue generator/enabler) time.

Consider a Windows 7 upgrade.  Company A might have a policy to upgrade 20-33% of their PCs every year to the current business-recommended specification.  Running MAP in that environment will probably determine that only a tiny percentage of machines require either an upgrade or replacement.  On the other hand, Company B clings onto its PCs like Charlton Heston’s cold dead hands clung onto his rifle.  Running an assessment there will take longer because the machines are a mess and WMI is broken all over the place.  Once a result eventually comes in, the business will get the nasty report that says 90% of the hardware needs to be replaced.  Ouch!

Or consider a software development house where the programmers are charged out at €1,000 per day on fixed-price contracts.  PCs are old and crash once every two days.  Reboots take time, logons are slow, and work is lost from time to time.  Every developer loses an hour every two days, and over time that builds up.  Because it’s fixed-price work, the company has to redo a lot of work, either pays out overtime or misses deadlines, and no one pays except the company.
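
Back-of-the-envelope, and assuming an 8-hour billable day and roughly 220 working days a year (my assumptions, not figures from the survey), that lost hour adds up quickly:

```python
# Rough cost of crashy PCs for one developer, using the figures above.
# The 8-hour day and 220 working days/year are my assumptions.
day_rate = 1000            # EUR charged per developer per day
hours_per_day = 8
hours_lost_per_day = 0.5   # 1 hour lost every two days
working_days_per_year = 220

hourly_rate = day_rate / hours_per_day              # EUR 125/hour
lost_per_year = hourly_rate * hours_lost_per_day * working_days_per_year
print(f"Lost per developer per year: EUR {lost_per_year:,.0f}")  # ~13,750
```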

Or do they?  What you end up with is annoyed users.  Their work experience flat-out sucks.  The tool they need to use doesn’t work and they cannot do their job.  They find themselves redoing work, working later, dealing with stress from the boss, and never getting a badly needed replacement machine.

In my opinion, the age and often decrepit state of these PCs are a symptom of how a business values the function of IT. And that isn’t an IT mistake; it’s a business strategy mistake.

Are you an IT admin trapped in this nightmare?  I sympathise with you.  It would be easy for me to get on my high horse and tell you that you need to market IT internally, to deploy WDS/MDT/ConfigMgr to take control of the desktop, enable rapid standard OS image deployment, and provide self-service software deployment.  But the truth is that all of that requires some level of investment by the organisation.  If that will isn’t there, then making the PO request is pointless.  What might be possible is to record what the ancient hardware is costing the business in hidden losses.  How much time/revenue is lost when PCs crash?  How much time do you spend fire fighting instead of doing business-enabling projects?  The reason the business doesn’t value IT is that they see it as a cost centre.  They’re focused on money and controlling spending.  If you can make an argument based on saving money, then you might have some luck.  All I can suggest is that you present a very quick summary – the relevant decision makers will likely be “too busy” to be distracted by IT “geek talk”.

I wish you luck if you are in this situation – I was not able to change it when I was there in the past.

Whitepaper: How to Build a Hyper-V Cluster Using the Microsoft iSCSI Software Target v3.3

I’ve just uploaded a step-by-step guide on how to build a Hyper-V cluster for a small production or lab environment, using the Microsoft iSCSI Software Target v3.3.  This target is a free add-on for Windows Server 2008 R2 and is included with Windows Storage Server 2008 R2.  It goes through all the steps:

  • Installing and configuring the storage
  • Building a standalone host to run System Center VMs
  • Building a 2 node Hyper-V cluster

“The Microsoft iSCSI Software Target is a free iSCSI storage solution. It is included as a part of Windows Storage Server 2008 R2, and it is a free download for Windows Server 2008 R2. This allows a Windows Server to become a shared storage solution for many computers. It also provides an economic way to provide an iSCSI “SAN” for a Failover Cluster, such as Hyper-V.

This document will detail how to build a 2 node Hyper-V cluster, using the Microsoft iSCSI Software Target for shared storage, which is managed by System Center running on virtual machines, hosted on another Hyper-V server and stored on the same shared storage.”
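
The guide walks through those steps one by one.  If you want to script the initiator side of a lab build, a sketch like this works, driving the built-in iscsicli.exe – the portal address below is a placeholder for whatever IP you gave your storage server:

```python
# Sketch: point a host's iSCSI initiator at the Microsoft iSCSI Software
# Target and list what it exposes, using the built-in iscsicli.exe.
# The portal address is a placeholder for your storage server's IP.
import subprocess

TARGET_PORTAL = "192.168.1.10"  # hypothetical address of the target server

def run(args):
    """Echo and run a command, failing loudly if it returns an error."""
    print(">", " ".join(args))
    subprocess.run(args, check=True)

# Register the target portal with the initiator, then list the targets
# that the Software Target is publishing.
run(["iscsicli", "QAddTargetPortal", TARGET_PORTAL])
run(["iscsicli", "ListTargets"])
```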

There is a possibility of getting your company advertised in the document.  Contact me and we can work out terms.

EDIT #1:

Hans Vredevoort contacted some of the storage folks in Microsoft to discuss the MPIO/cluster member initiators issue.  It turns out that the Microsoft page in question was incorrect.  It used to be true, but the v3.3 Software Target does support iSCSI initiators that are members of a cluster.  The document has been updated with this note, but I have not added configuration steps for MPIO.

Do I Need a Private Cloud?

With this post I am going to stay technology agnostic.  I’m also going to stay clear of marketing terms.

Before we answer the central question of the blog post, let’s get something clear: a private cloud does not equal server virtualisation.  A private cloud is an extension of server virtualisation.  It provides a self-service mechanism through which non-infrastructure administrators can deploy services.  In this context, and using the ITIL view of things, a service is a business application made up of things like IIS/Apache, SQL/MySQL, virtual machines with operating systems, application components (Perl/.NET, database schemas, and web content), and additional fabric configurations such as load balancers and storage.  In other words, a person from the department that manages business applications can deploy the virtual infrastructure they need to meet a business need without any effort/time required from the IT department that manages the infrastructure.
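
If it helps to see that in code terms, here is a rough, deliberately product-agnostic sketch of a service as one deployable unit that bundles VMs, platform components, and fabric configuration.  All the names are invented for illustration:

```python
# Product-agnostic sketch of a "service" in the ITIL sense: one deployable
# unit bundling VMs, platform components, and fabric configuration.
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    os: str
    roles: list  # e.g. ["IIS", ".NET app"] or ["SQL Server", "schema"]

@dataclass
class Service:
    name: str
    vms: list
    load_balanced: bool  # fabric: load balancer in front of the web tier
    storage_gb: int      # fabric: storage allocated for the service

crm = Service(
    name="CRM",
    vms=[
        VirtualMachine("crm-web1", "Windows Server 2008 R2", ["IIS", ".NET app"]),
        VirtualMachine("crm-web2", "Windows Server 2008 R2", ["IIS", ".NET app"]),
        VirtualMachine("crm-db1", "Windows Server 2008 R2", ["SQL Server", "schema"]),
    ],
    load_balanced=True,
    storage_gb=200,
)
print(f"{crm.name}: {len(crm.vms)} VMs, LB={crm.load_balanced}")
```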

This accomplishes a bunch of things that the business will care about.  But the key piece here is that non-infrastructure people are doing the deployment.

Server virtualisation is a subset of the private cloud.  You can do server virtualisation without deploying a private cloud.  My bet is that you already have – years ago.  But you cannot do private cloud without server virtualisation.

Taking all of that into account (up to now, and this might change), I have one rule to answer the central question of this blog post.

Question: Do I need a private cloud?

Consultant’s Answer: Who deploys and manages your applications?

I know, I know.  I’ve answered a question with a question.  Go back and read how I briefly described a private cloud.  The thing you should have noticed was that the infrastructure administrators were delegating deployment tasks to the people who manage applications.  That’s the crux.  Do those people exist in your organisation?

In small and some medium organisations, there are a few IT infrastructure administrators who do everything.  They manage the firewalls, they run the domain, they do server virtualisation, they run the CRM application (I’m picking on CRM today!), they manage the SQL databases, and so on.  There is no one to delegate service deployment tasks to.  So what is the point in deploying all the additional infrastructure of a private cloud?  There is no valid business reason that I can envision (at the moment).  All that small team really needs is their virtualisation management tools, preferably joined by a set of systems management tools (no brands – I said I’d be agnostic).

On the other hand, some medium and large organisations do have various departments that manage various aspects of the business application portfolio.  There will also be branch offices whose servers have been centralised in a virtual farm.  Here there absolutely is a reason to deploy a private cloud.  The central IT infrastructure department could employ people to deploy VMs and install things like IIS/Apache or SQL/MySQL all day long, and that still wouldn’t meet the deadlines of their internal customers.  Deploying a private cloud would allow those internal customers, who are IT-savvy, to deploy their own services in a timely and controlled manner, using policies and quotas that are defined centrally by the business.
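
To make the “policies and quotas defined centrally” idea concrete, here is a tiny, vendor-neutral sketch of a self-service request being checked against a departmental quota; it doesn’t reflect any particular product’s API:

```python
# Vendor-neutral sketch: central IT defines quotas, application teams
# deploy services themselves, and every request is checked against policy.
from dataclasses import dataclass, field

@dataclass
class Quota:
    max_vms: int
    used_vms: int = 0

@dataclass
class PrivateCloud:
    quotas: dict = field(default_factory=dict)  # department -> Quota

    def deploy_service(self, department, vm_count):
        """Deploy a service if the department has quota left."""
        quota = self.quotas[department]
        if quota.used_vms + vm_count > quota.max_vms:
            raise PermissionError(f"{department} would exceed its quota")
        quota.used_vms += vm_count
        return f"{vm_count} VMs deployed for {department}"

cloud = PrivateCloud({"CRM team": Quota(max_vms=10)})
print(cloud.deploy_service("CRM team", 4))       # works: 4 of 10 used
try:
    print(cloud.deploy_service("CRM team", 8))   # refused: 12 > 10
except PermissionError as e:
    print("Refused:", e)
```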

My rule of thumb here (at the moment) is that:

  • If the IT infrastructure team is doing all application deployment/management then there should not be a private cloud.
  • If there are other departments or teams that are doing application deployment/management then there should be a private cloud.

That’s my view on the “Should I deploy a private cloud?” question.  I’ll be interested in other opinions.  These are early days for this stuff, and I figure many of the questions and answers for the private cloud will evolve over the coming years.