The IT Infrastructure Shake Up

It’s becoming clearer and clearer that things are changing drastically in the IT infrastructure (IT Pro) world.  Last year I attended a talk by Don Jones who scared the ***t out of the audience.  He said that in a few years’ time there would be far fewer IT Pro jobs.  What we’d have is a small set of junior or operator engineers, almost no one in the middle, and a small group of senior engineers.  Those who rise to the top and stay in IT will be the ones who can learn something inside-out and leverage automation.  To get to that level, these engineers will have to be interested in their jobs, not be one of those 10-till-4 types I’ve discussed before.  Key to their success will be the ability to learn on their own and to provide business solutions, not IT ones.

How this is going to happen is becoming evident now.  You’ve probably heard of Cloud Computing and SaaS but I’ll quickly talk about them. 

Would you build a nuclear power plant in your back yard if you needed electricity in your house?  Probably not, but we take this approach whenever we need a new business application.  Take a CRM application.  It might need a database and an application server.  If fault tolerance is required then you need more servers, clustering, etc.  All this IT complexity is added to non-IT companies every day and they find themselves becoming accidental IT companies.  Sure, there are consultancy and field engineering companies but they don’t take the pain away.  For example, when a CRM must be upgraded there are more servers, operating systems, a costly project and a data migration.  The non-IT company finds itself immersed in an IT project that consumes time and money and puts their business data at risk.  It’s not just CRM either … it’s everything from the SBS server, tape backups and databases to ERP systems, risk management, etc.

The principle of Software as a Service (SaaS) is that you should avoid this on-site installation and consume applications on an as-needed basis.  Your service should be like a household utility, e.g. sign a supply contract for electricity and turn the power switch on and off as required.  You know how much a unit costs and you can budget accordingly.  We’ve seen how companies like SalesForce and Google have done this with their services.  Microsoft isn’t far behind (BPOS) either.  SaaS isn’t just for special online solutions.  You can cut the costs and complexities of owning many solutions, e.g. a DR site, your internal IT systems in an outsourced deployment, etc.

This all requires a service delivery mechanism.  Some companies like Google and MS are big enough and skilled enough to host their own solutions on the Internet in high quality data centres.  However, smaller or niche companies looking to build a SaaS service can’t build something of that quality.  They need a quality data centre (not a computer room) because their business is dependent on this facility.  Not all data centres are the same either.  You’ll want to check them out and get advice from people who know the industry.  Don’t base any decisions on web sites, press releases, marketing or your own IT experiences.  The data centre world is very complex and full of deep pitfalls that can end your career.

So if you cannot build your own data centre for your SaaS product then you can use Cloud Computing.  The idea is simple.  A service provider owns, manages and leases an infrastructure.  You simply subscribe for the functionality you require as you need it.  Most software developers aren’t IT infrastructure experts so building and managing a best practice and secure architecture is hard for them.  With a good hosting partner, they can use this black box solution called Cloud Computing to rapidly get the server/network resources they need and grow/shrink them as customer demand changes.
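
As a rough sketch of that elasticity, here’s how capacity can follow customer demand instead of being bought up front.  The users-per-server figure and the floor/ceiling are invented purely for illustration – a real provider’s sizing model would look nothing this simple:

```python
import math

# Hypothetical sizing figure: how many active users one leased server handles.
USERS_PER_SERVER = 250

def servers_needed(active_users, floor=2, ceiling=100):
    """Scale the subscription up or down with demand, within contract limits."""
    wanted = math.ceil(active_users / USERS_PER_SERVER)
    # Never drop below a minimum footprint, never exceed what's contracted.
    return max(floor, min(ceiling, wanted))

for users in (0, 300, 5000):
    print(users, "users ->", servers_needed(users), "servers")
```

The point of the sketch is the shape of the decision, not the numbers: the subscriber pays for `servers_needed` machines this month and a different count next month.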

Before I go any further, not every application will be SaaS and not every server will migrate to the cloud.  Some organisations just won’t be able to for security, unionisation or complexity reasons.

Here’s the rub for IT pros.  If you don’t work for one of these cloud computing firms you might not have a job in 10 years.  Think about this … if your employer can reduce costs and complexity by using SaaS applications or servers that reside in the cloud then do they really need you?  They already perceive IT infrastructure departments as a cost centre that eats up budget and delivers 80% of what they promise … late.

So where am I thinking most IT pro jobs will be in the future?  In the data centres that host cloud computing infrastructures.  Operators are junior staff who look after the physical infrastructure.  They rack servers, run cables and look after the NOC.  They’re the first port of call for support issues.  There’s always a good number of these folks to maintain a 24-hour operation (… or there should be.  Try knocking on the door of any data centre you’re considering at 2am to check out promises of 24 hour on site presence.  You’ll be surprised who makes claims and who fails to live up to them!).  The folks who design and deploy systems will be the senior engineers.  In a large facility these will often be specialists, e.g. firewall CCIEs, messaging experts, server OS gurus, DBAs, etc.  Not only must they understand complex technologies but they must know how to handle huge workloads efficiently.  They’ll be managing a huge number of machines and applications so automation will be critical.  Using correctly designed solutions, they can take control of the network.  Have a look at the concepts of Optimised Infrastructure and you’ll see what I’m getting at.
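
To give a flavour of what that automation means, here’s a minimal Python sketch of fanning a check out across a whole fleet in parallel instead of touching machines one at a time.  The per-server check is a made-up stand-in – in reality it would be a WMI query, an agent call or similar:

```python
from concurrent.futures import ThreadPoolExecutor

def check_server(name):
    # Stand-in health check: in this toy, names ending in an even digit
    # report healthy.  A real check would contact the server.
    return name, int(name[-1]) % 2 == 0

def sweep(servers, workers=32):
    """Run the check across the fleet concurrently and gather the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(check_server, servers))

if __name__ == "__main__":
    fleet = [f"srv{n:03d}" for n in range(200)]
    results = sweep(fleet)
    unhealthy = [s for s, ok in results.items() if not ok]
    print(f"{len(unhealthy)} of {len(fleet)} servers need attention")
```

One engineer running something like this scales to thousands of machines; one engineer clicking through consoles does not – which is the whole point of the senior-engineer role described above.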

We’ve heard the talk about outsourcing before.  Some companies did bring in IT companies to replace their IT staff but there are no cost savings there.  Quite the opposite, to be honest, when you replace like for like at a higher daily rate.  But being able to access subscription-based services from a quality data centre with centralised expertise and management systems will give employers the financial and business reasons to reconsider their IT situation.  This isn’t just me talking; it’s every big brain out there.  I attended a session in Barcelona that said IT will have completed a swing that way in 10 years’ time.  If you’re an application developer then you actually need to be engineering your SaaS solution now or it’s already too late!

If you’re an IT Pro and that’s your career choice rather than an accident then my advice is to get really good at something and learn how to use automation to manage a network.

Windows 2008 Access Based Enumeration

Novell admins always had one big complaint about Windows file shares.  It was a legitimate one too.  How come users who didn’t have access to a folder could see it?  Microsoft gave us ABE, or Access Based Enumeration, for Windows Server 2003.  I was looking at a solution today where ABE would be handy.  However, this would be a Windows Server 2008 deployment.  I found someone had already done a nice job of documenting how to use ABE in W2008.  Once you enable it, anyone not in a group with access permissions will not be able to see the folder in a share.
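
For illustration, the idea behind ABE can be sketched in a few lines of Python – the folder names, group names and ACL table below are all invented, and real ABE of course works off NTFS ACLs, not a dictionary:

```python
# Mock ACL table: which groups may read each folder in the share.
ACLS = {
    "Finance": {"finance-users"},
    "HR":      {"hr-users"},
    "Public":  {"finance-users", "hr-users", "everyone"},
}

def visible_folders(user_groups):
    """Return only the folders this user's groups grant access to, ABE-style.
    Folders the user cannot read simply don't appear in the listing."""
    return sorted(f for f, allowed in ACLS.items() if user_groups & allowed)

print(visible_folders({"hr-users"}))   # -> ['HR', 'Public']
```

Without ABE, the listing is the full set of folder names regardless of permissions; with it, the filter above is applied before the user ever sees the directory listing.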

Operations Manager 2007 R2 Beta Now Available To Test

The first public beta release of OpsMgr 2007 R2 is available for testing now.  Features include:

  • Manage UNIX and Linux seamlessly.  I’ve seen this in action.  It is not a bolt-on.  Operationally, it looks very nice.  They decided to do it only in R2 because it needed some changes in how the role user accounts are used.
  • More VMM integration
  • Improved web application monitoring
  • The SLA stuff appears to be integrated rather than being a bolt on as it currently is
  • A faster console
  • Better management pack … management (stuck for a better word there!)
  • Simplified notification  – it was needed because it’s a maze to figure out for the first time
  • Improved and simplified authoring – I really hope so because discovery is a nightmare

Microsoft To Release Free Consumer Anti-Malware

Microsoft currently sells a subscription service product for consumer computer security called Live OneCare.  It takes care of AV, firewall, spyware, etc.  I’ve used the trial and I reckoned it was pretty good for the domestic user.  I didn’t subscribe – I’ve been using AVG free for a while now (Avast beforehand) and I find it pretty good.

According to Bink, MS are going to stop selling OneCare via retail in June 2009 and replace it with a free product.  The aim is to get as many people protected as possible, thus giving Windows consumers the protection from malware that they need.  The decision to phase out OneCare allows MS to focus their efforts on a single consumer product.  Making it free spreads the cover of their protection to the maximum possible install base.

Credit: Bink.

Just Installed Live Mesh

I decided to have a play with Live Mesh tonight.  I’ve wanted a way to synch my Favourites folder between laptop, desktop and my work laptop.  That rules out using folder redirection on my network at home – anyway I’m thinking of flattening the SBS box and reusing the machine for something else.

I installed it from the web site and synched the Favourites from my personal laptop.  I could then sign into Mesh on my work laptop and view the folder contents in the browser.  I then installed the client on that machine.  I opened the Favourites from my personal laptop (on my work machine) and copied in the work machine’s Favourites.  They instantly appeared over on my laptop.  They now stay in synch whenever I change a file.
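
I don’t know how Mesh resolves conflicts internally, but a toy last-writer-wins merge gives the flavour of how a sync service might decide which copy of each file to keep.  Folders are mocked here as dictionaries of name-to-(mtime, contents):

```python
def merge(a, b):
    """Combine two folder snapshots, keeping the newest copy of each file.
    This is a guessed-at model, not Mesh's actual algorithm."""
    merged = dict(a)
    for name, (mtime, data) in b.items():
        if name not in merged or mtime > merged[name][0]:
            merged[name] = (mtime, data)
    return merged

laptop = {"bank.url": (100, "v1"), "news.url": (50, "old")}
work   = {"news.url": (200, "new"), "wiki.url": (10, "w")}
print(sorted(merge(laptop, work)))
```

Each side then replaces its local copy with the merged result, which is why the change I made on one machine appeared instantly on the other.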

It looks like you can share your files via Mesh as well.  You could think of it as a very limited (in function and size) version of Groove.

I also tried out the remote desktop feature.  It appears that it uses Remote Assistance tunnelled over HTTPS.  That bypasses those ISPs who’ve been blocking "work" network protocols such as PPTP, RDP and IPsec.  The performance was quite good.  The only downside was that it required an approval for the connection on the target machine.

VMM 2008 P2V To Hyper-V Of DL360G5

I’ve used a "security server" running DL360’s with WSUS and AV in several jobs now.  They’re great candidates for virtualisation so the security server at work was my first target to convert to a virtual machine, thus freeing up some h/w for profit making.

The P2V process of VMM 2008 is pretty easy.  I found no fault with it.  However, I did have some problems that were non-VMM 2008 related.

The VM would hang on boot up.  I got it into safe mode and disabled the HP services.  They were trying to access hardware that didn’t exist.  Ideally you would uninstall this stuff before P2V but I needed to keep the physical machine online until the virtual was ready.

Once the VM was ready I installed the integration components in VMM 2008.  I fired up the VM and tried to log in … uh oh!  It needed to be reactivated.  Luckily I’d put the machine on a test network with Internet access so that was soon done.  Then I had a service failure pop-up.  The event log showed that was OK: the server was looking for the domain and not finding it … it’s still on the test network while the physical machine is still providing services.

Now the killer.  I got a pop up about WMIPRVSE failing.  That repeated 9 times when I closed it.  I also had dozens of WINMGMT errors in the application log.  To troubleshoot I made a checkpoint and started googling and trying things out.  In the end here’s what it came down to:

  • Uninstalled anything related to HP.
  • Edited the registry and searched for anything to do with HPWBEM.  I deleted the relevant keys/values.  Some needed to be edited instead of deleted.  This took ages!
  • Searched for HP services under CurrentControlSet\Services.  They weren’t removed by the uninstallers.
  • Rebooted.
  • Removed HP folders from Program Files.
  • Uninstalled the OpsMgr agent (I wasn’t taking chances now – because I was still getting the error after reboots).
  • Removed the ATI driver which I’d forgotten about.
  • Reset the WMI repository.
  • After a reboot the WMI errors disappeared.

As I said, the P2V worked perfectly.  Any problems were related to the HP software, e.g. not uninstalling correctly.  There seemed to be loads that needed to be done.  I’d tried lots of combinations in various attempts by restoring the checkpoint.  Looking back on it, I doubt the OpsMgr agent was a factor but I removed it anyway in case it was doing some heavy WMI stuff that was no longer applicable.

CAUTION: Edit the registry at your own risk.  I’m not recommending it.  It’s just what I did to solve my problem.  If you screw up your server then it’s your problem, not mine.
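
Purely as an illustration of the search step (not the actual procedure – that was done by hand with regedit on a live registry), here’s the same hunt for HPWBEM expressed in Python over a mocked-up registry tree.  The tree contents are invented:

```python
def find_matches(key, needle="hpwbem", path="HKLM"):
    """Walk a registry modelled as nested dicts and report every path whose
    key name or value mentions the needle (case-insensitive)."""
    hits = []
    for name, child in key.items():
        here = path + "\\" + name
        if isinstance(child, dict):          # a subkey: check name, recurse
            if needle in name.lower():
                hits.append(here)
            hits.extend(find_matches(child, needle, here))
        elif needle in name.lower() or needle in str(child).lower():
            hits.append(here)                # a value that matched
    return hits

mock = {
    "SYSTEM": {"CurrentControlSet": {"Services": {
        "HPWBEMProvider": {"ImagePath": "hpwbem.exe"},
        "Tcpip": {"Start": 1},
    }}},
    "SOFTWARE": {"HP": {"Agent": "uses HPWBEM interface"}},
}
for hit in find_matches(mock):
    print(hit)
```

A script like this only tells you where to look; deciding what to delete versus edit still took the judgement (and the time) described above.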

Admin Rights On Workgroup Or Un-trusted Domain Hyper-V

John Howard from MS wrote a 5 page article on how to grant remote admin rights using the Hyper-V MMC to Hyper-V servers that were not in your domain, e.g. in an un-trusted domain or in a workgroup.  It was 5 long pages of detailed instructions where anything could go wrong.  It was quite off-putting.

He’s just shared a new tool that will do the job for you.  HVRemote is quite simple to use: you just tell it to add or remove a user’s admin rights.  Well done John!

Hyper-V Architecture

I was just reviewing this stuff this morning on the laptop while on the train.  I checked my RSS feeds and I saw that Kurt Roggen was doing some blogging recently (and doing a nice job too).

Understanding things like VMBus, VSCs and VSPs is recommended when working with Hyper-V.  His post will teach you some of this.

What I’ll add to this is that your VMs (child partitions) have a 1-1 connection to the parent partition.  This secure channel, the VMBus, is at Ring 0 and is protected by Data Execution Prevention (DEP).  This is why enabling DEP in the BIOS is a requirement for installing Hyper-V.

Credit: Kurt Roggen.