Why I Recommend Software Assurance For Hyper-V Licensing

If you’re in a non-subscription licensing program with Microsoft, then I recommend that you purchase Software Assurance with your Hyper-V licensing (that is, the Enterprise or Datacenter licenses that cover your host OS and VM guest OS requirements).

There are 3 reasons:

Hyper-V Upgrades

Every new release of Windows Server brings new features.  We went from Windows Server 2008 with Quick Migration, to Windows Server 2008 R2 with Live Migration, CSV, and Dynamic Memory.  That was nothing compared to what’s coming next year in Windows Server 8 (2012).  And who knows what’ll come with Windows Server 8 (2012) R2!?  You will want those upgrades … because so far, we always have.

Guest OS Availability

Maybe you’re happy with Windows Server 2003/R2 for most of your P2V’d VMs for the next 900 or so days of remaining extended support.  But at some point in the future:

  • You will want to start deploying new VMs with the newest version of Windows Server that hasn’t been released yet
  • You will want to upgrade/replace those legacy OS VMs

You could fall into the trap of thinking you can buy a few copies of Windows Server 8 Standard Edition for those few VMs on your Windows Server 2008 R2 cluster.  Yes, you could – but it wouldn’t be legal.  That’s because you can only Live Migrate or Failover VMs with those one-off licenses between different pieces of hardware once every 90 days.  Those one-off licenses are assigned to hardware, not to VMs.

To be able to deploy a newer version of Windows Server than the one you bought for the cluster, you need to upgrade the licensing for the entire cluster.  You could go and re-spend all of that money all over again.  But you would save a lot of money by buying Software Assurance instead.

Partner Opportunity

For those of you who sell software as a part of system integration/consulting services: you’d be a moron not to try to sell SA.  It gives you a future upgrade project with your customer, and it opens up opportunities to implement the newest features of the next version of Windows, features that could solve some problem the customer is having.

Windows Server 8 Hyper-V Replica, Veeam, Etc With Hosting Companies

If you replicate a VM from your licensed hosts to a hosting company of some sort using Hyper-V Replica (Windows Server 8) or one of the plethora of 3rd party alternatives, then you need to license the installation of Windows that is in each replica VM … even if it is powered off or locked in a replicating state.  Don’t bother with any of the usual “it’s not being used” or “it’s only being replicated” arguments … it needs a license so that’s that.

A benefit of Software Assurance is Cold Back-ups for Disaster Recovery.  This means that if you license your hosts (and thus your guest OSs, if correctly licensed with Enterprise/Datacenter editions) with SA, then the cold backup copy is covered by that licensing.  The alternative is to skip SA for the host/guests and have to buy full licenses for the offline replicas.  This benefit allows you to power up the replicas during a catastrophic event that takes your primary site offline.  You don’t have to do anything to activate the benefit, or communicate with Microsoft.

Conclusion

Long-time readers know I’ve been critical of SA in the past.  There are times I recommend it, e.g. a customer wants Windows desktops and is considering 3rd party encryption technology – Windows 7 Enterprise!  I believe that it makes sense to purchase SA with Windows Server when licensing for Hyper-V because of the real benefits of host/guest upgrade and offline replica rights.

What’s The Value Of Training Courses & What’s The Alternative?

Normal practice for a company to up-skill an employee is to send them off to a 5 day training course, usually costing €2,000 plus lost revenue/productivity, travel, and accommodation.  I’ve been on those courses, and I’ve sent people on them.  What are these courses worth? 

I’ve seen behind the curtain of the typical official training course.  Let’s just say that I was not impressed.  The authors were not experts on the subject matter.  Much of it was copied and pasted from the software publisher’s support site.  There was no education on why things are the way they are, best practices, or supported scenarios.  In fact, 40% of the content on that course was irrelevant material forced through by marketing.  Official approved training rooms usually have 1 PC per person, and attendees are expected to learn failover clustering and management on that!  It seems to me that generic training fails to train.

And folks, things are not getting better.  This sort of content is not developed by who you think it might be.  It’s usually outsourced on a tender basis (often the cheapest tender wins, which isn’t necessarily a good indicator of quality), the winner then outsources to SMEs (subject matter “experts”), and they often outsource again to friends or other online contacts because of deadlines.  What can you really learn from that?

Let’s look at the experience from two perspectives:

The End Customer

A company has hired a consulting company to deploy some technology, e.g. Hyper-V, System Center, SharePoint, Exchange, etc that is new to the company.  At some point, the consultants will leave and the internal IT department will have to take over day to day engineering and management.  The traditional solution is that 1 or more admins are sent away for a week and expected to come back knowing all there is to know. 

OK – consider a Windows Server 8 Hyper-V course.  In the room they’ll get 1 PC with VMs to work on.  They might get partnered up with someone they don’t know, who could be a rookie/moron.  They have enough hardware to build a cluster, but with no storage – unless the class host sacrifices one of the 10 PCs for iSCSI.  It’s likely the attendees will learn zip about Storage Pools, and they’ll get a pretty crap education on System Center too.  I reckon you’ll need 4 physical servers plus SAN access to learn Windows Server 8 Hyper-V.

If it’s like the typical course, they’ll learn the absolute basics, and return to the office the following week not really grasping anything that was implemented by the consultants.

The Consultant

I have to break the 1st rule of the magician’s circle here.  Very often, when you hire an “expert” consultant, they’ve just been on the course the week before.  Sometimes, and it’s happened to me as a consultant, they don’t even know the product (I was once forced by sales to pretend I was an expert on a product that I didn’t know the first thing about, while the customers had been on 3 weeks of training – that was an experience full of ecumenical matters).  A lot of customers actually pay for their consultant to learn on the job: the consultant is Googling on the customer site, or hammering the forums for help, while the customer pays their day rate.

What normally happens is that the consulting business decides to sell an expertise but not develop it until they get traction – there’s “no point” in investing €2,000+ per consultant, plus lost consulting days’ revenue, until they know that they can sell it.  Then a customer bites and someone is hurriedly packed off on a course.  The following Monday they are on the customer site to do a complex install … but all they’ve learned is some terminology and how to run setup.exe.  For example, I hope you now understand that there’s more to a virtualisation project than installing ESXi or enabling the Hyper-V role!

So What’s To Be Done?

Let me talk about my last 3 training courses:

  • SMS 2003: This one was a MOC (MSFT official course) that I did years ago, using some tokens from a MSFT EA.  The content was shite.  I learned more about this product from one of my team and from reading by myself than I did from the course.  The MCT running the course didn’t know the product and we usually went home just after lunch because we ran out of content – she didn’t know how to expand on the brief written subject matter.  It was an appalling waste of time for me, and a waste of money for the others who attended.
  • VMware ESX 3.x: This was a course (official VMware I believe) that was run by the Irish VMware distributor that I did in 2007.  It was excellent – it taught me the fundamentals, the trainer knew the content and could answer my questions.  And the lab – we each had access to servers and storage kept in a back room, giving us close to real world environments.  The following week I was prepared to do a cluster deployment.  That was reviewed afterwards by a consulting company and given a pass.
  • DPM 2010: MSFT Ireland has heard the feedback on training from local partners and started running bespoke training last year.  The first round of courses were developed and run by John McCabe.  It focused on real world preparation of consultants for deploying DPM in customer sites, with plenty of practical work.  And this was made possible by each person having access to several physical servers plus iSCSI storage kept in a back room.

I know that MOC courses haven’t changed much in the 15 years that I’ve been aware of them.  I’ve steered clear, often telling my superiors that I’d much prefer to go to a technical conference where there is level 400 content presented by experts, and hands-on labs to learn on.

The VMware experience is rare in the overall market – VMware have the advantage of a small product portfolio and a tight/small channel, and can control the quality and delivery of content by delivering it at the distributor level.  The latter DPM 2010 example was the ideal one for MSFT product training, IMO – but here in Ireland that experience has a limited number of seats that are only open to the first few Microsoft partners to register.

In the USA, there is a market for bespoke training for the public, where the likes of Mark Minasi, Rhonda Layfield, Jeremy Moskowitz, and a host of others run scheduled classes at the local airport business hotel, full of content to teach you everything from the fundamentals to the advanced, often in condensed modules that minimise your time away from the office.  I know that this doesn’t really happen in Ireland, and other than a couple of the Scandinavian deployment gurus, I don’t know if it happens in Europe.

My gut tells me that there is a market for bespoke training that isn’t MOC, that gives the attendee a real world education that isn’t driven by corporate marketing, and provides them with the equipment that is necessary for a real education.  2012 is going to be a crazy year full of Windows Server, Hyper-V, deployment, and System Center, with customers (adoption) and consultants (competencies) seeking to be re-educated.  What do you think?

How To Write An IT Book

First you need to wake up early in the morning.  You will need some fuel for the day and nothing beats a fried breakfast for that: sausages, bacon, hash brown, egg, baked beans and toast.  That’ll keep you going until dinner time with no need to stop for lunch.  Wash all that down with several cups of coffee and you’re ready for the next stage: munchies. 

You will need a constant supply of sugar to keep the brain buzzing.  One of those health freaks with plans to cash in on their pension will urge you to drink juices and eat fruit.  Pah!  Wusses!  What you need is chocolate chip cookies.  Lots of them.  Maybe some of those sugar coated jelly sweets too.  And don’t forget to buy some nice tasting coffee to make when you’re back at the house.

But even before you get there, you’ll need some more coffee.  Stop at your favourite shop and get something like a 20oz cup full of java.  Mmm, now you’re suckin’ diesel.  Fueled and ready to rock and roll, you return to the house.

It’s not quite time yet; you’re probably suffering from a sugar buzz right now and will be prone to writing shite.  It’s about 11:30 by now, and you need to catch up on some TV.  One thing leads to another and it’s 17:00.  No point in starting now because it’s nearly dinner time.  You could make dinner, but that would be a waste of your valuable time.  Pizza is the food of champions – just ask a pro athlete … they won’t admit it because they want to keep it secret, and that makes it true!  You can’t have pizza without beer because that’ll get you sent to hell – trust me.  2 beers and you are now in that mellow sweet spot between the sugar rush and being drunk.  You are now ready to write.

But the time is now 22:30.  I guess you’ll make up for it the following day.  And that, folks, is how to write an IT book.

By the way, you’ll also need to do some research, come up with an idea, propose the business concept to publishers, write the book outline, get contracts signed, do labs, write the book, edit it, and promote it.  But that stuff isn’t as important as all the above.

Activating Lots of Windows Virtual Machines in a Cloud with KMS

You know, few of us ever think about the practical sides of Windows/Office licensing when it comes to deploying lots of machines.  It’s one thing to identify, buy, and deploy the licenses – but we never seem to think about activating the damned things until it’s a bit late (been there, took photos, and bought the t-shirt).

The challenge is that when we use automated techniques to deploy software (imaging for Windows, software distribution for Office) then we need a way to activate the software without us admins/engineers/consultants being actively involved.  End users won’t click on the activate prompts for MAK product key activation … and that leads us to help desk calls and outages.

If you have volume licensing then you are entitled to use Key Management Service (KMS) licensing.  KMS is kind of similar to RDS or TermSvcs licensing – you set up a local KMS on a machine that you activate, and then your local product installations contact it to activate.  This is all done using a KMS product key instead of a MAK product key.

This is ideal in the cloud.  Now you can allow end users to deploy Windows servers and activation will be handled automatically, or you can enable a VDI broker/architecture to deploy VMs automatically and Windows/Office will be activated automatically from a local service.
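To make that concrete, the setup is mostly a handful of slmgr.vbs commands.  This is a minimal sketch only; the KMS host name, port, and product key below are placeholders for illustration:

```shell
REM On the KMS host: install your KMS host key (placeholder shown) and
REM activate the KMS host itself against Microsoft, once.
cscript slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
cscript slmgr.vbs /ato

REM Clients normally find the KMS host via a DNS SRV record (_vlmcs._tcp).
REM A client that cannot auto-discover can be pointed at it manually:
cscript slmgr.vbs /skms kms01.example.local:1688
cscript slmgr.vbs /ato

REM Back on the KMS host: check the current client count against the
REM activation thresholds.
cscript slmgr.vbs /dlv
```

If the SRV record is published in your DNS, domain-joined machines won’t even need the manual /skms step.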

Note that KMS requires a minimum number of product installations before it will activate anything:

  • Windows Server 2008 and Windows Server 2008 R2: you must have at least five (5) computers to activate.
  • Windows Vista and Windows 7: you must have at least twenty-five (25) computers to activate.  These thresholds can be made up of a mix of server and client machines.
  • Office 2010, Project 2010, and Visio 2010: you must have at least five (5) computers running any of those products to activate.
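Those thresholds are easy to trip over in a cloud where VM counts scale up and down, so here is a small sketch (my own illustration, not a Microsoft tool) of the counting logic:

```python
# Minimum client counts a KMS host must see before it will activate
# each product family, per the thresholds above (illustrative sketch).
KMS_THRESHOLDS = {
    "windows_server": 5,   # Windows Server 2008 / 2008 R2
    "windows_client": 25,  # Windows Vista / Windows 7
    "office": 5,           # Office 2010, Project 2010, Visio 2010
}

def kms_will_activate(server_count: int, client_count: int,
                      office_count: int) -> dict:
    """Return which product families have met their KMS threshold.

    Server and client machines are pooled for the Windows thresholds,
    so a mix of both counts toward either number.
    """
    windows_total = server_count + client_count
    return {
        "windows_server": windows_total >= KMS_THRESHOLDS["windows_server"],
        "windows_client": windows_total >= KMS_THRESHOLDS["windows_client"],
        "office": office_count >= KMS_THRESHOLDS["office"],
    }

# Example: 3 servers + 4 clients = 7 pooled machines, which clears the
# server threshold (5) but not the client threshold (25); 6 Office
# installs clears the Office threshold (5).
print(kms_will_activate(3, 4, 6))
```

The point of the sketch: a small cloud can easily sit below a threshold (especially the 25 for clients), and then nothing activates at all until the count climbs.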

Rather than me recreating the wheel, here are some useful links:

The half day of effort that you’ll put into this is a worthwhile investment.  Once you’re set up, the Windows and Office activations of your machines (virtual or physical) will happen automatically, taking care of that last step of the deployment that we never think of until the helpdesk calls start coming in.

Started Reading a Hacking Insider’s Book Called Kingpin

I just started reading this book during lunch today – when possible, I like to get out of the office for an hour to do something that is not at the desk.

There have been a lot of movies, TV shows, and books about hacking.  I imagine that it isn’t a world full of bikini-clad babes clicking on a mysterious Pi symbol on The Net, or people with multi-coloured pencils in their hair typing out >Go Hack Now with instantaneous results.  The description of this book, Kingpin, got me interested.  It’s a story with the insider’s perspective:

In a previous life, Poulsen served five years in prison for hacking. So the Wired senior editor and "Threat Level" blogger knows intimately the terrain he explores in this page-turning tale of the criminal exploits of a hacker of breathtaking ambition, Max Butler, who stole access to 1.8 million credit card accounts. Poulsen understands both the hows of hacking, which he explains clearly, as well as the whys, which include, but also can transcend, mere profit. Accordingly, his understanding of the hacking culture, and his extensive interviews with Butler, translates into a fascinating depiction of a cybercriminal underworld frightening in its complexity and its potential for harm, and a society shockingly vulnerable to cybercrime. The personalities, feuds, double dealing, and scams of the hackers are just one half of this lively story. The other half, told with equal verve, is law enforcement’s efforts to find and convict Butler and his accomplices. (Butler is now serving a 13-year sentence and owes $27.5 million in restitution.) Poulsen renders the hacker world with such virtual reality that readers will have difficulty logging off until the very end.

But the question remains – does the president get saved in 24 hours?  I’ll post a review when I’ve finished reading it.


Just Because You Can Do Something, It Doesn’t Mean You Should

I get it; money is tight and people need to be creative.  But I also know that you shouldn’t do something just because you can.

Take backup of Hyper-V for example.  Several times, I’ve been challenged on “support statements” for Hyper-V.  People want to, and are, installing backup software (the management product, not just the agent) on the parent partition of Hyper-V hosts.

Microsoft are quite clear on this: it is not supported.  The only things that are supported on the parent partition are management agents, such as anti-malware, monitoring, and backup agents.  I don’t care what the backup software vendor says.  If you have a problem with that host when it breaks, you’d better hope that Company X knows how to fix Hyper-V, because Microsoft support will tell you that you did something that wasn’t supported.

Like I said earlier – I’ve been challenged on this during presentations.  OK, I’m quick on my feet when I’m presenting.  I gave the people in question a simple analogy.  I can hold a loaded gun to my head and pull the trigger.  There is absolutely nothing in the architecture of my rib cage, shoulder, arm, hand, neck, head, the gun, or the bullet that prevents that.  However, it turns out that the manufacturer doesn’t support that, and there’s a good risk that my brain will fail to function (although some might claim that happened quite a while ago).  Just because you can do something, that isn’t a reason that you should.

Creative engineering is good.  I’ll be among the first to applaud a cool design.  But doing stuff to save €100 here and there, while not understanding the tech, while deliberately contravening the manufacturers support statement, and putting your customer (internal or external) at risk is just plain dumb.  In fact, I’ll have to be stronger about that; knowingly contravening manufacturer support statements is negligent.


The Importance of A Virtualisation Assessment …

… and I bet that if you don’t do one, you’ll end up on the TechNet Forums or contacting someone like me for help.  Also known as the blog post where I laugh openly at those who assume things about virtualisation.

Last week, I did a tour of 4 cities in Ireland talking to Microsoft partners about how to improve their deployments of Hyper-V.  One subject kept coming up, over and over: the assessment … or to put it more accurately, the fact that an assessment is rarely done in a virtualisation project.

There is a reason why I dedicated an entire chapter of Mastering Hyper-V Deployment to the subject of the assessment.  I can guarantee that it wasn’t just to fill up 20-40 pages.

The assessment accomplishes a critical discovery & measurement step at the start of a virtualisation project (Hyper-V, XenServer, or vSphere):

  1. Discovery of Servers: find out what servers are on the network.  I have been on even mid-sized client sites where servers had been forgotten about.  In fact, I’ve been on one site (not recently admittedly) where they had some sort of appliance on the network that the client’s admins were afraid to remove  or mess with it because anyone who knew what it did was long since retired.  Quite simply, you need to find out what machines are out there and what applications are running on them.
  2. Application Virtualisation Support Statements: I bet hardly anyone even considers this.  I bet the most common thought process is – “sure, it’s only Windows or Linux, and it’s got to be the same in a VM”.  If you assume something then you should assume that you are wrong, and I don’t care how experienced or expert you consider yourself or your employees to be.  If you use the “we know the requirements of your/our environment” line then you are assuming, and you are wrong.  Server products and those who publish them have support statements.  Domain controllers, SQL Server, SharePoint, Exchange Server, Oracle, Lotus Notes, and so on, all have support statements for virtualisation.  They impact whether a product can be virtualised, what virtualisation software it can run on (see Oracle), what features of the virtualisation product it can use, how you should build a virtual machine running that application, and so on.  The Hyper-V product group might support something in production, but does the application vendor also support it?  You’ll only have yourself to blame if you assume.
  3. Measurement: “Measure twice and cut once”.  That’s the best lesson I learned in woodwork class in school.  There are things to understand here.  Some people assume (there’s that word again) that there is a “standard” virtualisation build.  Pfft!  I’m tired of answering the “what’s a good spec for a small/mid business?” question.  You need what you need.  If your apps’ cumulative processor requirement is 8 quad core CPUs then that’s what’s required.  There is no magic compression.  The savings you get with virtualisation come from running many app workloads on fewer CPUs and server chassis.  If an app requires 50% of a quad core CPU in rack server form, then it needs that capacity in VM form.  The only way to find out what is required is to take the list of servers from step 1 above and measure resource utilisation.  Only with this information can you truly correctly size and design any virtualisation environment.
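To put numbers on that measurement step, here is a back-of-the-envelope sketch (my own illustration; the server names, utilisation figures, and host specs are all made up) of turning measured per-server requirements into a host count:

```python
import math

# Each measured server: (name, cpu cores required, memory GB required).
# A server measured at 50% of a quad core needs 2 cores' worth as a VM too.
measured = [
    ("web01", 2.0, 8),
    ("sql01", 4.0, 32),
    ("app01", 1.5, 12),
    ("file01", 0.5, 4),
]

HOST_CORES = 16      # usable cores per host (assumption)
HOST_MEMORY_GB = 96  # usable RAM per host (assumption)
N_PLUS_ONE = 1       # spare hosts for failover capacity

total_cores = sum(cpu for _, cpu, _ in measured)
total_mem = sum(mem for _, _, mem in measured)

# Host count is driven by whichever resource runs out first, plus redundancy.
hosts = max(math.ceil(total_cores / HOST_CORES),
            math.ceil(total_mem / HOST_MEMORY_GB)) + N_PLUS_ONE

print(f"{total_cores} cores and {total_mem} GB RAM -> {hosts} hosts")
```

It’s deliberately simple (no IOPS, no peak-versus-average, no memory overcommit), but even this crude sum cannot be done without the measured data, which is the whole point of the assessment.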

The assessment feeds into so much that it’s ridiculous.  Only with this data can you make design decisions based on size and performance.  How many hosts do you need?  How much CPU do you need?  How much memory do you need?  If you’re a systems integrator, sure, you can oversell the customer by a few servers or terabytes of disk – but remember that you’re making a paltry 5-15% margin on that, and you’ve drained the customer’s ability to pay for more profitable services.  And that decision to deploy passthrough disks or 1 VM/LUN for performance reasons – was it justified?  What were the IOPS requirements of the original installation?  Heck, do you know the difference between an in-server array/LUN and an in-SAN diskgroup/vDisk?

By the way, the skewed responses to the Great Big Hyper-V Survey (skewed because the respondents were more informed than the normal consumer) showed that less than 50% actually did an assessment.  Pretty silly, considering that most deployments are not huge, the Microsoft Assessment and Planning Toolkit is free, and it would require only a few hours to get some meaningful data.

Something tells me I’ve wasted a lot of valuable electrons. I figure that the “experts” out there “who know all this already” couldn’t give a stuff about doing their jobs correctly and giving their customers a good production environment. I’ve gotten to the point with this topic where politeness has to stop and harsh words have to be spoken. And if I hear you say that you assumed something and that was justification for not doing an assessment then you only have yourself to blame.