User State Virtualization

What the hell is USV?  It’s simple: it’s using technologies to unbind user data from the PC.  We’re talking about features like roaming profiles, redirected folders, and offline files.

Believe it or not, most companies I encounter have not done this.  For them, a PC repair is a time-consuming process.  A PC upgrade is a potentially nasty piece of work, using USMT to capture the user state and restore it.

That’s why MS has released a Planning and Designing Guide for Windows User State Virtualization (USV).  Reading this, you can enjoy the tech that the rest of us have been using since the mid-1990s.  Some of us started using redirected folders and offline files back with W2003 and XP.  Admittedly, I disabled Offline Files when managing XP because it was a royal PITA (not a good thing).  Vista/Windows 7 appear to have solved that.

Getting the user state off of the PC is invaluable:

  • Windows upgrades are simple and quick.
  • A PC repair that might take more than 10 minutes can be replaced by a PC rebuild.
  • User data is centralized and easier to back up.
  • Those worried about regulators can archive the data.

Notable Changes in SP1 Beta for Win7 and W2008 R2

There are a number of notable changes in the Service Pack 1 beta for Windows 7 and Windows Server 2008 R2.  You might not have heard, but they go beyond Hyper-V.  There is a document you can read with all the details.  Here are the highlights for the server OS:

  • Hyper-V Dynamic Memory
  • RemoteFX
  • A new IP address enforcement feature that is not in the beta release.
  • Enhancements to scalability and high availability when using DirectAccess
  • Support for Managed Service Accounts (MSAs) in perimeter networks
  • Support for increased volume of authentication traffic on domain controllers connected to high-latency networks
  • Enhancements to Failover Clustering with Storage

Here are the improvements for the desktop OS:

  • Additional support for communication with third-party federation services
  • Improved HDMI audio device performance
  • Corrected behaviour when printing mixed-orientation XPS documents

Both desktop and server:

  • Change to behaviour of “Restore previous folders at logon” functionality
  • Enhanced support for additional identities in RRAS and IPsec
  • Support for Advanced Vector Extensions (AVX)

Burton Group Versus Hyper-V

This morning I read an article on Network World that I thought I’d write about.  It reported a claim by the Burton Group (yes; them again) that:

  • You should virtualise Exchange
  • You should not use Hyper-V to do it, because it does not have ordered virtual machine start-ups.

Let’s take these two, one at a time.

Virtualise Exchange

You can imagine that I’m all for virtualising as much as is reasonable.  A recommendation to virtualise Exchange always needs to come with a disclaimer.  You know this already if you’re a regular reader: Microsoft does not support highly available Exchange databases on any highly available virtualisation platform.  That means no Exchange 2007 CCR on VMware HA/DRS/VMotion.  No Exchange 2010 DAGs on XenServer clusters.  It doesn’t matter what virtualisation product you use; you cannot mix Exchange clustering in virtual machines with virtualisation clustering.  I’ve already flogged this one so I’ll quit now.

Ordered Virtual Machine Start-up

This is the Burton Group’s answer to Charlton Heston’s corpse gripping his six-shooter (oh yes; I did go there!).  This is a tiny thing, and the difference between what they prefer (in VMware) and what Hyper-V has is tiny.  The Burton Group’s preferred ordering mechanism for VM’s would be:

  1. VM1 starts up
  2. Wait for VM1, then start VM2 and VM3
  3. Wait for …. etc.

Microsoft went a different way.  You can specify (in seconds) how long a virtual machine should wait before starting up after a host powers up.
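Purely to illustrate the difference, here’s a toy sketch of the two approaches.  The VM names and dependency/delay numbers are made up for illustration; this isn’t any real Hyper-V or VMware API, just the shape of the two ideas:

```python
def delay_based_start_order(delays):
    """Hyper-V style: each VM gets a fixed delay (seconds) after host boot.
    Returns VM names sorted by when they would begin starting."""
    return [vm for vm, _ in sorted(delays.items(), key=lambda kv: kv[1])]


def dependency_based_start_order(deps):
    """Ordered start-up: a VM starts only after the VMs it depends on.
    Simple depth-first topological sort; deps maps VM -> list of VMs it waits for."""
    order, seen = [], set()

    def visit(vm):
        if vm in seen:
            return
        seen.add(vm)
        for dep in deps.get(vm, []):
            visit(dep)
        order.append(vm)

    for vm in deps:
        visit(vm)
    return order


# With sensible delays, both approaches produce the same order:
delays = {"DC1": 0, "SQL1": 120, "APP1": 240}
deps = {"DC1": [], "SQL1": ["DC1"], "APP1": ["SQL1"]}
print(delay_based_start_order(delays))       # ['DC1', 'SQL1', 'APP1']
print(dependency_based_start_order(deps))    # ['DC1', 'SQL1', 'APP1']
```

In other words, picking good delay values gets you the same effective start order; you just trade explicit dependencies for estimated boot times.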

Here’s my thinking: The Burton Group would like you to avoid the virtualisation solution that can really change how IT works and go with something else because of one tiny feature.  Huh!  I love how great IT experts take care of their customers and readers ;-)  Hyper-V is not the complete solution.  It’s the facilitator for Dynamic IT and for Optimised IT.  System Center is the agent that makes use of Hyper-V.  You can change how you deploy servers and applications.  You can change how you monitor them.  You can change how you back up your business.  You can change how you present user applications to the business.  You can do all of this from an integrated management solution that manages Hyper-V and your physical infrastructure.  So …. get all that, versus pay between 2 and 5 times more for a virtualisation solution with the ability to start up VM’s in a specific order.  I know which enterprise-ready solution I’d go for.

Linux Integration Components 2.1 RTM

You wanted 4 virtual CPU’s in a Hyper-V Linux virtual machine?  You wanted clock sync and host shutdown sync?  Now you got it!

Ben Armstrong has just blogged that the version 2.1 integration components (or services if you are a VMM head) are released.  Mike Sterling is the man in the know so you can read what he has blogged to get all the news.  BTW, I included this version of the IC’s in Mastering Hyper-V Deployment. *end shameless plug*

This release is a huge step forward in gaining acceptance for Hyper-V from the Linux admins because SLES and RHEL are really equal citizens on Hyper-V now.  Now we just need VMM to catch up 😉

Volume Activation

Thanks to being in the hosting business for the past 3 years and doing short term contracting before that, I’ve never had to deal with the nightmare that is Microsoft volume activation.  My new role requires I understand it, and it crops up plenty in an exam I’m preparing for.  KMS, MAK, and MAK with VAMT are three activation methods that spring to mind.  KMS is what you’ll try to use in a large environment with more than 25 clients.  KMS clients must be on the network to reactivate every 180 days.  MAK with VAMT is recommended for up to 50 clients…. there’s a grey crossover area there!  MAK is recommended for smaller environments.
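Those rules of thumb boil down to something like the sketch below.  The 25/50-client thresholds and the 180-day reactivation window are the numbers above; the overlap between 25 and 50 is the grey crossover area, so the helper returns every method that fits rather than pretending there’s one right answer:

```python
from datetime import date, timedelta

KMS_MIN_CLIENTS = 25         # KMS suits environments with more than 25 clients
MAK_VAMT_MAX_CLIENTS = 50    # MAK with VAMT recommended for up to ~50 clients
KMS_REACTIVATION_DAYS = 180  # KMS clients must reach a KMS host within this window


def activation_candidates(client_count):
    """Return every activation method that fits a given client count."""
    candidates = []
    if client_count > KMS_MIN_CLIENTS:
        candidates.append("KMS")
    if client_count <= MAK_VAMT_MAX_CLIENTS:
        candidates.append("MAK with VAMT")
    if client_count <= KMS_MIN_CLIENTS:
        candidates.append("MAK")
    return candidates


def kms_reactivation_deadline(last_activated):
    """Latest date a KMS client can wait before it must reactivate."""
    return last_activated + timedelta(days=KMS_REACTIVATION_DAYS)


print(activation_candidates(100))  # ['KMS']
print(activation_candidates(30))   # ['KMS', 'MAK with VAMT'] — the grey area
print(kms_reactivation_deadline(date(2010, 7, 1)))
```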

You can’t install KMS on W2008, but you can with a patch, but you have to activate Windows 7 with a W2008 R2 key, and you can’t activate Office 2010 with it, but you can with a W2003/W2008R2/Windows 7 KMS … you see where I’m going with all this?

Maybe volume activation needs a rethink?  Maybe it should be engineered to be as simple as Terminal Services (RDS) Licensing is.

You can read the Volume Activation Deployment Guide for Windows 7 to get some help.  And remember that Office 2010 also requires activation.

Deploying Microsoft RemoteFX for Personal Virtual Desktops Step-by-Step Guide

“This step-by-step guide walks you through the process of setting up a working personal virtual desktop that uses RemoteFX in a test environment. Upon completion of this step-by-step guide, you will have a personal virtual desktop with RemoteFX assigned to a user account that can connect by using RD Web Access. You can then test and verify this functionality by connecting to the personal virtual desktop from RD Web Access as a standard user.”

Back on the Certification Trail

Microsoft exams are a funny beast.  I’ve worked in the hosting business for the last 3 years.  The only reason that hosting companies even bother with the MS partnership program is because it is a requirement to be at least a registered partner to get SPLA.  After that, it’s pretty pointless because MS is a competitor (Azure, etc) rather than being a partner to hosting companies.  So I didn’t really do anything to maintain my certification status other than complete my 2000-to-2003 MCSE upgrade a few years ago.

Now I’m working for a consulting company that is a partner and where the partnership is very important (naturally enough).  I’ve got to get certain exams and I’ve got to upgrade from 2003 MCSE.  I’ve also got to replace my dust-collecting elective exams from the 2000 generation.  I was looking through syllabus material yesterday and decided I’d sit the OpsMgr 2007 exam this morning.

I found the exam to be pretty easy; 2.5 years of using OpsMgr every day, including design, deployment, and troubleshooting, prepared me perfectly.  Most of the exam was based on management pack management and customization, notifications, and a little backup/recovery.  Oddly enough, there was more material in the exam on certificate-enabled agents than you’ll find in any whitepaper or TechNet page!  I’ve previously blogged about this subject (around 2 years ago).

Now, most of us know what MS exam questions are like.  Don’t answer with real world solutions; instead you should answer with the marketing solutions.  And sometimes, there is a question that makes absolutely no sense at all.

For example, I had one question that gave me a scenario where an agent did not appear in a view.  How would I troubleshoot it?  The answer was … to review the agent in the view where it wasn’t appearing in the first place!!!!  I know I got the right answer because my exam score was 1000/1000.  I left a comment on the question to explain the silliness of the scenario.  I knew the answer was not really a real-world answer.  I was only sure of the “answer” for this question because the other 3 options made no sense or weren’t options.  A struggling person familiar with agent deployment would have assumed that one of the others was the answer because the real “answer” made no sense.  That’s quite unfair.

I struggled with this stuff when I originally started doing MS certification.  I’ve no problem admitting that I miserably failed my first ever exam: Windows NT 4.0 Workstation.  I answered questions based on what I knew, what I learned, and what was documented in the real world.  That experience drove me away from exams for quite a while.  After one or two 2000 exams, I learned what to look for.  There’s usually a key word or phrase in a question.  My problem is that I get wound up in an exam and speed read, missing that key word or phrase.  I learned to control this, catch the phrase, and let it guide me to the answer.  But then there is the marketing question/answer.  Those are a struggle because sometimes one of the alternative and wrong answers is a stepping stone to a real solution.  But you have to ignore that.  Those are the questions I tick for review before ending the exam.  I’ve had times when I’ve gone over those 4 or 5 times, changing my mind over and over.

Anyway, I’m considering ConfigMgr for my next exam as an elective replacement.  I also have to do the R2 virtualisation exam.  I haven’t really looked at VDI – can anyone explain to me why there is a full module on VDI in the R2 virtualisation exam when there is a dedicated VDI exam?  And I’ll have to find time to replace my AD design elective and do the 2 MCSE 2003 -> 2008 upgrade exams.  Ugh!

Attack on Windows via Siemens Software

I just read about this attack.  It uses Siemens software to install a rootkit.  The vulnerability starts with a static password that Siemens inserted.  (I once worked in a bank where, I am told, MSBlaster got in via a Siemens phone engineer using the modem in their systems’ servers to dial out to the net.)  The rootkit then uses a stolen private certificate key to pretend to be a Realtek driver so that it can install on 64-bit OS’s (Vista and later).  MS and Realtek have figured out a solution (it requires Windows Update to be working).  Interesting stuff.


Thoughts on Hyper-V VDI Hosts

Lots of out-loud thinking here ….

If you put a gun to my head right now and asked me to pick a hardware virtualization solution for VDI then I honestly wouldn’t pick Hyper-V.  I probably would go with VMware.  Don’t get me wrong; I still prefer Hyper-V/System Center for server virtual machines.  So why VMware for VDI?

  • I can manage it using Virtual Machine Manager.
  • It does have advanced memory management features.

The latter is important because I feel that:

  • Memory is a big expense for host servers and there’s a big difference between PC memory cost and data centre memory cost.
  • Memory is usually the bottleneck on low end virtualisation.

Windows Server 2008 R2 Service Pack 1 will change my mind when it RTM’s, thanks to Dynamic Memory.  What will my decision-making process be then?  Because we do have options.  You can always push out VMware (free ESXi) hosts now and switch to Hyper-V then if you have to.

Will I want to make the VDI virtual machines highly available?

Some organizations will want to keep their desktop environment up and running, despite any scheduled or emergency maintenance.  This will obviously cost more money because it requires some form of shared storage.  Thin provisioning and deduplication will help reduce the costs here.  But maybe a software solution like that from DataCore is an option?

Clustering will also be able to balance workloads thanks to OpsMgr and VMM.

Standalone hosts will use cheaper internal disk and won’t require redundant hosts.

Will I have a dedicated VDI Cluster?

My thinking is that VDI should be isolated from server virtualisation.  This will increase hardware costs slightly.  But maybe I can reduce this by using more economical hardware.  Let’s face it, VDI virtual machines won’t have the same requirements as SQL VM’s.

What sort of disk will my VDI machines be placed on?

OK, let me start an argument here.  Let’s start with RAID:  I’m going RAID5.  My VDI machines will experience next to no change.  Data storage will be on file servers using file shares and redirected folders.  RAID5 is probably 40% cheaper than RAID10.

However, if I am dynamically deploying new VM’s very frequently (for business reasons) then RAID10 is probably required.  It’ll probably make new VM deployment up to 75% faster.
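To put rough numbers on the RAID5 versus RAID10 argument, here’s a quick back-of-the-envelope sketch.  The disk size and target capacity are made-up illustration numbers, not from any real design, and it only counts disks, ignoring controller and enclosure costs:

```python
import math


def disks_needed(usable_tb, disk_tb, raid):
    """Disks required to hit a target usable capacity under RAID5 or RAID10."""
    data_disks = math.ceil(usable_tb / disk_tb)
    if raid == "RAID5":
        return data_disks + 1   # one disk's worth of capacity goes to parity
    if raid == "RAID10":
        return data_disks * 2   # every disk is mirrored, so raw capacity doubles
    raise ValueError("unknown RAID level: " + raid)


# 10 TB usable from 1 TB disks:
r5 = disks_needed(10, 1, "RAID5")    # 11 disks
r10 = disks_needed(10, 1, "RAID10")  # 20 disks
print(f"RAID5 needs {100 * (1 - r5 / r10):.0f}% fewer disks than RAID10")
```

With these illustration numbers the saving is about 45% on disk count, which is in the same ballpark as the 40% figure above; the exact saving depends on array size, since the single parity disk matters less as arrays grow.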

What type of disk?  I think SATA will do the trick.  It’s big and cheap.  I’m not so sure that I really would need 15K disk speeds.  Remember, the data is being stored on a file server.  I’m willing to change my mind on this one, though.

The host operating system & edition?

OK: if the Hyper-V host servers are part of the server virtual machine cluster then I go with Windows Server 2008 R2 Datacenter Edition, purely because I have to (for server VM Live Migration).

However, I prefer having a dedicated VDI cluster.  Here’s the tricky bit.  I don’t like Server Core (no GUI) because it’s a nightmare for hardware management and troubleshooting.  If I had to push a clustered host out now for VDI then I would use Windows Server 2008 R2 Enterprise Edition.  That will give me a GUI, Failover Clustering, and Live Migration.

If I had time, then I would prepare an environment where I could deploy Hyper-V Server 2008 R2 from something like WDS or MDT.  That would allow me to treat a clustered host as a commodity.  If the OS breaks, then 5 minutes of troubleshooting, followed by a rebuild with no questions asked (use VMM maintenance mode to flush VM’s off if necessary).

Standalone hosts are trickier.  You cannot turn them into a commodity because of all the VM’s on them.  There’s a big time investment there.  They lose points for this.  This might force me into troubleshooting an OS (parent partition) issue if it happens (to be honest, I cannot think of one that I’ve had in 2 years of running Hyper-V).  That means a GUI.  If my host has 32GB or less of RAM then I choose W2008 R2 Standard Edition.  Otherwise I go with W2008 R2 Enterprise Edition.
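The edition logic above boils down to something like this.  It’s a sketch of my own decision tree, nothing official; the 32 GB cut-off is the W2008 R2 Standard Edition memory ceiling mentioned above:

```python
def host_edition(ram_gb, clustered, shared_with_server_vms=False):
    """Pick a W2008 R2 edition for a Hyper-V VDI host, per the logic above."""
    if clustered:
        # A cluster shared with server VMs gets Datacenter; a dedicated
        # VDI cluster gets Enterprise (Failover Clustering + Live Migration)
        return "Datacenter" if shared_with_server_vms else "Enterprise"
    # Standalone host: Standard Edition caps out at 32 GB of RAM
    return "Standard" if ram_gb <= 32 else "Enterprise"


print(host_edition(32, clustered=False))                              # Standard
print(host_edition(64, clustered=False))                              # Enterprise
print(host_edition(128, clustered=True, shared_with_server_vms=True)) # Datacenter
```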

I warned you that I was thinking out loud.  It’s not all that structured, but this might help you ask some questions if you’re thinking about what to do for VDI hosts.