I run Windows Server 2008 Hyper-V managed by System Center Virtual Machine Manager (VMM) 2008. One of the perks of virtualisation is the ability to rapidly provision servers. We can use the traditional methods associated with physical deployments, or we can use templates stored in a library. With VMM this means storing sysprep’ed VHD’s (virtual hard disks) in the library. VMM makes this easy – you right-click on the template VM, choose to convert it, and VMM does the sysprep and moves the VM into the library. You can then use that stored VHD as a template for future VM deployments. The new VM boots up and goes through the mini setup wizard.
Here’s the problem. If you use fixed-sized VHD’s then a fixed-sized VHD is stored in the library. In the real world, storage is not cheap. We don’t use laptop or PC disks in the data centre; server/SAN storage is not €100/terabyte. A library of 40GB+ VHD’s to cover our varied builds is going to consume a lot of space, and someone has to pay for that. Here’s my situation: the cost has to be passed on to the customer, and we can’t be doing that.
Instead of using the power of VMM deployment, I build my template VM’s with dynamic VHD’s and store them in the library in their sysprep’ed form. I deploy VM’s without a disk and then use the Edit Disk feature in the Hyper-V console on the host’s parent partition to convert the desired template disk into a fixed-sized VHD stored in the VM’s folder. That’s a time-consuming process, but it’s worth it to save disk space. I wish VMM did that out of the box for library VHD operations, but it doesn’t.
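By the way, that dynamic-to-fixed conversion doesn’t have to be done by hand in the Edit Disk wizard; it can be scripted against the Hyper-V WMI provider on the parent partition. Here’s a minimal sketch, assuming the Msvm_ImageManagementService class in the root\virtualization namespace and its ConvertVirtualHardDisk method; the paths are made-up examples and the Type value should be checked against the Hyper-V WMI documentation before you rely on it:

```powershell
# Sketch: convert a dynamic library VHD into a fixed VHD for a new VM.
# Run on the Hyper-V parent partition. Paths are examples only.
$source      = "\\library\MSSCVMMLibrary\W2008-Web-Template.vhd"   # dynamic template VHD
$destination = "D:\VMs\NewWebServer\NewWebServer.vhd"              # fixed VHD for the new VM

# Hyper-V image management service (v1 WMI namespace on Windows Server 2008)
$ims = Get-WmiObject -Namespace "root\virtualization" -Class "Msvm_ImageManagementService"

# Type 2 is assumed here to mean a fixed VHD - verify the value for your Hyper-V version.
$result = $ims.ConvertVirtualHardDisk($source, $destination, 2)

# The call is asynchronous; a return value of 4096 means a conversion job was started.
$result.ReturnValue
```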
I’ve been working on deployment scenarios for Windows Server 2008 R2 and Windows 7 as part of a writing project, for the upcoming launch events, and as a member of Microsoft’s STEP program. I had a realisation a few days ago that I needed to consider an alternative way of deploying servers.
The free Microsoft Deployment Toolkit (MDT) 2010 allows you to capture images of PC’s and servers as WIM files. You can then deploy those images from USB media, a DVD, an ISO, or via a PXE boot (using Windows Deployment Services to serve a WIM boot image). What if I did this instead of using my above process for VMM?
- Create a file share with scripts to do things like install IIS roles, install SQL 2008, etc.
- Build my standard images for Web, Standard, Enterprise and Datacenter editions.
- Make all my customisations, patch them, etc.
- Use a capture task sequence to capture the builds (WIM’s) and store them on the MDT server.
- Build task sequences that deploy my captured WIM’s.
- Build alternative deploy task sequences, e.g. “Web Edition Web Server” will deploy the Web Edition WIM file and then run a script to configure IIS (a sketch of such a script follows this list), while “Enterprise SQL Server” will deploy the Enterprise edition WIM file and then run the script to install SQL.
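To give an idea of what those file share scripts might look like, here’s a rough sketch of the IIS one for a Windows Server 2008 build. It just wraps ServerManagerCmd.exe; the -allSubFeatures choice is my own assumption, so adjust it to suit your build:

```powershell
# Sketch: post-deployment script called by the "Web Edition Web Server" task sequence.
# Installs the IIS role on Windows Server 2008 using ServerManagerCmd.exe.
& "$env:windir\system32\ServerManagerCmd.exe" -install Web-Server -allSubFeatures -restart

# Non-zero exit codes can mean a failure (or a pending reboot) - log and pass it back
# to the task sequence so the deployment report shows what happened.
if ($LASTEXITCODE -ne 0) {
    Write-Host "IIS role installation returned exit code $LASTEXITCODE"
    exit $LASTEXITCODE
}
```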
To deploy a new VM I could do this:
- Create a hardware template that has no hard disk and boots from PXE by default. The network card will be configured to use the VLAN that I currently run WDS on and would run MDT on – call it my factory network.
- Deploy that VM to a host.
- Fire up the VM and hit <F12> to boot from the network.
- Log into MDT and deploy the required task sequence, e.g. “Web Edition Web Server”.
- Sit back and drink a nice beverage while a new and nearly completely configured web server is deployed.
- Eventually log in, make a few customisations, patch it, change whatever passwords need changing, and change the NIC VLAN binding (a scripted sketch of that VLAN change follows).
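For that last step, the VLAN change can be done in the VMM console, but it can also be scripted with the VMM 2008 PowerShell snap-in. A rough sketch, assuming the cmdlet and VLAN parameter names below (check them against your VMM version) and made-up server, VM and network names:

```powershell
# Sketch: move the new VM's NIC off the "factory" (WDS/MDT) VLAN onto the production VLAN.
# Assumes the VMM 2008 snap-in and the parameter names shown - verify before use.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmm01.demo.local" | Out-Null   # example VMM server

$vm  = Get-VM -Name "NewWebServer"                          # example VM name
$nic = Get-VirtualNetworkAdapter -VM $vm

Set-VirtualNetworkAdapter -VirtualNetworkAdapter $nic `
    -VirtualNetwork "Production" `
    -VLanEnabled $true `
    -VLanID 20
```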
This accomplishes a few things.
- Firstly, I only use a few GB’s of space for each edition of Windows. A WIM file is a compressed, file-based image format with single-instance storage, so I’m not storing 40GB VHD’s. Also, I don’t need to do my manual Edit Disk process to convert from the library’s dynamic VHD to a fixed-sized VHD for the VM.
- I’ve saved a LOT of time. With an MDT task sequence I can do some serious post-boot customisations, such as running SERVERMANAGERCMD.EXE with an answer file (Windows 2008) or PowerShell (Windows 2008 R2 – SERVERMANAGERCMD.EXE is deprecated there; it’s still present, but PowerShell is better) to add roles and features (a short example follows this list).
- I can have 4 WIM files, 1 for each Server edition, and deploy any number of custom images with little storage space being consumed.
- In theory you could use the same WIM files and deployment process for both physical and virtual servers. I’d want to look at a way to automate installing hardware-specific software, e.g. the HP ProLiant Support Pack (PSP).
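As an example of the Windows 2008 R2 flavour of that post-boot step, the role and feature installation can be a couple of lines of PowerShell against the built-in ServerManager module (the feature names beyond Web-Server are just examples):

```powershell
# Sketch: 2008 R2 equivalent of the ServerManagerCmd step - add roles and features
# from a task-sequence script using the ServerManager module.
Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-Asp-Net -Restart
```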
If you’re using Configuration Manager 2007 (SP2 for W2008 R2 support) then you’ll get the same functionality. I’ve seen Mark Gibson of Microsoft Ireland give a Camtasia demo of this. Odds are that if you’re using Hyper-V and VMM then you’ve got OpsMgr too, all licensed by System Center Enterprise/Datacenter CAL’s/SAL’s, which entitles you to a ConfigMgr CAL/SAL as well. However, MDT is lightweight and free – my lab MDT machine is running with 512MB of RAM and doesn’t require a SQL instance.
Anyway, that’s an alternative way to tackle VM deployment. It would also work in an ESX/vSphere architecture. I’m leaning strongly towards doing this. I already use WDS for deploying blade server operating systems, so moving to MDT seems like a logical choice to me now.
I’d love to get your feedback on this and hear what alternative ways you’re using to deploy VM’s.