How I’m Building Our Demo Lab Environment

I’ve talked about our lab in the past and I’ve recorded/shown a few demos from it.  It’s one thing to build a demo, but it’s a whole other thing to build a lab environment, where I need to be able to build lots of different demos for W2008 R2 (current support for System Center 2012), Windows Server 8 Hyper-V, OS deployment, and maybe even other things.  Not only do I want to do demos, but I also want to learn for myself, and be able to use it to teach techies from our customer accounts.  So that means I need something that I can wipe and quickly rebuild.

WDS, MDT, or ConfigMgr were one option, but this is a lab and I want as few dependencies as possible.  I also want to isolate the physical lab environment from the demo environment.  Here’s how I’m doing it:


I’ve installed Windows Server 2008 R2 SP1 Datacenter as the standard OS on the lab hardware.  Why not Windows Server 8 beta?  I want an RTM supported environment as the basis of everything for reliability.  This doesn’t prevent Windows Server 8 Beta from being deployed, as you’ll see soon enough.

Lab-DC1 is a physical machine – it’s actually an HP 8200 Elite Microtower PC with some extra drives.  It is the AD DC (forest called lab.internal) for the lab environment and provides DHCP for the network.  I happen to use a remote control product so I can get to it easily – the ADSL we have in the lab doesn’t allow inbound HTTPS for RDS Gateway.  This DC role is intended only for the lab environment.  For demos, I’ve enabled Hyper-V on this machine (not supported), and I’ll run a virtual DC for the demos that I build with a forest called demo.internal (nothing to do with lab.internal).

Lab-Storage1 is an HP DL370 G7 with 2 * 300 GB drives, 12 * 2 TB drives, and 16 GB RAM.  This box serves a few purposes:

  • It hosts the library share with all the ISOs, tools, scripts, and so forth.
  • Hyper-V is enabled, which allows me to run an HP P4000 virtual SAN appliance (VSA) for an iSCSI SAN that I can use for clustering and backup stuff.
  • I have additional capacity to create storage VMs for demos, e.g. a Scale-Out File Server for SMB Direct (SMB 2.2) demos.
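Connecting the hosts to the VSA’s iSCSI targets on W2008 R2 can be scripted with iscsicli (there are no in-box iSCSI cmdlets until Windows Server 8).  A rough sketch, run from an elevated prompt – the portal address and target IQN here are made-up placeholders:

```powershell
# Make sure the Microsoft iSCSI initiator service is running
Set-Service -Name msiscsi -StartupType Automatic
Start-Service -Name msiscsi

# Register the VSA's portal with the initiator (hypothetical IP)
iscsicli QAddTargetPortal 192.168.1.20

# List the targets the portal exposes, then log in to one (hypothetical IQN)
iscsicli ListTargets
iscsicli QLoginTarget iqn.2003-10.com.lefthandnetworks:lab:25:demo-lun1
```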

Then we get on to Lab-Host1 and Lab-Host2.  As the names suggest, these are intended to be Hyper-V hosts.  I’ve installed Windows Server 2008 R2 SP1 on these machines, but Hyper-V is not enabled.  It’s literally an OS with network access – enough for me to copy a VHD from the storage server.  Here’s what I’ve done:

  • There’s a folder called C:\VHD on Lab-Host1 and Lab-Host2.
  • I’m enabling boot-from-VHD for the two hosts from C:\VHD\boot.vhd – pay attention to the bcdedit commands in this post by Hans Vredevoort.
  • I’m using Wim2VHD to create VHD files from the Windows Server ISO files.
  • I can copy any VHD to the C:\VHD folder on the two hosts and rename it to boot.vhd.
  • I can then reboot the physical host to the OS in boot.vhd and configure it as required.  Maybe I create a template from it, generalize it, and store it back in the library.
  • The OS in boot.vhd can be configured as a Hyper-V host, clustered if required, and connected to the VSA iSCSI SAN.
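For reference, the boot-from-VHD entry boils down to a few bcdedit commands (see Hans’ post for the full detail).  A sketch, run from an elevated PowerShell window – the GUID placeholder must be substituted with the GUID that the /copy command prints:

```powershell
# Clone the current boot entry; bcdedit prints the new entry's GUID
bcdedit /copy '{current}' /d 'Boot from VHD'

# Point the new entry at the VHD (substitute the GUID printed above)
bcdedit /set '{your-guid-here}' device 'vhd=[C:]\VHD\boot.vhd'
bcdedit /set '{your-guid-here}' osdevice 'vhd=[C:]\VHD\boot.vhd'

# Let the OS inside the VHD detect the HAL on first boot
bcdedit /set '{your-guid-here}' detecthal on
```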

Building a new demo now is a matter of:

  • Replace virtual DC on Lab-DC1 and configure it as required.
  • Provision storage on the iSCSI SAN as required.
  • Deploy any virtual file servers if required, and configure them.
  • Replace the boot.vhd on the 2 hosts with one from the library.  Boot it up and configure as required.

Basically, I get whole new OSes just by copying VHD files around the network, with hosts and storage primarily connected by 10 GbE.

If I was working with just a single VHD all of the time, then I’d check out Mark Minasi’s Steadier State.

Windows Server 8 Hyper-V Failover Cluster Failover Startup Priority

There’s a blogger out there who used to claim that the only reason he wouldn’t consider Hyper-V as an enterprise virtualisation solution was because he couldn’t set the ordering of automatic VM startup during a failover scenario, e.g. start up the SQL server, then the middle tier server, then the web server. 

Windows Server 8 Hyper-V Failover Clustering has this feature, enabling you to place VMs into one of 4 buckets and thus order their startup when they fail over from one host to another:

  1. High: These VMs start up first
  2. Medium: The default, and they start up after the high priority ones
  3. Low: These VMs start up after the high and medium priority VMs
  4. No auto start: These VMs fail over but do not start up automatically
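The priority is also exposed as a property of the VM’s cluster group.  A sketch using the Failover Clustering PowerShell module, with made-up VM names – the numeric values map to the four buckets above:

```powershell
Import-Module FailoverClusters

# Priority values: 3000 = High, 2000 = Medium (default), 1000 = Low, 0 = No Auto Start
# Start the database first, then the middle tier, then the web front end
(Get-ClusterGroup -Name 'SQL01').Priority = 3000
(Get-ClusterGroup -Name 'App01').Priority = 2000
(Get-ClusterGroup -Name 'Web01').Priority = 1000

# This VM fails over with the others but must be started manually
(Get-ClusterGroup -Name 'Test01').Priority = 0
```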

How does it work?  Check it out for yourself:

 

See Windows Server 8 Hyper-V Simultaneous Live Migration & Cluster Host Drain In Action

Yesterday I showed you how my Windows Server 8 Hyper-V lab is currently built (I’m in the process of wiping it to build something more flexible).  Today, I’m going to show you two things:

  1. Not just Live Migration in action, but simultaneous Live Migration.  I’ll be moving all 66 VMs from Host1 to Host2, and they’ll move 20 at a time.  This is a huge improvement over the 1 at a time that we can do in W2008 R2, and way more than the maximum of 4 (on 1 GbE) or 8 (on 10 GbE) that vSphere 5.0 can handle.  BTW, I was moving all of them at once last night.
  2. I’m going to perform the move by draining Host1 using a new pause function.  This is used for host maintenance (similar to VMM maintenance mode) and will Live Migrate the VMs to the most suitable host (Failover Clustering measures memory, whereas VMM does Intelligent Placement).  This pause function is used by Windows Server 8 Cluster Aware Updating.
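Both of those can be driven from PowerShell too.  A sketch, assuming hosts named Host1 and Host2:

```powershell
# Raise the cap on simultaneous Live Migrations on both hosts
Set-VMHost -ComputerName Host1, Host2 -MaximumVirtualMachineMigrations 20

# Pause Host1 and drain its clustered roles; the VMs live migrate
# to the most suitable remaining node
Suspend-ClusterNode -Name Host1 -Drain

# After maintenance, bring the node back and fail the roles back
Resume-ClusterNode -Name Host1 -Failback Immediate
```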

In the demo, you’ll see my 20 GbE NIC team that is used for Live Migration and the 1 GbE file server where the VMs are located:

 

 

Deploy Pre-Configured Windows Server 8 Hyper-V VMs From A Template VHD

Windows Server 8 Hyper-V is giving me so many more cool options.  I wanted to deploy a new VM that would be a DC, DNS, and DHCP server.  I copied my template VHD, and created a VM.  Before powering up, I fired up Server Manager in Windows Server 8 and decided to add roles/features.  I added the DC, DNS, and DHCP roles to the VHD.  Then I powered up the VM.  The roles were pre-installed, and all I had to do was DCPROMO (now done in Server Manager) and configure the DHCP service.  Nice!

I haven’t checked but I guess I can automate this using Server Manager PowerShell cmdlets.  Yet more options!  Loving it!
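It looks like the offline servicing piece can be scripted with the ServerManager module, which accepts a VHD path.  A sketch, with a hypothetical path to the copied template:

```powershell
Import-Module ServerManager

# Inject the roles into the offline VHD before the VM's first boot
Install-WindowsFeature -Name AD-Domain-Services, DNS, DHCP `
    -Vhd 'D:\VMs\NewDC\NewDC.vhd' -IncludeManagementTools
```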

See Windows Server 8 Hyper-V Shared Nothing Live Migration In Action

A month ago I went into some detail on Windows Server 8 Hyper-V Live Migration.  In short, Live Migration no longer requires Failover Clustering and is possible between clustered and non-clustered hosts.  The VMs can be on a SAN or on an SMB 2.2 file share (or Scale-Out File Server/SOFS).  The VMs can also reside on internal disk/DAS on the host.  This can take advantage of Live Storage Migration to move the VM (process, memory, CPU) and its storage (VHD or VHDX), if you want, to the new host.  There’s nothing like a demo to illustrate this:

In this demo, I move a running VM from Host1 to Host2, both running Windows Server 8 Beta.  The only networking is a single 1 GbE NIC.  I ping the VM, move it, and double-check that (a) there is no network loss (normally you expect 1 ping to be lost in the ICMP-based ping, but TCP would tolerate this) and (b) the VM is still running.
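The same move can be sketched with the Hyper-V PowerShell module.  Assumptions: non-clustered hosts called Host1 and Host2, Kerberos authentication, and a made-up destination path:

```powershell
# On both hosts: allow live migrations and pick the authentication protocol
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Shared nothing: move the running VM *and* its VHD/VHDX files
# from Host1's local disk to Host2's local disk
Move-VM -Name 'VM01' -DestinationHost 'Host2' `
    -IncludeStorage -DestinationStoragePath 'D:\VMs\VM01'
```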

Hyper-V Replica (Demo Video) Proving To Be The Killer Feature I Expected

This week I clocked up a lot of miles doing another 4 corners tour of Ireland with the MSFT partner team, speaking to MSFT partners in Belfast, Galway and Cork.  The tour covered a number of topics with different speakers – cloud, Windows 8, Windows Server 8 – and I spoke for around an hour on System Center 2012 and Windows Server 8 Hyper-V.  The audience was mostly a manager/sales audience so we kept more to the business side of things, but some tech just proves the argument, and I had a feeling that nothing would do that better than Hyper-V Replica.

If you’re presenting to this kind of audience, it’s one thing to show them a new product they can sell, and that will get some interest/traction.  But if you can show them a whole new service that they can develop and use to deliver yet another service, and be able to sell this to the breadth audience that hears way too much about Fortune 1000 tech, then you really have a winner.  And that’s Hyper-V Replica:

  • A DR replication solution built into Hyper-V, at no extra cost, designed for small/medium businesses with commercial broadband
  • Replicate from host-host, host-cluster, cluster-host, or cluster-cluster.
  • Replicate office to office, data centre to data centre, branch office to HQ, or customer to hosting provider (which could be a managed IT services company with some colo hosted rack space) … and maybe use that as an entry point into a cloud/IaaS solution for SMEs.

And that’s the hook there.  Most MSFT partners have experience with software-based replication in the past.  It’s troublesome, and often assumes lots of low latency bandwidth and a third witness site.  Not so with Hyper-V Replica, as I demonstrated in this video:

Of all the stuff I’ve presented in the last 2 weeks, Hyper-V Replica was the one that caused the most buzz, and rightfully so in my opinion.  It’s an elegant design; the genius is the “simplicity” of it.  It should prove to be reliable, and perfect for the audience it’s being aimed at.
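For the curious, the whole thing can be wired up with a handful of cmdlets.  A sketch, with made-up host names, storage path, and Kerberos over HTTP (port 80) authentication:

```powershell
# On the replica (DR) server: enable it to receive replication traffic
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\ReplicaVMs'

# On the primary host: enable replication for a VM and seed the replica
Enable-VMReplication -VMName 'VM01' `
    -ReplicaServerName 'dr-host.lab.internal' `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'VM01'
```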

Hyper-V Replica Test Failover Is Like Jean-Claude Van Damme in Time Cop

That got your attention.  In the movie Time Cop, the catch with time travel was that a person who went back in time could not be in the same place as their past self or the universe would implode or something.

Note: Movie nerds and Dr. Sheldon Cooper wannabes can save their efforts and keep the correction comments to themselves.

The same is true with a server or application.  It really can’t exist twice on the same network or your career might implode or something.  Think about it: you enable DR replication of virtual machines from one place to another.  You want to test your DR, so you bring the replica VMs online … on the same network.  Good things will happen, right?  Two machines with identical names, identical application interactions on the network, identical IP addresses, both active on the same network at the same time during the work day … nope; nothing good can come of that.

Hyper-V Replica has you covered.  You just need to remember to configure it after you enable VM replication, if testing failover is even a slight possibility (I’m sure you could automate this with POSH but I’m too lazy to look – it is after 9pm on a Sunday night as I write this post).

You’ll automatically be asked after you enable Replica if you want to configure network settings.  If you do (you can revisit this later by editing the settings of the VM and expanding Network Adapter), then you’ll see this:


In Network Adapter – Test Failover you’ll have the option to set a Virtual Switch.  See how it is not configured to connect to a network by default?  Phew!  When you do a test failover of a Replica VM, then the VM will power up on this virtual switch.  Obviously this should be an isolated virtual switch (e.g. Internal or Private), and it should exist on all possible replica hosts (if the DR site is clustered), to avoid the Time Cop rule.
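The same setting can be scripted, and a test failover kicked off, with the Hyper-V module.  A sketch, with hypothetical VM and switch names:

```powershell
# Create an isolated switch on the replica host for test failovers
New-VMSwitch -Name 'TestFailover' -SwitchType Private

# Tell the replica VM to use that switch only during a test failover
Set-VMNetworkAdapter -VMName 'VM01' -TestReplicaSwitchName 'TestFailover'

# Start the test; a temporary copy of the VM boots on the isolated switch
Start-VMFailover -VMName 'VM01' -AsTest
```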

The New Hyper-V Gotcha – No Permission to Remotely Manage VMs on SMB Shared Folder

Windows Server 8 allows us to store virtual machines on file shares.  As Taylor Brown explains, when you are managing VMs from RSAT on your desktop, and those VMs are running on a host and stored on a file server, then your authentication is between you and the host.  The file server doesn’t know who you are and rejects your efforts.

Up to now, un-merged snapshots were the big gotcha in Windows Server 2008/R2 Hyper-V.  I suspect this Kerberos “issue” will be the new one, especially because SMB for storing VMs will probably be widely adopted in the breadth market.

The solution is constrained delegation, which is something you’ve been doing if you’ve been sharing ISO files so that VMs can mount them across the network.  Taylor Brown goes into some detail on a best practice method for enabling constrained delegation for correctly managing VMs that are stored on an SMB file share.
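As a rough sketch of what the delegation amounts to (see Taylor’s post for the recommended method), each host gets a cifs delegation entry for the file server.  Using the Active Directory module against the lab machine names from earlier, and assuming Kerberos-only constrained delegation:

```powershell
Import-Module ActiveDirectory

# Allow each Hyper-V host to delegate to the file server's cifs service
foreach ($hyperVHost in 'Lab-Host1', 'Lab-Host2') {
    Get-ADComputer -Identity $hyperVHost | Set-ADObject -Add @{
        'msDS-AllowedToDelegateTo' = @(
            'cifs/Lab-Storage1',
            'cifs/Lab-Storage1.lab.internal'
        )
    }
}
```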

Windows 8 GA in October 2012

Bloomberg is reporting that Windows 8 will be generally available in October of this year.  That’s not so different to the Windows 7 schedule:

Windows 7 was released to manufacturing on July 22, 2009, and reached general retail availability worldwide on October 22, 2009.

Therefore I won’t be surprised to see Windows 8 (client and server) RTM in July or August.


Change Windows Server 8 Hyper-V VM Virtual Switch Connection Using PowerShell

I’m building a demo lab on my “beast” laptop and want to make it as mobile as possible, independent of IP addresses, while retaining Internet access.  I do that by placing the VMs on an internal virtual switch and running a proxy on the parent partition or in a VM (dual homed on an external virtual switch).  I accidentally built my VMs on an external virtual switch and wanted to switch them to an internal virtual switch called Internal1.  I could spend a couple of minutes going through every VM and making the change.  Or I could just run this in an elevated PowerShell window, as I just did on my Windows 8 (client OS) machine:

Connect-VMNetworkAdapter -VMName * -SwitchName Internal1

Every VM on my PC was connected to the Internal1 virtual switch.
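If you only want to reconnect some of the VMs, the adapters can be filtered first.  A sketch, assuming the old switch was called External1:

```powershell
# Reconnect only the adapters still on the external switch,
# leaving anything already on Internal1 untouched
Get-VM | Get-VMNetworkAdapter |
    Where-Object { $_.SwitchName -eq 'External1' } |
    Connect-VMNetworkAdapter -SwitchName 'Internal1'
```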