Event – The Path To Windows 8

I will be speaking at an event in London UK called The Path To Windows 8.

The Path to Windows 8 event brings together the brightest IT professionals from around the world to talk about Windows 8 and how Microsoft can help you migrate your current desktop infrastructure to Windows 8.

In addition, the team will demonstrate all aspects of a Windows migration and, with extended support for Windows XP ending in April 2014, it is time to learn what needs to be done and how to do it.

  • Date: 5 July 2013
  • Location: Microsoft UK Cardinal Place, 80-100 Victoria St, London SW1E 5JL
  • Capacity: 104 people
  • Click here to register

Agenda:

  • 8:30 – 9:30    Registration
  • 9:30 – 10:00   Keynote, Edward Cook – Microsoft Partner Technology Advisor
  • 10:00 – 11:15  Path to Windows 8, David Nudelman – Microsoft MVP
  • 11:15 – 12:30  Successful migrations to Windows 8 with Configuration Manager, Raphael Perez – RFL Systems – Microsoft MVP
  • 12:30 – 13:15  Lunch Break and Networking
  • 13:15 – 14:30  Creating your Configuration Manager Infrastructure with Hyper-V, Aidan Finn – Technical Sales Lead at MicroWarehouse – Microsoft MVP
  • 14:30 – 15:45  The Future of Desktop, Simon May – Microsoft Evangelist
  • 15:45 – 16:00  Break
  • 16:00 – 16:45  Ask the experts and Prize draw

KB2769588 – An Update That Adds 4-Node Failover Cluster Deployment Support In OEM WS2012 OOBE

I can’t say I know anything beyond the text in this new KB article, so don’t bother asking.  See the Edit1 note below.  This one is just for Storage Server 2012.

This update introduces support in the OEM Appliance Out-Of-Box Experience (OOBE) feature for four-node failover cluster deployment in Windows Server 2012. This update also adds a wizard that creates a domain controller on a Hyper-V virtual machine for use in a first-server environment. The wizard is added to support the Cluster-in-a-Box design.

Note Currently, the OEM Appliance OOBE feature supports only two-node failover cluster deployment.

It reads like it’s aimed at manufacturers for a set-up wizard.

A supported hotfix is available from Microsoft.

EDIT1:

I got a clarification from the Failover Clustering group.  This update is intended for Storage Server appliances.  Everyone else: please ignore!

Maintenance Windows For Patching A WS2012 Hyper-V Cluster Make No Sense To Me

Here I am, working on a Sunday (when I wrote this post).  It’s not so bad: it’s raining outside, so that rules out going for a walk or doing some photography.  I jumped onto Twitter and saw someone moaning that they had to work on a Sunday to patch their Hyper-V cluster.  To me that’s a WTF! moment.
Windows Server 2012 Failover Clustering gives us Cluster Aware Updating (CAU).  Using this you can patch a Hyper-V cluster without getting manually involved in “maintenance modes” and Live Migration.  The process will:
  1. Download updates from Microsoft, WSUS, etc, or a file share, to the hosts (and this is expandable to 3rd party updates such as OEMs).
  2. Put host 1 into maintenance mode – that drains it of virtual machines using Live Migration and … Quick Migration (for VMs marked as LOW priority, by default, which I DO NOT agree with).  You can make it 100% Live Migration so no services suffer an outage during the moves.  The more bandwidth your Live Migration network has, the faster this will be – using 1 Gbps networking for 512 GB RAM hosts is stupid!
  3. Patch and reboot host 1
  4. Wait for host 1 to come back online
  5. Bring host 1 out of maintenance mode
  6. Repeat steps 2-5 for each host
CAU orchestrates the entire run.  All you’ve got to do is make it happen:
  • You can manually invoke CAU from a Failover Cluster Manager console not running on a cluster member
  • You can set up a special CAU role on the cluster with a patching schedule – it’s a clustered role so it will move just like the VMs
And the process is customizable, e.g. don’t continue if more than Y hosts are offline.
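As a rough sketch, both approaches can be driven with the Cluster-Aware Updating cmdlets.  The cluster name and thresholds below are illustrative assumptions, not prescriptions:

```powershell
# One-off run: invoke CAU from a machine that is NOT a cluster member.
# Requires the Failover Clustering tools (ClusterAwareUpdating module).
Invoke-CauRun -ClusterName "HVC1" `
    -CauPluginName "Microsoft.WindowsUpdatePlugin" `
    -MaxFailedNodes 1 `
    -MaxRetriesPerNode 3 `
    -RequireAllNodesOnline `
    -EnableFirewallRules `
    -Force

# Self-updating: add the clustered CAU role with a schedule so the cluster
# patches itself, e.g. mid-week during office hours.  The role is clustered,
# so it moves between hosts just like the VMs.
Add-CauClusterRole -ClusterName "HVC1" `
    -DaysOfWeek Wednesday `
    -StartDate "01/07/2013" `
    -EnableFirewallRules `
    -Force
```

The thresholds (MaxFailedNodes, MaxRetriesPerNode) are the knobs behind the “don’t continue if hosts are offline” customization mentioned above; tune them to the size of your cluster.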
So … let me ask you a question.  If your VMs are moving around using Live Migration, and their services never go offline … why do you need a maintenance window?  Why exactly do you want to be a sad bastard like me and work on a Sunday?
Me, I think I’d do my host patching on a Wednesday morning, at around 11am, in a typical business.  Why?  A few reasons:
  1. Live Migration keeps services online so the business should not notice.
  2. I’m “in” the office already.  If something does go wrong, I am not getting a call at 3am or at the weekend.  I’m sober, awake (as much as I will be, anyway), and able to respond immediately.
  3. Any support services will have their primary staff available.  If I do need to call someone for hardware or software support, they are online, and I’m not dealing with the red-eye team at 3am on a Sunday morning.
  4. I can monitor for exceptions quite happily.
  5. The business doesn’t need to pay me overtime or give me time-in-lieu.
  6. Peak business in IT is at either end of the week (“password reset Monday” and “I didn’t want to bother you” Friday afternoons) so Wednesday seems like a nice balance.
So yeah, I do think that CAU should kill the Hyper-V cluster patching window.
Edit 1:
The same person was on Twitter many hours later, complaining that patching Hyper-V took them “11 hours”.  Really!?!?! Hmm, I think if that was me I’d be asking what I was doing wrong.  Just sayin’  is all …
You can learn more about Windows Server 2012 Hyper-V from the book, Windows Server 2012 Hyper-V Installation And Configuration Guide.

Another WS2012 Hyper-V Converged Fabric Design With Host & Guest iSCSI Networks

Back in January I posted a possible design for implementing iSCSI connectivity for a host and virtual machines using converged networks. 

In that design, a pair of virtual NICs would be used for iSCSI, either in the VM or the management OS of the host.  MPIO would “team” the NICs.  I was talking with fellow Hyper-V MVP Hans Vredevoort (@hvredevoort) about this scenario last week and he brought up something that I should have considered.

Look at iSCSI1 and iSCSI2 in the Management OS.  Both are virtual NICs, connecting to ports in the iSCSI virtual switch, just like any virtual NIC in a VM would.  They pass into the virtual switch, then into the NIC team.  As you should know by now, we’re going to be using a Hyper-V Port mode NIC team.  That means all traffic from each virtual NIC passes in and out through a single team member (physical NIC in the team).

Here’s the problem: The allocation of virtual NIC to physical NIC for traffic flow is done by round robin.  There is no way to say “Assign the virtual NIC iSCSI1 to physical NIC X”.  That means that iSCSI1 and iSCSI2 could end up being on the same physical NIC in the team.  That’s not a problem for network path failover, but it does not make the best use of available bandwidth.

Wouldn’t it be nice to guarantee that iSCSI NIC1 and iSCSI NIC2, both at host and VM layers, were communicating on different physical NICs?  Yes it would, and here’s how I would do it:

[Image: revised converged fabric design – each iSCSI NIC, at host and VM layers, bound to its own physical NIC with no NIC team]

The benefits of this design over the previous one are:

  • You have total control over vNIC bindings.
  • You can make much better use of available bandwidth (QoS is still used)
  • You can (if required by the SAN vendor) guarantee that iSCSI1 and iSCSI2 are connecting to different physical switches

Don’t worry about the lack of a NIC team for failover of the iSCSI NICs at the physical layer.  We don’t need it; we’re implementing MPIO in the guest OS of the virtual machines and in the management OS of the host.
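Here is a minimal sketch of how the host side of this design might be built.  The physical NIC names (“pNIC3”, “pNIC4”) and switch names are hypothetical; one external virtual switch per physical NIC is the point, because that pins each vNIC to a known physical NIC and physical switch:

```powershell
# One external virtual switch per iSCSI physical NIC - no NIC team, so
# vNIC-to-pNIC placement is deterministic instead of round robin.
New-VMSwitch -Name "iSCSI1" -NetAdapterName "pNIC3" -AllowManagementOS $false
New-VMSwitch -Name "iSCSI2" -NetAdapterName "pNIC4" -AllowManagementOS $false

# Management OS vNICs for the host's own iSCSI connections.
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI1" -SwitchName "iSCSI1"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI2" -SwitchName "iSCSI2"

# MPIO, not NIC teaming, provides path failover for iSCSI.
Add-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
```

VM iSCSI vNICs connect to the same two virtual switches, and MPIO inside the guest OS then “teams” the two paths, just as it does in the management OS.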

Confused?  Got questions?  You can learn about all this stuff by reading the networking chapter in Windows Server 2012 Hyper-V Installation And Configuration Guide.
