Network Requirements for Live Migration

As more and more people deploy Windows Server 2008 R2 Hyper-V, the most common question will be: “how many NICs or network cards do I need to implement Live Migration?”.  Here’s the answer for you.

Your minimum optimal configuration is:

  • NIC #1: Parent partition (normal network)
  • NIC #2: Cluster heartbeat (private network)
  • NIC #3: Live Migration (private network)
  • NIC #4: Virtual Switch (normal/trunked network)

You’ll need to add more NICs if you want NIC teaming or need to dedicate NICs to virtual switches or VMs.  This does not account for iSCSI NICs, which should obviously be dedicated to their role.

How does Windows know which NIC to use for Live Migration?  Failover Clustering picks a private network for the job.  You can see the results by launching the Failover Clustering MMC, opening up the properties of a VM, and going to the last tab.  Here you’ll see which network was chosen.  You can specify an alternative if you wish.
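If you prefer the command line, you can inspect the same cluster network configuration with the built-in cluster.exe tool on a cluster node.  This is just an inspection sketch; the network name below is a placeholder for whatever your private network is actually called:

```shell
# List the cluster networks and their roles.  The Role property tells you
# how Failover Clustering may use each network:
#   0 = not used by the cluster, 1 = cluster/internal traffic only,
#   3 = cluster and client traffic.
cluster.exe network

# Show the properties of a single network ("Private LM" is a
# placeholder name, not one Windows creates for you):
cluster.exe network "Private LM" /prop
```

A network with Role 1 is the kind of private network the cluster will favour for heartbeat and Live Migration traffic.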

I’ve gone with a different layout.  We’re using HP blade servers with Virtual Connect modules.  Adding NICs is an expensive operation because it means buying more of those modules.  I also need fault tolerance for the virtual machines, so a balance had to be found.  Here’s the layout we have:

  • NIC #1: Parent partition (normal network)
  • NIC #2: Cluster heartbeat / Live Migration (private network)
  • NIC #3: Virtual Switch (trunked network)
  • NIC #4: Virtual Switch (trunked network)

I’ve tested this quite a bit and pairing Live Migration with the cluster heartbeat has had no ill effects.  But what happens if I need to live migrate all the VMs on a host?  Won’t that flood the heartbeat network and cause failovers all over the place?

No.  Live Migration is serial.  That means only one VM is transferred at once.  It’s designed not to flood a network.  Say you initiate maintenance mode in VMM on a cluster node.  Each VM is moved one at a time across the Live Migration network.
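The serial behaviour described above can be sketched in a few lines.  This is a toy model, not Hyper-V code: it just shows that when a host is drained one VM at a time, the number of concurrent transfers on the migration network never exceeds one, which is why bursts of migrations can’t flood a shared link.

```python
# Toy sketch of serial Live Migration draining a host.
# Not real Hyper-V code: the VM names and the drain_host helper are
# illustrative only.
from collections import deque

def drain_host(vms):
    """Migrate every VM off a host one at a time.

    Returns the order the VMs were moved in and the peak number of
    transfers that were ever in flight at once.
    """
    queue = deque(vms)
    moved = []
    peak_in_flight = 0
    while queue:
        vm = queue.popleft()          # only one transfer is active at a time
        peak_in_flight = max(peak_in_flight, 1)
        moved.append(vm)              # this transfer completes before the next starts
    return moved, peak_in_flight

order, peak = drain_host(["vm1", "vm2", "vm3", "vm4"])
print(order)  # VMs leave in queue order
print(peak)   # never more than 1 concurrent transfer
```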

You can also see I’ve trunked the virtual switch NICs.  That allows us to place VMs onto different VLANs or subnets, each firewalled from the others.  This barrier is controlled entirely by the firewalls.  I’ll blog about this later because it deserves some time and concentration.  It has totally wrecked the minds of very senior Cisco admins I’ve worked with in the past when doing Hyper-V and VMware deployments – eventually I just told them to treat virtualisation as a black box and to trust me 🙂

I just thought of another question.  “What if I had a configuration that was OK for Windows Server 2008 Hyper-V Quick Migration?”.  That’s exactly what I had, and it’s why I chose the last configuration.  Really, you could do that with 3 NICs instead of 4 (drop the last one and forgo virtual switch fault tolerance).

