With a team formed, you have the failover part of LBFO figured out. But what about the load balancing piece of LBFO? That’s what this post is going to discuss.
First, think about some concepts:
- A packet sent into the NIC team should not be fragmented and sent across multiple NICs. We like BIG packets because they fill bandwidth and reduce the time to get data from A to B.
- Sometimes we need to make the path of traffic predictable … very predictable. And sometimes we don’t … but there still needs to be some organisation.
There are 2 traffic or load distribution algorithms in WS2012 NIC teaming (actually there are more if you dig into it). The one you choose when creating/configuring a team depends on the traffic and the purpose of the team.
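As a quick sketch of where this choice is made (the team and adapter names below are examples, not requirements), the algorithm is set when you create the team with PowerShell:

```powershell
# Create a switch-independent team from two physical NICs.
# -LoadBalancingAlgorithm accepts TransportPorts, IPAddresses,
# MacAddresses, HyperVPort (and Dynamic on WS2012 R2).
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Check the team and its current load balancing mode.
Get-NetLbfoTeam -Name "Team1"
```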
Hyper-V Switch Port
Generally speaking, this is the load distribution that you should use when creating a NIC team that will be used to connect a Hyper-V external virtual switch to the LAN, as below.
You do not have to choose this type of load distribution for this architecture, but it is my rule of thumb. Let’s get into specifics.
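Here is a sketch of that rule of thumb, reusing the example names from the previous snippet: set the team to Hyper-V Switch Port distribution and bind the external virtual switch to the team’s NIC.

```powershell
# Switch the team to Hyper-V Switch Port distribution.
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort

# Create an external virtual switch bound to the team's NIC
# (the team interface takes the team's name by default).
# -AllowManagementOS $false: management OS vNICs can be added
# separately in a converged design.
New-VMSwitch -Name "External1" -NetAdapterName "Team1" -AllowManagementOS $false
```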
Hyper-V Switch Port will route traffic from a virtual NIC (either in a VM or in the management OS) to a single physical NIC in the team (a team member). Let’s illustrate that. In the below diagram, the NIC team is associating the traffic from the vNIC in VM01 with the team member called pNIC1. The traffic from the vNIC in VM02 is being sent to pNIC2. Two things to note:
- The traffic path is predictable (unless a team member fails). Incoming traffic to the virtual NICs is also going to flow through their associated physical NICs.
- This is not a per-VM association. It is a per-virtual NIC association. If we add a second virtual NIC to VM01 then the traffic for that virtual NIC could be associated with any team member by the NIC team.
This is one of the things that can confuse people. They see a team of 2 NICs, maybe giving them a “2 Gbps” or “20 Gbps” pipe. True, there is a total aggregation of bandwidth, but access to that bandwidth is given on a per-team-member basis. That means the virtual NIC in VM02 cannot exceed 1 Gbps or 10 Gbps, depending on the speeds of the team members (physical NICs in the team).
Hyper-V Switch Port is appropriate if the team is being used for an external virtual switch (like the above examples) and:
- You have more virtual NICs than you have physical NICs. Maybe you have 2 physical NICs and 20 virtual machines. Maybe you have 2 physical NICs and you are creating a converged fabric design with 4 virtual NICs in the management OS and several virtual machines.
- You plan on using the Dynamic Virtual Machine Queue (DVMQ) hardware offload. DVMQ uses an RSS queue in a team member to accelerate inbound traffic to a virtual NIC. The RSS queue must be associated with the virtual NIC, and that means the path of inbound traffic must come through the same team member every time … and Hyper-V Switch Port happens to do this via the association process (see the snippet after this list).
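Here is a sketch of both of those scenarios, reusing the example names from the earlier snippets: adding management OS virtual NICs for a converged fabric design, and confirming that VMQ is available on the team members for DVMQ to use.

```powershell
# Converged fabric: add virtual NICs in the management OS to the
# virtual switch that sits on the team.
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "External1"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "External1"

# DVMQ: confirm VMQ is enabled on the physical team members.
Get-NetAdapterVmq -Name "NIC1", "NIC2"
```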
As I said, there are times when you might not use Hyper-V Switch Port. Maybe you have some massive host, and you’re going to run just 2 massive VMs on it. You could use one of the alternative load distribution algorithms then. But that’s a very rare scenario. I like to keep it simple for people: use Hyper-V Switch Port if you are creating the NIC team for a Hyper-V external virtual switch … unless you understand what’s going on under the hood and have one of those rare situations that justifies varying.
Address Hashing
This method of traffic distribution does not associate virtual NICs with team members. Instead, each packet that is sent down to the NIC team by the host/server is inspected. The destination details of the packet (which can include MAC address, IP address, and port numbers) are inspected by the team to determine which team member to send the packet to.
You can see an example of this in the below diagram. VM01 is sending 2 packets, one to address A and the other to address B. The NIC team receives the packets, performs a hashing algorithm (hence the name Address Hashing) on the destination details, and uses the results to determine the team member (physical NIC) that will send each packet. In this case, the packet being sent to A goes via pNIC1 and the packet being sent to B goes via pNIC2.
In theory, this means that a virtual NIC can take advantage of all the available bandwidth in the NIC team, e.g. the full 2 Gbps or 20 Gbps. But this is completely dependent on the results of the hashing algorithm. Using the above example, if all data is going to address A, then all packets will travel through pNIC1.
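To make that concrete, here’s a toy PowerShell illustration; this is not the real teaming hash, just the principle that hashing a destination always yields the same team member, so a single destination means a single NIC.

```powershell
# Toy example only: hash the destination details and map the
# result onto the list of team members.
$members = "pNIC1", "pNIC2"
$destination = "AddressA:TCP6600"   # made-up destination details
$index = [Math]::Abs($destination.GetHashCode()) % $members.Count
$members[$index]   # the same destination always lands on the same pNIC
```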
And that brings us to one of the most common questions about NIC teams and bandwidth. Say I have a host (or any server) that uses a nice big fat 20 GbE NIC team for Live Migration (or any traffic of a specific protocol). I want to test Live Migration and the NIC team. I pause the host, open up PerfMon, and expect to see Live Migration using up all 20 Gbps of my NIC team. Instead, I see it using the bandwidth of just one team member. What is going on here, under the hood?
- Host1 is sending data to the single IP address of Host2 on the Live Migration network.
- Live Migration is sending packets down to the NIC team. The NIC team inspects each packet, and every one of them has the same destination details: the same MAC address, the same IP address, and the same TCP port on Host2.
- The destination details are hashed and result in all of the packets being sent via a single team member, pNIC1 in this case (see the below figure).
- This limits Live Migration to the bandwidth of a single team member in the team.
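You can watch this happen during a test Live Migration. One way (counter instance names vary by NIC model) is to sample the per-adapter send counters; only one team member will be carrying the traffic for the single stream:

```powershell
# Sample per-adapter send throughput once a second for 30 seconds.
# Each physical team member shows up as its own instance.
Get-Counter -Counter "\Network Interface(*)\Bytes Sent/sec" `
    -SampleInterval 1 -MaxSamples 30
```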
That doesn’t mean Live Migration (or any other protocol – I just picked Live Migration because that’s the one Hyper-V engineers are likely to test with first) is limited to just a single team member. Maybe I have a 3rd host, Host3, and pausing Host1 will cause VMs to Live Migrate to both Host2 and Host3. The resulting hashing of destination addresses might cause the NIC team to use both team members in Host1 and give me a much better chance at fully using my lovely 20 GbE NIC team (other factors impact bandwidth utilization by Live Migration).
A misconception of Address Hashing is that packet1 to addressA will go via teamMember1, and packet2 to the same address (addressA) will go via teamMember2. I have shown that this is not the case. However, in most situations, traffic is going to all sorts of addresses and ports, and over the long term you should see different streams of traffic balancing across all of the team members in the NIC team … unless you have a 2-node Hyper-V cluster and are focusing on comms between the two hosts. In that case, you’ll see 50% utilization of a 2-team-member NIC team, and you’ll only be getting the FO part of LBFO until you add a third host.
If you configure a team in the GUI, you are only going to see Hyper-V Switch Port or Address Hash as the Load Balancing Mode options. Using PowerShell, however, you can be very precise about the type of Address Hashing that you want to do (see the example after this list). Note that the GUI “Address Hashing” option will use these in order of preference, depending on the packets:
- TransportPorts (4-Tuple Hash): Uses the source and destination UDP/TCP ports and the IP addresses to create a hash and then assigns the packets that have the matching hash value to one of the available interfaces.
- IPAddresses (2-Tuple Hash): Uses the source and destination IP addresses to create a hash and then assigns the packets that have the matching hash value to one of the available interfaces. Used when the traffic is not UDP- or TCP-based, or when that detail is hidden (such as with IPsec).
- MacAddresses: Uses the source and destination MAC addresses to create a hash and then assigns the packets that have the matching hash value to one of the available interfaces. Used when the traffic is not IP-based.
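As mentioned above, here is how you would pin a team to one specific hash with PowerShell (the team name is the example used earlier):

```powershell
# Force 4-tuple hashing rather than the GUI's auto-selecting
# "Address Hash" behaviour; swap in IPAddresses or MacAddresses
# if that is what your traffic needs.
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm TransportPorts
```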
My rule of thumb for Address Hashing is that I’ll use it for NIC teams that have nothing to do with a Hyper-V virtual switch, such as a NIC team in a non-host server, or a NIC team in a host that is not bound to a virtual switch. However, if I am using the NIC team for an external virtual switch, and I have fewer virtual NICs connecting to the virtual switch than I have team members, then I might use Address Hashing instead of Hyper-V Switch Port.
Dynamic
WS2012 R2 added a new load distribution mode called Dynamic. It is a blend of Address Hashing for outbound traffic and Hyper-V Port for inbound traffic. It is enabled by default on WS2012 R2, and Microsoft urges you to use this default load balancing method.
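On WS2012 R2 new teams get Dynamic by default, and an existing team can be moved over to it (again, the team name is an example):

```powershell
# Move an existing team to the WS2012 R2 Dynamic distribution mode.
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic
```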
This information has been brought to you by Windows Server 2012 Hyper-V Installation and Configuration Guide (available on pre-order on Amazon), where you’ll find lots more PowerShell like the snippets in this post.