Windows Server 2012 NIC Teaming Part 7 – Support Policies

Windows Server 2012 NIC Teaming Part 1 – Back To Basics

Windows Server 2012 NIC Teaming Part 2 – What’s What?

Windows Server 2012 NIC Teaming Part 3 – Switch Connection Modes

Windows Server 2012 NIC Teaming Part 4 – Load Distribution

Windows Server 2012 NIC Teaming Part 5 – Configuration Matrix

Windows Server 2012 NIC Teaming Part 6 – NIC Teaming In The Virtual Machine

This post focuses on support policies.  I expect most NIC teaming mistakes will happen where people try to do things that are not supported, so I want to summarise the support policies here.  This content, once again, is taken from Microsoft’s document on NIC teaming.

Feature Support

There are lots of networking features in Windows Server 2012.  Some support NIC teaming, some work great with it, some (like SR-IOV) bypass NIC teaming in the host but are fine with NIC teaming in the guest OS, and some flat out do not support NIC teaming.

  • Data Center Bridging (DCB) – Works in the NIC, below NIC teaming, so it is supported if the team members support it.
  • IPsec Task Offload (IPsecTO) – Supported if all team members support it.
  • Large Send Offload (LSO) – Supported if all team members support it.
  • Receive Side Coalescing (RSC) – Supported in hosts if any of the team members support it. Not supported through Hyper-V switches.
  • Receive Side Scaling (RSS) – NIC teaming supports RSS in the host. The Windows Server 2012 TCP/IP stack programs the RSS information directly to the team members.
  • Receive-side checksum offloads (IPv4, IPv6, TCP) – Supported if any of the team members support it.
  • Remote Direct Memory Access (RDMA) – Not supported; NIC team members cannot use RDMA because RDMA traffic bypasses the networking stack.
  • Single Root I/O Virtualization (SR-IOV) – You cannot do NIC teaming with SR-IOV-enabled NICs in the host. Do the teaming in the guest OS instead, with two Hyper-V external switches.
  • TCP Chimney Offload – Not supported through a Windows Server 2012 team.
  • Transmit-side checksum offloads (IPv4, IPv6, TCP) – Supported if all team members support it.
  • Virtual Machine Queues (VMQ) – Supported when teaming is installed under the Hyper-V switch.
  • QoS in host/native OSs – Supported, but use of minimum bandwidth policies will degrade throughput through a team.
  • Virtual Machine QoS (VM-QoS) – VM-QoS is affected by the load distribution algorithm used by NIC teaming. For best results, use the Hyper-V Port load distribution mode.
  • 802.1X authentication – Not compatible with many switches. Should not be used with NIC teaming.
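
If you want to verify the “all team members must support it” conditions above before you build a team, the NetAdapter PowerShell module can report each NIC’s offload capabilities. A quick sketch, assuming two candidate NICs named NIC1 and NIC2 (hypothetical names):

```powershell
# Report offload support on prospective team members (NIC names are hypothetical)
Get-NetAdapterLso -Name "NIC1","NIC2"               # Large Send Offload
Get-NetAdapterRss -Name "NIC1","NIC2"               # Receive Side Scaling
Get-NetAdapterChecksumOffload -Name "NIC1","NIC2"   # checksum offloads (IPv4/IPv6/TCP)
```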

Team Members

Between 1 and 32 team members (a maximum of 2 in a guest OS NIC team), all on the Windows Server 2012 hardware compatibility list (WHQL tested or logo’d).  The NICs can be of mixed model and manufacturer.

The team members can be NICs of different rated speeds, but all active team members must be operating at the same speed.  A hot-spare NIC (in a 2-team-member team) may run at a different speed than the active team member.
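
A quick way to check operating speeds before teaming, and to designate a hot-spare afterwards, is shown in this sketch (the NIC and team names are hypothetical):

```powershell
# Compare the current operating speed of the candidate NICs
Get-NetAdapter -Name "NIC1","NIC2" | Format-Table Name, LinkSpeed

# Make NIC2 a hot-spare (standby) member of an existing team named Team1
Set-NetLbfoTeamMember -Name "NIC2" -Team "Team1" -AdministrativeMode Standby
```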

You cannot use NICs other than Ethernet in your team – no Wi-Fi, Bluetooth, etc.

Guest OS NIC Teams

See my previous post on Guest OS NIC teaming.

The preferred way (that is, not the only way, just the preferred way) to support multiple VLANs in a single VM is to do the following (see the PowerShell sketch after the list):

  1. Create multiple vNICs in the VM, one per required VLAN
  2. Enable trunking on the physical switch(es)
  3. Configure a VLAN ID for each virtual NIC, one per required VLAN
  4. Rename the virtual NICs (including the VLAN ID as part of the name) using Rename-NetAdapter
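
Here is a minimal PowerShell sketch of steps 1, 3, and 4, assuming a VM named VM01, a virtual switch named External1, and VLANs 101 and 102 (all hypothetical values); step 2 is performed on the physical switch itself:

```powershell
# Step 1: add one vNIC per required VLAN (run on the Hyper-V host)
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "External1" -Name "VLAN101"
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "External1" -Name "VLAN102"

# Step 3: tag each vNIC with its VLAN ID
Set-VMNetworkAdapterVlan -VMName "VM01" -VMNetworkAdapterName "VLAN101" -Access -VlanId 101
Set-VMNetworkAdapterVlan -VMName "VM01" -VMNetworkAdapterName "VLAN102" -Access -VlanId 102

# Step 4: inside the guest OS, rename each adapter to include its VLAN ID
Rename-NetAdapter -Name "Ethernet" -NewName "VLAN101"
```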

Teaming

You cannot create a team made up of other teams.  You cannot use the team interface of another NIC teaming technology as a WS2012 team member or vice versa.

Microsoft says:

… it is STRONGLY RECOMMENDED that no system administrator ever run two teaming solutions at the same time on the same server.

In other words, don’t use HP/Dell/Intel/Broadcom NIC teaming on a machine where you intend to use WS2012 NIC teaming.  And remember – Microsoft does not support any 3rd party NIC teaming and never has.

You cannot create a NIC team from management OS virtual NICs (vNICs connected to the Hyper-V switch). MPIO is not NIC teaming; we can use MPIO with multiple management OS virtual NICs for iSCSI.
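
For completeness, a rough sketch of MPIO-based iSCSI from the management OS (the portal address is hypothetical, and this is not a full iSCSI walkthrough):

```powershell
# Install MPIO and let the Microsoft DSM claim iSCSI devices
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Connect to the iSCSI target with multipathing enabled
New-IscsiTargetPortal -TargetPortalAddress "10.0.1.10"
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true
```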

MAC Address and Switch Independent/Address Hash Teams

This is an odd one.  Remember that this type of team receives all inbound traffic on a single team member?  That’s because the team registers the IP address on the network using a single MAC address – that of the primary team member.  Removing the primary team member from the team and reusing it for something else on the network can cause a MAC address conflict, with unpredictable results.  To prevent/resolve this, disable and re-enable the team interface – that causes the team to pick a new MAC address to register on the network.
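
In PowerShell, that disable/re-enable cycle looks something like this (assuming a team interface named Team1 – a hypothetical name):

```powershell
# Bounce the team interface so the team picks a new MAC address to register
Disable-NetAdapter -Name "Team1" -Confirm:$false
Enable-NetAdapter -Name "Team1"
```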

VLANs

To support VLANs on the NIC team or in guests:

  • Set the team members’ physical switch ports to trunk mode
  • Do not filter VLANs on the team members

Virtual Switch

When using a NIC team for a virtual switch, there should be just a single team interface, used only by the virtual switch.  There should be no VLAN filtering on the team or the team interface.  All VLAN filtering should be done by the Hyper-V switch (in the properties of each virtual NIC) or within the guest OS.
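
Putting that together, here is a sketch of a team with just its single default team interface bound to a virtual switch, with VLAN filtering done per virtual NIC (the team, switch, VM, and NIC names are hypothetical):

```powershell
# Create the team; the single default team interface is all the vSwitch needs
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind the virtual switch to the team interface; no VLAN filtering on the team
New-VMSwitch -Name "External1" -NetAdapterName "Team1" -AllowManagementOS $false

# Filter VLANs in the properties of each virtual NIC, not on the team
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 101
```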

This is the last planned post in this series, but I might revisit it with more if I think of something.  I am not doing any technical posts on how to create/configure/use teams – you can read all about that in Windows Server 2012 Hyper-V Installation and Configuration Guide (available for pre-order on Amazon, and due in Feb/March), where you’ll find an in-depth discussion of NIC teaming, as well as all the other pieces of networking that will make your WS2012 Hyper-V hosts sing.  The chapter I wrote on Hyper-V networking is a monster, taking you from the very basics through virtual switches, extensibility, VLANs, NIC teams, hardware offloads/optimisations, QoS, and converged fabrics – with lots of examples and PowerShell.


And no, there are no preview/beta copies of the book.

4 thoughts on “Windows Server 2012 NIC Teaming Part 7 – Support Policies”

  1. Great articles on this subject, I’ve been testing different scenarios on our new soon-to-be Server 2012 Hyper-V environment.

    In all scenarios I read, the storage network is separate from the VM/host traffic. Host/VM traffic is now truly converged through the Hyper-V switch, and working as it should on my test server. But I want to use the same physical NICs for MPIO iSCSI traffic to the EqualLogic SAN units.

    As MS NIC Teaming is enabled, it is not possible to enable IPv4 to allow TCP/IP connectivity to the EqualLogics, so it seems MPIO isn’t going to work.

    So is installing a separate NIC for iSCSI (and separate switches/cabling) the only way? (Apart from using NPAR to create different partitions on the hardware NIC, but this will introduce another layer of QoS/configuration, and we’re not able to use DCB since the switches can’t see the different partitions.)

    Also, in view of true convergence, using 2 NICs for all traffic/purposes would be preferred.

    What is your view on this?

    We’re using a Dell M1000e blade chassis with M620 blades, one dual-port 10 Gb Broadcom NIC, and dual 10 Gb MXL switches (80 Gb stacked).

    1. You can converge iSCSI (search my blog for other posts) but that means you have to use non-dedicated switches. Take this up with your SAN OEM to see if they’ll support it. While MSFT might support it if you do it correctly, you can’t go to MSFT to get support for your design or your SAN.

      1. Saw your converged post before, and didn’t read/realise the single-team option depends on SAN supplier requirements/support.
        Since Dell EqualLogic doesn’t require different subnets/fabrics, I guess there is a chance it’s supported. I’m waiting for an answer from their solutions team (and will start testing with this config in the meantime).
        Thanks!

  2. When following your instructions (which are the same as many others, so this is not something to do with your directions) regarding the creation of the virtual switch and management vEthernet, Windows Server 2012 R2 assigns the vSwitch the same MAC as a member of the team. This is by design, and one has to watch out when and if they change the members of the team around. But when Windows creates the default vEthernet on the parent partition, it uses the same MAC as another team member, and this causes warnings in the event log. Can you explain this? Thank you.
