A friend recently asked me a question. He had just deployed a Windows Server 2012 cluster with converged fabrics. He had a limited number of NICs that he could install and a limited number of switch ports that he could use. His Hyper-V host cluster is using a 10 GbE connected iSCSI SAN. He also wants to run guest clusters that are connected to this storage. In the past, I would have said: “you need another pair of NICs on the iSCSI SAN and use a virtual network on each to connect the virtual machines”. But now … we have options!
Here’s what I have come up with:
iSCSI storage typically has these two requirements:
- Two NICs to connect to the SAN switches
- Each NIC on a different subnet
In the diagram, focus on the iSCSI piece. That’s the NIC team on the left.
The Physical NICs and Switches
As usual with an iSCSI SAN, there are two dedicated switches for the storage connections. That’s a common (though not universal) support requirement from SAN manufacturers. This is why we don’t have complete convergence to a single NIC team, like you see in most examples.
The host will have 2 iSCSI NICs (10 GbE). The connected switch ports are trunked, and both of the SAN VLANs (subnets) are available via the trunk.
The NIC Team and Virtual Switch
A NIC team is created. The team is configured with Hyper-V Port load distribution (load balancing), meaning that a single virtual NIC cannot exceed the bandwidth of a single physical NIC in the team. I prefer LACP (a teaming mode) because it is dynamic, but it is switch dependent: the switch ports must be configured as a link aggregation, and spanning two switches requires switch stacking. If that’s not your configuration, then you should use Switch Independent (which requires no switch configuration) instead of LACP.
The resulting team interface will appear in Network Connections (Control Panel). Use this interface to connect a new external virtual switch that will be dedicated to iSCSI traffic. Don’t create the virtual switch until you decide how you will implement QoS, because the minimum bandwidth mode can only be set when the switch is created.
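If you prefer to script it, here is a minimal PowerShell sketch of the team and switch creation. The physical NIC, team, and switch names are placeholders; adjust them (and the teaming mode) to match your environment.

```powershell
# Example only - substitute your own NIC, team and switch names.
# Create the iSCSI NIC team with Hyper-V Port load distribution.
New-NetLbfoTeam -Name "iSCSI-Team" -TeamMembers "iSCSI-pNIC1","iSCSI-pNIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

# If your two SAN switches are not stacked, use Switch Independent instead:
# New-NetLbfoTeam -Name "iSCSI-Team" -TeamMembers "iSCSI-pNIC1","iSCSI-pNIC2" `
#     -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create the external virtual switch on the team interface.
# MinimumBandwidthMode can only be chosen at creation time - hence "decide QoS first".
New-VMSwitch -Name "iSCSI-Switch" -NetAdapterName "iSCSI-Team" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false
```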
The Management OS (Host)
The host does not have two NICs dedicated to its own iSCSI needs. Instead, it will share the bandwidth of the NIC team with the guests (VMs) running on the host. That sharing will be controlled using Quality of Service (QoS) minimum bandwidth rules (covered later in the post).
The host still needs two iSCSI NICs of its own, each one on a different iSCSI subnet. To provide them (there is a PowerShell sketch after this list):
- Create 2 management OS virtual NICs
- Connect them to the iSCSI virtual switch
- Bind each management OS virtual NIC to a different iSCSI SAN VLAN ID
- Apply the appropriate IPv4/v6 configurations to the iSCSI virtual NICs in the management OS Control Panel
- Configure iSCSI/MPIO/DSM as usual in the management OS, using the virtual NICs
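A hedged PowerShell sketch of those steps follows. The virtual NIC names, VLAN IDs (101/102), and IP addresses are placeholders for whatever your SAN subnets actually use.

```powershell
# Example only - substitute your own VLAN IDs and IP addressing.
# Create two management OS virtual NICs on the iSCSI virtual switch.
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -SwitchName "iSCSI-Switch"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -SwitchName "iSCSI-Switch"

# Bind each management OS virtual NIC to a different iSCSI SAN VLAN.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 101
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 102

# Assign IP addresses to the virtual NICs (they appear as "vEthernet (iSCSI-A)" etc.).
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-A)" -IPAddress 10.0.101.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-B)" -IPAddress 10.0.102.11 -PrefixLength 24

# Then configure the iSCSI initiator/MPIO/DSM against these addresses as your
# SAN manufacturer documents, for example:
# Enable-MSDSMAutomaticClaim -BusType iSCSI
```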
Do not configure or use the physical iSCSI NICs! Your iSCSI traffic will originate from the management OS virtual NICs, flow through the virtual switch, then the team, then the physical NICs, and back again.
The Virtual Machines
Create a pair of virtual NICs in each virtual machine that requires iSCSI connected storage.
Note: Remember that you lose virtualisation features with this type of storage, such as snapshots (yuk anyway!), VSS backup from the host (a very big loss), and Hyper-V Replica. Consider using virtual storage that you can replicate using Hyper-V Replica.
The process for the virtual NICs in the guest OS of the virtual machine will be identical to the management OS process. Connect each iSCSI virtual NIC in the VM to the iSCSI virtual switch (see the diagram). Configure a VLAN ID for each virtual NIC, connecting one to each iSCSI VLAN (subnet) – this is done in Hyper-V Manager and is controlled by the virtualisation administrators (there is a sketch after the list below). In the guest OS:
- Configure the IP stack of the virtual NICs, appropriate to their VLANs
- Configure iSCSI/MPIO/DSM as required by the SAN manufacturer
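On the host side, adding and tagging the VM’s virtual NICs looks much the same as it did for the management OS. A sketch, assuming a VM called GuestVM1 and the same placeholder VLAN IDs as before:

```powershell
# Example only - run on the Hyper-V host; names and VLAN IDs are placeholders.
Add-VMNetworkAdapter -VMName "GuestVM1" -Name "iSCSI-A" -SwitchName "iSCSI-Switch"
Add-VMNetworkAdapter -VMName "GuestVM1" -Name "iSCSI-B" -SwitchName "iSCSI-Switch"

Set-VMNetworkAdapterVlan -VMName "GuestVM1" -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 101
Set-VMNetworkAdapterVlan -VMName "GuestVM1" -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 102

# Inside the guest OS, configure IP, iSCSI and MPIO just as you did in the
# management OS, for example (the target portal address is a placeholder):
# New-IscsiTargetPortal -TargetPortalAddress 10.0.101.50
# Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true
```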
Now you can present LUNs to the VMs.
Quality of Service (QoS)
QoS reserves minimum amounts of bandwidth on the iSCSI NICs for each connection. You’re using a virtual switch, so you will implement QoS in the virtual switch. Guarantee a certain amount for each of the management OS (host) virtual NICs; this has to be enough for all of the host’s own storage requirements (which are driven by the virtual machines running on that host). You can choose one of two approaches for the VMs:
- Create an explicit policy for each virtual NIC in each virtual machine – more engineering and maintenance required
- Create a single default bucket policy on the virtual switch that applies to all connected virtual NICs that don’t have an explicit QoS policy
This virtual switch policy gives the host administrator control, regardless of what a guest OS admin does. Note that you can also apply classification and tagging policies in the guest OS, to be applied by the physical network. There’s no point applying rules in the OS Packet Scheduler, because the only traffic on these two NICs should be iSCSI.
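Here is a minimal PowerShell sketch of that, assuming the switch was created with weight-based minimum bandwidth (as in the earlier example) and using placeholder weights and the same example names; tune the numbers to your own storage requirements.

```powershell
# Example only - weights are relative values, not percentages.

# Guarantee bandwidth for each management OS (host) iSCSI virtual NIC.
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -MinimumBandwidthWeight 25
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -MinimumBandwidthWeight 25

# Approach 1: an explicit policy for each virtual NIC in each virtual machine.
# Set-VMNetworkAdapter -VMName "GuestVM1" -Name "iSCSI-A" -MinimumBandwidthWeight 10

# Approach 2: a single default "bucket" that covers every connected virtual NIC
# without an explicit policy of its own.
Set-VMSwitch -Name "iSCSI-Switch" -DefaultFlowMinimumBandwidthWeight 50
```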
Note: remember to change the NIC binding order in the host management OS and guest OSs so that the iSCSI NICs are at the bottom of the order.
I checked with the Microsoft PMs because this configuration is nothing like any of the designs they have presented or shared. It appears to be OK with Microsoft.
For those of you who are concerned about NIC teaming and MPIO: in this design, MPIO has no visibility of the NIC team that resides underneath the virtual switch, so there is no support issue. As always:
- Use the latest stable drivers and firmware
- Apply any shared hotfixes (not just Automatic Updates via WSUS, etc) if they are published
- Do your own pre-production tests
- Do a pilot test
- Your SAN manufacturer will have the last say on support for this design
If you wanted, you could use a single iSCSI virtual NIC in the management OS and in the guest OS, without MPIO. You still have path fault tolerance, because the NIC team provides what MPIO would otherwise give you. Cluster validation would give you a warning (not a failure), and the SAN manufacturer might get their knickers in a twist over the lack of dual subnets and MPIO.
And … check with your SAN manufacturer for guidance on the subnets, because not all have the same requirements.