Converged fabrics give us options. There's no one right way to implement them; browse around TechNet and you can see that. Options are good. But maybe you don't want to weigh them up every time, in which case you pick an architecture, script the deployment, and reuse that script for every host configuration. The benefit of that approach is extreme standardisation, and it removes most of the human element where mistakes happen.
Sample Configuration 1 – Standalone Host Using All The NICs
Right now, I'm thinking to myself, "how many people looked at the picture, thought that this stuff is only for big companies, and didn't bother reading this text?" They're missing out on something important, and that includes the small business with a couple of VMs.
In this example, a small company is installing a single host or a few non-clustered hosts. Or it could be a hosting company installing dozens or hundreds of non-clustered hosts. The server comes with 4 * 1 GbE NICs or with 2 * 10 GbE NICs. All the NICs are teamed, a single virtual switch is created and bound to the team, and the VMs talk via that. Then a single virtual NIC is created in the management OS for managing and connecting to the host.
The benefit is that all functions of the host and VMs go through a single LBFO team. I can script the entire setup by adding all NICs into the team; there's no figuring out which NIC is which. Combined with QoS, I also get link aggregation, meaning lots of pipe, even with 4 * 1 GbE NICs.
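That scripted setup can be sketched in a few lines of PowerShell using the Windows Server 2012 LBFO and Hyper-V cmdlets. The team, switch, and virtual NIC names here are placeholders, and the bandwidth weight is just an illustrative value:

```powershell
# Team every physical NIC in the host - no figuring out which NIC is which.
New-NetLbfoTeam -Name "ConvergedTeam" `
    -TeamMembers (Get-NetAdapter -Physical).Name `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind a single virtual switch to the team, with weight-based QoS enabled.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create one virtual NIC in the management OS for managing the host,
# and guarantee it a slice of the pipe.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
```

Run against a fresh host, the same script produces the same converged fabric every time, which is where the standardisation benefit comes from.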
Sample Configuration 2 – Clustered Host With SAS/Fibre Channel
In this example, I have two additional virtual NICs in the management OS, giving me cluster communications (and CSV) and Live Migration networks. All of these networks (VM and management OS) are probably isolated on physical VLANs through VLAN ID binding and by trunking the physical switch ports of the converged fabric.
The benefit of this example is that I've been able to switch to 10 GbE using the 2 on-board NICs that come in the new DL380 and R720. I don't need 8 NICs (4 connections * 2 for NIC teaming) like I would have in W2008 R2. I get access to a big pipe with far fewer switch ports and NICs, with QoS guaranteeing each function its share of bandwidth while still allowing bursts.
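Extending the previous script, the extra management OS virtual NICs, VLAN isolation, and QoS weights might look like this. The switch name, VLAN IDs, and weights are placeholder values; the physical switch ports would need to be trunked for the VLAN IDs to work:

```powershell
# Add cluster/CSV and Live Migration virtual NICs to the converged switch.
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Isolate each function on its own VLAN via VLAN ID binding.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" `
    -Access -VlanId 102
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" `
    -Access -VlanId 103

# QoS weights guarantee each function a minimum share of the 10 GbE pipe,
# while unused bandwidth is still available for bursts.
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
```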
Sample Configuration 3 – Clustered Host with Physically Isolated iSCSI
The one major rule we have with iSCSI NICs is to never use NIC teaming; we use MPIO for pairs of iSCSI NICs instead. But what if we want to converge the iSCSI fabric as well? We're still in Release Candidate days, so there are no right/wrong answers, best practices, or support statements yet. We just don't know. In my demos, I've had a single virtual NIC for iSCSI without using DCB. If I wanted to be a bit more conservative, I could use the above configuration, which takes the previous configuration and adds a pair of physically isolated NICs for iSCSI.
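For the conservative option, the two iSCSI NICs stay out of the team and MPIO handles the redundant paths. A rough sketch, where the target portal address and IQN are placeholder values for whatever your storage presents:

```powershell
# Install MPIO and let it claim iSCSI devices (the two iSCSI NICs are
# deliberately left out of the LBFO team).
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Connect to the target with multipath enabled, once per iSCSI NIC path.
New-IscsiTargetPortal -TargetPortalAddress "10.0.50.10"
Connect-IscsiTarget -NodeAddress "iqn.2012-06.example:target1" -IsMultipathEnabled $true
```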
Sample Configuration 4 – Clustered Host with SMB 3.0 and Physically Isolated Virtual Switch
The above is one that was presented at the Build conference last September. The left machine is an SMB 3.0 file server for storing the VMs’ files. The virtual switch is physically isolated, using a pair of teamed NICs. Another NIC team in the host has virtual NICs directly connected to it for the management OS and cluster functions.
A benefit of this is that RSS can be employed on the management OS NIC team to give us SMB 3.0 Multichannel: multiple SMB data streams over multiple RSS-capable NICs. The virtual switch NICs can use the Hyper-V port load distribution mode, and DVMQ can be enabled to optimise VM networking, assuming the NICs support it. Note that DVMQ and RSS should not be used on the same NICs; that's why the loads are isolated here.
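The RSS/DVMQ split can be applied per NIC with the NetAdapter cmdlets. The NIC names below are placeholders for the management team members and the virtual switch team members:

```powershell
# RSS on the management OS team members enables SMB Multichannel.
Enable-NetAdapterRss -Name "MgmtNIC1", "MgmtNIC2"

# VMQ on the virtual switch team members optimises VM networking.
# Never enable both RSS and VMQ on the same NIC.
Enable-NetAdapterVmq -Name "SwitchNIC1", "SwitchNIC2"

# Check that SMB 3.0 is actually running multiple streams across the NICs.
Get-SmbMultichannelConnection
```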
I'm sure if I sat down and thought about it, there would be many more configurations. Would they be best practice? Would they be supported? We'll find out later on. But I do know for certain that I can reduce my NIC requirements and increase network path fault tolerance with converged fabrics.