On another recent outing I got to play with some HP Gen8 blade servers. I was asked to come up with a networking design in which (please bear in mind that I am not a hardware guy):
- The blades would have a dual-port 10 Gbps mezzanine card that appeared to be doing FCoE
- There were 2 FlexFabric Virtual Connect modules in the blade chassis
- They wanted to build a WS2012 Hyper-V cluster using Fibre Channel storage
I came up with the following design:
The 2 FCoE adapters (I’m guessing that’s what they were) were each given a static 4 Gbps slice of the bandwidth from each Virtual Connect (2 * 4 Gbps), which would match 4 Gbps Fibre Channel (FC). MPIO was deployed to “team” the FC HBAs.
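For reference, a minimal sketch of what the MPIO side might look like in PowerShell, assuming the in-box Microsoft DSM is used (in reality the storage vendor’s own DSM and instructions may apply):

```powershell
# Add the MPIO feature so Windows can multipath the two FC HBA paths.
Install-WindowsFeature -Name Multipath-IO

# Claim all MPIO-capable devices (including the FC LUNs) for the Microsoft DSM:
# -r reboots when done, -i installs/claims, -a "" means all applicable devices.
mpclaim.exe -r -i -a ""

# Default to Round Robin so both 4 Gbps paths carry I/O.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```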
One Ethernet NIC was presented from each Virtual Connect to each blade (2 per blade), with each NIC getting 6 Gbps. WS2012 NIC teaming was used to team these NICs, and then we deployed a converged networking design in WS2012, using virtual NICs and QoS to dynamically carve up the bandwidth of the virtual switch (attached to the NIC team).
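Here’s roughly what that converged setup looks like in PowerShell. The NIC names, vNIC names, and weights below are placeholders for illustration, not the exact values from this engagement:

```powershell
# Team the two 6 Gbps NICs with WS2012 in-box NIC teaming.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create the virtual switch on top of the team with weight-based QoS.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Management OS virtual NICs for each traffic class.
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Cluster"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "LiveMigration"

# Carve up the bandwidth dynamically: weights, not hard caps.
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40

# Whatever is left over becomes the default bucket for VM traffic.
Set-VMSwitch "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 40
```

With -MinimumBandwidthMode Weight, each vNIC is only guaranteed its share of the team under contention; idle bandwidth is available to whoever needs it, which is what makes the carve-up dynamic rather than static.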
Some testing was done and we were running Live Migration at a full 6 Gbps, moving a VM with 35 GB of RAM via TCP/IP Live Migration in 1 minute and 8 seconds.
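If you’re wondering how Live Migration gets steered onto its own vNIC, the standalone-host version looks something like the sketch below (the subnet is a placeholder; in a cluster you would normally order the Live Migration networks in Failover Cluster Manager instead):

```powershell
# Enable inbound/outbound Live Migration on the host.
Enable-VMMigration

# Only accept Live Migration traffic on the LiveMigration vNIC's subnet.
Add-VMMigrationNetwork 192.168.102.0/24

# Limit concurrent migrations so each one can use the full QoS share.
Set-VMHost -MaximumVirtualMachineMigrations 2
```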
For WS2012 R2, I’d rather have 2 * 10 GbE for the 2 cluster & backup networks and 2 * 1 or 10 GbE for the management and VM networks. If the Virtual Connects allowed it (I didn’t have the time to check), I might have tried the design below. This would reduce the demands on the NIC team (actual VM traffic is usually light, but an assessment is required to determine that) and allow an additional 2 non-teamed NICs:
Keeping the 2 new NICs (running at 4 Gbps) non-teamed leaves open the option of using SMB 3.0 storage (without RDMA/SMB Direct) on a Scale-Out File Server. However, the big plus of SMB 3.0 Multichannel would be that I would now have a potential 8 Gbps to use for Live Migration via SMB 3.0. But this all assumes that I could carve up the networking like this via the Virtual Connects … and I don’t know if that is actually possible.
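If the Virtual Connects did allow that carve-up, the host side would be simple enough. A sketch, assuming WS2012 R2, two non-teamed NICs aliased "SMB1" and "SMB2", and a Scale-Out File Server called SOFS1 (all placeholder names):

```powershell
# Prefer the two dedicated 4 Gbps NICs for SMB traffic to the SOFS;
# SMB Multichannel then aggregates them to a potential 8 Gbps.
New-SmbMultichannelConstraint -ServerName "SOFS1" -InterfaceAlias "SMB1","SMB2"

# WS2012 R2 only: switch the Live Migration transport to SMB so it
# can use SMB Multichannel across both NICs.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```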