TechEd NA 2014–Cloud Optimized Networking In Windows Server 2012 R2

I am live blogging this session. Press F5 to get the latest updates.

Bob Combs and Greg Cusanza are the speakers. Both are PMs on the Windows Server data centre networking team.

Bob starts with a summary of 2012 R2 features.

The scenarios that they’ve engineered for:

  • Deliver continuously available services
  • Improve network performance
  • Advanced software defined networking
  • Networking the hybrid cloud
  • Simplify data centre networking

The extensible virtual switch is the policy edge of Hyper-V. It has lots of built-in features, such as port ACLs, but third parties extend the functionality of the virtual switch too, including 5nine.

Those port ACLs were upgraded to Extended ACLs with stateful inspection in WS2012 R2. The key thing here is that ACLs can now include port numbers, not just IP addresses. This takes advantage of the design of the vNIC and switch port in Hyper-V: the switch port is an attribute of the vNIC, not of the vSwitch, so the rules travel with a VM when it migrates. Policies apply to ports, so policies move with VMs.
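As a rough sketch of what that looks like (the VM name, port, and weight below are my own examples, not from the session), an extended ACL is attached to a VM's vNIC with the Hyper-V PowerShell module:

```powershell
# Allow inbound HTTPS to the VM, statefully, so return traffic is permitted.
# The VM name "Tenant01" and the weight value are hypothetical examples.
Add-VMNetworkAdapterExtendedAcl -VMName "Tenant01" `
    -Action Allow -Direction Inbound `
    -Protocol TCP -LocalPort 443 `
    -Weight 10 -Stateful $true

# Because the rule is bound to the vNIC's switch port, it follows the VM
# through Live Migration with no extra configuration.
```

Higher-weight rules take precedence, so you can layer broad deny rules under narrow allow rules.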

A few people in the room know what RSS is. About 90% of the room are using NIC teaming. About half of the room have heard of Hyper-V Network Virtualization.

Greg takes over and shows a photo of his data centre: a switch with 5 tower PCs, each with 2 NICs. There are 2 hosts, each with a virtual switch on a 2-NIC team. Host 1 runs AD, WAP and SPF VMs. Host 2 runs VMM and SQL VMs, plus some tenant VMs. One storage host runs iSCSI target and SOFS VMs. 2 VMs are set up as a Hyper-V cluster for the HNV gateway cluster. There is one physical network.
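A minimal sketch of that per-host setup, done outside VMM (the NIC, team, and switch names are assumptions, not from the session): team the two NICs, then bind a virtual switch to the team.

```powershell
# Team the two physical NICs (adapter names are hypothetical).
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Create the virtual switch on top of the team. Host vNICs are added
# separately, so don't auto-create a management OS adapter here.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "HostTeam" `
    -AllowManagementOS $false
```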

Note that the gateway template assumes that you are using SOFS storage.

The host networking detail: he uses vNICs for management, cluster, and Live Migration. Note that if you use RDMA then you need additional rNICs for that. He’s used multiple vNICs for the (non-RDMA) storage traffic to enable SMB Multichannel, and then he has a vNIC for Hyper-V Replica.
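Continuing the sketch (the vNIC names and VLAN IDs here are hypothetical), host vNICs like these are created in the management OS on the converged switch and isolated with VLANs:

```powershell
# Host (management OS) vNICs on the converged switch.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Tag each vNIC onto its own VLAN (IDs are examples).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 30
```

In the demo environment this is all driven from VMM via the logical switch rather than hand-run per host.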

VMM uses logical networking to deploy consistent networking across hosts; this is needed for HNV. The uplink port profile creates the team. Virtual switch settings create the virtual switch. Virtual adapters are created from port profiles. If a host “drifts”, this will be flagged in VMM and you can remediate it.

Remember to set a default port on your logical switch. That’s what VMs will connect to by default.

Then lots of demo. No notes taken here.

The HNV gateway templates are available through the Web Platform Installer. The 2-NIC template is normally used for private cloud; the 3-NIC template is normally used for public cloud. Note that you should edit the gateway properties to set the network settings, admin username/password, product key, etc. During template deployment you should edit the VM/computer names of the VMs and their host placement. They are not HA VMs; guest clustering is set up within the guest OS instead. This is because guest clustering fails a service over faster than VM failover can (service migration is faster than guest OS boot-up), which is quite logical and consistent with cloud service design, where HA is done at the service layer instead of the fabric layer.
