Microsoft News Summary – 14 July 2014

After a week’s break in Finland, I am back with news from the last 10 or so days. It was a busy period!

Microsoft News Summary – 2 July 2014

It’s been a long time since I posted one of these! I’ve just trawled my feeds for interesting articles and came up with the following. I’ll be checking news and Twitter for more.

Microsoft News Summary – 23 May 2014

1,000,000 IOPS from Hyper-V VMs using a SOFS? Talk about nerd-vana!!! Here are the links I found interesting over the last 48 hours:

Microsoft News Summary – 21 May 2014

I took a break from these posts last week while I was at TechEd, and then had work to catch up on this week. Let’s get back to rockin’. There is a distinct tendency towards cloud and automation in the news of the last week. That should be no surprise.

TechEd NA 2014–SCVMM Bare Metal Host Deployment

Speakers: Damian Flynn, MVP, and Marc Van Eijk, MVP.

This is a confusing topic for many. Both speakers are very experienced in the real world, so it was a good session to take notes from and share.

Environment Preparation

  • Rack the servers and configure the BMC card.
  • Build an OS image and add it to the library.
  • Configure DNS aliases for BMCs, set up certs (if required), and set up VMM Run As profiles, e.g. join a domain, log into BMC, etc.

Infrastructure Services

  • Set up WDS for SCVMM
  • You need a DHCP scope for your bare metal hosts for the deployment phase. The hosts will get static IPs after deployment.
  • Prep SCVMM – import WDS, add the OS image to the library (and refresh), add a Run As account for the domain join, and add a Run As account for the BMC (see the sketch after this list).
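The VMM side of that prep can be scripted. Here’s a minimal sketch using the VMM PowerShell module – the WDS server name and account names are invented, and this is illustrative rather than a tested script:

```powershell
# Sketch only: wds01.demo.local and the account names are hypothetical.
Import-Module virtualmachinemanager

# Run As accounts for the domain join and for BMC access.
$domainRA = New-SCRunAsAccount -Name "Domain Join" -Credential (Get-Credential -Message "Domain join account")
$bmcRA    = New-SCRunAsAccount -Name "BMC Admin" -Credential (Get-Credential -Message "BMC admin account")

# Import the WDS server as a VMM PXE server.
Add-SCPXEServer -ComputerName "wds01.demo.local" -RunAsAccount $domainRA

# Refresh the library share so the newly added OS VHD(X) shows up.
Get-SCLibraryShare | Read-SCLibraryShare
```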

The Deployment

  • Configure the networking stack in SCVMM
  • Do a deep discovery to get the hardware inventory of the host
  • Deploy the VHD as boot-from-VHD on the host, install drivers, join the domain, configure networking, enable Hyper-V, etc.

YOU NEED TO UNDERSTAND YOUR TARGET

Concepts of the network in SCVMM

  • Logical network: A role
  • Network site: Part of logical network
  • Subnet/VLAN ID: A subnet that’s a part of a site
  • IP Pool: A pool of IPs for a subnet

A VM network is an abstraction of a logical network; it is required to connect virtual NICs to a logical network.

Demo of Logical Network

Marc asks who has used VMM. Almost everyone. Who has done bare metal deployment: Very few. Who was successful first time: one brave person puts his hand up (I mock him – seeing as he is a friend).

Marc does:

  1. Creates a host group.
  2. Creates a logical network called Management. He sets VLAN-based independent networks. There will be converged networks that are split up based on VLANs.
  3. Creates a network site called Host that is set to be available on the host group. He sets a VLAN of 0 for PXE boot, and sets the IP subnet.
  4. Creates an additional network site for Live Migration with a different VLAN.
  5. Then adds a third site for cluster communications with a VLAN. So: one logical network with 3 network sites.
  6. Creates IP pools for each network site; these are used to assign static IPs during deployment. Configures gateway and DNS settings for the management network (see the sketch after this list).
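Roughly what those steps look like in the VMM PowerShell module. A hedged sketch – the cmdlets are real, but the host group name, VLAN IDs, and subnets are invented for illustration:

```powershell
# Sketch: host group name, VLANs, and subnets are invented.
$hg = New-SCVMHostGroup -Name "Demo Hosts"

# One logical network ("Management") using VLAN-based independent networks.
$mgmt = New-SCLogicalNetwork -Name "Management" -LogicalNetworkDefinitionIsolation $true

# Three network sites: Host (VLAN 0 for PXE boot), Live Migration, and Cluster.
$hostSite = New-SCLogicalNetworkDefinition -Name "Host" -LogicalNetwork $mgmt -VMHostGroup $hg `
    -SubnetVLan (New-SCSubnetVLan -Subnet "10.0.1.0/24" -VLanID 0)
$lmSite = New-SCLogicalNetworkDefinition -Name "Live Migration" -LogicalNetwork $mgmt -VMHostGroup $hg `
    -SubnetVLan (New-SCSubnetVLan -Subnet "10.0.2.0/24" -VLanID 102)
$clSite = New-SCLogicalNetworkDefinition -Name "Cluster" -LogicalNetwork $mgmt -VMHostGroup $hg `
    -SubnetVLan (New-SCSubnetVLan -Subnet "10.0.3.0/24" -VLanID 103)

# An IP pool per site; only the host pool carries gateway and DNS settings.
New-SCStaticIPAddressPool -Name "Host Pool" -LogicalNetworkDefinition $hostSite `
    -Subnet "10.0.1.0/24" -IPAddressRangeStart "10.0.1.10" -IPAddressRangeEnd "10.0.1.99" `
    -DefaultGateway (New-SCDefaultGateway -IPAddress "10.0.1.1" -Automatic) -DNSServer "10.0.1.2"
```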

Note that there is no need to do anything special to enable NVGRE – no special subnets, logical networks, or anything else. A check box was simply left checked when creating the logical network to allow NVGRE to be used.

  1. Creates a new logical network called Cloud Network. This is what would appear in WAP when a customer creates a virtual network – so choose a suitable name.
  2. Checks “allow new VM networks ….” to use NVGRE.
  3. Creates a site with a VLAN and associates with the host group.
  4. Now he creates an IP pool for that site/logical network. The number of IPs in the pool will limit the number of VMs. No DNS or gateway settings are needed (see the sketch after this list).
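In script form, the NVGRE piece really is just a flag or two on the logical network. A sketch – the subnet and pool range are invented, and I’m assuming the check box maps to the -EnableNetworkVirtualization/-UseGRE parameters:

```powershell
# Sketch: the "allow new VM networks ..." check box is assumed to map to
# these two parameters.
$cloud = New-SCLogicalNetwork -Name "Cloud Network" `
    -EnableNetworkVirtualization $true -UseGRE $true

$cloudSite = New-SCLogicalNetworkDefinition -Name "Cloud Site" -LogicalNetwork $cloud `
    -VMHostGroup (Get-SCVMHostGroup -Name "Demo Hosts") `
    -SubnetVLan (New-SCSubnetVLan -Subnet "10.0.10.0/24" -VLanID 110)

# No DNS or gateway settings; per the session, the pool size caps the number of VMs.
New-SCStaticIPAddressPool -Name "Cloud Pool" -LogicalNetworkDefinition $cloudSite `
    -Subnet "10.0.10.0/24" -IPAddressRangeStart "10.0.10.10" -IPAddressRangeEnd "10.0.10.250"
```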

So now we have two logical networks: Management and Cloud Network. The Cloud Network appears to be used for the PA (provider address) space.

  1. Creates a third logical network called Tenant VLANs.
  2. Network site: Names the site after the VLAN ID.
  3. Adds more network sites, named based on the VLAN IDs.
  4. Adds IP pools.


These VLANs appear to be used for tenants.

  1. Creates VM network for host/management.
  2. Creates VM network for cluster.
  3. Creates VM network for live migration.
  4. Creates a VM network for tenant A and another for tenant B (see the sketch after this list).
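A sketch of the equivalent VM network cmdlet calls. New-SCVMNetwork is the real cmdlet, but the isolation settings depend on how each logical network was defined, so treat the parameter values as assumptions:

```powershell
# Sketch: names reuse the earlier examples; isolation types are assumptions.
$mgmtLN  = Get-SCLogicalNetwork -Name "Management"
$cloudLN = Get-SCLogicalNetwork -Name "Cloud Network"

# VM networks that pass straight through to the VLAN-isolated sites.
New-SCVMNetwork -Name "Host Management" -LogicalNetwork $mgmtLN -IsolationType "VLANNetwork"
New-SCVMNetwork -Name "Cluster" -LogicalNetwork $mgmtLN -IsolationType "VLANNetwork"
New-SCVMNetwork -Name "Live Migration" -LogicalNetwork $mgmtLN -IsolationType "VLANNetwork"

# NVGRE-isolated VM networks for the tenants on the Cloud Network.
New-SCVMNetwork -Name "Tenant A" -LogicalNetwork $cloudLN -IsolationType "WindowsNetworkVirtualization"
New-SCVMNetwork -Name "Tenant B" -LogicalNetwork $cloudLN -IsolationType "WindowsNetworkVirtualization"
```

For the VLAN-isolated networks you would also pair each VM network with the right VM subnet (New-SCVMSubnet); I’ve left that out for brevity.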

Back to the presentation.

Network Design

Note that for VMM to create a team, you need to create a logical switch. BAD! This forces needless creation of virtual switches and limits things like RDMA. Complete convergence is also not good for some storage, e.g. RDMA or iSCSI storage; you might converge some traffic and leave your storage networks non-converged.

Benefit of logical switch

Repeatable consistency.

Note: a logical switch is also required for NVGRE, unless you want to go to PowerShell hell.

The design they are deploying:

(Diagram: two NIC teams – a management team and a tenant/HNV team.)

Demo

  1. Creates an uplink port profile to define a NIC team. This one is created for HNV/tenants. Selects the Cloud Network and the Tenant VLANs network sites, and makes sure the enable-NVGRE check box is left enabled.
  2. Creates an uplink port profile for the management network, and adds the cluster, host, and live migration network sites (see the sketch after this list).
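A sketch of those two uplink port profiles in PowerShell. The team mode and load-balancing values here are common defaults, not necessarily what was chosen on stage:

```powershell
# Sketch: site lookups assume the names used earlier in these notes.
$tenantSites = Get-SCLogicalNetworkDefinition | Where-Object {
    $_.LogicalNetwork.Name -in "Cloud Network", "Tenant VLANs"
}
New-SCNativeUplinkPortProfile -Name "Tenant Uplink" -LogicalNetworkDefinition $tenantSites `
    -EnableNetworkVirtualization $true `
    -LBFOTeamMode "SwitchIndependent" -LBFOLoadBalancingAlgorithm "HostDefault"

$mgmtSites = Get-SCLogicalNetworkDefinition -LogicalNetwork (Get-SCLogicalNetwork -Name "Management")
New-SCNativeUplinkPortProfile -Name "Management Uplink" -LogicalNetworkDefinition $mgmtSites `
    -LBFOTeamMode "SwitchIndependent" -LBFOLoadBalancingAlgorithm "HostDefault"
```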

What he’s done: configured the two network teams from the diagram and defined which networks will pass through each team.

  1. Creates a logical switch for management. Selects the management uplink port profile and selects the teaming option. Even if you have just one NIC, you can add a NIC later and join it to the team. Now he defines the convergence by adding virtual ports. A step in this is to define the port classification – this does QoS. Selects Host Management and matches it with the management network – and repeats for the rest of the management networks.
  2. Creates a logical switch for tenants, also teamed, using the tenant HNV uplink port profile. Adds three adapters (port profile classifications) for QoS – low, medium, and high (out of the box – weights of 1, 3, and 5). (See the sketch after this list.)
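The logical switch plumbing, sketched with the VMM cmdlets I believe sit behind the wizard – treat the exact parameter names as assumptions:

```powershell
# Sketch: builds the management logical switch from the earlier uplink profile.
$sw = New-SCLogicalSwitch -Name "Management Switch" -EnableSriov $false

# Bind the uplink port profile; this is what defines the team on each host.
New-SCUplinkPortProfileSet -Name "Management Uplinks" -LogicalSwitch $sw `
    -NativeUplinkPortProfile (Get-SCNativeUplinkPortProfile -Name "Management Uplink")

# Add a virtual port: a port classification (the QoS piece) paired with a
# virtual network adapter port profile.
New-SCVirtualNetworkAdapterPortProfileSet -Name "Host Management" -LogicalSwitch $sw `
    -PortClassification (Get-SCPortClassification -Name "Host management") `
    -VirtualNetworkAdapterNativePortProfile (Get-SCVirtualNetworkAdapterNativePortProfile -Name "Host management")
```

Repeat the last call for the cluster and live migration classifications; the tenant switch is the same pattern with the low/medium/high bandwidth classifications.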


  1. Next up: create a physical computer profile. Choose a host profile. Select the virtual hard disk from the library that will be the host OS. Now the fun bit – network configuration in Hardware Configuration.
  2. Tip: Expand this dialog using the control in the bottom right corner.
  3. It starts with a single physical NIC with the management role. Add 4 more physical NICs.
  4. The first and second are added to the management logical switch.
  5. The third and fourth are configured for the tenant logical switch.
  6. Edit the original physical NIC and select “Create a Virtual Network Adapter as the management NIC”. Set the transient physical network adapter as NIC 1. Apply a classification – host management. Set the IP pool as Host.
  7. Adds 2 more virtual NICs. Connects the 2nd to the management logical switch and sets it for Live Migration. Connects the 3rd to the management logical switch and configures it for cluster (see the sketch after this list).
  8. Can also do some other stuff, like filtering drivers from the library for precise PnP.
  9. Continue the wizard – set the domain join and Run As account. Set the local admin password, the company info, and the product key. An answer file can be added to customize the OS more, and you can run tasks with GUIRunOnce.
  10. You can skip the default VM storage path for clustered hosts – VMM will control this in other ways later.
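The networking part of the physical computer profile can also be expressed with New-SCPhysicalComputerNetworkAdapterProfile. This is a sketch of the demo’s NIC layout – the cmdlet is real, but I’m hedging on the exact parameter names:

```powershell
# Sketch: mirrors the demo layout (pNICs 1+2 -> management switch,
# pNICs 3+4 -> tenant switch, management vNIC with a static IP from the Host pool).
$mgmtSwitch   = Get-SCLogicalSwitch -Name "Management Switch"
$tenantSwitch = Get-SCLogicalSwitch -Name "Tenant Switch"

$pNics = 1..2 | ForEach-Object {
    New-SCPhysicalComputerNetworkAdapterProfile -SetAsPhysicalNetworkAdapter -LogicalSwitch $mgmtSwitch
}
$pNics += 3..4 | ForEach-Object {
    New-SCPhysicalComputerNetworkAdapterProfile -SetAsPhysicalNetworkAdapter -LogicalSwitch $tenantSwitch
}

# The management vNIC, with a static IP from the Host pool's subnet.
$mgmtVNic = New-SCPhysicalComputerNetworkAdapterProfile -SetAsVirtualNetworkAdapter `
    -SetAsManagementNIC -LogicalSwitch $mgmtSwitch `
    -PortClassification (Get-SCPortClassification -Name "Host management") `
    -UseStaticIPForIPConfiguration -IPv4Subnet "10.0.1.0/24"

# These adapter profiles are then fed to New-SCPhysicalComputerProfile along
# with the OS VHD, Run As accounts, product key, and answer file.
```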

Deployment Demo

  1. Kicks off the wizard from the host group. Provision the new machine.
  2. Select a host group that has a valid host profile. Select the host profile.
  3. Kick off the deep discovery. The host reboots into WinPE to allow VMM to audit the host hardware. With CDN (Consistent Device Naming) enabled, you can pre-bind NICs to logical switches/teams. Without it, you’ll need to know which NIC is plugged into which switch port; then you can bind NICs to the right logical switches. The server schedules a shutdown after the audit.
  4. In VMM you can finish the host configuration: naming of the host, and binding of NICs to logical switches if you don’t have CDN on the host. If you’re quick, the server will not shut down and the setup will kick off (see the sketch after this list).
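The same flow is scriptable. A sketch – the BMC address is invented, and the Find-SCComputer parameters are from memory, so verify before use:

```powershell
# Sketch: 10.0.0.50 and the account name are hypothetical.
$bmcRA = Get-SCRunAsAccount -Name "BMC Admin"

# Deep discovery: the host reboots into WinPE so VMM can audit the hardware.
$server = Find-SCComputer -BMCAddress "10.0.0.50" -BMCRunAsAccount $bmcRA `
    -BMCProtocol "IPMI" -DeepDiscovery

# The provisioning itself is done with New-SCVMHost, passing the physical
# computer (host) profile and the NIC-to-logical-switch bindings.
```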

Notes

Converging things like SMB 3.0 or Live Migration through a logical/virtual switch disables RSS, so you limit 10 GbE bandwidth to roughly 3.5 Gbps. You can create multiple management OS vNICs for SMB Multichannel, where VMQ dedicates a queue/core to each vNIC (sketched below).
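A sketch of the multiple-vNICs-for-SMB-Multichannel idea using the in-box Hyper-V module (the switch name and VLAN ID are invented):

```powershell
# Two management OS vNICs on the same virtual switch give SMB Multichannel
# two paths; VMQ can dedicate a queue/core to each vNIC.
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "Management Switch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "Management Switch"

# Put both on the storage VLAN.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB1" -Access -VlanId 201
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB2" -Access -VlanId 201
```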

My approach: I do not converge my SMB/cluster/storage rNICs. They are not teamed, so they are plain logical networks; there is then no need for a logical switch.

TechEd NA 2014–Cloud Optimized Networking In Windows Server 2012 R2

I am live blogging this session. Press F5 to get the latest updates.

Bob Combs and Greg Cusanza are the speakers. Both are PMs in the Windows Server data centre networking team.

Bob starts with a summary of 2012 R2 features.

The scenarios that they’ve engineered for:

  • Deliver continuously available services
  • Improve network performance
  • Advanced software defined networking
  • Networking the hybrid cloud
  • Simplify data centre networking

The extensible virtual switch is the policy edge of Hyper-V. It has lots of built-in features, such as port ACLs, but third parties extend the functionality of the virtual switch too, including 5nine.

Those port ACLs were upgraded to extended ACLs with stateful inspection in WS2012 R2. The key thing here is that ACLs can now include protocol port numbers, not just IP addresses. This takes advantage of the design of the vNIC and switch port in Hyper-V: the switch port is an attribute of the vNIC, not of the vSwitch, so policies apply to ports and the rules travel with a VM when it migrates (see the sketch below).
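For reference, this is the WS2012 R2 cmdlet behind extended port ACLs – a sketch with an invented VM name, port, and weights:

```powershell
# Allow inbound HTTP to the VM, statefully, so return traffic is permitted.
Add-VMNetworkAdapterExtendedAcl -VMName "Tenant01" -Action Allow -Direction Inbound `
    -LocalPort 80 -Protocol TCP -Weight 10 -Stateful $true

# Deny all other inbound traffic at a lower weight.
Add-VMNetworkAdapterExtendedAcl -VMName "Tenant01" -Action Deny -Direction Inbound -Weight 1
```

Because the rules live on the vNIC’s switch port, they follow the VM through a live migration with no reconfiguration.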

A few people in the room know what RSS is. About 90% of the room are using NIC teaming. About half of the room have heard of Hyper-V Network Virtualization.

Greg takes over and shows a photo of his data centre: a switch with 5 tower PCs, each with 2 NICs. Two are Hyper-V hosts, each with a virtual switch on the 2-NIC team. Host 1 runs AD, WAP, and SPF VMs. Host 2 runs VMM and SQL VMs, and some tenant VMs. One storage host runs iSCSI target and SOFS VMs. 2 VMs are set up as a cluster to form the HNV gateway cluster. There is one physical network.

Note that the gateway template assumes that you are using SOFS storage.

The host networking detail: he uses vNICs for management, cluster, and Live Migration. Note that if you use RDMA then you need additional rNICs for that. He’s used multiple vNICs for the (non-RDMA) storage for SMB Multichannel. And then he has a vNIC for Hyper-V Replica.

VMM uses logical networking to deploy consistent networking across hosts; this is needed for HNV. The uplink port profile creates the team, the virtual switch settings create the virtual switch, and virtual adapters are created from port profiles. If a host “drifts”, this will be flagged in VMM and you can remediate it.

Remember to set a default port on your logical switch. That’s what VMs will connect to by default.

Then lots of demo. No notes taken here.

The HNV gateway templates are available through the Web Platform Installer. The 2-NIC template is normally used for private cloud; the 3-NIC template is normally used for public cloud. Note that you should edit the gateway properties to set the network settings, admin username/password, product key, etc. During template deployment you should edit the VM/computer names of the VMs and their host placement. They are not HA VMs; guest clustering is set up within the guest OS. This is because guest clustering fails a service over faster than VM failover does (service migration is faster than guest OS boot-up – quite logical, and consistent with cloud service design where HA is done at the service layer instead of the fabric layer).

Microsoft News Summary – 8 May 2014

Here’s the news for the last 24 hours. I suspect things will remain quiet until the keynote at TechEd. Even then, I’d expect news to be limited to cloud services.

Presentation – Microsoft Azure And Hybrid Cloud

I recently presented in the MicroWarehouse and Microsoft Ireland road show to Irish Microsoft partners on the topic of the Cloud OS, comprised of Azure, Windows Server 2012 R2, Hyper-V, and System Center 2012 R2. You can find the slide deck below.


Microsoft News Summary – 2 May 2014

The big news yesterday was the general release of the new patch for IE on XP. Personally, I think this is a stupid mistake by Microsoft, and it will lead some laggards to reason that Microsoft has reversed course on the end of support. Microsoft can comment all they want; most people never read blogs or the press, or attend events. The mistake has been made, and it was one of the dumbest releases since Bob.

Microsoft News Summary – 29 April 2014

There is a lot of reading material this morning.