W2008 R2 Hyper-V Advanced Networking

I was looking for some official documentation on VMQ and TCP Chimney for Windows Server 2008 R2 Hyper-V this morning.  All I was finding were incomplete third-party blog posts.  My last-gasp searches eventually brought me to a Microsoft document called "Networking Deployment Guide: Deploying High-Speed Networking Features", which goes into a good bit of detail.  It looks pretty good at first glance.

Unicast Mode Network Load Balancing on Hyper-V

This Microsoft blog post discusses a problem you might encounter when you enable NLB in unicast mode on Hyper-V virtual machines.  Clients may not be able to access the virtual IP address of the NLB cluster or the VMs themselves.  This is because MAC spoofing is disabled by default on W2008 R2.  The blog post shows you what change to make to each VM in the NLB cluster to resolve the issue.

Hyper-V VMQ Unsupported By Intel NIC Teaming Pre 15.0

I was looking up something for someone earlier when I found that Intel NIC teaming does not support Virtual Machine Queue (VMQ), or VMDq as Intel calls it.

“… teaming is not compatible with VMDq and Hyper-V*. Intel PROSet version 14.7 or later will automatically disable VMDq for adapters in teams. Intel plans a future software release that will allow both ANS teaming and VMDq to be enabled at the same time.

If you use Intel PROSet versions prior to version 14.7 to configure teams or VLANs with Virtual Machine Queues enabled, system instability may occur including a potential Windows* bug check (popularly known as Blue Screen of Death or BSOD).

To recover from a Windows* bug check (BSOD) caused by configuring ANS teams or VLANS, unplug the Ethernet cables. After starting Windows remove the ANS configured teams and VLANs or disable Virtual Machine Queues”.

EDIT #1:

As you’ll see in the comments, the Intel v15.0 drivers do add support for VMQ with Intel NIC teaming.  Thanks to Brian Johnson of Intel for that info.


Increase Network Buffer Sizes on Hyper-V VMBus

I saw this post being re-tweeted by Ben Armstrong and read it this morning.  It might actually be the solution to a weird problem we’ve been having at work.

Think back to your computer science classes.  Every process on a computer only ever gets a slice of time on the processor.  When it is moved off the processor it is placed in a frozen state, allowing another process to execute.  A 4-core processor allows 4 processes to run at once, but lots of other processes are frozen in a non-responsive state.  These idle times are extremely short.  We cannot perceive them.

A virtual machine (on any platform) is a process.  Therefore a VM can at times not actually be executing on the processor and be unresponsive to the network.  Again, this is an incredibly short window.

Hyper-V handles this by buffering network traffic for the VM.  The default buffer size is 1MB.  The blog post shows you how to change this buffer size when dealing with larger amounts of network traffic, i.e. when there is a risk of the buffer filling and network traffic being lost.  They suggest expanding the buffer to 2MB in those scenarios, or to its maximum of 4MB in extreme scenarios.

The process is rather manual because it is very VM-specific (finding the GUID for the virtual network card, searching for the GUID in the registry, adding some values), which is a pity.  I hope MS comes up with a way to make this simpler, e.g. a slider in the VM properties, or a policy setting for a VMM host group.
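If you had to script the change across a number of VMs, it would be a small registry edit like the sketch below.  Be warned: the key path, value name and units in this sketch are placeholders of my own and are not taken from the post being referenced, so substitute the real ones from that post before using anything like this.

```python
# Minimal sketch of the kind of per-VM registry change described above.
# IMPORTANT: the key path and value name here are PLACEHOLDERS, not the
# actual Hyper-V settings -- take the real ones from the referenced post.
# Run elevated; writing under HKLM needs administrator rights.
import winreg

VNIC_GUID = "{00000000-0000-0000-0000-000000000000}"   # the vNIC's GUID (hypothetical value)
KEY_PATH = (r"SYSTEM\CurrentControlSet\Services\VMSMP\Parameters\NicList"
            + "\\" + VNIC_GUID)                         # hypothetical location
VALUE_NAME = "ReceiveBufferSize"                        # hypothetical value name
NEW_VALUE = 2 * 1024 * 1024                             # 2MB, per the post's advice
                                                        # (check whether the real value is bytes or a buffer count)

# Open (or create) the key under HKLM and write the DWORD value.
key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_WRITE)
winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, NEW_VALUE)
winreg.CloseKey(key)
print("Wrote {0}={1} for vNIC {2}".format(VALUE_NAME, NEW_VALUE, VNIC_GUID))
```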


0x0000007C BUGCODE_NDIS_DRIVER Blue Screen on Windows Server 2008 R2 with NLB

There is a blog post by a Microsoft employee that describes an issue where a virtual machine (Hyper-V or VMware) running Windows Server 2008 R2 will crash.  The VM is configured with Windows Network Load Balancing.  Their research found that the problem occurred with “certain” antivirus packages installed.  They didn’t (and probably won’t) specify which ones.  The two proposed solutions are:

  1. Configure NLB before installing the antivirus package
  2. Uninstall the antivirus package

Ireland: Slowest Internet Connectivity In Europe

It’s no surprise to 99.999% of Irish people that it has been announced that we in Ireland have the slowest internet connectivity in Europe.  Things like 30Mb cable broadband or 24Mb ADSL2 are irrelevant to the vast majority of us despite the number crunching by ISPs.  Want an example?  When I visit my family home (not out in some wilderness) we can have 26Kbps dial-up to work on.  Fantastic!  There’s no broadband and no 3G signal.  But according to the National “Broadband” Scheme (which is 3G based) the area has full coverage.  Just a few miles from here, beside one of our largest military camps, is another big broadband black spot.

All this rubbish I hear about “smart economy”, “digital hub of Europe”, etc., from politicians: it’s all BS.  Until some backside is kicked and we have real connectivity, not just for homes but for businesses, we’ll continue to lag behind.  This is basic infrastructure, as important as rail and roads.  Modern business mandates the use of the Internet for communication and for cloud computing.  We’re being held back and made uncompetitive.


Hyper-V and VLANs

How do you run multiple virtual machines on different subnets?  Forget for just a moment that these are virtual machines.  How would you do it if they were physical machines?  The network administrators would set up a Virtual Local Area Network or VLAN.  A VLAN is a broadcast domain, i.e. it is a single subnet and broadcasts cannot be transmitted beyond its boundaries without some sort of forwarder to convert the broadcast into a unicast.  Network administrators use VLANs for a bunch of reasons:

  • To control broadcasts, because they can become noisy.
  • To be creative with IP address ranges.
  • To separate network devices using firewalls.

That last one is why we have multiple VLANs at work.  Each VLAN is firewalled from every other VLAN.  We open up only the ports we need between VLANs and to/from the Internet.

Each VLAN has an ID.  That is used by administrators for configuring firewall rules, switches and servers.
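On the wire, that ID travels in an 802.1Q tag inserted into the Ethernet frame header.  This has nothing to do with Hyper-V specifically, but a small sketch of packing and reading such a tag shows exactly where the 12-bit VLAN ID lives:

```python
# The 802.1Q tag is 4 bytes added to the Ethernet header: a 0x8100 TPID
# followed by a 16-bit TCI holding 3 bits of priority, 1 DEI bit and the
# 12-bit VLAN ID (1-4094).
import struct

def build_dot1q_tag(vlan_id, priority=0, dei=0):
    """Return the 4-byte 802.1Q tag for a given VLAN ID."""
    tci = ((priority & 0x7) << 13) | ((dei & 0x1) << 12) | (vlan_id & 0xFFF)
    return struct.pack("!HH", 0x8100, tci)

def read_vlan_id(tag):
    """Extract the VLAN ID from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == 0x8100, "not an 802.1Q tag"
    return tci & 0xFFF

tag = build_dot1q_tag(102)      # e.g. the VLAN 102 used later in this post
print(read_vlan_id(tag))        # -> 102
```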

How do you tell a physical server that it is on a VLAN?

There are two ways I can think of:

  • The network administrators can assign the switch ports that will connect the server to a specific VLAN.
  • The network administrators can create a “trunk” on a switch port.  That’s when all VLANs are available on that port.  Then, on the server, you need to use the network card driver or management software to specify which VLAN to bind the NIC to.  Some software (HP NCU) allows you to create multiple virtual network cards to bind the server to multiple VLANs using one physical NIC.

How about a virtual machine; how do you bind the virtual NIC of a virtual machine to a specific VLAN?  It’s a similar process.  I must warn anyone reading this that I’ve worked with a Cisco CCIE while working on Hyper-V, and previously with another senior Cisco guy while working on VMware ESX, and neither of them could really get their heads around this stuff.  Is it too complicated for them?  Hardly.  I think the problem was that it was too simple!  Seriously!

Let’s have a look at the simplest virtual networking scenario:

The host server has a single physical NIC to connect virtual machines.  A virtual switch is created in Hyper-V to pass the physical network that is attached to that NIC to any VM that is bound to that virtual switch.

You can see above that the switch only operates with VLAN 101.  Every server on the network operates on VLAN 101.  The physical servers are on it, the parent partition of the host is on it, etc.  The physical switch port is connected to the virtual machine NIC in the host using a physical network cable.  In Hyper-V, the host administrator creates a virtual switch.

Network admins: Here’s where you pull what hair you have left out.  This is not a switch like you think of a switch.  There is no console, no MIB, no SNMP, no ports, no spanning tree loops, nada!  It is a software connection and network pass-through mechanism that exists only in the memory of the host.  It interacts in no way with the physical network.  You don’t need to architect around it.

The virtual switch is a linking mechanism.  It connects the physical network card to the virtual network card in the virtual machine.  It’s as simple as that.  In this case both of the VMs are connected to the single virtual switch (configured as an External type).  That means they too are connected to VLAN 101.

How do we get multiple Hyper-V virtual machines to connect to multiple VLANs?  There are a few ways we can attack this problem.

Multiple Physical NICs

In this scenario the physical host server is configured with multiple NICs.

*Rant Alert* Right, there’s a certain small number of journalists/consultants who are saying “you should always try to have 1 NIC for every VM on the host”.  Duh!  Let’s get real.  Most machines don’t use their gigabit connections in a well designed and configured network.  That nightly tape backup over the network design is a dinosaur.  Look at differential, block-level continuous incremental backups instead, e.g. Microsoft System Center Data Protection Manager or Iron Mountain LiveVault.  Next, who has money to throw at installing multiple quad NICs with physical switch ports all over the place?  The idea here is to consolidate!  Finally, if you are dealing with blade servers you only have so many mezzanine card slots and enclosure/chassis device slots.  If a blade can have 144GB of RAM, giving maybe 40+ VMs, that’s an awful lot of NICs you’re going to need :)  Sure, there are scenarios where a VM might need a dedicated NIC, but they are extremely rare. *Rant Over*

In this situation the network administrator has set up two ports on the switches, one for each VLAN, to connect to the Hyper-V host.  VLAN 101 has a physical port on the switch that is cabled to NIC 1 on the host.  VLAN 102 has a physical port on the switch that is cabled to NIC 2 on the host.  The parent partition has its own NIC, not shown.  Virtual Switch 1 is created and connected to NIC 1, and Virtual Switch 2 is created and connected to NIC 2.  Every VM that needs to talk on VLAN 101 will be connected to Virtual Switch 1 by the host administrator.  Every VM that needs to talk on VLAN 102 should be connected to Virtual Switch 2 by the host administrator.

Virtual Switch Binding

You can only bind one External type virtual switch to a NIC.  So in the above example we could not have matched up two virtual switches to the first NIC and changed the physical switch port to be a network trunk.  We can do something similar but different.

When we create an external virtual switch we can tell it to only communicate on a specific VLAN.  You can see in the above screenshot that I’ve built a new virtual switch and instructed it to use the VLAN ID (or tag) of 102.  That means that every VM virtual NIC that connects to this virtual switch will expect to be on VLAN 102 with no exceptions.

Taking our previous example, here’s how this would look:

The network administrator has done things slightly differently this time.  Instead of configuring the two physical switch ports to be bound to specific VLANs, they’re simply configured as trunks.  That means many VLANs are available on each port.  The device communicating on the trunk must specify what VLAN it is on to communicate successfully.  Worried about security?  As long as you trust the host administrator to get things right you are OK.  Users of the virtual machines cannot change their VLAN affiliation.

You can see that virtual switch 1 is now bound to VLAN 101.  Every VM that connects to virtual switch 1 will only be able to communicate on VLAN 101 via the trunk on NIC 1.  It’s similar on NIC 2.  It’s set up with a virtual switch on VLAN 102 and all bound VMs can only communicate on that VLAN.

We’ve changed where the VLAN responsibility lies but we haven’t solved the hardware costs and consolidation issue.

VLAN ID on the VM

Here’s the solution you are most likely to employ.  For the sake of simplicity let’s forget about NIC teaming for a moment.

Instead of setting the VLAN on the virtual switch we can do it in the properties of the VM.  To be more precise we can do it in the properties of the virtual network adapter of the VM.  You can see that I’ve done this above by configuring the network adapter to only communicate on VLAN (ID or tag) 102.

This is how it looks in our example:

Again, the network administrator has set up a trunk on the physical switch port.  A single external virtual switch is configured and no VLAN ID is specified.  The two VMs are set up and connected to the virtual switch.  It is here that the VLAN specification is done.  VM 1 has its network adapter configured to talk on VLAN 101.  VM 2 is configured to operate on VLAN 102.  And it works, just like that!

Caveat: I’m seeing a problem where VMM-created NICs do not bind to a VLAN.  Instead I have to create the virtual network adapter in the Hyper-V console.

Here’s one to watch out for if you use the self-service console.  If you cannot trust delegated administrators/users to get the VLAN ID configuration right, or don’t trust them security-wise, then do not allow them to alter VM configurations.  If you do, then they can alter the VLAN ID and put their VM into a VLAN that it might not belong to.

Firewall Rules

Unless network administrators allow it, virtual machines on VLAN 101 cannot see virtual machines on VLAN 102.  A breakout is theoretically impossible due to the architecture of Hyper-V leveraging the No eXecute bit (AKA DEP or Data Execution Prevention).

Summary

You can see that you can set up a Hyper-V host to run VMs on different VLANs.  You’ve got different ways to do it.  You can even see that you can use your VLANs to firewall VMs from each other.  Hopefully I’ve explained this in a way that you can understand.

W2008 R2 Hyper-V Networking Enhancements

Windows Server 2008 R2 includes some enhancements to optimise how networking works in Hyper-V.  I’m going to have a look at some of these now.

Virtual Machine Queue

Here’s the way things worked in Windows Server 2008.  The NIC (bottom left) runs at the hardware level.  VM1 has a virtual NIC. 

When it communicates, memory is copied to/from that NIC by the parent partition.  All routing, filtering and data copying is done by the parent partition in Windows Server 2008.

Windows Server 2008 R2 takes advantage of Microsoft partnering with hardware manufacturers.

How it works now is that the NIC, i.e. the hardware, handles the workload on behalf of the parent partition.  Hardware performs more efficiently than software.  All that routing, filtering and data copying is handled by the network card in the physical host.  This does rely on hardware that is capable of doing it.

The results:

  • Performance is better overall.  The CPU of the host is less involved and more available.  Data transfer is more efficient.
  • Live Migration can work with full TCP offload.
  • Anyone using 10GbE will notice huge improvements.
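To picture what the hardware is doing, here is a toy software model of the VMQ idea.  This is not real driver code or any actual Hyper-V/NDIS API, just a sketch of the concept described above: the card keeps a receive queue per VM, filters incoming frames by destination MAC address in hardware, and hands each frame straight to the matching VM’s queue, so the parent partition no longer has to route and copy every packet in software.

```python
# Toy model of the VMQ concept -- not a real driver or Hyper-V API.
class VmqNicModel:
    def __init__(self):
        self.vm_queues = {}        # destination MAC -> that VM's receive queue
        self.default_queue = []    # frames for unknown MACs fall back to the parent

    def assign_queue(self, vm_mac):
        """Reserve a queue for a VM, keyed by its virtual NIC's MAC address."""
        self.vm_queues[vm_mac] = []

    def receive(self, frame):
        """Filter on destination MAC and deliver to the matching queue."""
        queue = self.vm_queues.get(frame["dst"], self.default_queue)
        queue.append(frame)

nic = VmqNicModel()
nic.assign_queue("00:15:5D:01:02:03")                            # example vNIC MAC
nic.receive({"dst": "00:15:5D:01:02:03", "payload": b"to VM1"})  # lands in VM1's queue
nic.receive({"dst": "00:15:5D:AA:BB:CC", "payload": b"other"})   # left for the parent
print(len(nic.vm_queues["00:15:5D:01:02:03"]), len(nic.default_queue))  # -> 1 1
```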

Jumbo Frames

TCP is pretty chatty.  Data is broken up and converted into packets that must be acknowledged by the recipient.  There’s an overhead to this, with the data being encapsulated with flow-control and routing information.  It would be more efficient if we could send fewer packets that contained more data, and therefore less encapsulation data overall.


Jumbo frames accomplish this.  Microsoft claims that you can get packets that contain 6 times more information with this turned on.  It will speed up large file transfers as well as reduce CPU utilisation.
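That 6x figure falls straight out of the frame sizes.  Here is a quick back-of-the-envelope check, assuming the standard 1500-byte Ethernet MTU versus a typical 9000-byte jumbo MTU:

```python
# Back-of-the-envelope check on the "6 times" claim.
STANDARD_MTU = 1500
JUMBO_MTU = 9000
IP_TCP_HEADERS = 40                      # 20-byte IP + 20-byte TCP header (no options)

payload_std = STANDARD_MTU - IP_TCP_HEADERS
payload_jumbo = JUMBO_MTU - IP_TCP_HEADERS

print(JUMBO_MTU / STANDARD_MTU)          # -> 6.0, i.e. 6 times more data per packet

# Packets needed to move 1 GiB -- fewer packets means fewer headers,
# fewer interrupts and fewer acknowledgements.
data = 1 * 1024**3
print(data // payload_std)               # roughly 735,000 packets at 1500 MTU
print(data // payload_jumbo)             # roughly 120,000 packets at 9000 MTU
```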

Chimney Offload

This one has been around for a while in Windows, but support for Hyper-V was added in Windows Server 2008 R2.

It’s similar to VMQ, requiring hardware support, and does a similar job.  The NIC is more involved in doing the work.  Instead of offloading from the parent partition, it’s offloading from the virtual machine’s virtual NIC.  The virtual NIC in the VM advertises connection offload capabilities.  The virtual switch in the parent partition offloads child partition TCP connections to the physical NIC.
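Whether the operating system (in the guest or the parent partition) will actually use chimney offload is governed by its global TCP settings, which you view and change with netsh.  As a rough sketch, and assuming the standard "netsh int tcp" syntax on W2008/R2, here is a small wrapper for checking and flipping that setting:

```python
# Small helper around the standard W2008/R2 netsh commands for the OS-level
# TCP chimney setting.  Run it inside the guest or the parent partition, elevated.
import subprocess

def show_tcp_globals():
    """Print the global TCP settings, including the Chimney Offload State."""
    out = subprocess.run(["netsh", "int", "tcp", "show", "global"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)

def set_chimney(state="enabled"):
    """Set chimney offload to 'enabled', 'disabled' or 'automatic'."""
    subprocess.run(["netsh", "int", "tcp", "set", "global",
                    "chimney={0}".format(state)], check=True)

if __name__ == "__main__":
    show_tcp_globals()
```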

Hardware Reliance

You need support from the hardware for these features.  At the RC release, drivers for the following NICs were included by MS in the media:

VM-Chimney Capable Drivers:

  • Broadcom Net-Xtreme II 1 Gb/s NICs (Models 5706, 5708, and 5709)
  • Broadcom 10Gb/s NICs (Models 57710, 57711)

VMQ Capable Drivers:

  • Intel Kawela (E1Q) 1 Gb/s NICs (also known as Pro/1000 ET NICs)
  • Intel Oplin (IXE) 10Gb/s NICs (also known as 82598)