All My Windows Server 2012 Hyper-V Converged Fabric Posts From The Past 2 Weeks

Here’s a listing of each of my posts from the past 2 weeks on the subject of WS2012 Hyper-V converged fabrics:

You can lead a horse to water but you can’t make him drink.  It’s your turn now; I’ve given you more reading material than I had.  I learned by trying the stuff.  I’ve given you the starting point, now it’s up to you to read these posts, watch the online sessions from Build (probably with more online sessions coming at TechEd 2012), install Windows Server 2012, and try out these features for yourself.

Linux Integration Services V3.3 For Hyper-V

Version 3.3 of the Linux integration components was just released with support for Windows 8 and Windows Server 2012.

It supports the following versions of Hyper-V:

  • Windows Server 2008 Standard, Windows Server 2008 Enterprise, and Windows Server 2008 Datacenter
  • Microsoft® Hyper-V Server 2008
  • Windows Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and Windows Server 2008 R2 Datacenter
  • Microsoft Hyper-V Server 2008 R2
  • Windows 8 Release Preview
  • Windows Server 2012

See those last two?  Windows 8 and Windows Server 2012 are supported.

The supported guest OSs are:

  • Red Hat Enterprise Linux 6.0-6.2 x86 and x64 (Up to 4 vCPU)
  • CentOS 6.0-6.2 x86 and x64 (Up to 4 vCPU)
  • Red Hat Enterprise Linux 6.0-6.2 x86 and x64 (Up to 32 vCPU when used on a Windows 8 Release Preview or Windows Server 2012 host)
  • CentOS 6.0-6.2 x86 and x64 (Up to 32 vCPU when used on a Windows 8 Release Preview or Windows Server 2012 host)

RHEL 6.2 and CentOS 6.2 were added to the list in v3.3. SLES and RHEL 5.x use version 2.1 of the Linux Integration Services.

Notice that RHEL and CentOS support up to 32 virtual CPUs on Windows Server 2012 or Windows 8?  That gives us nicely scalable Linux workloads on Hyper-V :)  OK, let’s talk turkey.

Once you start adding lots of vCPUs to Linux, you have a few concerns:

  • Bear in mind that I’m a Linux noob, so forgive the lack of detail, but Linux needs some work to run with more than 8 vCPUs in a VM.  One fix is to use Linux kernel 3.4 or later.
  • With lots of vCPUs you need to handle NUMA nodes, and your Linux guest will be NUMA hardware aware on WS2012 with Linux Kernel 3.4 or later.

Thanks to the folks in MSFT for the quick updates!

What Do The New Windows Azure Services Mean to Us … and Hyper-V?

Azure, as it was previously, was a Platform-as-a-Service (PaaS), where developers could upload applications, run databases, and store data.  All that continues.  But there was no way to run virtual machines or websites like in traditional website or virtual private server (VPS) hosting.  PaaS on Azure looked very cool to developers with a lot of interesting back end services.  But the problem with PaaS is vendor lock-in.  You cannot take the application and move it to another hosting company like you can with a VM or a website; the code is written for Azure and its services.

Then a few years ago at the PDC conference, it was announced that virtual machine hosting was coming to Azure.  Surely this would give customers an atomic unit, a VM like we know in Hyper-V, that could be moved around?  Sort of.  The problem was that this proposed service would be stateless.  Reboot the VM and reset it back to its original state; data was stored on the other Azure services.  That’s not how we work with infrastructure so how could it be useful to us?

Then Mary Jo Foley reported many months ago that true stateful Infrastructure-as-a-Service (IaaS) was coming to Azure.  And yesterday, the details were announced by Microsoft.  They also released a document that gives a bit more detail on the new services:

Windows Azure Virtual Machines

You can take your normal Windows or Linux virtual machine workloads (Hyper-V compatible I guess), and run them in the public cloud (Azure).  These are persistent virtual machines, just like traditional VPS hosting.  The supported OSs at this point are:

  • Windows Server 2008 R2
  • Windows Server 2008 R2 with SQL Server 2012 Eval
  • Windows Server 2012 RC
  • Linux
  • OpenSUSE 12.1
  • CentOS-6.2
  • Ubuntu 12.04
  • SUSE Linux Enterprise Server 11 SP2

That looks pretty similar to the supported OSs for Hyper-V, with the addition of OpenSUSE 12.1.  I wonder if that’s in Hyper-V’s future?

Windows Azure Virtual Network

Question: Can I create a hybrid cloud where I run services on my private cloud (in my data centre) and in a public cloud (Azure), where my public cloud service is not open to the entire Internet audience?

Answer: Yes.  You can set up a site-to-site VPN using Windows Azure Virtual Network.  To be honest, some of the clues to this have been around for quite a while.  Take a look at some of the MSFT slides for Windows Server 2012, especially around VPN.

This is interesting:

With Virtual Network, IT administrators can extend on-premises networks into the cloud with control over network topology, including configuration of IP addresses, routing tables and security policies.

Does that sound familiar?  Do you think that there’s a bigger vision here, with MSFT providing a unified solution for public and private cloud, including Windows Server 2012 and Windows Azure Services?  You should.

Windows Azure Web Sites

Some people just want space to host a website.  Something nice and simple.  That’s exactly how I run this blog; I have a simple account that allows me X websites, space, and traffic.  I then upload/install a web app in the space and away I go, talking shite for years on end :)

And when it comes to hosting, that’s the majority of what people want.  It’s enough of an online presence for the majority of businesses, and more flexible than the alternative that MSFT offered: SharePoint in Office 365.  Welcome Windows Azure Web Sites:

…easily build and deploy websites with support for multiple frameworks and popular open source applications, including ASP.NET, PHP and Node.js. With just a few clicks, developers can take advantage of Windows Azure’s global scale without having to worry about operations, servers or infrastructure.

They go on:

It is easy to deploy existing sites, if they run on Internet Information Services (IIS) 7, or to build new sites, with a free offer of 10 websites upon signup, with the ability to scale up as needed with reserved instances.

Did I just read the word “free”?  Really?  What’s the catch?  Surely there is a catch?

This isn’t just for .NET and SQL Server either:

  • Multiple frameworks including ASP.NET, PHP and Node.js
  • Popular open source software apps including WordPress, Joomla!, Drupal, Umbraco and DotNetNuke
  • Windows Azure SQL Database and MySQL databases
  • Multiple types of developer tools and protocols including Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix

Windows Azure Management Portal

The most difficult piece of hosting is not the web servers and it’s not the virtualisation layer.  The most difficult piece is the portal, or as it’s traditionally known in the hosting business, the control panel. 

… the new Windows Azure Management Portal provides an integrated management experience across Windows Azure workloads in a single, modern user experience and is accessible from various platforms and devices.

The Windows Azure Preview Portal supports the following services:

  • Cloud Services
  • Virtual Machines (Preview)
  • Web Sites (Preview)
  • Virtual Network (Preview)
  • SQL Database (formerly known as SQL Azure)
  • Storage

There are other Azure improvements in this announcement, so check out the aforementioned document to get the details.

Online Presentation

Microsoft is running an online presentation later today to launch these new services.  It is on at 9PM Irish/UK time (10PM CET), an unfortunate time of day to choose for such an event.  A 9AM PST event would have been better, being 5PM UK/Irish time and 6PM CET.

What Does All This Mean?

Nothing has been announced but we can speculate :)  At Build it was made clear that lots of lessons were learned from Azure to make Hyper-V better.  Network Virtualisation was pitched as a way to move VMs from the private cloud to a public cloud (exactly what Azure is) with minimal disruption.  So maybe you could move Hyper-V VMs right up there!  Could that be partly why we have Shared Nothing Live Migration?  That’s a bit of a stretch, because Live Migration does require bandwidth.

One of the sales pitches with Hyper-V Replica is virtual DR in the cloud.  Hmm, what if you could replicate VMs to Azure?  But don’t forget that there’s more to virtual DR than starting up your VMs.  Remember that users need a way to access the services, assuming that their PCs were burned or flooded too (see VDI or virtual RDS).

I think over the next 2 years we could see some very interesting ways for us to expand our infrastructure footprint into Azure, and in ways we might not be expecting … yet another reason to be considering Windows Server 2012 instead of the alternative.

What About Other Hosting Companies?

There are a few reasons that I chose to get out of the hosting business back in 2010.  One of the big ones was that I saw the writing on the wall.  The likes of HP, Dell, Amazon, and Microsoft are too big to compete with on a large scale.  Yes, there are lots of customers who will want the bespoke services that a boutique and local hosting company can offer, but there aren’t that many of them.  And the year 2012 reminds me of the year 2001: everyone with a modem is launching a cloud (hosting) company.  Not many of them will be around in 2014, and very few of those extinctions will be because of acquisition (the good way to go out of business).

Hosting companies that are Microsoft partners might feel like their partner relationship is strained this morning.  MSFT can be cheaper and can out-market you through pure scale.  Service innovation will be the key.  Do it better.  Give a more human service where there’s an account manager and the helpdesk is more responsive.  Offer engineering and customisation services (consulting).  Don’t sell space … because this is a commodity market and the big guy always wins.  At least, that’s what I think.

Some Different Converged Fabric Architectures For Windows Server 2012 Hyper-V

Converged fabrics give us options.  There’s no one right way to implement them; browse around TechNet and you can see that.  Options are good, but if you’d rather not choose every time, you can pick an architecture, script the deployment, and reuse that script for every host configuration.  The benefit of that approach is extreme standardisation, and it removes most of the human element where mistakes happen.

Sample Configuration 1 – Standalone Host Using All The NICs

[Diagram: standalone host – all NICs in one team, a single virtual switch, and one management OS virtual NIC]

Right now, I’m thinking to myself: how many people looked at the picture, thought that this stuff is only for big companies, and didn’t bother reading the text, missing out on something important?  This design suits the small business with a couple of VMs too.

In this example a small company is installing a single host, or a few non-clustered hosts.  Or it could be a hosting company installing dozens or hundreds of non-clustered hosts.  The server comes with 4 * 1 GbE NICs or with 2 * 10 GbE NICs.  All the NICs are teamed.  A single virtual switch is created and bound to the team.  The VMs talk via that.  Then a single virtual NIC is created in the management OS for managing and connecting to the host. 

The benefit is that all functions of the host and VMs go through a single LBFO team.  I can script the entire setup by adding all NICs into the team; there’s no figuring out which NIC is which.  Combined with QoS, I also get link aggregation, meaning lots of pipe, even with 4 * 1 GbE NICs.
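
If I were scripting this on a Release Candidate host, the gist might look like the following sketch.  The adapter names, switch name, and bandwidth weight are made up for illustration; this is just one way to do it, not a best practice.

# Team every NIC in the host (adapter names are hypothetical)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent

# Create one virtual switch on the team with weight-based QoS, and no automatic management vNIC
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight

# Add a single management OS virtual NIC and guarantee it a share of the pipe
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10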

Sample Configuration 2 – Clustered Host With SAS/Fibre Channel

[Diagram: clustered host with SAS/Fibre Channel storage – converged team carrying management, cluster/CSV, and Live Migration virtual NICs plus VM traffic]

In this example, I have two additional virtual NICs in the management OS, giving me cluster communications (and CSV) and Live Migration networks.  All three networks (VM and management OS) are probably on isolated VLANs through VLAN ID binding and trunking of the physical switch ports of the converged fabric.

The benefit of this example is that I’ve been able to switch to 10 GbE using the 2 on-board NICs that come in the new DL380 and R720.  I don’t need 8 NICs (4 * 2) for NIC teaming these connections like I would have in W2008 R2.  I get access to a big pipe with far fewer switch ports and NICs, with QoS guaranteeing quality of service with burst capability.
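
Building on the previous sketch, the extra management OS virtual NICs could be added like this.  The VLAN IDs and weights are placeholders, not recommendations:

# Add virtual NICs in the management OS for cluster/CSV and Live Migration traffic
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Bind each virtual NIC to its own VLAN; the physical switch ports are trunked
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 101
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 102

# Guarantee each network a minimum share of the converged pipe
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30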

Sample Configuration 3 – Clustered Host with Physically Isolated iSCSI

[Diagram: clustered host – converged team for VM, management, cluster/CSV, and Live Migration traffic, plus a pair of physically isolated NICs for iSCSI]

The one major rule we have with iSCSI NICs is to never use NIC teaming.  We use MPIO for pairs of iSCSI NICs.  But what if we want to converge the iSCSI fabric as well?  We’re still in Release Candidate days, so there are no right/wrong answers, best practices, or support statements yet.  We just don’t know yet.  In my demos, I’ve had a single virtual NIC for iSCSI without using DCB.  If I wanted to be a bit more conservative, I could use the above configuration.  It takes the previous configuration and adds a pair of physically isolated NICs to use for iSCSI.
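
For the fully converged variant that I use in demos (a single virtual NIC for iSCSI, no DCB), the sketch is just one more virtual NIC on the converged switch; the name, VLAN ID, and weight are again hypothetical:

# A single management OS virtual NIC dedicated to iSCSI traffic
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI" -Access -VlanId 103
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 20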

Sample Configuration 4 – Clustered Host with SMB 3.0 and Physically Isolated Virtual Switch

[Diagram: clustered host with an SMB 3.0 file server for VM storage – one physically isolated NIC team for the virtual switch, and another team carrying the management OS and cluster virtual NICs]

The above is one that was presented at the Build conference last September.  The left machine is an SMB 3.0 file server for storing the VMs’ files.  The virtual switch is physically isolated, using a pair of teamed NICs.  Another NIC team in the host has virtual NICs directly connected to it for the management OS and cluster functions.

A benefit of this is that RSS can be employed on the management OS NIC team to give us SMB 3.0 Multichannel – multiple SMB data streams over multiple RSS-capable NICs.  The virtual switch NICs can assume the Hyper-V Port load distribution mode, and DVMQ can be enabled to optimise VM networking, assuming the NICs support it.  Note that DVMQ and RSS should not be used on the same NICs.  That’s why the loads are isolated here.
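
As a rough sketch of keeping the two offloads apart (the physical adapter names are made up for illustration), it could look like this:

# RSS on the physical NICs under the management OS team, for SMB 3.0 Multichannel
Enable-NetAdapterRss -Name "MGMT-NIC1","MGMT-NIC2"
Disable-NetAdapterVmq -Name "MGMT-NIC1","MGMT-NIC2"

# VMQ (DVMQ) on the physical NICs under the virtual switch team, for VM traffic
Enable-NetAdapterVmq -Name "VM-NIC1","VM-NIC2"
Disable-NetAdapterRss -Name "VM-NIC1","VM-NIC2"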

I’m sure if I sat down and thought about it, there would be many more configurations.  Would they be best practice?  Would they be supported?  We’ll find out later on.  But I do know for certain that I can reduce my NIC requirements and increase network path fault tolerance with converged fabrics.

Windows Server 2012 UK Events Reminder – Come See What’s In The Release Candidate

Here’s a quick reminder of the WS2012 Rocks events that are being run in a week’s time in the UK.  Alex Juschin (MVP) and I are presenting on Windows Server 2012 in a 4 hour event in Edinburgh (June 15th) and London (June 14th).

Alex will be presenting on Windows Server management and the impressive advances in RDS.  Seriously … would you like to deploy a VDI solution in a few mouse clicks?  Then you gotta attend.

I’ll be presenting on the advances in networking, with some storage thrown in for good measure … it’s hard to separate the two.  And then I cover Hyper-V … and you seriously will want to see the demo I have lined up for the UK to wrap up my sessions.  The Dublin demo was cool, but this one is way beyond what I did a few weeks ago.

This is, in my opinion, the biggest and most important release of Windows Server since 2000 … maybe even ever.  That’s not just hyperbole, as you can see by the list of virtualisation features alone.  Don’t get left behind.  Come to the event, see why we’re so excited, and get your career ahead of the pack.

For those of you in the UK, please feel free to spread the word about these events.  By the way London, Edinburgh is way ahead of you in registrations.  Are you really going to let that happen? :P

Windows Server 2012 Hyper-V Port ACLs

There are many reasons why you might want to isolate virtual machines at the NIC level in Hyper-V.  Maybe you have different tenants on a cloud.  Maybe you have some stuff that has high security requirements.  If so, then there’s a new feature in Windows Server 2012 Hyper-V that you’ll like: Port ACLs (access control lists).

Port ACLs allow you to set rules as follows:

  • Local MAC/IP address: what local address does this apply to?
  • Remote IP/MAC address: what remote address does this apply to?  Can be a specific IP address or network address or a wildcard.
  • Action: Do you want to block, allow, or measure traffic that this rule applies to?
  • Direction: Are you applying this rule to inbound traffic, outbound traffic, or traffic in both directions?

It’s important to note that Port ACLs work at the address level and not at the port or protocol level.  If you need that level of granularity, then check out one of the certified Hyper-V Switch extensions that MSFT partners such as Cisco and 5Nine are producing.

Here’s a pair of sample scripts that I use to demo Port ACLs:

Add-VMNetworkAdapterAcl -VMName VM60 -RemoteIPAddress * -Direction BOTH -Action Deny
Add-VMNetworkAdapterAcl -VMName VM60 -RemoteIPAddress 192.168.1.20 -Direction BOTH -Action Allow
Get-VMNetworkAdapterAcl -VMName VM60

The above script will:

  • Block all traffic to and from a VM called VM60.
  • Allow traffic to and from 192.168.1.20 for VM60.  The allow rule overrides the block rule.
  • The third line displays the Port ACL rules that are applied to VM60.

In the demo, I ping the default gateway (192.168.1.1).  That stops working when I run this script on the host.  And remember, I can move this VM to another switch or another host, and these Port ACLs should still apply.  I then ping 192.168.1.20 and that works fine.  I return to pinging 192.168.1.1 (which fails) and run this script:

Remove-VMNetworkAdapterAcl -VMName VM60 -RemoteIPAddress * -Direction BOTH -Action Deny
Remove-VMNetworkAdapterAcl -VMName VM60 -RemoteIPAddress 192.168.1.20 -Direction BOTH -Action Allow
Get-VMNetworkAdapterAcl -VMName VM60

The above script will remove the rules that I previously added and display the remaining rules (none).  Suddenly the failing ping to 192.168.1.1 starts to work.

Rather than just blocking/allowing traffic, you could choose to measure it.  For example, in a hosting environment you might create a rule to meter traffic to/from the Internet and bill the customer based on that.
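
A minimal sketch of a metering rule, assuming the same VM60 from above, that the rule covers all remote addresses (you could use a specific remote range instead), and that resource metering is enabled so the counters get collected:

# Enable resource metering so that usage data is collected for the VM
Enable-VMResourceMetering -VMName VM60

# Meter all traffic to and from VM60 rather than blocking or allowing it
Add-VMNetworkAdapterAcl -VMName VM60 -RemoteIPAddress * -Direction Both -Action Meter

# Read back the accumulated usage later
Measure-VM -VMName VM60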

With Port ACLs, you have basic built in firewalling for virtual machines, and you have a way to measure traffic.

Updated –The Windows Server 2012 New Hyper-V Features List

I have updated and expanded the list of new features in Windows Server 2012 (previously Windows Server “8”) Hyper-V and related technologies.  The list is huge and changing, and I’m sure to have missed some things out.

VMware Cheaper To Manage? Ow! Must … Stop … Laughing

A study that VMware paid for claims that managing their virtualisation is cheaper than managing Hyper-V.  OK class, calm down.  You at the back … stop laughing before your head falls off!  Yes, and a study I paid for says that VMware are getting desperate … as in Novell in the year 2000 desperate.

Last year I wrote a post that compared the cost of running 50 Windows Server VMs on a 2U host with 2 CPUs and 92 GB RAM: Hyper-V and all of System Center on one hand, and vSphere Standard (not Enterprise Plus with all the features and all the additional cost) with just vCenter Operations on the other.  Even with the most basic VMware solution (against the full MSFT pack), MSFT came in at 57% of the cost of VMware.

OK, since then, System Center 2012 SMLs are maybe a little more expensive than the old SMSD … but I can counter that now by switching to an ECI license (big discount for big orders) or CIS (small discount for small orders), where Windows Server and System Center 2012 are bundled.

Maybe the VMware-commissioned study is saying that the actual cost of operations is higher in the MSFT space?  How does one service pack or patch vSphere?  They do get released from time to time, you know.  Oh yeah … you don’t install them because they usually break the host.  But when you do, isn’t it time consuming?  Over in the MSFT space, I have Windows Update, WSUS, or ConfigMgr to control the distribution of updates.  I can orchestrate the installation using VMM 2012, or I can use Cluster-Aware Updating in Windows Server 2012.  Test, set up, fire and forget (well … run a report every now and then to check compliance).  Complete automation, baby!

What about the cloud?  How does that work in vSphere?  Spend lots and lots of money and hack the hell out of their rebadged point solutions.  In the MSFT world, you have System Center 2012: download and add the Cloud Services Process Pack and there you have a private cloud, with self-service.  Now the “users” can deploy VMs for themselves with audit trails, governance, and all that jazz.  No need to involve IT in service deployment.

This could go on and on and on and on and on and …. 

Hmm, VMware, you really are sounding like you’re grasping for straws right now.

Windows Server 2012 Hyper-V DOES NOT Require SLAT (EPT/NPT) Capable Processors

While searching for good links for my WS2012 Hyper-V feature list, I came across a very misleading article by a Microsoft blogger from one of their regional offices.  The title of the post claims that you require SLAT for Windows Server 8.

SLAT, or Second Level Address Translation, is where memory management is offloaded to Intel EPT or AMD NPT/RVI in the processor.  It greatly speeds up memory performance (great for memory-thrashing RDS/Terminal Services and SQL Server) and is a requirement for RemoteFX.

I’ve seen many a confused blog post and tweet on this subject.  The author appears to have been confused by a quote that relates to the client operating system, Windows 8, and not the server hypervisor.   That’s a risk when you’re dealing with pre-release product codenames, and (as in this case) you’re reading articles that are not in your native language.

Anyway, let’s clear up the confusion.  The server operating system continues with the traditional requirements.  These are also requirements for the client:

  • x64 CPU
  • No Execute Bit turned on in the BIOS (DEP – Data Execution Prevention)
  • Virtualisation turned on in the CPU

My old Dell Latitude with a Duo Core CPU can run Hyper-V on Windows Server 2012 (the server OS).

Windows 8 (the client OS) Pro and Enterprise include Hyper-V.  The hypervisor on the client OS requires the above AND a SLAT-capable processor, e.g. an i3, i5, or i7.  My old Dell Latitude with a Duo Core CPU cannot run Hyper-V on Windows 8 (client OS).
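
If you want to check whether a given machine has SLAT before trying to enable client Hyper-V, the Sysinternals Coreinfo tool can tell you; a quick sketch (I’m assuming the tool has been downloaded to the current folder and is run from an elevated prompt):

# Coreinfo -v dumps virtualisation-related CPU features;
# an asterisk beside EPT (Intel) or NPT/RVI (AMD) indicates SLAT support
.\Coreinfo.exe -v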

The blogger of the article in question does backtrack a little in a post comment, but how many people ever get to that point when searching?

In summary, you DO NOT need a SLAT-capable processor to run Windows Server 2012 Hyper-V.  SLAT will, however, give memory-thrashing VMs a great performance boost, and it enables you to take advantage of RemoteFX.

I Could Put An Entire Bank’s Systems On A Single WS2012 Hyper-V Host

I designed and managed the MSFT network for an international bank between 2003 and 2005.  At the end, we had 170 or so physical servers, with an average spec of 4 cores and 4 GB RAM.  Thanks to MOM 2005, I knew the average CPU utilisation was around 12% … perfect for virtualisation. 

Assuming that workload trebled over the years (and it wouldn’t have), that would be:

  • 2040 cores at 12%, giving us roughly 244 logical processors required (not very scientific, but you get the gist), and that’s not accounting for the increased horsepower of newer processors.
  • 2040 GB RAM, which is half of the max of 4 TB in a host, without using Dynamic Memory.
  • 510 VMs
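
The back-of-the-envelope maths behind those bullets, as a quick sketch:

# 170 servers, each averaging 4 cores and 4 GB RAM, with the workload trebled
$servers = 170
$cores   = $servers * 4 * 3      # 2040 cores
$lps     = $cores * 0.12         # 244.8 logical processors actually needed at 12% utilisation
$ramGB   = $servers * 4 * 3      # 2040 GB RAM - half of the 4 TB host maximum
$vms     = $servers * 3          # 510 VMs
"{0} cores -> {1} logical processors; {2} GB RAM; {3} VMs" -f $cores, $lps, $ramGB, $vms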

It’s amazing to think that all of this could run on a single Windows Server 2012 RC Hyper-V host.