Windows Server & System Center TP5 Downloads

Here are more download links for Technical Preview releases of Windows Server 2016 and System Center. Yesterday I posted the links for downloading WS2016, but more has been made available.

My friend, John McCabe (now a PFE at the MSFT office in San Francisco), wrote a free ebook for MS Press on Windows Server 2016 Technical Preview too.

Windows Server 2016 Licensing is Announced

Some sales/marketing/channel type in Microsoft will get angry reading this. Good. I am an advocate of Microsoft tech, and I speak out when things are good, and I speak out when things are bad. Friends will criticise each other when one does something stupid. So don’t take criticism personally and get angry, sending off emails to moan about me. Trying to censor me won’t solve the problem. Hear the feedback. Fix the issue.

We’re still many months away from the release of Windows Server 2016 (my guess: September, the week of Ignite 2016), but Microsoft has released the details of how licensing of WS2016 will be changing. Yes; changing; a lot.

In 2011, I predicted that the growth of cores per processor would push Microsoft into switching from per-socket licensing of Windows Server to per-core. Well, I was right. Wes Miller (@getwired) tweeted a link to a licensing FAQ on WS2016 – this paper also confirms that WS2016 and System Center 2016 will be out in Q3 2016.


There are two significant changes:

  • Switch to per-core licensing
  • Standard and Datacenter editions are not the same anymore

Per-Core Licensing

The days when processors got more powerful by becoming faster are over. We are in a virtualized multi-threaded world where capacity is more important than horsepower – plus the laws of physics kicked in. Processors now grow by adding cores.

The largest processor that I’ve heard of from Intel (not claiming that it’s the largest ever) has 60 (SIXTY!) cores!!! Imagine you deploy a host with 2 of those Xeon Phi processors … you can license a huge number of VMs with just 2 copies of WS2012 R2 Datacenter (no matter what virtualization you use). Microsoft is losing money at the upper end of the market because of the scale-out of core counts, so a change was needed.

I hoped that Microsoft would preserve the price for normal customers – it looks like they have, for many customers, but probably not all.

Note – this is per PHYSICAL CORE licensing, not virtual core, not logical processor, not hyperthread.


Yes, the language of this document is horrendous. The FAQ needs a FAQ.

It reads like you must purchase a minimum of 8 cores per physical processor, and then purchase incremental packs of 2 cores to cover your physical core count. The customer that is hurt most is the one with a small server, such as a micro-server – you must still purchase a minimum of 16 cores per server.
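As a rough illustration (this is my reading of the FAQ, so treat the minimums, and the hypothetical helper below, as assumptions until the final licensing terms are published), the arithmetic works out like this:

    # Rough core-licensing maths based on my reading of the FAQ:
    # at least 8 cores licensed per processor, at least 16 per server,
    # and licenses sold in 2-core increments.
    function Get-RequiredCoreLicenses {
        param([int]$Procs, [int]$CoresPerProc)
        $perProc = [Math]::Max($CoresPerProc, 8)
        $total   = [Math]::Max($perProc * $Procs, 16)
        if ($total % 2) { $total++ }    # round up to a 2-core pack boundary
        return $total
    }

    Get-RequiredCoreLicenses -Procs 2 -CoresPerProc 10   # 20 core licenses
    Get-RequiredCoreLicenses -Procs 1 -CoresPerProc 4    # still 16 - the micro-server penalty

A typical 2-processor, 8-cores-per-proc host comes out at the 16-core minimum, which is why the price should stay flat for many customers; a small single-socket box pays for cores it doesn’t have.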


One of the marketing lines on this is that on-premises licensing will align with cloud licensing – anyone deploying Windows Server in Azure or any other hosting company is used to the core model. A software assurance benefit was allegedly announced in October on the very noisy Azure blog (I can’t find it). You can move your Windows Server (with SA) license to the cloud, and deploy it with a blank VM minus the OS charge. I have no further details – it doesn’t appear on the benefits chart either. More details in Q1 2016.

CALs

The switch to core-focused licensing does not do away with CALs. You still need to buy CALs for privately owned licenses – we don’t need Windows Server CALs in hosting, e.g. Azure.

System Center

System Center is switching to per-core licensing too.


Nano?

Nano Server is just an installation option and is not affected by the licensing or edition changes.

Editions?

We know about the “core” editions of WS2016: Standard and Datacenter – more later in this post.

As for Azure Stack, Essentials, Storage Server, etc., we’re told to wait until Q1 2016, when someone somewhere in Redmond is going to have to eat some wormy crow. Why? Keep reading.

Standard is not the same as Datacenter

I found out about the licensing announcement after getting an email from Windows Server User Voice telling me that one of my feedback items had been rejected.


I knew that some stuff was probably going to end up in Datacenter edition only. Many of us gave feedback: “your solutions for reducing infrastructure costs make no sense if they are in Datacenter only because then your solution will be more expensive than the more mature and market-accepted original solution”.


The following are Datacenter Edition only:

  • Storage Spaces Direct
  • Storage Replica
  • Shielded Virtual Machines
  • Host Guardian Service
  • Network Fabric

I don’t mind the cloud stuff being Datacenter only – that’s all for densely populated virtualization hosts that Datacenter should be used on. But it’s freaking stupid to put the storage stuff only in this SKU. Let’s imagine a 12 node S2D cluster. Each node has:

  • 2 * 800 GB flash
  • 8 * 8 TB SATA

That’s 65.6 TB of raw capacity per node. We have roughly 787 TB of raw capacity in the cluster, and we’ll guesstimate 314 TB of usable capacity. If the Datacenter licensing for each node costs $6,155 then the licensing cost alone (forget RDMA network switches, NICs, servers, and flash/HDD) will be $73,860. Licensing for storage will be $73,860. Licensing. How much would an equivalent SAN cost you? Where was the cost benefit in going with commodity hardware there, may I ask?
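Here’s the back-of-an-envelope maths so you can check me (the $6,155 per-node Datacenter figure and the 40% usable guess are the assumptions here):

    # Back-of-an-envelope figures for the 12-node S2D example above
    $nodes          = 12
    $rawPerNode     = (2 * 0.8) + (8 * 8)               # 2 x 800 GB flash + 8 x 8 TB SATA = 65.6 TB
    $rawCluster     = $rawPerNode * $nodes               # ~787 TB raw
    $usableGuess    = [Math]::Round($rawCluster * 0.4)   # guesstimate ~40% usable after resiliency/reserve
    $licensePerNode = 6155                                # assumed Datacenter license cost per node
    $licenseTotal   = $licensePerNode * $nodes            # $73,860 just for Windows licensing
    "$rawCluster TB raw, ~$usableGuess TB usable, `$$licenseTotal in licensing"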

This is almost as bad a cock-up as VMware charging for vRAM.

As for Storage Replica, I have a hard time believing that licensing 4 storage controllers for synchronous replication will cost more than licensing every host/application server for Storage Replica.

S2D is dead. Storage Replica is irrelevant. How are technologies that are already viewed with suspicion by customers going to gain any traction if they cannot compete with the incumbent on price? It’s a pity that some marketing bod can’t use Excel, because the storage team did what looks like an incredible engineering job.

If you agree that this decision was stupid then VOTE here.


DataON Gets Over 1 Million IOPS using Storage Spaces With A 2U JBOD

I work for a European distributor of DataON storage. When Storage Spaces was released with WS2012, DataON was one of the two leading implementers, and to this day, despite the efforts of HP and Dell, I think DataON gives the best balance of:

  • Performance
  • Price
  • Stability
  • Up-to-date solutions

A few months ago, DataON sent us a document on some benchmark work that was done with their new 12 Gb SAS JBOD. Here are some of the details of the test and the results.

Hardware

  • DNS-2640D (1 tray) with 24 x 2.5” disk slots
  • Servers with 2x E5-2660v3 CPUs, 32 GB RAM, 2 x LSI 9300-8e SAS adapters, and 2 x SSDs for the OS – They actually used the server blades from the CiB-9224, but this could have been a DL380 or a Dell R7x0
  • Windows Server 2012 R2, Build 9600
  • MPIO configured for Least Blocks (LB) policy
  • 24 x 400GB HGST 12G SSD

Storage Spaces

A single pool was created, and a number of virtual disks were created from it for the test.
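The exact virtual disk layout was in DataON’s document; purely as a sketch of the kind of commands involved (the pool name, disk selection, and resiliency setting below are my placeholders, not DataON’s benchmark configuration), a pool and a virtual disk are created like this:

    # Hypothetical example - names and settings are placeholders,
    # not the actual DataON benchmark configuration
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" `
        -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
        -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VD01" `
        -ResiliencySettingName Simple -UseMaximumSize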


Test Results

IOMeter was run against the aggregate storage in a number of different scenarios.


The headline number is 1.1 million 4K reads per second. But even if we stick to 8K, the JBOD was offering 700,000 reads or 300,000 writes.
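If you want to attempt something similar on your own kit, DiskSpd (a different tool from the IOMeter used in DataON’s test) can generate a comparable 4K random read workload; the target path, file size, thread count, and queue depth below are placeholders to adjust for your rig:

    # 60-second, 4K, 100% random read test: 8 threads, 32 outstanding IOs per thread,
    # caching disabled, latency captured; -c creates a 10 GB test file
    .\diskspd.exe -c10G -b4K -r -w0 -t8 -o32 -d60 -Sh -L E:\testfile.dat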

I bet this test rig cost a fraction of what the equivalent performing SAN would!

A Roundup of WS2016 TPv3 Links

I thought that I’d aggregate a bunch of links related to new things in the release of Windows Server 2016 Technical Preview 3 (TP3). I think this is pretty complete for Hyper-V folks – as you can see, there’s a lot of stuff in the networking stack.

FYI: it looks like Network Controller will require the Datacenter edition by RTM – it does in TPv3. And our feedback on offering the full installation option during setup has forced a reversal.

Hyper-V

Administration

Containers

Networking

Storage

 

Nano Server

Failover Clustering

Remote Desktop Services

System Center

Windows Server 2016 – Switch Embedded Teaming and Virtual RDMA

WS2016 TPv3 (Technical Preview 3) includes a new feature called Switch Embedded Teaming (SET) that will allow you to converge RDMA (remote direct memory access) NICs and virtualize RDMA for the host. Yes, you’ll be able to converge SMB Direct networking!

In the below diagram you can see a host with WS2012 R2 networking and a similar host with WS2016 networking. See how:

  • There is no NIC team in WS2016: this is SET in action, providing teaming by aggregating the virtual switch uplinks.
  • RDMA is converged: DCB is enabled, as is recommended – it’s even recommended with iWARP, where it is not required.
  • Management OS vNICs use RDMA: you can use the converged networks for SMB Direct.

Network architecture changes

 

Note, according to Microsoft:

In Windows Server 2016 Technical Preview, you can enable RDMA on network adapters that are bound to a Hyper-V Virtual Switch with or without Switch Embedded Teaming (SET).

Right now in TPv3, SET does not support Live Migration – which is confusing considering the above diagram.

What is SET?

SET is an alternative to NIC teaming. It allows you to team between 1 and 8 physical NICs using the virtual switch. The pNICs can be on the same or different physical switches. Obviously, the networking of the pNICs must be the same to allow link aggregation and failover.

No – SET does not span hosts.

Physical NIC Requirements

SET is much fussier about NICs than NIC teaming (which continues as a Windows Server networking technology, because SET requires the Hyper-V virtual switch). The NICs must be:

  1. On the HCL, aka “passed the Windows Hardware Qualification and Logo (WHQL) test in a SET team in Windows Server 2016 Technical Preview”.
  2. All NICs in a SET team must be identical: same manufacturer, same model, same firmware and driver.
  3. There can be between 1 and 8 NICs in a single SET team (same switch on a single host).

 

SET Compatibility

SET is compatible with the following networking technologies in Windows Server 2016 Technical Preview.

  • Datacenter bridging (DCB)
  • Hyper-V Network Virtualization – NVGRE and VXLAN are both supported in Windows Server 2016 Technical Preview.
  • Receive-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if any of the SET team members support them.
  • Remote Direct Memory Access (RDMA)
  • SDN Quality of Service (QoS)
  • Transmit-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if all of the SET team members support them.
  • Virtual Machine Queues (VMQ)
  • Virtual Receive Side Scaling (vRSS)

SET is not compatible with the following networking technologies in Windows Server 2016 Technical Preview.

  • 802.1X authentication
  • IPsec Task Offload (IPsecTO)
  • QoS in host or native OSs
  • Receive side coalescing (RSC)
  • Receive side scaling (RSS)
  • Single root I/O virtualization (SR-IOV)
  • TCP Chimney Offload
  • Virtual Machine QoS (VM-QoS)

 

Configuring SET

There is no concept of a team name in SET; there is just the virtual switch, which has uplinks. There is no standby pNIC; all pNICs are active. SET only operates in Switch Independent mode – nice and simple, because the physical switch is completely unaware of the SET team and there is no switch-side teaming configuration to do (no Googling for me).

All that you require is:

  • Member adapters: Pick the pNICs on the host that will be the uplinks of the virtual switch.
  • Load balancing mode: Hyper-V Port or Dynamic. With Hyper-V Port, each vNIC is tied to an uplink, which benefits VMQ because inbound traffic paths are predictable. With Dynamic, outbound traffic is hashed and balanced across the uplinks, and inbound traffic behaves as it does in Hyper-V Port mode.

As with WS2012 R2, I expect Dynamic will normally be the recommended option.
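A minimal sketch of the deployment in PowerShell (the NIC and switch names are mine, and this is the TPv3 cmdlet syntax, so it may change by RTM):

    # Create a SET-enabled virtual switch from two identical pNICs
    New-VMSwitch -Name "SETswitch" -NetAdapterName "pNIC1","pNIC2" -EnableEmbeddedTeaming $true

    # Check and, if required, change the load balancing algorithm (HyperVPort or Dynamic)
    Get-VMSwitchTeam -Name "SETswitch"
    Set-VMSwitchTeam -Name "SETswitch" -LoadBalancingAlgorithm Dynamic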

VMQ

SET was designed to work well with VMQ. We’ll see how well NIC drivers and firmware behave with SET. As we’ve seen in the past, some manufacturers take up to a year (Emulex on blade servers) to fix issues. Test, test, test, and disable VMQ if you see Hyper-V network outages with SET deployed.

In terms of tuning, Microsoft says:

    • Ideally each NIC should have the *RssBaseProcNumber set to an even number greater than or equal to two (2). This is because the first physical processor, Core 0 (logical processors 0 and 1), typically does most of the system processing so the network processing should be steered away from this physical processor. (Some machine architectures don’t have two logical processors per physical processor so for such machines the base processor should be greater than or equal to 1. If in doubt assume your host is using a 2 logical processor per physical processor architecture.)
    • The team members’ processors should be, to the extent practical, non-overlapping. For example, in a 4-core host (8 logical processors) with a team of 2 10Gbps NICs, you could set the first one to use base processor of 2 and to use 4 cores; the second would be set to use base processor 6 and use 2 cores.
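Today you would apply that guidance with Set-NetAdapterVmq, and I assume the same cmdlet applies to SET team members in WS2016; the adapter names and processor numbers below are placeholders to tune for your own host:

    # Keep VMQ processing off Core 0 (logical processors 0 and 1) and give each
    # team member its own, non-overlapping, set of cores
    Set-NetAdapterVmq -Name "pNIC1" -BaseProcessorNumber 2 -MaxProcessors 2
    Set-NetAdapterVmq -Name "pNIC2" -BaseProcessorNumber 6 -MaxProcessors 2
    Get-NetAdapterVmq | Format-Table Name, BaseProcessorNumber, MaxProcessors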

Creation and Management

You’ll hear all the usual guff about System Center and VMM. The 8% that can afford System Center can do that, if they can figure out the UI. PowerShell can be used to easily create and manage a SET virtual switch.
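To tie this back to the converged RDMA story, here’s a sketch (TPv3 cmdlets; the vNIC names are mine) of adding management OS vNICs for SMB Direct to a SET switch and enabling RDMA on them:

    # Add two management OS vNICs to the SET switch for SMB Direct traffic
    Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB1"
    Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB2"

    # Enable RDMA on the resulting host vNICs and verify
    Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"
    Get-NetAdapterRdma | Format-Table Name, Enabled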

Summary

SET is a great first (or second behind vRSS in WS2012 R2) step:

  • Networking is simplified
  • RDMA can be converged
  • We get vRDMA to the host

We just need Live Migration support and stable physical NIC drivers and firmware.

Introducing Windows Server Containers

Technical Preview 3 of Windows Server 2016 is out and one of the headline feature additions to this build is Windows Server Containers. What are they? And how do they work? Why would you use them?

Background

Windows Server Containers are Microsoft’s implementation of an open-source-world technology that has been made famous by a company called Docker. In fact:

  • Microsoft’s work is a result of a partnership with Docker, one which was described to me as being “one of the fastest negotiated partnerships” and one that has had encouragement from CEO Satya Nadella.
  • Windows Server Containers will be compatible with Linux containers.
  • You can manage Windows Server Containers using Docker, which has a Windows command line client. Don’t worry – you won’t have to go down this route if you don’t want to install horrid prerequisites such as Oracle VirtualBox (!!!).

What are Containers?

Containers have been around for a while, but most of us who live outside of the Linux DevOps world won’t have had any interaction with them. The technology is a new kind of virtualisation that enables rapid (near instant) deployment of applications.

Like most virtualisation, Containers take advantage of the fact that most machines are over-resourced; we over-spec a machine, install software, and then the machine is under-utilized. 15 years ago, lots of people attempted to install more than one application per server. That bad idea usually ended up in P45s (“pink slips”) being handed out (otherwise known as a “career-ending event”). That’s because complex applications make poor neighbours on a single operating system with no inter-app isolation.

Machine virtualisation (vSphere, Hyper-V, etc.) takes these big machines and uses software to carve the physical hosts into lots of virtual machines; each virtual machine has its own guest OS, and this isolation provides a great place to install applications. The positives are that we have rock-solid boundaries, including security, between the VMs, but we have more OSs to manage. We can quickly provision a VM from a template, but then we have to install lots of pre-reqs and install the app afterwards. OK – we can have VM templates of various configs, but a hundred templates later, we have a very full library with lots of guest OSs that need to be managed, updated, etc.

Containers are a kind of virtualisation that resides one layer higher; it’s referred to as OS virtualisation. The idea is that we provision a container on a machine (physical or virtual). The container is given a share of CPU, RAM, and a network connection. Into this container we can deploy a container OS image. And then onto that OS image we can install prerequisites and an application. Here’s the cool bit: everything is really quick (typing the command takes longer than the deployment) and you can easily capture images to a repository.

How easy is it? It’s very easy – I recently got hands-on access to Windows Server Containers in a supervised lab, and I was able to deploy and image stuff using a PowerShell module without any documentation and with very little assistance. It helped that I’d watched a session on Containers from Microsoft Ignite.

How Do Containers Work?

There are a few terms you should get to know:

  • Windows Server Container: The Windows Server implementation of containers. It provides application isolation via OS virtualisation, but it does not create a security boundary between applications on the same host. Containers are stateless, so stateful data is stored elsewhere, e.g. SMB 3.0.
  • Hyper-V Container: This is a variation of the technology that uses Hyper-V virtualization to securely isolate containers from each other – this is why nested virtualisation was added to WS2016 Hyper-V.
  • Container OS Image: This is the OS that runs in the container.
  • Container Image: Customisations of a container (installing runtimes, services, etc) can be saved off for later reuse. This is the mechanism that makes containers so powerful.
  • Repository: This is a flat file structure that contains container OS images and container images.

Note: This is a high level concept post and is not a step-by-step instructional guide.

We start off with:

  • A container host: This machine will run containers. Note that a Hyper-V virtual switch is created to share the host’s network connection with containers, thus network-enabling those containers when they run.
  • A repository: Here we store container OS images and container images. This repository can be local (in TPv3) or can be an SMB 3.0 file share (not in TPv3, but hopefully in a later release).
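Preparing the host in TPv3 looks roughly like this (I’m going from the preview quick-start, so treat the feature name and cmdlets as subject to change):

    # Turn a TPv3 machine into a container host
    Install-WindowsFeature -Name Containers
    Restart-Computer

    # After the reboot, see what's available
    Get-ContainerImage    # container OS images and container images in the local repository
    Get-VMSwitch          # the virtual switch that network-enables containers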


The first step is to create a container. This is accomplished natively using a Containers PowerShell module which, from experience, is pretty logically laid out and easy to use. Alternatively you can use Docker. I guess System Center will add support too.

When you create the container you specify the name and can offer a few more details, such as the network connection to the host’s virtual switch (you can add this retrospectively), RAM, and CPU.

You then have a blank and useless container. To make it useful you need to add a container OS image. This is retrieved from the Repository, which can be local (in a lab) or on an SMB 3.0 file share (real world). Note that an OS is not installed in the container. The container points at the repository and only differences are saved locally.

How long does it take to deploy the container OS image? You type the command, press return, and the OS is sitting there, waiting for you to start the container. Folks, Windows Server Containers are FAST – they are Vin Diesel parachuting a car from a plane fast.


Now you can use Enter-PSSession to log into a container using PowerShell and start installing and configuring stuff.
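A sketch of that flow with the TPv3 Containers module (the container, image, and switch names are placeholders, and the cmdlets may well change in later previews):

    # Create a container from a container OS image in the repository
    $c = New-Container -Name "Container1" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"

    # Start it and open a PowerShell session inside it
    Start-Container $c
    Enter-PSSession -ContainerId $c.ContainerId -RunAsAdministrator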

Let’s say you want to install PHP. You need to:

  1. Get the installer available to the container, maybe via the network
  2. Ensure that the installer either works silently (unattended) or works from command line

Then you install the program, e.g. PHP, and configure it the way you want it (from the command line).


Great, we now have PHP in the container. But there’s a good chance that I’ll need PHP in lots of future containers. We can create a container image from that PHP install. This process will capture the changes from the container as it was last deployed (the PHP install) and save those changes to the repository as a container image. The very quick process is:

  1. Stop the container
  2. Capture the container image

Note that the container image now has a link to the container OS image that it was installed on, i.e. there is a dependency link, and I’ll come back to this.
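With the same module, the capture looks roughly like this (the publisher, name, and version are made up, and I’m assuming the preview’s New-ContainerImage syntax):

    # Stop the container and capture its changes as a reusable container image
    Stop-Container $c
    New-ContainerImage -Container $c -Publisher Demo -Name "PHP" -Version 1.0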

Let’s deploy another container, Container2, from the container OS image.


For some insane reason, I want to install the malware gateway known as Java into this container.


Once again, I can shut down this new container and create a container image from this Java installation. This new container image also has a link to the required container OS image.


Right, let’s remove Container1 and Container2 – something that takes seconds. I now have a container OS image for Windows Server and container images for PHP and Java. Let’s imagine that a developer needs to deploy an application that requires PHP. What do they need to do? It’s quite easy – they create a container from the PHP container image. Windows Server Containers knows that PHP requires the Windows Server container OS image, and that is deployed too.

The entire deployment is near instant because nothing is deployed; the container links to the images in the repository and saves changes locally.


Think about this for a second – we’ve just deployed a configured OS in little more time than it takes to type a command. We’ve also modelled a fairly simple application dependency. Let’s complicate things.

The developer installs WordPress into the new container.


The dev plans on creating multiple copies of their application (dev, test, and production) and, like many test/dev environments, they need an easy way to reset, rebuild, and spin up variations; there’s nothing like containers for this sort of work. The dev shuts down Container3 and then creates a new container image. This process captures the changes since the last deployment and saves a container image in the repository – the WordPress installation. Note that this container image doesn’t include the contents of PHP or Windows Server, but it does link to PHP, and PHP links to Windows Server.


The dev is done and resets the environment. Now she wants to deploy 1 container for dev, 1 for test, and 1 for production. Simple! This requires 3 commands, each of which creates a new container from the WordPress container image, which in turn uses the required PHP image and PHP’s required Windows Server image.
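Something like the following, assuming the WordPress container image captured above (again, a sketch rather than verbatim syntax):

    # Three identical containers from one container image; the repository resolves
    # the dependency chain (WordPress -> PHP -> Windows Server)
    foreach ($name in "WP-Dev","WP-Test","WP-Prod") {
        $c = New-Container -Name $name -ContainerImageName "WordPress" -SwitchName "Virtual Switch"
        Start-Container $c
    }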

Nothing is actually deployed to the containers; each container links to the images in the repository and saves changes locally. Each container is isolated from the other to provide application stability (but not security – this is where Hyper-V Containers comes into play). And best of all – the dev has had the experience of:

  • Saying “I want three copies of WordPress”
  • Getting the OS and all WordPress pre-requisites
  • Getting them instantly
  • Getting 3 identical deployments


From the administrator’s perspective, they’ve not had to be involved in the deployment, and the repository is pretty simple. There’s no need for a VM with Windows Server, another with Windows Server & PHP, and another with Windows Server, PHP & WordPress. Instead, there is an image for Windows Server, an image for PHP, and an image for WordPress, with links providing the dependencies.

And yes, the repository is a flat file structure so there’s no accidental DBA stuff to see here.

Why Would You Use Containers?

If you operate in the SME space then keep moving, and don’t bother with Containers unless they’re in an exam you need to pass to satisfy the HR drones. Containers are aimed at larger environments where there is application sprawl and repetitive installations.

Is this similar to what SCVMM 2012 introduced with Server App-V and service templates? At a very high level, yes, but Windows Server Containers is easy to use and probably a heck of a lot more stable.

Note that Containers are best suited for stateless workloads. If you want to save data then save it elsewhere, e.g. SMB 3.0. What about MySQL and SQL Server? Based on what was stated at Ignite, then there’s a solution (or one in the works); they are probably using SMB 3.0 to save the databases outside of the container. This might require more digging, but I wonder if databases would really be a good fit for containers. And I wonder, much like with Azure VMs, if there will be a later revision that brings us stateful containers.

I don’t imagine that my market at work (SMEs) will use Windows Server Containers, but if I was back working as an admin in a large enterprise then I would definitely start checking out this technology. If I worked in a software development environment then I would also check out containers for a way to rapidly provision new test and dev labs that are easy to deploy and space efficient.

[Update]

Here is a link to the Windows Server containers page on the TechNet Library.

We won’t see Hyper-V containers in TPv3 – that will come in a later release, I believe later in 2015.

Windows Server 2016 Technical Preview 3 Is Coming Out Today

There are enough clues out there to lead one to believe that TPv3 of Windows Server is about to be released, maybe even later today, as confirmed by Mary Jo Foley.

First there’s a new article on TechNet called What’s New in Windows Server 2016 Technical Preview 3 that even Jeffrey Snover has tweeted.

MVP Niklas Akerlund tweeted that he just saw Windows Server Technical Preview 3 on the Azure Marketplace.

https://pbs.twimg.com/media/CMxmRKgVAAEODjT.png:large

System Requirements and Installation was updated to refer to TPv3:

For example, if you choose Server with Desktop Experience at the beginning of the process, enter the product key, accept license terms, and then backtrack to choose Server Technical Preview 3, the installation will fail.

[Update]

GeekWire published Microsoft releases first Windows Server Container preview under Docker partnership, and Windows Server Containers are in TPv3.

The Server & Cloud blog published New Windows Server Preview Fuels Application Innovation with Containers, Software-Defined Datacenter Updates.

Not long now, I guess.

FYI, a tweet by Gabe Aul leads us to believe that a new release of RSAT for Windows 10 is due around the same time.

Windows Server 2003 End of Life

Today is a sad day; it’s the last day that Microsoft supports Windows Server 2003, Windows Server 2003 R2, and the related SBS versions.

The year was 2003 when I joined a spinoff of Hypovereinsbank called Hypo International, a finance company that would later try to crash the European economy (allegedly). HVB was stuck in the past running NT 4.0 Server and Workstation with Office 97. I worked in the HQ of the new company and was responsible for designing our global Windows network. I argued for Windows Server 2003 which had just gone GA, and I won out, and we deployed Windows XP on the desktop. We were going to be bleeding edge, doing all things by the book, and eventually we even ran what would become System Center to centrally manage the entire network. But powering it all was my baby, W2003. W2003 proved to be rock solid.

But times changed, as did the whims of the directors who attempted to move the IT department to Stuttgart (the new CIO later expressed to me how wrong a decision this ended up being) and I was made redundant. Work places changed, how we worked changed, W2008 came and went, W2008 R2 came and went, WS2012 came and went, WS2012 R2 arrived, and now we have a technical preview 2 release of WS2016.

So today, July 14th 2015, is the last day that Microsoft supports the aged W2003 and derivatives. The date was not a secret so there are no excuses. Fare thee well Windows Server 2003, and I look forward to working with your great, great, great grandchild in 2016.

Attempting to justify your stubbornness on not upgrading from W2003 on this site leaves you open to intense public derision and ridicule.


Software-Defined Storage Calculator and Design Considerations Guide

Microsoft has launched an Excel-based sizing tool to help you plan Storage Spaces (Scale-Out File Server) deployments, along with guidance on how to design them.

Here’s the sizing for a very big SOFS that will require 4 x SOFS server nodes and 4 x 60 disk JBODs:

The considerations guide will walk you through using the sizing tool.

Some updates are required – some newer disk sizes aren’t included – but this is a great starting point for a design process.