Microsoft GAs The Last Vital Piece For VM Hosting

Microsoft announced yesterday that Azure Backup for Azure IaaS virtual machines (VMs) has been released to general availability. Personally, I think this removes a substantial roadblock to deploying VMs in Azure for most businesses (forget the legal stuff for a moment).

No Backup – Really?

I’ve mentioned many times that I once worked in the hosting business. My first job was as a senior engineer with what was then a large Irish-owned company. We ran three services:

  • Websites: for a few Euros a month, you could get a plan that allowed 10+ websites. We also offered SQL Server and MySQL databases.
  • Physical servers: Starting from a few hundred Euros, you got one or more physical servers
  • Virtual machines: I deployed the VMware (yeah, VMware) farm running on HP blades and EVA, and customers got their own VNET with one or more VMs

The official line on websites was that there was no backup of websites or databases. You lose it, you LOST it. In reality, we retained one daily backup to cover our own butts. Physical servers were not backed up unless a customer paid extra for it; they got an Ahsay agent and paid for the storage used. The same went for VMware VMs – pay for the agent plus storage and you could get a simple form of cloud backup.

Backup-less Azure

Until very recently there was no backup of Azure VMs. How could that be? This line says a lot about how Microsoft thinks:

Treat your servers like cattle, not pets

When Azure VMs originally launched in beta, the VMs were stateless, much like containers. If you rebooted the VM, it reset itself. You were supposed to write your applications so that they used Azure storage accounts or Azure SQL databases. There was no DC or SQL Server VM in the cloud – that was silly because no one deploys or uses stateful machines anymore. Therefore you shouldn’t care if a VM dies, gets corrupted, or is accidentally removed – you just deploy a new one and carry on.

Except …

Almost no one deploys servers like that.

I can envision some companies, like an eBay or an Amazon, running stateless application or web servers. But in my years of working in large and small/medium businesses, I’ve never seen stateless machines, and I’ve never encountered anyone with a need for that style of application – the web server/database server configuration still dominates, AFAIK.

So this is why Azure never had a backup service for VMs. A few years ago, Microsoft changed Azure VMs to be the stateful (Hyper-V) virtual machines that we are familiar with and started to push this as a viable alternative to traditional machine deployments. I asked the question: what happens if I accidentally delete a VM – and I got the old answer:

Prepare your CV/résumé.

Mark Minasi quoted me at TechEd North America in one of his cloud Q&As with Mark Russinovich two years ago – actually, he messed up the question a little and Russinovich gave a non-answer. The point was: how could I possibly deploy a critical VM into Azure if I could not back it up?

Use DPM!

Yeah, Microsoft blogged last year that customers should use System Center Data Protection Manager (DPM) to protect VMs in Azure. You’d install an agent into the guest OS (you have no access to Azure hosts and there is no backup API) and back up files, folders, and databases to DPM running in another VM. The only problem with this would be the cost:

  • You’d need to deploy an Azure VM for DPM.
  • You would have to use Page Blobs & Disks instead of Block Blobs, doubling the cost of Azure storage required.
  • The cost of System Center SMLs would have been horrific. A Datacenter SML ($3,607 on Open NL) would cover up to 8 Azure virtual machines.

Not to mention that you could not simply restore a VM:

  • Create a new VM
  • Install applications, e.g. SQL Server
  • Install the DPM agent
  • Restore files/folders/databases
  • Pray to your god and any others you can think of

Azure Backup

Azure has a backup service called Azure Backup. It launched as a hybrid cloud service, enabling you to back up machines (PCs, servers) to the cloud using an agent (MARS). You can also install the MARS agent onto an on-premises DPM server to forward all or a subset of your backup data to the cloud for off-site storage. Azure Backup uses Block Blob storage (LRS or GRS), so it’s really affordable.

Earlier this year, Microsoft launched a preview of Azure Backup for Azure IaaS VMs. With this service you can protect Azure VMs (Windows or Linux) using a very simple VM backup mechanism:

  1. Create a backup policy – when to back up and how long to retain data
  2. Register VMs – installs an extension to consistently back up running VMs
  3. Protect VMs – associate registered VMs with a policy
  4. Monitor backups

The preview wasn’t perfect. In the first week or so, registration was hit and miss. Backup of large VMs was quite slow too. But the restore process worked – this blog exists today only because I was able to restore the Azure VM that it runs on from an Azure backup – every other restore method I had for the MySQL database failed.

Generally Available

Microsoft made Azure Backup for IaaS VMs generally available yesterday. This means that you can now, in a supported, simple, and reliable manner, back up your Windows/Linux VMs that are running in Azure, and if you lose one, you can easily restore it from backup.

A number of improvements were included in the GA release:

  • A set of PowerShell-based cmdlets has been released – update your Azure PowerShell module! (A sketch of the new cmdlets follows this list.)
  • You can restore a VM with an Azure VM configuration of your choice to a storage account of your choice.
  • The time required to register a VM or back it up has been reduced.
  • Azure Backup is in all regions that support Azure VMs.
  • There is improved logging for auditing purposes.
  • Notification emails can be sent to administrators or an email address of your choosing.
  • Errors include troubleshooting information and links to documentation.
  • A default policy is included in every backup vault.
  • You can create simple or complex retention policies (similar to hybrid cloud backup with the MARS agent) that can keep data for up to 99 years.
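
To give an idea of what the new cmdlets look like, here is a minimal sketch of protecting an Azure VM from PowerShell. The vault, resource group, policy, and VM names are made up, and the cmdlet and parameter names are from my recollection of the AzureRM.Backup module, so check Get-Command against your own module version before relying on any of this:

    # Create a backup vault (illustrative names and region)
    $vault = New-AzureRmBackupVault -ResourceGroupName "RG-Backup" -Name "DemoBackupVault" `
        -Region "North Europe" -Storage GeoRedundant

    # Register the VM with the vault - this pushes the backup extension into the VM
    Register-AzureRmBackupContainer -Vault $vault -Name "demo-vm01" -ServiceName "demo-svc01"

    # Create a daily policy with 30 days of retention, then protect the VM with it
    $retention = New-AzureRmBackupRetentionPolicyObject -DailyRetention -Retention 30
    $policy = New-AzureRmBackupProtectionPolicy -Vault $vault -Name "DailyPolicy" -Type AzureVM `
        -Daily -BackupTime ([datetime]"21:00") -RetentionPolicy $retention

    $container = Get-AzureRmBackupContainer -Vault $vault -Type AzureVM -Name "demo-vm01"
    $item = Get-AzureRmBackupItem -Container $container
    Enable-AzureRmBackupProtection -Item $item -Policy $policy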

Summary

With this release, Microsoft has now solved my biggest concern with running production workloads in Azure VMs – we can finally back up and restore stateful machines that have huge value to the business.


MS15-105 – Vulnerability in Windows Hyper-V Could Allow Security Feature Bypass

Microsoft released a security hotfix for Hyper-V last night. They describe it as:

This security update resolves a vulnerability in Microsoft Windows. The vulnerability could allow security feature bypass if an attacker runs a specially crafted application that could cause Windows Hyper-V to incorrectly apply access control list (ACL) configuration settings. Customers who have not enabled the Hyper-V role are not affected.

This security update is rated Important for all supported editions of Windows 8.1 for x64-based Systems, Windows Server 2012 R2, and Windows 10 for x64-based Systems. For more information, see the Affected Software section.

The security update addresses the vulnerability by correcting how Hyper-V applies ACL configuration settings. For more information about the vulnerability, see the Vulnerability Information section.

KB3091287 does not go into any more detail.

CVE-2015-2534 simply says:

Hyper-V in Microsoft Windows 8.1, Windows Server 2012 R2, and Windows 10 improperly processes ACL settings, which allows local users to bypass intended network-traffic restrictions via a crafted application, aka “Hyper-V Security Feature Bypass Vulnerability.”

Affected OSs are:

  • Windows 10
  • Windows 8.1
  • Windows Server 2012 R2

No Windows 8 or WS2012 – that makes me wonder if this is something to do with Extended Port ACLs.
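
For context, extended port ACLs were added in WS2012 R2 Hyper-V and are applied per virtual NIC from PowerShell. Here’s a minimal, purely illustrative example of that kind of ACL (the VM name and rule are made up) – I’m not claiming this specific setting is what the vulnerability abuses:

    # Block inbound RDP to a VM's vNIC with an extended port ACL (WS2012 R2 and later)
    Add-VMNetworkAdapterExtendedAcl -VMName "VM01" -Action Deny -Direction Inbound `
        -LocalPort 3389 -Protocol TCP -Weight 10

    # Verify what is applied
    Get-VMNetworkAdapterExtendedAcl -VMName "VM01"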

Credit: Patrick Lownds (MVP) for tweeting the link.

ReFS Accelerated VHDX Operations

One of the interesting new features in Windows Server 2016 (WS2016) is ReFS Accelerated VHDX Operations (which also work with VHD). This feature is not ODX (VAAI for you VMware-bods), but it offers the same sort of benefits for VHD/X operations. In other words: faster creation and copying of VHDX files, particularly fixed VHDX files.

Reminder: while Microsoft continually tells us that dynamic VHD/Xs are just as fast as fixed VHDX files, we know from experience that the fixed alternative gives better application performance. Even some of Microsoft’s product groups refuse to support dynamic VHD/X files. The benefit of dynamic disks is that they start out as a small file and are extended as required, whereas fixed VHDX files take up their full space immediately. The big problem with fixed VHD/X files is that they take an age to create or extend, because they must be zeroed out.

Those of you with a nice SAN have seen how ODX can speed up VHD/X operations, but the Microsoft world is moving (somewhat) to SMB 3.0 storage where there is no SAN for hardware offloading.

This is why Microsoft has added Accelerated VHDX Operations to ReFS. If you format your CSVs with ReFS then ReFS will speed up the creation and extension of the files for you. How much? Well this is why I built a test rig!

The back-end storage is a pair of physical servers that are SAS (6 Gb) connected to a shared DataON DNS-1640 JBOD with tiered storage (SSD and HDD); I built a WS2016 TPv3 Scale-Out File Server with 2 tiered virtual disks (64 KB interleave) using this gear. Each virtual disk is a CSV in the SOFS cluster. CSV1 is formatted with ReFS and CSV2 is formatted with NTFS, 64 KB allocation unit size on both. Each CSV has a file share, named after the CSV.

I had another WS2016 TPv3 physical server configured as a Hyper-V host. I used Switch Embedded Teaming to aggregate a pair of iWARP NICs (RDMA/SMB Direct, each offering 10 GbE connectivity to the SOFS) and created a pair of virtual NICs in the host for SMB Multichannel.

I ran a script on the host to create fixed VHDX files against each share on the SOFS, measuring the time required to create each disk (a minimal sketch of this kind of timing script follows the list below). The disks created are of the following sizes:

  • 1 GB
  • 10 GB
  • 100 GB
  • 500 GB
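
Here’s a minimal sketch of the kind of timing script I used – the share path is an example from my lab, so substitute your own:

    # Time fixed VHDX creation against an SMB 3.0 share backed by an NTFS or ReFS CSV
    $share = "\\SOFS1\CSV1"          # e.g. CSV1 share = ReFS, CSV2 share = NTFS
    $sizes = 1GB, 10GB, 100GB, 500GB

    foreach ($size in $sizes) {
        $path = Join-Path $share ("Test-{0}GB.vhdx" -f ($size / 1GB))
        $time = Measure-Command { New-VHD -Path $path -SizeBytes $size -Fixed | Out-Null }
        "{0} GB fixed VHDX created in {1}" -f ($size / 1GB), $time
        Remove-Item $path            # clean up before the next size
    }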

Using the share on the NTFS-formatted CSV, I had the following results:

[Figure: fixed VHDX creation times using the NTFS-formatted CSV]

A 500 GB VHDX file, nothing that unusual for most of us, took 40 minutes to create. Imagine you work for an IT service provider (which could be a hosting company or an IT department) and the customer (which can be your employer) says that they need a VM with a 500 GB disk to deal with an opportunity or a growing database. Are you going to say “let me get back to you in an hour”? Hmm … an hour might sound good to some but for the customer it’s pretty rubbish.

Let’s change it up. The next results are from using the share on the ReFS volume:

[Figure: fixed VHDX creation times using the ReFS-formatted CSV]

Whoah! Creating a 500 GB fixed VHDX now takes 13 seconds instead of 40 minutes. The CSVs are almost identical; the only difference is that one is formatted with ReFS (fast VHD/X operations) and the other is NTFS (unenhanced). Didier Van Hoye has also done some testing using direct CSV volumes (no SMB 3.0), comparing Compellent ODX and ReFS. What the heck is going on here?

The zeroing-out process that is done while creating a fixed VHDX has been converted into a metadata operation – this is how some SANs optimize the same process using ODX. So instead of writing zeros out to the disk file, ReFS updates metadata which effectively says “nothing to see here” to anything (such as Hyper-V) that reads those parts of the VHD/X.

Accelerated VHDX Operations also works in other subtle ways. Merging a checkpoint is now done without moving data around on the disk – another metadata operation. This means that merges should be quicker and use fewer IOPS. This is nice because:

  • Production Checkpoints (on by default) will lead to more checkpoint usage in DevOps
  • Backup uses checkpoints and this will make backups less disruptive

Does this feature totally replace ODX? No, I don’t think it does. Didier’s testing proves that ReFS’s metadata operation is even faster than the incredible performance of ODX on a Compellent. But, the SAN offers more. ReFS is limited to operations inside a single volume. Say you want to move storage from one LUN to another? Or maybe you want to provision a new VM from a VMM library? ODX can help in those scenarios, but ReFS cannot. I cannot say yet if the two technologies will be compatible (and stable together) at the time of GA (I suspect that they will, but SAN OEMs will have the biggest impact here!) and offer the best of both worlds.

This stuff is cool and it works without configuration out of the box!

Microsoft News – 7 September 2015

Here’s the recent news from the last few weeks in the Microsoft IT Pro world:

Hyper-V

Windows Server

Windows

System Center

Azure

Office 365

Intune

Events

  • Meet AzureCon: A virtual event on Azure on September 29th, starting at 9am Pacific time, 5pm UK/Irish time.

A Roundup of WS2016 TPv3 Links

I thought that I’d aggregate a bunch of links related to new things in the release of Windows Server 2016 Technical Preview 3 (TP3). I think this is pretty complete for Hyper-V folks – as you can see, there’s a lot of stuff in the networking stack.

FYI: it looks like Network Controller will require the DataCenter edition by RTM – it does in TPv3. And our feedback on offering the full installation during setup has forced a reversal.

Hyper-V

Administration

Containers

Networking

Storage

 

Nano Server

Failover Clustering

Remote Desktop Services

System Center

Windows Server 2016 – Switch Embedded Teaming and Virtual RDMA

WS2016 TPv3 (Technical Preview 3) includes a new feature called Switch Embedded Teaming (SET) that will allow you to converge RDMA (remote direct memory access) NICs and virtualize RDMA for the host. Yes, you’ll be able to converge SMB Direct networking!

In the below diagram you can see a host with WS2012 R2 networking and a similar host with WS2016 networking. See how:

  • There is no NIC team in WS2016: this is SET in action, providing teaming by aggregating the virtual switch uplinks.
  • RDMA is converged: DCB is enabled, as is recommended – it’s even recommended with iWARP, where it is not required.
  • Management OS vNICs use RDMA: you can use converged vNICs for SMB Direct.

Network architecture changes

 

Note, according to Microsoft:

In Windows Server 2016 Technical Preview, you can enable RDMA on network adapters that are bound to a Hyper-V Virtual Switch with or without Switch Embedded Teaming (SET).

Right now in TPv3, SET does not support Live Migration – which is confusing considering the above diagram.

What is SET?

SET is an alternative to NIC teaming. It allows you to converge between 1 and 8 physical NICs using the virtual switch. The pNICs can be connected to the same or different physical switches. Obviously, the networking of the pNICs must be the same to allow link aggregation and failover.

No – SET does not span hosts.

Physical NIC Requirements

SET is much fussier about NICs than NIC teaming (which continues as a Windows Server networking technology, because SET requires the Hyper-V virtual switch). The NICs must be:

  1. On the HCL, aka “passed the Windows Hardware Qualification and Logo (WHQL) test in a SET team in Windows Server 2016 Technical Preview”.
  2. All NICs in a SET team must be identical: same manufacturer, same model, same firmware and driver.
  3. There can be between 1 and 8 NICs in a single SET team (same switch on a single host).

 

SET Compatibility

SET is compatible with the following networking technologies in Windows Server 2016 Technical Preview.

  • Datacenter bridging (DCB)
  • Hyper-V Network Virtualization – NVGRE and VXLAN are both supported in Windows Server 2016 Technical Preview.
  • Receive-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if any of the SET team members support them.
  • Remote Direct Memory Access (RDMA)
  • SDN Quality of Service (QoS)
  • Transmit-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if all of the SET team members support them.
  • Virtual Machine Queues (VMQ)
  • Virtual Receive Side Scaling (vRSS)

SET is not compatible with the following networking technologies in Windows Server 2016 Technical Preview.

  • 802.1X authentication
  • IPsec Task Offload (IPsecTO)
  • QoS in host or native OSs
  • Receive side coalescing (RSC)
  • Receive side scaling (RSS)
  • Single root I/O virtualization (SR-IOV)
  • TCP Chimney Offload
  • Virtual Machine QoS (VM-QoS)

 

Configuring SET

There is no concept of a team name in SET; there is just the virtual switch, which has uplinks. There is no standby pNIC; all pNICs are active. SET only operates in switch-independent mode – nice and simple, because the physical switch is completely unaware of the SET team and there’s no switch-side configuration to do (no Googling for me).

All that you require is:

  • Member adapters: Pick the pNICs on the host.
  • Load balancing mode: Hyper-V Port or Dynamic. With Hyper-V Port, the benefit is that VMQ works well because inbound traffic paths are predictable. With Dynamic, outbound traffic is hashed and balanced across the uplinks, while inbound traffic behaves as it does in Hyper-V Port mode.

As with WS2012 R2, I expect Dynamic will normally be the recommended option.

VMQ

SET was designed to work well with VMQ. We’ll see how well NIC drivers and firmware behave with SET. As we’ve seen in the past, some manufacturers take up to a year (Emulex on blade servers) to fix issues. Test, test, test, and disable VMQ if you see Hyper-V network outages with SET deployed.

In terms of tuning, Microsoft says:

    • Ideally each NIC should have the *RssBaseProcNumber set to an even number greater than or equal to two (2). This is because the first physical processor, Core 0 (logical processors 0 and 1), typically does most of the system processing so the network processing should be steered away from this physical processor. (Some machine architectures don’t have two logical processors per physical processor so for such machines the base processor should be greater than or equal to 1. If in doubt assume your host is using a 2 logical processor per physical processor architecture.)
    • The team members’ processors should be, to the extent practical, non-overlapping. For example, in a 4-core host (8 logical processors) with a team of 2 10Gbps NICs, you could set the first one to use base processor of 2 and to use 4 cores; the second would be set to use base processor 6 and use 2 cores.
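
Translated into PowerShell, that tuning looks something like this sketch for a pair of 10 GbE team members on a 4-core/8-logical-processor host (the NIC names are examples, and the processor values simply mirror the quoted example):

    # Steer VMQ processing away from core 0 and keep the team members on
    # non-overlapping processors
    Set-NetAdapterVmq -Name "pNIC1" -BaseProcessorNumber 2 -MaxProcessors 4
    Set-NetAdapterVmq -Name "pNIC2" -BaseProcessorNumber 6 -MaxProcessors 2

    # Check the resulting assignments
    Get-NetAdapterVmq -Name "pNIC1","pNIC2"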

Creation and Management

You’ll hear all the usual guff about System Center and VMM. The 8% that can afford System Center can do that, if they can figure out the UI. PowerShell can be used to easily create and manage a SET virtual switch.
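
For the rest of us, here’s a minimal sketch of creating a SET switch and converged RDMA vNICs with the TPv3 cmdlets – the switch, pNIC, and vNIC names are examples, and the behaviour may still change before RTM:

    # Create a virtual switch with embedded teaming across two identical pNICs
    New-VMSwitch -Name "SETSwitch" -NetAdapterName "pNIC1","pNIC2" -EnableEmbeddedTeaming $true

    # Add management OS vNICs for SMB Multichannel and enable RDMA on them
    Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETSwitch"
    Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"

    # Switch the load balancing mode to Dynamic (Hyper-V Port is the other option)
    Set-VMSwitchTeam -Name "SETSwitch" -LoadBalancingAlgorithm Dynamic
    Get-VMSwitchTeam -Name "SETSwitch"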

Summary

SET is a great first (or second behind vRSS in WS2012 R2) step:

  • Networking is simplified
  • RDMA can be converged
  • We get vRDMA to the host

We just need Live Migration support and stable physical NIC drivers and firmware.

Introducing Windows Server Containers

Technical Preview 3 of Windows Server 2016 is out and one of the headline feature additions to this build is Windows Server Containers. What are they? And how do they work? Why would you use them?

Background

Windows Server Containers is Microsoft’s implementation of an open-source technology that has been made famous by a company called Docker. In fact:

  • Microsoft’s work is a result of a partnership with Docker, one which was described to me as being “one of the fastest negotiated partnerships” and one that has had encouragement from CEO Satya Nadella.
  • Windows Server Containers will be compatible with Linux containers.
  • You can manage Windows Server Containers using Docker, which has a Windows command line client. Don’t worry – you won’t have to go down this route if you don’t want to install horrid prerequisites such as Oracle VirtualBox (!!!).

What are Containers?

Containers have been around for a while, but most of us who live outside of the Linux DevOps world won’t have had any interaction with them. The technology is a new kind of virtualisation that enables rapid (near instant) deployment of applications.

Like most virtualisation, containers take advantage of the fact that most machines are over-resourced; we over-spec a machine, install software, and then the machine is under-utilized. Fifteen years ago, lots of people attempted to install more than one application per server. That bad idea usually ended up in P45s (“pink slips”) being handed out (otherwise known as a “career ending event”). That’s because complex applications make poor neighbours on a single operating system with no inter-app isolation.

Machine virtualisation (vSphere, Hyper-V, etc) takes these big machines and uses software to carve the physical hosts into lots of virtual machines; each virtual machine has its own guest OS and this isolation provides a great place to install applications. The positives are we have rock solid boundaries, including security, between the VMs, but we have more OSs to manage. We can quickly provision a VM from a template, but then we have to install lots of pre-reqs and install the app afterwards. OK – we can have VM templates of various configs, but a hundred templates later, we have a very full library with lots of guest OSs that need to be managed, updated, etc.

Containers are a kind of virtualisation that resides one layer higher; it’s referred to as OS virtualisation. The idea is that we provision a container on a machine (physical or virtual). The container is given a share of CPU, RAM, and a network connection. Into this container we can deploy a container OS image. And then onto that OS image we can install prerequisites and an application. Here’s the cool bit: everything is really quick (typing the command takes longer than the deployment) and you can easily capture images to a repository.

How easy is it? It’s very easy – I recently got hands-on access to Windows Server Containers in a supervised lab and I was able to deploy and image stuff using a PowerShell module without any documentation and with very little assistance. It helped that I’d watched a session on containers from Microsoft Ignite.

How Do Containers Work?

There are a few terms you should get to know:

  • Windows Server Container: The Windows Server implementation of containers. It provides application isolation via OS virtualisation, but it does not create a security boundary between applications on the same host. Containers are stateless, so stateful data is stored elsewhere, e.g. SMB 3.0.
  • Hyper-V Container: This is a variation of the technology that uses Hyper-V virtualization to securely isolate containers from each other – this is why nested virtualisation was added to WS2016 Hyper-V.
  • Container OS Image: This is the OS that runs in the container.
  • Container Image: Customisations of a container (installing runtimes, services, etc) can be saved off for later reuse. This is the mechanism that makes containers so powerful.
  • Repository: This is a flat file structure that contains container OS images and container images.

Note: This is a high level concept post and is not a step-by-step instructional guide.

We start off with:

  • A container host: This machine will run containers. Note that a Hyper-V virtual switch is created to share the host’s network connection with containers, thus network-enabling those containers when they run.
  • A repository: Here we store container OS images and container images. This repository can be local (in TPv3) or can be an SMB 3.0 file share (not in TPv3, but hopefully in a later release).

[Diagram: a container host (with Hyper-V virtual switch) and a repository of container OS images and container images]

The first step is to create a container. This is accomplished natively using a Containers PowerShell module, which, from experience, is pretty logically laid out and easy to use. Alternatively, you can use Docker. I guess System Center will add support too.

When you create the container you specify the name and can offer a few more details such as network connection to the host’s virtual switch (you can add this retrospectively), RAM and CPU.

You then have a blank and useless container. To make it useful you need to add a container OS image. This is retrieved from the Repository, which can be local (in a lab) or on an SMB 3.0 file share (real world). Note that an OS is not installed in the container. The container points at the repository and only differences are saved locally.

How long does it take to deploy the container OS image? You type the command, press return, and the OS is sitting there, waiting for you to start the container. Folks, Windows Server Containers are FAST – they are Vin Diesel parachuting a car from a plane fast.

[Diagram: Container1 with a container OS image linked from the repository]

Now you can use Enter-PSSession to log into a container using PowerShell and start installing and configuring stuff.
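
To give you a flavour of that workflow, here’s a minimal sketch using the TPv3 Containers module – the container, image, and switch names are examples, and the cmdlets are as I remember them from the preview, so expect them to evolve:

    # Create a container from a container OS image and connect it to the host's virtual switch
    $container = New-Container -Name "Container1" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"

    # Start it and open an interactive PowerShell session inside it
    Start-Container -Name "Container1"
    Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator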

Let’s say you want to install PHP. You need to:

  1. Get the installer available to the container, maybe via the network
  2. Ensure that the installer either works silently (unattended) or works from command line

Then install the program, e.g. PHP, and configure it the way you want it (from the command line).

[Diagram: Container1 with PHP installed]

Great, we now have PHP in the container. But there’s a good chance that I’ll need PHP in lots of future containers. We can create a container image from that PHP install. This process will capture the changes from the container since it was last deployed (the PHP install) and save those changes to the repository as a container image. The very quick process is:

  1. Stop the container
  2. Capture the container image
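
In the TPv3 PowerShell module, that capture looks something like this sketch (the publisher, name, and version values are made up):

    # Stop the container and capture its changes (the PHP install) as a new container image
    $container = Get-Container -Name "Container1"
    Stop-Container $container
    New-ContainerImage -Container $container -Publisher "Demo" -Name "PHP" -Version 1.0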

Note that the container image now has a link to the container OS image that it was installed on, i.e. there is a dependency link, and I’ll come back to this.

Let’s deploy another container, Container2, with a container OS image.

[Diagram: Container2 with a container OS image linked from the repository]

For some insane reason, I want to install the malware gateway known as Java into this container.

[Diagram: Container2 with Java installed]

Once again, I can shut down this new container and create a container image from this Java installation. This new container image also has a link to the required container OS image.

[Diagram: the repository containing the container OS image plus PHP and Java container images]

Right, let’s remove Container1 and Container2 – something that takes seconds. I now have a container OS image for Windows Server 2012 R2 and container images for PHP and Java. Let’s imagine that a developer needs to deploy an application that requires PHP. What do they need to do? It’s quite easy – they create a container from the PHP container image. Windows Server Containers knows that PHP requires the Windows Server container OS image, and that is deployed too.

The entire deployment is near instant because nothing is deployed; the container links to the images in the repository and saves changes locally.

[Diagram: Container3 created from the PHP container image, which links to the container OS image]

Think about this for a second – we’ve just deployed a configured OS in little more time than it takes to type a command. We’ve also modelled a fairly simple application dependency. Let’s complicate things.

The developer installs WordPress into the new container.

[Diagram: Container3 with WordPress installed]

The dev plans on creating multiple copies of their application (dev, test, and production) and, like many test/dev environments, they need an easy way to reset, rebuild, and spin up variations; there’s nothing like containers for this sort of work. The dev shuts down Container3 and then creates a new container image. This process captures the changes since the last deployment and saves a container image in the repository – the WordPress installation. Note that this container image doesn’t include the contents of PHP or Windows Server, but it does link to PHP, and PHP links to Windows Server.

[Diagram: the repository with WordPress linked to PHP, and PHP linked to the container OS image]

The dev is done and resets the environment. Now she wants to deploy one container for dev, one for test, and one for production. Simple! This requires three commands, each of which creates a new container from the WordPress container image, which logically pulls in the required PHP image and, in turn, PHP’s required Windows Server image.
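
As a sketch, using the same TPv3 cmdlets as before (the image and switch names are examples), those three commands could be as simple as a loop:

    # One container each for dev, test, and production, all from the WordPress container image
    foreach ($name in "WP-Dev", "WP-Test", "WP-Prod") {
        New-Container -Name $name -ContainerImageName "WordPress" -SwitchName "Virtual Switch"
        Start-Container -Name $name
    }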

Nothing is actually deployed to the containers; each container links to the images in the repository and saves changes locally. Each container is isolated from the other to provide application stability (but not security – this is where Hyper-V Containers comes into play). And best of all – the dev has had the experience of:

  • Saying “I want three copies of WordPress”
  • Getting the OS and all WordPress pre-requisites
  • Getting them instantly
  • Getting 3 identical deployments

[Diagram: dev, test, and production containers all created from the WordPress container image]

From the administrator’s perspective, they’ve not had to be involved in the deployment, and the repository is pretty simple. There’s no need for a VM with Windows Server, another with Windows Server & PHP, and another with Windows Server, PHP & WordPress. Instead, there is an image for Windows Server, an image for PHP, and an image for WordPress, with links providing the dependencies.

And yes, the repository is a flat file structure so there’s no accidental DBA stuff to see here.

Why Would You Use Containers?

If you operate in the SME space then keep moving, and don’t bother with Containers unless they’re in an exam you need to pass to satisfy the HR drones. Containers are aimed at larger environments where there is application sprawl and repetitive installations.

Is this similar to what SCVMM 2012 introduced with Server App-V and service templates? At a very high level, yes, but Windows Server Containers is easy to use and probably a heck of a lot more stable.

Note that containers are best suited for stateless workloads. If you want to save data, then save it elsewhere, e.g. SMB 3.0. What about MySQL and SQL Server? Based on what was stated at Ignite, there’s a solution (or one in the works); they are probably using SMB 3.0 to save the databases outside of the container. This might require more digging, but I wonder if databases would really be a good fit for containers. And I wonder, much like with Azure VMs, if there will be a later revision that brings us stateful containers.

I don’t imagine that my market at work (SMEs) will use Windows Server Containers, but if I was back working as an admin in a large enterprise then I would definitely start checking out this technology. If I worked in a software development environment then I would also check out containers for a way to rapidly provision new test and dev labs that are easy to deploy and space efficient.

[Update]

Here is a link to the Windows Server containers page on the TechNet Library.

We won’t see Hyper-V containers in TPv3 – that will come in a later release, I believe later in 2015.

Microsoft News – 16 July 2015

It’s been a busy week with WPC driving announcements that affect partners.

Hyper-V

Windows Server

Windows Client

Azure


System Center

  • Datazen Enterprise Server: A collection of web applications and Windows services that acts as a repository for storing and sharing dashboards and KPIs.

Office 365

Licensing

Miscellaneous

MS15-068 – SERIOUS Hyper-V Security Vulnerability

This is one of those rare occasions where I’m going to say: put aside everything you are doing, test this MS15-068 patch now, and deploy it as soon as possible.

The vulnerabilities could allow remote code execution in a host context if a specially crafted application is run by an authenticated and privileged user on a guest virtual machine hosted by Hyper-V. An attacker must have valid logon credentials for a guest virtual machine to exploit this vulnerability.

This security update is rated Critical for Windows Hyper-V on Windows Server 2008, Windows Server 2008 R2, Windows 8 and Windows Server 2012, and Windows 8.1 and Windows Server 2012 R2. For more information, see the Affected Software section.

The security update addresses the vulnerabilities by correcting how Hyper-V initializes system data structures in guest virtual machines.

I don’t know if this is definitely what we would call a “breakout attack” (I’m awaiting confirmation), one where a hacker in a compromised VM can reach out to the host, but it sure reads like it. This makes it the first one of these that I’ve heard of in the life of Hyper-V (since beta of W2008) – VMware fanboys, you’ve had a few of these so be quiet.

Note:

Microsoft received information about this vulnerability through coordinated vulnerability disclosure. When this security bulletin was originally issued Microsoft had not received any information to indicate that this vulnerability had been publicly used to attack customers.

It sounds like a reasonable organization found and privately disclosed this bug, thus allowing Microsoft to protect their customers before it became public knowledge. Google could learn something here.

So once again:

  1. Test the patch quickly
  2. Push it out to secure hosts and other VMs

[Update]

Some digging by Flemming Riis (MVP) discovered that credit goes to Thomas Garnier, a Senior Security Software Development Engineer at Microsoft (specializing in kernel, hypervisor, hardware, cloud, and network security) who is currently working on Azure OS (hence the Hyper-V interest, I guess). He is co-author of Sysinternals Sysmon with Mark Russinovich.


Software-Defined Storage Calculator and Design Considerations Guide

Microsoft has launched an Excel-based sizing tool to help you plan Storage Spaces (Scale-Out File Server) deployments, along with a considerations guide on how to design them.

Here’s the sizing for a very big SOFS that will require 4 x SOFS server nodes and 4 x 60 disk JBODs:

[Figure: sizing tool output for this configuration]

The considerations guide will walk you through using the sizing tool.

Some updates are required – some newer disk sizes aren’t included – but this is a great starting point for a design process.