Microsoft News – 29 June 2015

As you might expect, there’s lots of Azure news. Surprisingly, there is still not much substantial content on Windows 10.

Hyper-V

Windows Server

Windows Client


Azure

Office 365

EMS

Misc

Microsoft News 02-June-2015

The big news of the last 24 hours is that Windows 10 will be released on July 29th. I'll say it before The Verge, etc, do: I will be away on that date and will not be reporting on the release.

Hyper-V

Windows Server

Windows Client

Azure

Miscellaneous


Microsoft News 28-May-2015

Very little to cover here, except one possibly controversial article on Hyper-V that long-time readers might expect me to write an angry response to …

Hyper-V

Windows Server

Azure

Office 365

Living with & Paying for Azure VM Backup

This site is running on an Azure Basic A2 VM with 127 GB of storage. I back it up in two ways:

  • There is an Azure Backup (AB) agent installed in the guest OS, and that backs up an export of MySQL and the IIS content.
  • I use the (preview) feature that allows you to grab a daily backup of a VM. This is what I want to focus on.

I have deployed a GRS backup vault. This month's usage summary shows roughly 72 GB of backup storage consumed.


The storage cost of the backup this month will be around €2.58 (72 GB * €0.0358 per GB) and the instance cost will be €7.447 (the VM size falls into the 50-500 GB band).

There is a daily backup with 4 weeks of retention; right now, there are 29 days of history.


Backup can be slow (runs have ranged from 47 minutes to 4 hours and 13 minutes), but I haven't had any issues.


I haven’t had to do a restore, but so far, so good.


Microsoft News – 25-May-2015

It’s taken me nearly all day to fast-read through this lot. Here’s a dump of info from Build, Ignite, and since Ignite. Have a nice weekend!

Hyper-V

Windows Server

Windows Client

System Center

Azure

Office 365

Intune

  • Announcing support for Windows 10 management with Microsoft Intune: Microsoft announced that Intune now supports the management of Windows 10. All existing Intune features for managing Windows 8.1 and Windows Phone 8.1 will work for Windows 10.
  • Announcing the Mobile Device Management Design Considerations Guide: If you're an IT architect or IT professional who needs to design a mobile device management (MDM) solution for your organization, there are many questions to answer before you can recommend the best solution. Microsoft has many new options for managing mobile devices that can match your business and technical requirements.
  • Mobile Application Distribution Capabilities in Microsoft Intune: Microsoft Intune allows you to upload and deploy mobile applications to iOS, Android, Windows, and Windows Phone devices. In this post, Microsoft shows you how to publish iOS apps, select the users who can download them, and how people in your organization can download these apps on their iOS devices.
  • Microsoft Intune App Wrapping Tool for Android: Use the Microsoft Intune App Wrapping Tool for Android to modify the behavior of your existing line-of-business (LOB) Android apps. You will then be able to manage certain app features using Intune without requiring code changes to the original application.

Licensing

Miscellaneous

Give Your Feedback on Future Windows Server Releases

You have probably already heard about Windows Insider, a program for providing feedback and shaping the future of Windows on client devices – note that I did not say “Windows 10”, because the Insider program will live beyond the RTM of Windows 10 this summer.

Similarly, the Windows Server group has launched a feedback forum. Here you can:

  • Search for or browse for feedback
  • Comment on and vote for existing feedback
  • Submit your own unique ideas

Now let’s be realistic – not everything will be done:

  • Some ideas are daft 🙂
  • You’ll find a few things that are already in the TPv2 release of WS2016
  • Some things won’t suit Microsoft’s strategy
  • And some things will take more time than is available – but maybe planning for future releases will be impacted

Here’s what I’ve voted for, commented on or submitted so far:

  • Remember Domain Logins: I find it really annoying that the TPv2 release won’t remember previous domain logons and I have to type my domain\username over and over and over and …
  • Storage Replica Requirement of Datacenter Edition: Microsoft is planning to only include SR in the Datacenter edition of WS2016. Most of the storage machines I see are physical and licensed with Standard or Storage Server editions. It’ll probably be cheaper to go with 3rd party software than DC edition 🙁
  • Storage Spaces advanced tiering: I like the idea of bringing a cloud tier to Windows Server, instead of reserving it in the silly StorSimple appliance. I don’t agree with restricting it to Storage Spaces.
  • Create a Hyper-V Cluster without AD: Imagine a HYPER-V world (don’t let the SQL heads muddy the waters) without Kerberos!!! Simple SOFS, simple Live Migration, and yes, System Center would need to catch up.
  • VM Placement Without System Center: Even those who can afford or want to deploy SCVMM often choose not to enable Dynamic Optimization. Let’s bring this feature into Windows Server, where it belongs.
  • New integrated UI for Hyper-V: Let’s replace Hyper-V Manager, Failover Cluster Manager, and SCVMM with one integrated Hyper-V tool that is a part of Windows Server. The cloud folks can use Azure Stack. SCVMM is broken, and the experience is fragmented. Everyone agrees except fanboys and SCVMM team members.
  • Change how Hyper-V Manager creates VM folder structure: Sarah, Ben & Taylor – if you fix this, I guarantee a round of applause at the next Ignite. This is the CMD prompt CTRL+V of Hyper-V.

This is your opportunity to shape Windows Server. I’ve had that privilege as an MVP – it’s not always immediate but there are headline things in WS2016 that I’ve contributed some feedback for and it feels damned good to see them presented on stage. You can feel that too. If you choose to stay silent, then please stay that way when you’re unhappy.

Hyper-V Amigos Chatting At Microsoft Ignite 2015

Didier Van Hoye, Carsten Rachfahl and I (all Hyper-V MVPs) were at Microsoft Ignite last week, and we met up at the end to record a chat between the three of us, discussing some of our highlights from the conference. You can catch this video on the Hyper-V Amigos site.


Oh yeah, it was painful watching myself in this video 🙂 That was the last time Carsten will let me hold a microphone!

Setting Up WS2016 Storage Spaces Direct SOFS

In this post I will show you how to set up a Scale-Out File Server using Windows Server 2016 Storage Spaces Direct (S2D). Note that:

  • I’m assuming you have done all your networking. Each of my 4 nodes has 4 NICs: 2 for a management NIC team called Management and 2 un-teamed 10 GbE NICs. The two un-teamed NICs will be used for cluster traffic and SMB 3.0 traffic (inter-cluster and from Hyper-V hosts). The un-teamed networks do not have to be routed, and do not need the ability to talk to DCs; they do need to be able to talk to the Hyper-V hosts’ equivalent 2 * storage/clustering rNICs.
  • You have read my notes from Ignite 2015
  • This post is based on WS2016 TPv2

Also note that:

  • I’m building this using 4 x Hyper-V Generation 2 VMs. In each VM SCSI 0 has just the OS disk and SCSI 1 has 4 x 200 GB data disks.
  • I cannot virtualize RDMA. Ideally the S2D SOFS is using rNICs.

Deploy Nodes

Deploy at least 4 identical storage servers with WS2016. My lab consists of machines that have 4 DAS SAS disks. You can tier storage using SSD or NVMe, and your scalable/slow tier can be SAS or SATA HDD. There can be a maximum of two tiers only: SSD/NVMe and SAS/SATA HDD.

Configure the IP addressing of the hosts. Place the two storage/cluster networks into two different VLANs/subnets.

My nodes are Demo-S2D1, Demo-S2D2, Demo-S2D3, and Demo-S2D4.

Install Roles & Features

You will need:

  • File Services
  • Failover Clustering
  • Failover Clustering Manager if you plan to manage the machines locally.

Here’s the PowerShell to do this:

Add-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools

You can use -ComputerName <computer-name> to speed up deployment by doing this remotely.
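For example, you could hit all four nodes from one management machine with a quick loop – a sketch, using this lab's node names:

# Install the required roles/features on each node remotely
foreach ($Node in "Demo-S2D1", "Demo-S2D2", "Demo-S2D3", "Demo-S2D4") {
    Add-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName $Node
}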

Validate the Cluster

It is good practice to do this … so do it.

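A minimal sketch of that validation in PoSH, assuming the node names above and TPv2's "Storage Spaces Direct" test category:

# Validate the prospective nodes, including the S2D-specific tests
Test-Cluster -Node Demo-S2D1, Demo-S2D2, Demo-S2D3, Demo-S2D4 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"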

Create your new cluster

You can use the GUI, but it's a lot quicker to use PowerShell. You are implementing Storage Spaces, so DO NOT ADD ELIGIBLE DISKS. My cluster will be called Demo-S2DC1 and have an IP of 172.16.1.70.

New-Cluster -Name Demo-S2DC1 -Node Demo-S2D1, Demo-S2D2, Demo-S2D3, Demo-S2D4 -NoStorage -StaticAddress 172.16.1.70

There will be a warning that you can ignore:

There were issues while creating the clustered role that may prevent it from starting. For more information view the report file below.

What about Quorum?

You will probably use the default of dynamic quorum. You can either use a cloud witness (a storage account in Azure) or a file share witness, but realistically, Dynamic Quorum with 4 nodes and multiple data copies across nodes (fault domains) should do the trick.
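If you do opt for a witness, it's a one-liner. A sketch with an illustrative file share path (a cloud witness is configured similarly with -CloudWitness plus an Azure storage account name and key):

# Point the cluster at a file share witness; the UNC path here is illustrative
Set-ClusterQuorum -Cluster Demo-S2DC1 -FileShareWitness \\Demo-DC1\S2DWitness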

Enable Client Communications

The two cluster networks in my design will also be used for storage communications with the Hyper-V hosts. Therefore I need to configure these IPs for client communications.

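A sketch of that configuration in PoSH, assuming the default auto-generated cluster network names – check yours with Get-ClusterNetwork:

# Role 3 = cluster and client traffic; 1 = cluster traffic only
(Get-ClusterNetwork -Cluster Demo-S2DC1 -Name "Cluster Network 1").Role = 3
(Get-ClusterNetwork -Cluster Demo-S2DC1 -Name "Cluster Network 2").Role = 3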

Doing this will also enable each server in the S2D SOFS to register its A record with the cluster/storage NIC IP addresses, and not just the management NIC.

Enable Storage Spaces Direct

This is not on by default. You enable it using PowerShell:

(Get-Cluster).DASModeEnabled=1

Browsing Around FCM

Open up FCM and connect to the cluster. You’ll notice lots of stuff in there now. Note the new Enclosures node, and how each server is listed as an enclosure. You can browse the Storage Spaces eligible disks in each server/enclosure.


Creating Virtual Disks and CSVs

I then create a pool called Pool1 on the cluster Demo-S2DC1 using PowerShell – this is because there are more options available to me than in the UI:

New-StoragePool -StorageSubSystemName Demo-S2DC1.demo.internal -FriendlyName Pool1 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem -Name Demo-S2DC1.demo.internal | Get-PhysicalDisk)

Get-StoragePool Pool1 | Get-PhysicalDisk |? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal

Then you create the CSVs that will be used to store file shares in the SOFS. Rules of thumb:

  • 1 share per CSV
  • At least 1 CSV per node in the SOFS to optimize flow of data: SMB redirection and redirected IO for mirrored/clustered storage spaces

Using this PoSH you will lash out your CSVs in no time:

$CSVNumber = "4"
$CSVName = "CSV"
$CSV = "$CSVName$CSVNumber"

New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName $CSV -PhysicalDiskRedundancy 2 -FileSystem CSVFS_REFS -Size 200GB
Set-FileIntegrity "C:\ClusterStorage\Volume$CSVNumber" -Enable $false

The last line disables ReFS integrity streams to support the storage of Hyper-V VMs on the volumes. My 4-node S2D SOFS ended up with one CSV per node – and I like to rename things, so each volume, CSV, and mount point share a name.


Each CSV is load balanced across the nodes. SMB redirection will redirect Hyper-V hosts to the owner of a CSV when the host is accessing files for a VM that is stored on that CSV. This is done for each VM connection by the host using SMB 3.0, and it ensures optimal flow of data with minimized/no redirected IO.
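If you follow the rule of one CSV per node, a loop will stamp them all out – a sketch reusing the settings above, and assuming the volumes mount as Volume1..Volume4 in creation order:

# Create CSV1..CSV4 and disable ReFS integrity streams on each new volume
1..4 | ForEach-Object {
    New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName "CSV$_" -PhysicalDiskRedundancy 2 -FileSystem CSVFS_REFS -Size 200GB
    Set-FileIntegrity "C:\ClusterStorage\Volume$_" -Enable $false
}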

There are some warnings from Microsoft about these volumes:

  • They are likely to become inaccessible on later Technical Preview releases.
  • Resizing of these volumes is not supported. 

Oops! This is a technical preview, and this should be pure lab work that you're willing to lose.

Create a Scale-Out File Server

The purpose of this post is to create a SOFS from the S2D cluster, with the sole purpose of the cluster being to store Hyper-V VMs that are accessed by Hyper-V hosts via SMB 3.0. If you are building a hyperconverged cluster (not supported by the current TPv2 preview release) then you stop here and proceed no further.

Each of the S2D cluster nodes and the cluster account object should be in an OU just for the S2D cluster. Edit the advanced security of the OU and grant the cluster account object "create computer object" and "delete computer object" rights. If you don't do this then the SOFS role will not start after this next step.

Next, I am going to create an SOFS role on the S2D cluster, and call it Demo-S2DSOFS1.

New-StorageFileServer -StorageSubSystemName Demo-S2DC1.demo.internal -FriendlyName Demo-S2DSOFS1 -HostName Demo-S2DSOFS1 -Protocols SMB

Create and Permission Shares

Create 1 share per CSV. If you need more shares then create more CSVs. Each share needs the following permissions:

  • Each Hyper-V host
  • Each Hyper-V cluster
  • The Hyper-V administrators

You can use the following PoSH to create and permission your shares. I name the share folder and share name after the CSV that it is stored on, so simply change the $ShareName variable to create lots of shares, and change the permissions as appropriate. Set $RootPath to the root of your cluster storage (C:\ClusterStorage here).

$RootPath = "C:\ClusterStorage" # the cluster storage root
$ShareName = "CSV1"
$SharePath = "$RootPath\$ShareName\$ShareName"

md $SharePath
New-SmbShare -Name $ShareName -Path $SharePath -FullAccess Demo-Host1$, Demo-Host2$, Demo-HVC1$, "Demo\Hyper-V Admins"
Set-SmbPathAcl -ShareName $ShareName

Create Hyper-V VMs

On your hosts/clusters create VMs that store all of their files on the path of the SOFS, e.g. \\Demo-S2DSOFS1\CSV1\VM01, \\Demo-S2DSOFS1\CSV1\VM02, etc.
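For example, creating a VM directly on the SOFS share might look like this – a sketch, where the VM name, memory, and disk size are illustrative:

# A generation 2 VM whose config files and virtual disk both live on the SOFS share
New-VM -Name VM01 -Path \\Demo-S2DSOFS1\CSV1 -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath \\Demo-S2DSOFS1\CSV1\VM01\Disk0.vhdx -NewVHDSizeBytes 60GB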

Remember that this is a Preview Release

This post was written not long after the release of TPv2:

  • Expect bugs – I am experiencing at least one bad one by the looks of it
  • Don't expect support for a rolling upgrade of this cluster
  • Bad things probably will happen
  • Things are subject to change over the next year

Ignite 2015 – Windows Server Containers

Here are my notes from the recording of Microsoft’s New Windows Server Containers, presented by Taylor Brown and Arno Mihm. IMO, this is an unusual tech because it is focused on DevOps – it spans both IT pro and dev worlds. FYI, it took me twice as long as normal to get through this video. This is new stuff and it is heavy going.

Objectives

  • You will know enough about containers to be dangerous 🙂
  • Learn where containers are the right fit
  • Understand what Microsoft is doing with containers in Windows Server 2016.

Purpose of Containers

  • We used to deploy 1 application per OS per physical server. VERY slow to deploy.
  • Then we got more agility and cost efficiencies by running 1 application per VM, with many VMs per physical server. This is faster than physical deployment, but developers still wait on VMs to deploy.

Containers move towards a “many applications per server” model, where that server is either physical or virtual. This is the fastest way to deploy applications.

Container Ecosystem

An operating system virtualization layer is placed onto the OS (physical or virtual) of the machine that will run the containers. This lives between the user and kernel modes, creating boundaries in which you can run an application. Many of these applications can run side by side without impacting each other. Images, containing functionality, are run on top of the OS and create aggregations of functionality. An image repository enables image sharing and reuse.


When you create a container, a sandbox area is created to capture writes; the original image is read only. The Windows container sees Windows and thinks it’s regular Windows. A framework is installed into the container, and this write is only stored in the sandbox, not the original image. The sandbox contents can be preserved, turning the sandbox into a new read-only image, which can be shared in the repository. When you deploy this new image as a new container, it contains the framework and has the same view of Windows beneath, and the container has a new empty sandbox to redirect writes to.

You might install an application into this new container; the sandbox captures the associated writes. Once again, you can preserve the modified sandbox as an image in the repository.

What you get is layered images in a repository that can be deployed independently of each other, subject to the obvious prerequisites. This creates very granular reuse of the individual layers, e.g. the framework image can be deployed over and over into new containers.

Demo:

A VM is running Docker, the tool for managing containers. A Windows machine has the Docker management utility installed. There is a command-line UI.

docker images < lists the images in the repository.

There is an image called windowsservercore. He runs:

docker run --rm -it windowsservercore cmd

Note:

  • --rm (two hyphens): Remove the sandbox afterwards
  • -it: give me an interactive console
  • cmd: the program he wants the container to run

A container with a new view of Windows starts up a few seconds later and a command prompt (the desired program) appears. This is much faster than deploying a Windows guest OS VM on any hypervisor. He starts a second one. On the first, he deletes files from C: and deletes HKLM from the registry, and the host machine and second container are unaffected – all changes are written to the sandbox of the first container. Closing the command prompt of the first container erases all traces of it (--rm).

Development Process Using Containers

The image repository can be local to a machine (local repository) or shared to the company (central repository).

First step: what application framework is required for the project … .NET, node.js, PHP, etc.? Go to the repository and pull that image over; any dependencies are described in the image and are deployed automatically to the new container. So if I deploy .NET, a Windows Server image will be deployed automatically as a dependency.
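In Docker terms, that pull is a one-liner (using the windowsservercore image name from the demos):

docker pull windowsservercore < pulls the image, plus any layers it depends on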

The coding process is the same as usual for the devs, with the same tools as before. The finished program is installed into a container and preserved as a new “immutable image”. You can allow selected people or anyone to use this image in their containers, and the application is now very easy and quick to deploy; deploying the application image to a container automatically deploys the dependencies, e.g. the runtime and the OS image. Remember – future containers can be deployed with --rm, making them easy to remove and reset – great for stateless deployments such as unit testing. Every deployment of this application will be identical – great for distributed testing or operations deployment.

You can run versions of images, meaning that it’s easy to rollback a service to a previous version if there’s an issue.

Demo:

There is a simple “hello world” program installed in a container. There is a docker file, and this is a text file with a set of directions for building a new container image.

The prereqs are listed with FROM; here you see the previously mentioned windowsservercore image.

WORKDIR sets the baseline path in the OS for installing the program, in this case, the root of C:.

Then commands are run to install the software, followed by the command that will run by default when the resulting container starts. As you can see, this is a pretty simple example.


He then runs:

docker build -t demoapp:1 < which creates an image called demoapp with a version of 1. -t tags the image.

Running docker images shows the new image in the repository. Executing the below will deploy the required windowsservercore image and the version 1 demoapp image, and execute demoapp.exe – no need to specify the command because the docker file specified a default executable.

docker run --rm -it demoapp:1

He goes back to the demoapp source code, compiles it and installs it into a container. He rebuilds it as version 2:

docker build -t demoapp:2

And then he runs version 2 of the app:

docker run --rm -it demoapp:2

And it fails – that’s because he deliberately put a bug in the code – a missing dependent DLL from Visual Studio. It’s easy to blow the version 2 container away (--rm) and deploy version 1 in a few seconds.

What Containers Offer

  • Very fast code iteration: You’re using the same code in dev/test, unit test, pilot and production.
  • There are container resource controls that we are used to: CPU, bandwidth, IOPS, etc. This enables co-hosting of applications in a single OS with predictable levels of performance (SLAs).
  • Rapid deployment: layering of containers for automated dependency deployment, and the sheer speed of containers means applications will go from dev to production very quickly, and rollback is also near instant. Infrastructure no longer slows down deployment or change.
  • Defined state separation: Each layer is immutable and isolated from the layers above and below it in the container. Each layer is just the differences.
  • Immutability: You get predictable functionality and behaviour from each layer for every deployment.

Things that Containers are Ideal For

  • Distributed compute
  • Databases: The database service can be in a container, with the data outside the container.
  • Web
  • Scale-out
  • Tasks

Note that you’ll have to store data in and access it from somewhere that is persistent.

Container Operating System Environments

  • Nano-Server: Highly optimized, and for born-in-the-cloud applications.
  • Server Core: Highly compatible, and for traditional applications.

Microsoft-Provided Runtimes

Two will be provided by Microsoft:

  • Windows Server Container: Hosting, highly automated, secure, scalable & elastic, efficient, trusted multi-tenancy. This uses a shared-kernel model – the containers run on the same machine OS.
  • Hyper-V Container: Shared hosting, regulate workloads, highly automated, secure, scalable and elastic, efficient, public multi-tenancy. Containers are placed into a “Hyper-V partition wrap”, meaning that there is no sharing of the machine OS.

Both runtimes use the same image formats. Choosing one or the other is a deployment-time decision, with one flag making the difference.

You can run both kinds of containers side by side on a physical machine.


And you can run both kinds of containers in virtual machines. Hyper-V containers can be run in a virtual machine that is running the Hyper-V role. The physical host must be running virtualization that supports virtualization of the VT instruction sets (ah, now things get interesting, eh?). The virtual machine is a Hyper-V host … hmm …


Choosing the Right Tools

You can run containers in:

  • Azure
  • On-premises
  • With a service provider

The container technologies can be:

  • Windows Server Containers
  • Linux: You can do this right now in Azure

Management tools:

  • PowerShell support will be coming
  • Docker
  • Others

I think I read previously that System Center would add support. Visual Studio was demonstrated at Build recently. And lots of dev languages and runtimes are supported. Coders don’t have to write with new SDKs; what’s more important is that Azure Service Fabric will allow you to upload your code and it will handle the containers.

Virtual machines are going nowhere. They will be one deployment option. Sometimes containers are the right choice, and sometimes VMs are. Note: you don’t join containers to AD. It’s a bit of a weird thing to do, because the containers are exact clones with duplicate SIDs. So you need to use a different form of authentication for services.

When can You Play With Containers?

  • Preview of Windows Server Containers: coming this summer
  • Preview of Hyper-V Containers: planned for this year

Containers will be in the final RTM of WS2016. You will be able to learn more on the Windows Server Containers site when content is added.

Demos

Taylor Brown, who ran all the demos, finished up the session with a series of demos.

docker history <name of image> < shows how the image was built – it looks like the dockerfile contents in reverse order. Note that passwords used in this file to install software appear to be legible in the image.

He tries to run a GUI tool from a container console – no joy. Instead, you can remote desktop into the container (get the IP of the container instance) and then run the tool in the Remote Desktop session. The tool run is Process Explorer.

If you run a system tool in the container, e.g. Process Explorer, then you only see things within the container. If you run a tool on the machine, then you have a global view of all processes.

If you run Task Manager, go to Details and add the session column, you can see which processes are owned by the host machine and which are owned by containers. Session 0 is the machine.

Runs docker run -it windowsservercore cmd < this does not include --rm, which means we want to keep the sandbox when the container is closed. Typing exit in the container’s CMD will end the container, but the sandbox is kept.

Running docker ps -a shows the container ID and when the container was created/exited.

Running docker commit with the container ID and a name converts the sandbox into an image … all changes to the container are stored in the new image.
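As a sketch (the container ID and image name here are hypothetical):

docker ps -a < find the ID of the exited container
docker commit 4f66ad9b demoapp:modified < preserve its sandbox as a new image
docker images < the new image now shows in the repository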

Other notes:

The IP of the container is injected in, and is not the result of a setup. A directory can be mapped into a container. This is how things like databases are split into stateless and stateful; the container runs the services and the database/config files are injected into the container. Maybe SMB 3.0 databases would be good here?

Questions

  • How big are containers on the disk? The images are in the repository. There is no local copy – they are referred to over the network. The footprint of the container on the machine is the running state (memory, CPU, network, and sandbox), the size of which is dictated by your application.
  • There is no plan to build HA tech into containers. Build HA into the application. Containers are stateless. Or you can deploy containers in HA VMs via Hyper-V.
  • Is a full OS running in the container? They have a view of a full OS. The image of Core that Microsoft will ship is almost a full image of Windows … but remember that the image is referenced from the repository, not copied.
  • Is this Server App-V? No. Conceptually at a really really high level they are similar, but Containers offer a much greater level of isolation and the cross-platform/cloud/runtime support is much greater too.
  • Each container can have its own IP and MAC address. It can use the Hyper-V virtual switch, and NATing will also be possible as an alternative at the virtual switch. Lots of other virtualization features are available too.
  • Behind the scenes, the image is an exploded set of files in the repository. No container can peek into the directory of another container.
  • Microsoft are still looking at which of their own products will be supported in Containers. High priority examples are SQL and IIS.
  • Memory scale: It depends on the services/applications running in the containers. There is some kind of memory de-duplication technology here too for the common memory set – common memory is reused – and further optimizations will be introduced over time.
  • There is work being done to make sure you pull down the right OS image for the OS on your machine.
  • If you reboot a container host what happens? Container orchestration tools stop the containers on the host, and create new instances on other hosts. The application layer needs to deal with this. The containers on the patched host stop/disappear from the original host during the patching/reboot – remember: they are stateless.
  • SMB 3.0 is mentioned as a way to present stateful data to stateless containers.
  • Microsoft is working with Docker and 3 containerization orchestration vendors: Docker Swarm, Kubernetes and Mesosphere.
  • Coding: The bottom edge of Docker Engine has Linux drivers for compute, storage, and network. Microsoft is contributing Windows drivers. The upper levels of Docker Engine are common. The goal is to have common tooling to manage Windows Containers and Linux containers.
  • Can you do some kind of IPC between containers? Networking is the main way to share data, instead of IPC.

Lesson: run your applications in normal VMs if:

  • They are stateful and that state cannot be separated
  • You cannot handle HA at the application layer

Personal Opinion

Containers are quite interesting, especially for a nerd like me who likes to understand how new techs like this work under the covers. Containers fit perfectly into the “treat them like cattle” model and therefore, in my opinion, have a small market of very large deployments of stateless applications. I could be wrong, but I don’t see Containers fitting into more normal situations. I expect Containers to power lots of public cloud task-based stuff. I can see large customers using it in the cloud, public or private. But it’s not a tech for SMEs or legacy apps. That’s why Hyper-V is important.

But … nested virtualization, not that it was specifically mentioned, oh that would be very interesting 🙂

I wonder how containers will be licensed and revealed via SKUs?

My Microsoft Ignite 2015 Session Content

Microsoft recorded and shared a video of my session, The Hidden Treasures of Windows Server 2012 R2 Hyper-V, along with the slides.

My second session, End-to-End Azure Site Recovery Solutions for Small-Medium Enterprises, in one of the community theatres, was not recorded, so I have placed the slides up on SlideShare.