Ignite 2016 – Discover What’s New In Windows Server 2016 Virtualization

This post is a collection of my notes from Ben Armstrong's (Principal Program Manager Lead for Hyper-V) session (original here) on the features of WS2016 Hyper-V. The session is an overview of the new features, why they're there, and what they do. There are no deep dives.

A Summary of New Features

Ben showed a summary of what was introduced in the last 2 versions of Hyper-V. A lot of that stuff still cannot be found in vSphere.


He compared that with what's new in WS2016 Hyper-V. There's as much new stuff in this one release as there was in the last two!


Security

The first area that Ben will cover is security. The number of attack vectors is up, attacks are on the rise, and the sophistication of those attacks is increasing. Microsoft wants Windows Server to be the best platform. Cloud is a big deal for customers – some are worried about industry and government regulations preventing adoption of the cloud. Microsoft wants to fix that with WS2016.

Shielded Virtual Machines

Two basic concepts:

  • A VM can only run on a trusted & healthy host – a rogue admin/attacker cannot start the VM elsewhere. A highly secured Host Guardian Service must authorize the hosts.
  • A VM is encrypted by the customer/tenant using BitLocker – a rogue admin/attacker/government agency cannot inspect the VM’s contents by mounting the disk(s).


There are levels of shielding, so it's not an all-or-nothing feature.

Key Storage Drive for Generation 1 VMs

Shielding, as above, requires Generation 2 VMs. You can also offer some security for Generation 1 virtual machines with the Key Storage Drive. It's not as secure as shielded virtual machines or virtual TPM, but it does give us a safe way to use BitLocker inside a Generation 1 virtual machine – required for older applications that depend on older operating systems (older OSs cannot be used in Generation 2 virtual machines).

 


Virtual Secure Mode (VSM)

We also have Guest Virtual Secure Mode:

  • Credential Guard: protects identities against pass-the-hash by hiding LSASS in Virtual Secure Mode (VSM) … inside a VM with a Windows 10 or Windows Server 2016 guest OS! Malware running with admin rights cannot steal your credentials in that VM.
  • Device Guard: protects the critical kernel parts of the guest OS against rogue software, again by hiding them in VSM, in a Windows 10 or Windows Server 2016 guest OS.


Secure Boot for Linux Guests

Secure boot was already there for Windows in Generation 2 virtual machines. It’s now there for Linux guest OSs, protecting the boot loader and kernel against root kits.


Host Resource Protection (HRP)

Ben hopes you never see this next feature in action in the field. Host Resource Protection is there to protect hosts/VMs from a denial of service (DoS) attack against a host by someone inside a VM. The scenario: you have an online application running in a VM. An attacker compromises the application (for example, via SQL injection) and gets into the guest OS of the VM. They're isolated from other VMs by the hypervisor and hardware/DEP, so they attack the host with a DoS and consume resources.

A new feature from Azure, called HRP, detects that a VM is aggressively consuming resources in certain patterns and starts to starve it of resources, slowing the DoS attack to the point of being futile. This feature will be of particular interest to:

  • Companies hosting external facing services on Hyper-V/Windows Azure Pack/Azure Stack
  • Hosting companies using Hyper-V/Windows Azure Pack/Azure Stack


This is another great example of on-prem customers getting the benefits of Azure, even if they don't use Azure. Microsoft developed this solution to protect against the many unsuccessful DoS attacks from Azure VMs, and we get it for free for our on-prem or hosted Hyper-V hosts. If you see this happening, the status of the VM will switch to Host Resource Protection.
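
Host Resource Protection is controlled per VM on the virtual processor settings. Here's a minimal sketch of checking and enabling it with the WS2016 Hyper-V PowerShell module; the VM name is just a placeholder:

  # Check whether Host Resource Protection is enabled for a VM's virtual processors
  Get-VMProcessor -VMName "WEB01" | Select-Object VMName, EnableHostResourceProtection

  # Turn it on for a VM that hosts an internet-facing workload
  Set-VMProcessor -VMName "WEB01" -EnableHostResourceProtection $true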

Security Demos

Ben starts with virtual TPM. The Windows 10 VM has a virtual TPM enabled and we see that the C: drive is encrypted. He shuts down the VM to show us the TPM settings of the VM. We can optionally encrypt the state and live migration traffic of the VM – that means a VM is encrypted at rest and in transit. There is a “performance impact” for this optional protection, which is why it’s not on by default. Ben also enables shielding – and he loses console access to the VM – the only way to connect to the machine is to remote desktop/SSH to it.

Note: if he was running the full host guardian service (HGS) infrastructure then he would have had no control over shielding as a normal admin – only the HGS admins would have had control. And even the HGS admins have no control over BitLocker.
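
For reference, here's a minimal sketch of the virtual TPM part of that demo using the WS2016 Hyper-V PowerShell module. The VM name is a placeholder, and this uses a local key protector rather than a full HGS deployment, so it is lab-grade only:

  # Assign a local key protector to the VM (a real shielded deployment would use HGS instead)
  Set-VMKeyProtector -VMName "Win10-VM" -NewLocalKeyProtector

  # Enable the virtual TPM so BitLocker can be used inside the guest OS
  Enable-VMTPM -VMName "Win10-VM"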

He switches to a Generation 1 virtual machine with Key Storage Drive enabled. BitLocker is running. In the VM settings (Generation 1) we see Security > Key Storage Drive Enabled. Under the hood, an extra virtual hard disk is attached to the VM (not visible in the normal storage controller settings, but visible in Disk Management in the guest OS). It's a small 41 MB NTFS volume. The BitLocker keys are stored there instead of in a TPM – virtual TPM is only available in Generation 2 – using the same sorts of tech/encryption/methods to secure the contents of the Key Storage Drive. It cannot be as secure as virtual TPM, but it is better than not having BitLocker at all. Microsoft can make the same promises about data-at-rest encryption for Generation 1 VMs, but it's still not as good as a Generation 2 VM with vTPM, or a shielded VM (which requires Generation 2).

Availability

The next section is all about keeping services up and running in Hyper-V, whether outages are caused by upgrades or by infrastructure issues. Everyone has outages, and Microsoft wants to reduce their impact. Microsoft studied the common causes and started to tackle them in WS2016.

Cluster OS Rolling Upgrades

Microsoft is planning 2-3 updates per year for Nano Server, plus there'll be other OS upgrades in the future. You cannot in-place upgrade a cluster node while it is in the cluster, and in the past we could only do cluster-to-cluster migrations to adopt new versions of Windows Server/Hyper-V. Now, we can:

  1. Remove cluster node 1
  2. Rebuild cluster node 1 with the new version of Windows Server/Hyper-V
  3. Add cluster node 1 to the old cluster – the cluster runs happily in mixed-mode for a short period of time (weeks), with failover and Live Migration between the old/new OS versions.
  4. Repeat steps 1-3 until all nodes are up to date
  5. Upgrade the cluster functional level – Update-ClusterFunctionalLevel (see below for “Emulex incident”)
  6. Upgrade the VMs’ version level

Zero VM downtime, zero new hardware – 2 node cluster, all the way to a 64 node cluster.

If you have System Center:

  1. Upgrade to SCVMM 2016.
  2. Let it orchestrate the cluster upgrade (above)

Support starts with WS2012 R2 to WS2016. Re-read that statement: there is no support for W2008/W2008 R2/WS2012. Re-read that last statement. No need for any questions now!


To avoid an “Emulex incident” (you upgrade your hosts, a driver/firmware fails even though it is certified, and the vendor takes 9 months to fix the issue), you can:

  1. Do the node upgrades.
  2. Delay the upgrade to the cluster functional level for a week or two
  3. Test your hosts/cluster for driver/firmware stability
  4. Roll back the cluster nodes to the older OS if there is an issue – only possible while the cluster functional level is still on the older version.

And there’s no downtime because it’s all leveraging Live Migration.
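
When you're happy with stability, the last steps come down to a couple of cmdlets. A minimal sketch, assuming all nodes are already rebuilt on WS2016:

  # Check the current cluster functional level (8 = WS2012 R2, 9 = WS2016)
  Get-Cluster | Select-Object Name, ClusterFunctionalLevel

  # Raise the functional level - only do this once you're sure you won't need to roll back
  Get-Cluster | Update-ClusterFunctionalLevel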

Virtual Machine Upgrades

The VM configuration version upgrade was done automatically when you moved a VM from version X to a version X+1 host. Now you control it (which is required for the rolling upgrade above to work). Version 8 is the WS2016 level.
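
A minimal sketch of checking and upgrading configuration versions (the VM name is a placeholder; the VM must be off to upgrade it, and the change is one-way):

  # List the configuration version of every VM on the host (5.0 = WS2012 R2, 8.0 = WS2016)
  Get-VM | Select-Object Name, Version

  # Upgrade a VM once you're sure it never needs to run on an older host again
  Stop-VM -Name "APP01"
  Update-VMVersion -Name "APP01"
  Start-VM -Name "APP01"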


Failover Clustering

Microsoft identified two top causes of outages in customer environments:

  • Brief storage “outages” – an IO failure used to crash the guest OS of a VM. In WS2016, when an IO fails, the VM is put into a paused-critical state (for up to 24 hours, by default) and resumes as soon as the storage comes back.
  • Transient network errors – clustered hosts being isolated caused unnecessary VM failover (a reboot), even if the VM was still on the network. A very common 30-second network outage would cause a Hyper-V cluster to panic, up to and including WS2012 R2 – attempted failovers on every node and/or quorum craziness! That's fixed in WS2016: the VMs stay on the host (in an unmonitored state) if they are still networked (see network protection from WS2012 R2). Clustering waits (by default) for 4 minutes before failing over that VM. If a host glitches 3 times in an hour, it is automatically quarantined after resuming from the 3rd glitch (its VMs are live migrated to other nodes) for 2 hours, allowing operator inspection.
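
The timeouts Ben mentions are exposed as cluster common properties, so you can inspect or tune them. A minimal sketch, assuming the WS2016 defaults:

  # How long a VM stays unmonitored while its host is isolated (seconds; default 240 = 4 minutes)
  (Get-Cluster).ResiliencyDefaultPeriod

  # How many isolations per hour before a node is quarantined, and for how long (seconds)
  (Get-Cluster).QuarantineThreshold
  (Get-Cluster).QuarantineDuration

  # Example: extend the quarantine window to 4 hours
  (Get-Cluster).QuarantineDuration = 14400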


Guest Clustering with Shared VHDX

Version 1 of this in WS2012 R2 was limited – it supported guest clusters, but we couldn't do Live Migration, replication, or backup of the VMs/shared VHDX files. Nice idea, but it couldn't really be used in production (it was supported, but functionally incomplete) as an alternative to virtual fibre channel or guest iSCSI.

WS2016 has a new abstracted form of Shared VHDX – it’s even a new file format. It supports:

  • Backup of the VMs at the host level
  • Online resizing
  • Hyper-V Replica (which should lead to ASR support) – if the workload is important enough to cluster, then it’s important enough to replicate for DR!


One feature that does not work (yet) is Storage Live Migration. Checkpoints can be done “if you know what you are doing” – be careful!
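
The new file format is the VHD Set (.vhds). A minimal sketch of creating one on a Cluster Shared Volume and attaching it to two guest cluster nodes – paths and VM names are examples, and I'm assuming that attaching the .vhds to each node is what presents it as the shared disk:

  # Create a new VHD Set file for the guest cluster's shared data disk
  New-VHD -Path "C:\ClusterStorage\Volume1\SQLData.vhds" -SizeBytes 200GB -Dynamic

  # Attach the same VHD Set to both guest cluster nodes
  Add-VMHardDiskDrive -VMName "SQLNODE1" -Path "C:\ClusterStorage\Volume1\SQLData.vhds"
  Add-VMHardDiskDrive -VMName "SQLNODE2" -Path "C:\ClusterStorage\Volume1\SQLData.vhds"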

Replica Support for Hot-Add VHDX

We could hot-add a VHDX file to a VM, but we could not add it to replication if the VM was already being replicated – we had to re-replicate the VM! That changes in WS2016, thanks to the concept of replica sets. A new VHDX is added to a “not-replicated” set, and we can move it into the replicated set for that VM.


Hot-Add/Remove VM Components

We can hot-add and hot-remove vNICs to/from running VMs. Generation 2 VMs only, with any supported Windows or Linux guest OS.

We can also hot-add or hot-remove RAM to/from a VM, assuming:

  • There is free RAM on the host to add to the VM
  • There is unused RAM in the VM to remove from the VM

This is great for those VMs that cannot use Dynamic Memory:

  • No support by the workload
  • A large RAM VM that will benefit from guest-aware NUMA

A nice GUI side-effect is that guest OS memory demand is now reported in Hyper-V Manager for all VMs.
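
A minimal sketch of both operations against a running VM (names and sizes are examples):

  # Hot-add a second vNIC to a running Generation 2 VM
  Add-VMNetworkAdapter -VMName "APP01" -SwitchName "External" -Name "Backup"

  # Hot-remove it again
  Remove-VMNetworkAdapter -VMName "APP01" -Name "Backup"

  # Resize the (static) memory of a running VM - up or down, within what the host and guest allow
  Set-VMMemory -VMName "APP01" -StartupBytes 8GB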

Production Checkpoints

Referring to what used to be called (Hyper-V) snapshots, which were renamed to checkpoints to stop people from confusing them with SAN and VSS snapshots – yes, people really do get that confused – I've met them.

Checkpoints (what are now called Standard Checkpoints) were not supported by many applications in a guest OS because they lead to application inconsistency. WS2016 adds a new default checkpoint type called a Production Checkpoint. This basically uses backup technology (and IT IS STILL NOT A BACKUP!) to create an application-consistent checkpoint of a VM. If you apply (restore) the checkpoint:

  • The VM will not boot up automatically
  • The VM will boot up as if it was restoring from a backup (hey dumbass, checkpoints are STILL NOT A BACKUP!)

For the stupid people: if you want to back up VMs, use a backup product. Altaro goes from free to quite affordable. Veeam is excellent. And Azure Backup Server gives you OPEX-based local backup plus cloud storage for the price of just the cloud component. And there are many other BACKUP solutions for Hyper-V.

Now with production checkpoints, MSFT is OK with you using checkpoints with production workloads …. BUT NOT FOR BACKUP!
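
Checkpoint behaviour is configured per VM. A minimal sketch (the VM name is a placeholder):

  # Production is the new default; ProductionOnly refuses to fall back to a standard checkpoint
  Set-VM -Name "SQL01" -CheckpointType Production

  # Take an application-consistent checkpoint (still not a backup!)
  Checkpoint-VM -Name "SQL01" -SnapshotName "Before CU install"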


Demos

Ben does some demos of the above. His demo rig is based on nested virtualization. He comments that:

  • The impact of CPU/RAM is negligible
  • There is around a 25% impact on storage IO

Storage

The foundation of virtualization/cloud that makes or breaks a deployment.

Storage Quality of Service (QOS)

We had a basic system in WS2012 R2:

  • Set max IOPS rules per VM
  • Set min IOPS alerts per VM that were damned hard to get info from (WMI)

And virtually no-one used the system. Now we get storage QoS that’s trickled down from Azure.

In WS2016:

  • We can set reserves (that are applied) and limits on IOPS
  • Available for Scale-Out File Server and block storage (via CSV)
  • Metrics rules for VHD, VM, host, volume
  • Rules for VHD, VM, service, or tenant
  • Distributed rule application – fair usage, managed at storage level (applied in partnership by the host)
  • PoSH management in WS2016, and SCVMM/SCOM GUI

You can do single-instance or multi-instance policies:

  • Single-instance: IOPS are shared by a set of VMs, e.g. a service or a cluster, or this department only gets 20,000 IOPS.
  • Multi-instance: the same rule is applied individually to each VM in a group of VMs, e.g. Azure guarantees at least X IOPS to each Standard storage VHD.
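
A minimal sketch of how this looks in PowerShell on the storage cluster; the policy names and numbers are examples, and my understanding is that a shared/single-instance policy is created as Aggregated while a per-VM/multi-instance policy is created as Dedicated:

  # Single-instance: one pool of IOPS shared by everything the policy is applied to
  $dept = New-StorageQosPolicy -Name "Finance" -PolicyType Aggregated -MaximumIops 20000

  # Multi-instance: every VHD the policy is applied to gets its own reserve and limit
  $std = New-StorageQosPolicy -Name "Standard" -PolicyType Dedicated -MinimumIops 500 -MaximumIops 5000

  # On the Hyper-V host, apply a policy to a VM's disks
  Get-VM -Name "FIN01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $std.PolicyId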


Discrete Device Assignment – NVMe Storage

DDA allows a virtual machine to connect directly to a device. An example is a VM connecting directly to extremely fast NVMe flash storage.

Note: we lose Live Migration and checkpoints when we use DDA with a VM.
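
DDA is configured with PowerShell on the host. A heavily simplified sketch, assuming you already know the device's location path (the value below is made up; real deployments also have extra steps such as MMIO space configuration):

  # The PCIe location path of the NVMe device (example value - get the real one from Device Manager)
  $location = "PCIROOT(0)#PCI(0300)#PCI(0000)"

  # DDA VMs cannot be saved or live migrated, so force TurnOff as the stop action
  Set-VM -Name "FAST01" -AutomaticStopAction TurnOff

  # Detach the device from the host and hand it to the VM
  Dismount-VMHostAssignableDevice -LocationPath $location -Force
  Add-VMAssignableDevice -LocationPath $location -VMName "FAST01"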


Evolving Hyper-V Backup

Lots of work was done here. WS2016 has its own block change tracking (Resilient Change Tracking), so we don't need a buggy 3rd-party filter driver running in the kernel of the host to do incremental backups of Hyper-V VMs. This should speed up the support of new Hyper-V versions by the backup vendors (except for you-know-who-yellow-box-backup-to-tape-vendor-X, obviously!).

Large clusters had scalability problems with backup. VSS dependencies have been lessened to allow reliable backups of 64 node clusters.

Microsoft has also removed the need for hardware VSS snapshots (a big source of bugs), but you can still make use of hardware features that a SAN can offer.

ReFS Accelerated VHDX Operations

ReFS is the preferred file system for storing VMs in WS2016. ReFS works using metadata that links to data blocks. This abstraction allows very fast operations:

  • Fixed VHD/X creation (seconds instead of hours)
  • Dynamic VHD/X expansion
  • Checkpoint merge, which impacts VM backup

Note: you'll have to reformat WS2012 R2 ReFS volumes to get the new version of ReFS.
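
The difference is easy to see when you create a fixed VHDX on an ReFS volume versus NTFS. A minimal sketch (the path and size are examples):

  # On NTFS this zeroes out every block; on WS2016 ReFS it's a metadata operation and completes in seconds
  New-VHD -Path "D:\VMs\BigData.vhdx" -SizeBytes 500GB -Fixed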

Graphics

A lot of people use Hyper-V (directly or in Azure) for RDS/Citrix.

RemoteFX Improvements


The AVC444 thing is a lossless codec – lossless 3D rendering, apparently … that’s gobbledegook to me.

DDA Features and GPU Capabilities

We can also use DDA to connect VMs directly to GPUs … this is what the Azure N-Series VMs are doing with high-end NVIDIA graphics cards.

  • DirectX, OpenGL, OpenCL, CUDA
  • Guest OS: Server 2012 R2, Server 2016, Windows 10, Linux

The h/w requirements are very specific and detailed. For example, I have a laptop that I can do RemoteFX with, but I cannot use it for DDA (SR-IOV is not supported on my machine).

Headless Virtual Machine

A VM can be booted without display devices. Reduces the memory footprint, and simulates a headless server.

Operational Efficiency

Once again, Microsoft is improving the administration experience.

PowerShell Direct

You can now remote PowerShell into a VM via the VMBus from the host – this means you do not need any network access or domain join. You can do either:

  • Enter-PSSession for an interactive session
  • Invoke-Command for a once-off instruction

Supports:

  • Host: Windows 10/WS2016
  • Guest: Windows 10/WS2016

You do need credentials for the guest OS, and you need to do it via the host, so it is secure.

This is one of Ben’s favourite WS2016 features – I know he uses it a lot to build demo rigs and during demos. I love it too for the same reasons.
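
A minimal sketch of both options from an elevated prompt on the host (the VM name and credentials are placeholders):

  $cred = Get-Credential   # credentials for the guest OS

  # Interactive session into the VM over the VMBus - no network or domain membership needed
  Enter-PSSession -VMName "NANO01" -Credential $cred

  # One-off command against the guest OS
  Invoke-Command -VMName "NANO01" -Credential $cred -ScriptBlock { Get-NetIPAddress }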

PowerShell Direct – JEA and Sessions

The following are extensions of PowerShell Direct and PowerShell remoting:

  • Just Enough Administration (JEA): An admin has no rights to a remote server with their normal account. They use a JEA config when connecting to the server that grants them just enough rights to do their work. Their elevated rights are limited to that machine via a temporary user that is deleted when their session ends. This really limits what malware or an attacker can target.
  • Just-in-Time Administration (JITA): An admin can request rights for a short amount of time from MIM. They must enter a justification, and the company can enforce management approval in the process.

vNIC Identification

Name the vNICs and make that name visible in the guest OS. Really useful for VMs with more than 1 vNIC because Hyper-V does not have consistent device naming.
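
A minimal sketch of naming a vNIC on the host and reading that name inside the guest. The names are examples, and the exact advanced-property display name shown below is my assumption:

  # On the host: add a named vNIC and turn on device naming so the guest can see the name
  Add-VMNetworkAdapter -VMName "APP01" -SwitchName "Storage vSwitch" -Name "Storage"
  Set-VMNetworkAdapter -VMName "APP01" -Name "Storage" -DeviceNaming On

  # In the guest: the vNIC name surfaces as an advanced property of the network adapter
  Get-NetAdapterAdvancedProperty -DisplayName "Hyper-V Network Adapter Name"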


Hyper-V Manager Improvements

Yes, it’s the same MMC-based Hyper-V Manager that we got in W2008, but with more bells and whistles.

  • Support for alternative credentials
  • Connect to a host IP address
  • Connect via WinRM
  • Support for high-DPI monitors
  • Manage WS2012, WS2012 R2 and WS2016 from one HVM – HVM in Win10 Anniversary Update (The big Redstone 1 update in Summer 2016) has this functionality.

VM Servicing

MS found that the vast majority of customers never updated the Integration services/components (ICs) in the guest OS of VMs. It was a horrible manual process – or one that was painful to automate. So customers ran with older/buggy versions of ICs, and VMs often lacked features that the host supported!

ICs are updated in the guest OS via Windows Update on WS2016. Problem sorted, assuming proper testing and correct packaging!

MSFT plans to release IC updates via Windows Update to WS2012 R2 in a month, preparing those VMs for migration to WS2016. Nice!

Core Platform

Ben was running out of time here!

Delivering the Best Hyper-V Host Ever

This was the Nano Server push. Honestly – I’m not sold. Too difficult to troubleshoot and a nightmare to deploy without SCVMM.

I do use Nano in the lab. Later, Ben does a demo. I'd not seen VM status in the Nano console before, which Ben shows – the only time I've used the console is to verify network settings that I set remotely using PoSH. There is also the ability to delete a virtual switch from the console.

Nested Virtualization

Yay! Ben admits that nested virtualization was done for Hyper-V Containers on Azure, but those of us who need labs or training environments can now run multiple working hosts & clusters on a single machine!
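
A minimal sketch of preparing a VM to be a nested Hyper-V host (the VM name is a placeholder; dynamic memory has to be off, and MAC spoofing is needed if nested VMs want network access):

  # The VM must be off while you expose the virtualization extensions
  Set-VMProcessor -VMName "LABHOST1" -ExposeVirtualizationExtensions $true
  Set-VMMemory -VMName "LABHOST1" -DynamicMemoryEnabled $false
  Set-VMNetworkAdapter -VMName "LABHOST1" -MacAddressSpoofing On

  # Then start the VM and install the Hyper-V role inside the guest OS as normal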

VM Configuration File

Short story: it’s binary instead of XML, improving performance on dense hosts. Two files:

  • .VMCX: Configuration
  • .VMRS: Run state

Power Management

Client Hyper-V was impacted badly by Windows 8 era power management features like Connected Standby. That included Surface devices. That’s sorted now.

Development Stuff

This looks like a seed for the future (and I like the idea of what it might lead to, and I won’t say what that might be!). There is now a single WMI (Root\HyperVCluster\v2) view of the entire Hyper-V cluster – you see a cluster as one big Hyper-V server. It really doesn’t do much now.
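
You can poke at it with CIM. A hedged sketch – the namespace is the one Ben gives, but the class name below (Msvm_ComputerSystem, the class the normal Hyper-V v2 namespace uses for VMs) is my assumption about how the cluster-wide view is exposed:

  # List the classes exposed by the cluster-wide Hyper-V namespace
  Get-CimClass -Namespace "root\HyperVCluster\v2" | Select-Object CimClassName

  # Assumption: VMs across the whole cluster appear as Msvm_ComputerSystem instances, as in root\virtualization\v2
  Get-CimInstance -Namespace "root\HyperVCluster\v2" -ClassName "Msvm_ComputerSystem" | Select-Object ElementName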

And there’s also something new called Hyper-V sockets for Microsoft partners to develop on. An extension of the Windows Socket API for “fast, efficient communication between the host and the guest”.

Scale Limits

The numbers are “Top Gear stats” but, according to a session earlier in the week, these are driven by Azure (Hyper-V’s biggest customer). Ben says that the numbers are nuts and we normals won’t ever have this hardware, but Azure came to Hyper-V and asked for bigger numbers for “massive scale”. Apparently some customers want massive super computer scale “for a few months” and Azure wants to give them an OPEX offering so those customers don’t need to buy that h/w.

Note: Ben highlights a typo in the max RAM per VM – it should say 12 TB max for a VM … what's 4 TB between friends?!


Ben wraps up with a few demos.

Ignite 2016 – Extend the Microsoft RDS platform in Azure through Citrix solutions

This post is my set of notes from the session that shows us how Citrix are extending Azure functionality, including the 1st public demo of XenApp Express, which will replace Azure RemoteApp in 2017.

The speakers are:

  • Scott Manchester (main presenter), Principal Group Program Manager, Microsoft
  • Jitendra Deshpande, Citrix
  • Kireeti Valicherla, Citrix

RDS

RDS is a Microsoft-only solution with multiple goals.


Two on-prem solutions:

  • Session-based computing
  • VDI

In the cloud:

  • Session-based computing: RDS in VMs or the deprecated Azure RemoteApp
  • VDI “on Windows 10” … Manchester alludes to some licensing change to allow Enterprise edition of the desktop to be used in cloud-based VDI, which is not possible in any way with a desktop OS right now (plenty do it, breaking licensing rules, and some “do it” using a Server OS with GUI).

RDS Improvements in WS2016

  • Increased performance
  • Enhanced scale in the broker
  • Optimized for the cloud – making it easier to deploy; some of this is Azure, some RDS, some licensing.

Azure N-Series

There are a set of VMs that are ideal for graphics intensive RDS/Citrix workloads. They use physical NVIDIA GPUs that are presented to the VM directly using Hyper-V DDA (as in WS2016 Hyper-V).

I skip some of the other stuff that is covered in other sessions.

Citrix

Kireeti from Citrix XenApp/XenDesktop takes the stage. He's focused on XenApp Express, a new from-Azure service that will be out in 2017.

XenApp 7.11 has Day 1 support for WS2016:

  • Host WS2016 workloads
  • Host XenApp and XenDesktop infrastructure
  • Workload provisioning on ARM
  • Deliver new universal apps to any device
  • Accelerate app migration with AppDNA

XenApp/XenDesktop For N-Series VMs

HDX can be used with N-Series Azure VMs. This includes graphics professionals and designers on “single user Windows 10 CBB VMs” with multi-monitor NVENC H.264 hardware encoding.

Options for Azure Migration

Jitendra of Citrix takes over. He works on XenApp cloud and XenApp Express.


You can extend workloads to Azure, host workloads in Azure, or run on a Citrix-managed service in Azure. In the latter, the management is in Citrix Cloud, and your workload runs in Azure. Citrix seamlessly updates the management pieces, and you just use them without doing upgrades.

Citrix showed the Citrix/Azure offerings that are available today and planned for the future.


Back to Kireeti.

Next Generation Service for Remoting Apps

XenApp Express, out of the Azure Marketplace, will be the successor to Azure RemoteApp.


Citrix Cloud will provide the management – it's actually hosted on Azure. You bring your own Windows Server images into XenApp Express, much like we do with Azure RemoteApp – it's an image with the apps pre-installed.

Bad news: the customer must have RDS CALs with Software Assurance (Volume Licensing – and yes, SA is required for cloud usage) or RDS SALs (SPLA). The cost of Azure RemoteApp included the monthly cost of RDS licensing.

The VMs that are deployed are run in your Azure subscription and consume credit/billing there.

Management is done via another portal in Citrix Cloud. Yes, you’ll need to use Azure Portal and the Citrix Cloud portal.


On the release timeline, a technical preview will be some time in Q4 of this year.


Next up, a demo, by Jitendra (I think – we cannot see the presenters in the video). The demo is with a dev build, which will likely change before the tech preview is launched.

  1. You “buy” Citrix XenApp Express in the Azure Marketplace – this limits transactions to certain kinds of subscriptions, e.g. EA but not CSP.
  2. You start by creating an App Collection – similar to Azure RemoteApp. You can make it domain-joined or not-domain joined. A domain should be available from your Azure VNet.
  3. Add your Azure subscription details – subscription, resource group (region), VNET, subnet.
  4. Enter your domain join details – very similar to Azure RemoteApp – domain, OU, computer account domain-join account name/password.
  5. You can use a Citrix image or upload your own image. Here you also select a VM series/size, configure power settings, etc, to control performance/scale/pricing.
  6. You can set your expected max number of simultaneous users.
  7. The end of the wizard shows an estimated cost calculator for your Azure subscription.
  8. You click Start Deployment
  9. Citrix reaches into your subscription and creates the VMs.
  10. Afterwards, you’ll need to publish apps in your app collection.
  11. Then you assign users from your domain – no mention if this is from a DC or from Azure AD.
  12. The user uses Citrix Receiver or the HTML 5 client to sign into the app collection and use the published apps.

The Best Way To Deliver Windows 10 Desktop From The Cloud

Cloud-based VDI using a desktop OS – not allowed up to now under Windows desktop OS licensing.

There are “new licensing changes” to move Windows 10 workloads to Azure. Citrix XenDesktop will be based on this.


  • XenDesktop for Windows 10 on Azure is managed from Citrix Cloud (as above). You manage and provision the service from here, managing what is hosted in Azure.
  • Windows 10 Enterprise CBB licensing is brought by the customer. The customer’s Azure subscription hosts the VDI VMs and your credit is consumed or you pay the Azure bill. They say it must be EA/SA, but that’s unclear. Is that EA with SA only? Can an Open customer with SA do this? Can a customer getting the Windows 10 E3 license via CSP do this? We do not know.

Timeline: GA in Q4 of this year.


Next up, a demo.

  1. They are logged into Citrix Cloud, which is first purchased via the Azure Marketplace – limited to a small set of Azure subscriptions, e.g. EA but not CSP at the moment.
  2. A hosting connection to an Azure subscription is set up already.
  3. They create a “machine catalog” – a bunch of machines.
  4. The wizard only allows a desktop OS (this is a Windows 10 service). The wizard allows pooled/dedicated VMs, and you can configure how user changes are saved (local disk, virtual disk, discarded). You then select the VHD master image, which you supply to Citrix. You can use Standard (HDD) or Premium (SSD) storage in Azure for storing the VMs. And then you select the quantity of VMs to create and the series/size (from Azure) to use – this will include the N-Series VMs when they are available. There's more – like VM networking & domain join – that you can configure (they don't show this).
  5. He signs into a Windows 10 Azure VM from a Mac, brokered by Citrix Cloud.

That’s all folks!

Microsoft News – 30 September 2015

Microsoft announced a lot of stuff at AzureCon last night, so there are lots of “launch” posts describing the features. I also found a glut of 2012 R2 Hyper-V related KB articles & hotfixes from the last month or so.

Hyper-V

Windows Server

Azure

Office 365

EMS

Microsoft News – 28 September 2015

Wow, the year is flying by fast. There’s a bunch of stuff to read here. Microsoft has stepped up the amount of information being released on WS2016 Hyper-V (and related) features. EMS is growing in terms of features and functionality. And Azure IaaS continues to release lots of new features.

Hyper-V

Windows Client

Azure

System Center

Office 365

EMS

Security

Miscellaneous

Microsoft News – 7 September 2015

Here’s the recent news from the last few weeks in the Microsoft IT Pro world:

Hyper-V

Windows Server

Windows

System Center

Azure

Office 365

Intune

Events

  • Meet AzureCon: A virtual event on Azure on September 29th, starting at 9am Pacific time, 5pm UK/Irish time.

Microsoft News – 29 June 2015

As you might expect, there’s lots of Azure news. Surprisingly, there is still not much substantial content on Windows 10.

Hyper-V

Windows Server

Windows Client


Azure

Office 365

EMS

Misc

Microsoft News – 8 April 2015

There's a lot of stuff happening now. The Windows Server vNext Preview expires on April 15th and Microsoft is promising a fix … the next preview isn't out until May (maybe timed with Ignite?). There are rumours of Windows vNext vNext. And there's talk of open-sourcing Windows – which I would hate. Here's the rest of what's going on:

Hyper-V

Windows Server

Windows Client

Azure

Guest on RunAs Radio Talking Azure RemoteApp

Richard Campbell, the host of the RunAs Radio podcast, saw a tweet from me talking about Azure AD, and thought he should ask me back on to have a chat. I had been working on developing training materials on RemoteApp. The use-case that caught my interest was the ability to use RemoteApp to remove the effect of latency. We talk for half an hour about this.


The “design” I talk about in the podcast (recorded a few weeks ago) works, and I’ve presented using it. I’ve written some posts on Petri.com about my experiences:

In the design, virtual DCs, file server, and application servers run as VMs in an Azure network. RemoteApp publishes applications on another network. A VNET to VNET VPN connects the server and RA networks, enabling the RA session hosts to join the domain. Users log into RemoteApp, and then it’s all normal RDS at that point:

  • GPO applies
  • Log in script runs
  • Published applications have fast access to application servers
  • Users save data in the company’s Azure VMs

It’s a nice solution!

Microsoft News – 13 March 2015

Quite a bit of stuff to read since my last aggregation post on the 3rd.

Windows Server

Hyper-V

Windows Client

Azure

Office 365

Intune

Miscellaneous

Microsoft News – 6 January 2015

A few little nuggets to get you back in the swing of things. And yes, I have completely ignored the US-only version 1.2 Azure Pricing Tool that suffers from “The Curse of Zune”.

Hyper-V

Windows Server

System Center

Windows Client

Azure