Microsoft Ignite 2018–Azure Service Fabric Mesh: The Serverless Microservices Platform

Speakers: Chacko Daniel and Deep Kapur.

This is a true dev session … but I’m here and I haven’t written an original line of programming since 1998. Why am I here? Because Service Fabric is cool and it fascinates me. If I wrote code, Service Fabric (along with functions for atomic trigger/action pieces and app services for interface) would be my choice.

Introduction to Service Fabric

  • Mission critical workloads
  • Used for Azure SQL, Power BI, Cosmos DB, IoT Hub, Event Hub, Skype, Cortana, and more.

Offerings

  • Service Fabric on Windows/Linux – bring your own infrastructure
  • Azure Service Fabric – runs on dedicated VM scale sets
  • Azure Service Fabric Mesh – serverless

Future of Application Development

  • Polyglot services connected by L7 networks
  • Multi OS environments
  • Deploy anything in a container
  • Bring your own network to connect to your other services
  • State management and other stuff

Service Fabric Mesh (Public Preview Currently)

  • Focus on applications
  • Microservice and container orchestration
  • Pay for only what you use
  • Intelligent traffic routing
  • Azure manages all infrastructure
  • Auto-scaling on demand
  • Security and compliance
  • Health and monitoring

Mesh Resource Provider Architecture

Inventory Manager takes your input. Cluster allocator finds resources to run your code.


What Can You Use It For?

Ideal for cloud-native applications

  • Any language, any framework
  • Libraries to integrate with your favourite languages
  • Easy H/A state storage with reliable collections
  • Intelligent traffic routing and connectivity

Enable app modernization:

  • Deploy anything and everything in a container
  • Bring your own network
  • More

Demo

An app runs on a SF cluster. Each app is made up of 1+ services. A service can be made HA by running it on many nodes in the SF cluster (replicas or load balanced).

There is a mesh application resource. In the summary we see the services that make up the app, and how many replicas there are of each service. He opens one service and we see the replica(s), numbered 0, 1, 2, etc. The status shows a summary of recent events. In Details we see the physical consumption of the service, the ports (endpoints) it listens on, and environment variables. In Logs we can see the screen output of app log data.

Service Fabric Resource Model

  • Applications and services
  • Networks
  • Gateways
  • Secrets
  • Volumes
  • Routing rules

Simple declarative way to define an application.

Applications and Services Resources

Services describe how a set of containers run:

  • Container image, environment variables, CPU/memory, etc.
  • And more

Gateway and Networks

Connecting two networks together:

  • L4: TCP
  • L7: HTTP/S

It’s a way of connecting the outside world, Internet or another network you own, to the isolated network of the SF cluster.

This is a service fabric gateway, not a VNet gateway.

Secrets Resource

Bad way: environment variable.

Better way: Use KeyVault.

Inline is in the public preview today, e.g. connection strings. Secrets by reference (key vault) is coming.

Volume Resource

General purpose file storage. The container can attach volumes and read and write files using normal disk I/O file APIs. Backed by Azure Files storage or the Service Fabric volume disk. The SF volume disk is on the cluster and is faster – it is replicated to the nodes where your service has a replica (stateful service).

Demo Application Architecture

A cloud-based polyglot application demo that they have built. All of it runs in Linux containers.

  • Front End – reactive.
  • Backend: .NET Core and Node.js.
  • Work gets dropped into a queue.
  • A worker picks up work from the queue and stores data in persistent storage

That's the overview.

They show us a JSON that is used to deploy the SF mesh application: Microsoft.ServiceFabricMesh/applications. Azure Files is being used as file storage. Secrets are being stored inline. A volume disk is also being used for file storage and they define a mount path in the Linux containers of /app/data. There are front end (1), backend (2) and worker services (3) in the application.
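
They don't walk through the exact deployment command at this point, but as a rough sketch (the resource group and file names are placeholders, and the AzureRM cmdlets were current at the time), a Mesh template like that can be pushed as an ordinary ARM deployment from PowerShell:

# Hypothetical names – mesh.json contains the Microsoft.ServiceFabricMesh/applications resource described above
New-AzureRmResourceGroup -Name meshDemoRG -Location westus
New-AzureRmResourceGroupDeployment -ResourceGroupName meshDemoRG -TemplateFile .\mesh.json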

Auto Scaling

Horizontal scaling of services based on:

  • CPU
  • Memory
  • Application provided custom metrics (later)

Application Upgrade

He uses on-PC Azure CLI (PowerShell also available) to push a code upgrade to the SF application.

Routing Rules Resource

  • Services talk to each other inside the application by hostname.
  • They do not implement platform-specific discovery APIs
  • They do not deal with network-level details.
  • Are unaware of the implementation details of other services

Intelligent traffic routing:

  • Done using “Envoy”
  • Advanced HTTP/S traffic routing with load balancing
  • Proxy handles partition resolution and key hashing

Diagnostics and Monitoring

  • Use your favourite APM platform to monitor apps inside containers, e.g. Azure Application Insights
  • Containers write out stdout/stderr logs to a data volume – can be sucked up by Application Insights
  • Azure Monitor for platform events and container metrics

Reliable Collections – Low Latency Storage

Reliable collections allow you to persist state with failover. Uses transactional storage. Storage on a network introduces a “cost”, e.g. latency. Low latency storage is often preferred.

Demo: Scale-Out

Dumps a load of pictures of cats & dogs. Worker numbers increase from 1 to 40 in seconds for 3 services (120 containers). The pictures are categorized and tagged on the fly.

Pricing

You pay for what you use. Container compute duration:

  • Cores per second
  • Memory in GB per second

Costs depend on the region. Container costs are the same in Azure, irrespective of the Azure offering you get them from. So you choose a container offering based on suitability, not price.
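
As a rough illustration (unit prices vary per region, so these numbers are made up): a container allocated 1 core and 2 GB that runs for one hour consumes 3,600 core-seconds and 7,200 GB-seconds; multiply each by the regional per-second rate and add them to get the compute charge for that container.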

Stateful resources:

  • Volume disk: disk size, max IOPS/throughput per disk. Paid for per month.

Reliable collections:

  • Billed per hour based on the size of the reliable collection and the amount of provisioned IOPS.

Recap

What they see: Gaming, media sharing, mission critical business SaaS, IoT data processing for millions of devices, low latency storage applications.

Roadmap

  • Managed service ID
  • Secrets from key vault
  • Routing rules to/from applications
  • Applications across availability zones
  • Persisted state via reliable collections and volume drives
  • Bring your own network to connect to other systems
  • Tooling integration

GA is planned for early next year – probably Build 2019. The preview is free to use.

Go live licenses will be given to early adopters.

Online Windows Server Mini-Conference – June 26th

Microsoft wants to remind you that they have this product called Windows Server, and that it has a Windows Server 2016 release, a cool new administration console, and a future (Windows Server 2019). In order to do that, Microsoft will be hosting an online conference on June 26th with some of the big names behind the creation of Windows Server called the Windows Server Summit.

This event will have a keynote featuring Erin Chapple, Director of Program Management, Cloud + AI (which includes Windows Server). Then the event will break out into a number of tracks with multiple sessions each, covering things like:

  • Hybrid scenarios with Azure
  • Security
  • Hyper-converged infrastructure (Storage Spaces Direct/S2D)
  • Application platform (containers on Windows Server)

The event, on June 26th, starts at 5pm UK/Irish time and runs for 4 hours (12:00 EST). Don’t worry if this time doesn’t suit; the sessions will be available to stream afterwards. Those who tune in live will also have the opportunity to participate in Q&A.

Windows Server 2019 Announced for H2 2018

Last night, Microsoft announced that Windows Server 2019 would be released, generally available, in the second half of 2018. I suspect that the big bash will be Ignite in Orlando at the end of September, possibly with a release that week, but maybe in October – that’s been the pattern lately.

LTSC

Microsoft is referring to WS2019 as a “long term servicing channel release”. When Microsoft started the semi-annual channel, a Server Core build of Windows Server released every 6 months to Software Assurance customers that opt into the program, they promised that the normal builds would continue every 3 years. These LTSC releases would be approximately the sum of the previous semi-annual channel releases plus whatever new stuff they cooked up before the launch.

First, let’s kill some myths that I know are being spread by “someone I know that’s connected to Microsoft” … it’s always “someone I know” that is “connected to Microsoft” and it’s always BS:

  • The GUI is not dead. The semi-annual channel release is Server Core, but Nano Server has been containers-only since last year, and the GUI is an essential element of the LTSC.
  • This is not the last LTSC release. Microsoft views (and recommends) LTSC for non-cloud-optimised application workloads such as SQL Server.
  • No – Windows Server is not dead. Yes, Azure plays a huge role in the future, but Azure Stack and Azure are both powered by Windows, and hundreds of thousands, if not millions, of companies still are powered by Windows Server.

Let’s talk features now …

I’m not sure what’s NDA and what is not, so I’m going to stick with what Microsoft has publicly discussed. Sorry!

Project Honolulu

For those of you who don't keep up with the tech news (that's most IT people), Project Honolulu is a huge effort by MS to replace the Remote Server Administration Tools (RSAT) that you might know as “Administrative Tools” on Windows Server or on an admin PC. These ancient tools were built on MMC.EXE, which was deprecated with the release of W2008!

Honolulu is a whole new toolset built on HTML5 for today and the future. It’s not finished – being built with cloud practices, it never will be – but it’s getting there!

Hybrid Scenarios

Don’t share this secret with anyone … Microsoft wants more people to use Azure. Shh!

Some of the features we (at work) see people adopt first in the cloud are the hybrid services, such as Azure Backup (cloud or hybrid cloud backup), Azure Site Recovery (disaster recovery), and soon I think Azure File Sync (seamless tiered storage for file servers) will be a hot item. Microsoft wants it to be easier for customers to use these services, so they will be baked into Project Honolulu. I think that’s a good idea, but I hope it’s not a repeat of what was done with WS2016 Essentials.

ASR needs more than just “replicate me to the cloud” enabled on the server; that’s the easy part of the deployment that I teach in the first couple of hours in a 2-day ASR class. The real magic is building a DR site, knowing what can be replicated and what cannot (see domain controllers & USN rollback, clustered/replicating databases & getting fired), orchestration, automation, and how to access things after a failover.

Backup is pretty easy, especially if it’s just MARS. I’d like MARS to add backup-to-local storage so it could completely replace Windows Server Backup. For companies with Hyper-V, there’s more to be done with Azure Backup Server (MABS) than just download an installer.

Azure File Sync also requires some thought and planning, but if they can come up with some magic, I’m all for it!

Security

In Hyper-V:

  • Linux will be supported with Shielded VMs.
  • VMConnect support is being added to Shielded VMs for support reasons – it’s hard to fix a VM if you cannot log into it via “console” access.
  • Encrypted Network Segments can be turned on with a “flip of a switch” for secure comms – that could be interesting in Azure!

Windows Defender ATP (Advanced Threat Protection) is a Windows 10 Enterprise feature that’s coming to WS2019 to help stop zero-day threats.

DevOps

The big bet on Containers continues:

  • The Server Core base image will be reduced from 5GB by (they hope) 72% to speed up deployment time of new instances/apps.
  • Kubernetes orchestration will be natively supported – the container orchestrator that originated at Google appears to be the industry winner versus Docker Swarm and Mesos.

In the heterogeneous world, Linux admins will be getting Windows Subsystem for Linux (WSL) for a unified scripting/admin experience.

Hyper-Converged Infrastructure (HCI)

Storage Spaces Direct (S2D) has been improved and more changes will be coming to mature the platform in WS2019. In case you don’t know, S2D is a way to use local (internal) disks in 2+ (preferably 4+) Hyper-V hosts across a high speed network (virtual SAS bus) to create a single cluster with fault tolerance at the storage and server levels. By using internal disks, they can use cheaper SATA disks, as well as newer flash formats that don’t natively support sharing, such as NVMe.

The platform is maturing in WS2019, and Project Honolulu will add a new day-to-day management UI for S2D that is natively lacking in WS2016.

The Pricing

As usual, I will not be answering any licensing/pricing questions. Talk to the people you pay to answer those questions, i.e. the reseller or distributor that you buy from.

OK; let’s get to the messy stuff. Nothing has been announced other than:

It is highly likely we will increase pricing for Windows Server Client Access Licensing (CAL). We will provide more details when available.

So it appears that User CALs will increase in pricing. That is probably good news for anyone licensing Windows Server via processor (don’t confuse this with Core licensing).

When you acquire Windows Server through volume licensing, you pay for every pair of cores in a server (with a minimum of 16, which matched the pricing of WS2012 R2), PLUS you buy User CALs for every user authenticating against the server(s).
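
As a worked example (illustrative, not a quote from any price list): a single server with two 12-core processors needs licenses for all 24 cores – 12 two-core packs, with the 16-core minimum only mattering for smaller machines – plus a User CAL for every user authenticating against it.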

When you acquire Windows Server via Azure or through a hosting/leasing (SPLA) program, you pay for Windows Server based only on how many cores that the machine has. For example, when I run an Azure virtual machine with Windows Server, the per-minute cost of the VM includes the cost of Windows Server, and I do not need any Windows Server CALs to use it (RDS is a different matter).

If CALs are going up in price, then it’s probably good news for SPLA (hosting/leasing) resellers (hosting companies) and Azure where Server CALs are not a factor.

The Bits

So you want to play with WS2019? The first preview build (17623) is available as of last night through the Windows Server Insider Preview program. Anyone can sign up.


Would You Like To Learn About Azure Infrastructure?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Windows Server Fall Release (1709) Technical Foundation

Speaker: Jeff Woolsey, Principal Program Manager

WS2016 Recap

Design points

  • Layered security for emerging threats:  Jeff has been affected by 4 of the big, well publicised hacks. CEOs are being fired because of this stuff now.
  • Build the software-defined data centre
  • Create a cloud-optimized application platform

Security in WS2016

  • Long laundry list of features: Defender, Control Flow Guard, Device Guard, Credential Guard, Remote Credential Guard.
  • Shielded VMs – you don’t trust the operators
  • vTPM – encrypt the disks
  • JIT Administration

Software-Defined

  • Compute: rolling upgrades with no downtime, hot add/remove, more resilient to transient storage, compute, and network issues.
  • Network: Azure code brought to Windows Server 2016: SDN scale and simplicity. L4 load balancer, distributed data centre firewall.

He tells a very funny story on RAM support: 24 TB physical, and 12 TB RAM in Hyper-V VMs.

  • Storage: Hyper-Converged, Storage Replica, cluster wide QoS
  • RDS: Lots there too.

Hyper-Converged Infrastructure

Built into WS2016 Datacenter edition: Storage Spaces Direct (S2D). Uses SATA, SAS, SSD, and NVMe. Working with the storage industry to add new flash types.

  • Cloud design points: used in Azure Stack
  • RDMA at the core for performance and latency benefits.
  • Simplifying the datacenter: Add servers to add compute and storage capacity. No more SAN network. Storage controller is s/w.

Working on adding NVDIMMs: Intel Persistent Memory. Not as fast as real memory, but you can add lots of it, e.g. 100 TB of “RAM”. Supported in WS2016 and SQL Server 2017 and later.

SATADOM is supported in WS2016 and later. It’s flash, but it’s attached to a SATA connector. The idea is to do the “boot from USB” trick to free up a drive bay. This tiny drive plugs directly onto the SATA controller on the motherboard. Faster than USB/SD boot and more reliable.

Cloud Ready Application Platform

  • Windows Server Containers: The next generation of compute, following virtualization. Both are different techs, and going forward, both will probably exist. But containers will be the tech of choice for deploying applications: speed, ease of deployment, better densities, and more performance.
  • Nano Server: Ideal for the microkernel in Hyper-V Containers
  • Automation: PowerShell 5.0 and DSC

Now on to the new stuff

Azure File Sync

Klaas Langhout comes on stage.

I’ve covered this in depth already.

Back to Jeff. He asks Klaas if customers access the shares any differently on prem. Nope – it’s the same old file share and any Azure connectivity/tiering/sync is hidden.

Windows Defender Advanced Threat Protection (WDATP)

Using cloud intelligence to protect Windows.

  • Built into Windows Server
  • Behaviour-based, cloud-powered breach detection
  • Best of breed investigation experience
  • And more

You can sign into the Windows Defender Security Center to analyse activity to do forensics on an attack or suspicious activity, and learn how to remediate the attack.

Modern, Remote Management for Windows Server

I covered Project Honolulu earlier today.

Honolulu will remain a free download outside of Windows Server – expect updates every month.

FAQ on Honolulu

  • Price: Free
  • Edge, Chrome, Safari on Mac and more to be tested.
  • Installs on WS2012 R2 and later, Windows 10.
  • Manages Hyper-V Server 2012 and later and WS2012 and later.
  • Azure is not required.
  • AD is not required either.
  • Security: HTTPS, LAPS, delegation
  • Configuration: No IIS, agents not required, SQL not required. If you are pre-2016, you have to install WMF 5.1.
  • Positioning: Evolution of “in-box” tools. Does not replace System Center. Complementary to System Center, OMS, RSAT. Hopefully will eventually replace MMC-based RSAT.
  • Feedback: Via Windows Server UserVoice.
  • Extensions: It’s pluggable, with an alpha SDK today.

1709

On to the next release of Windows Server, coming in October.

Application Innovation

  • Container-optimized Nano Server image increases container density and performance.
  • .NET Core 2.0
  • SMB Support for containers
  • Linux Containers with Hyper-V isolation
  • Windows Subsystem for Linux – to manage the above primarily

Where to Start With Containers

  • Containerize suitable existing applications. GUI-based apps aren’t suitable.
  • Transform monoliths into microservices, with new code and transforming existing code.
  • Accelerate new applications with cloud-app development.

What’s Next

Windows Server Insiders is a program to beta test and learn the new stuff in the semi-annual channel.

Post 1709 Improvements

Compute:

  • Honolulu integration
  • Shielded Linux VMs
  • Guest RDMA

Network:

  • Honolulu integration
  • Encrypted virtual networks
  • NTLM no longer required
  • SMB1 Disabled by default
  • and more

Software-Defined:

  • S2D Support for NVMe
  • S2D support for NV-DIMMs
  • Dedupe for ReFS
  • Cluster Sets to enable large-scale HCI
  • Storage Replica test failover
  • Scoped volumes
  • Something on multi-resilient volumes

Windows Server – What’s New & What’s Next

Speakers:

  • Erin Chapple, General Manager, Windows Server
  • Chris Van Wesep, Director Product Marketing

Erin Chapple starts things. Today they’ll talk about what’s new in Windows Server, what’s the future, and the hybrid/migration opportunities.

WS2016 Looking Back

Most cloud-ready OS:

  • Built-in security: Protection of identity (Credential Guard), secure the virtualization platform (shielded VMs, vTPM), and built-in layers of security (VSM, etc)
  • Azure-inspired infrastructure: Storage Spaces Direct, Network Controller, learnings from hyper-scale, affordable.
  • Hybrid application platform: Support for containers, built-for-purpose OS, Azure Hybrid Benefit for SA/Azure transition

Some customer case studies come up. Rackspace used Shielded VMs, Nano Server for applications (woops!) for hosting. A “large investigative government agency” needed to preserve lots of seized data (PB + per case). They used Storage Spaces Direct (S2D) on 8-node clusters, with data in VMs to isolate one investigation from another. biBERK used containers to deploy 22 apps on WS2016 Containers with Docker in less than 1 week.

The key for software-defined is the hardware. They leverage offloads so much that hardware must be more reliable. There is a Windows Server Software Defined Program (WSSD) and the site with all the info is http://docs.microsoft.com/en-us/windows-server/sddc.

Supporting You Wherever You Are

WS2016 is the basis of on-premises, Azure, and Azure Stack (hybrid). 80% of enterprises see themselves operating in a hybrid mode for the foreseeable future. 55% have a hybrid strategy in place as of a year ago. 87% are planning to integrate on-premises datacentres with public cloud.

Hybrid is not about a network connection. It’s about consistency right down to the API level: unified development, VMs, storage, data, identity, and much more.

Will Gries – Azure File Sync

This is a new hybrid service that is a part of Azure Files. Centralize storage in Azure Files, but without giving up the file server. You effectively cache data locally on file servers for fast local performance. The cloud enables sync between site, centralized backup, and easy DR.

He starts a demo. The file sync agent is installed on a WS2016 file server. It is syncing to Azure. He proves this by changing & deleting things on the file server and watching them sync to the cloud. It’s all near realtime, using change notifications on the file server to ensure that sync happens very quickly. Cloud Tiering enables the “cache” feature. The greyed files with an O attribute have a disk size of 0 bytes because they are stored in Azure. If he opens such a file, it’s recalled from Azure Files seamlessly. Files that are able to do partial reads/writes can stream from Azure – he opens a video and we can see in the UI that it is streaming from Azure. In file properties, we can see it has downloaded only the required blocks via the stream.

Back to Erin.

Windows Server Cadence

Industry is moving incredibly fast. Industries in that fast lane need server improvements faster. There will be two channels of Windows Server:

  • Semi-annual channel. An opt-in for SA or Azure customers, releasing every spring/autumn. Each release is supported for 18 months, so you can choose to skip every second release. Build number = approximately year + month, e.g. 1709 (2017, month 09), releasing in October 2017.
  • Long-term Servicing Channel: For everyone outside of SA/Azure or not wanting to upgrade every 6-12 months. Typical 5+5 years support program and in all channels. Name = Windows Server + Year.

Many companies will use a mix of both channels, selecting the channel based on demands of an application/service.

Windows Server Insiders will give you a sneak peek of semi-annual channel releases.

The date of the next LTSC release is not announced, but it’s going to be after 2018.

Introducing Server Core to Semi-Annual Channel

Server Core is replacing Nano Server for infrastructure and VM roles. Nano Server adoption was very low in these areas. In 1709, Nano Server is completely focused on containers. It is much smaller for containers by stripping out the infrastructure pieces. Server Core should be a “soft landing” for moving applications from Nano Server. Server Core is the MS recommended choice for infrastructure roles.

Note by me: I will continue to recommend full installations for infrastructure roles. The full GUI is not in the semi-annual channel. So if you want rapid upgrades, you better learn some PowerShell to troubleshoot your networking and drivers/firmware.

What’s New in 1709

Hybrid Application platform and Modern Management

Jeff Woolsey

Jeff tells us that containers are the same journey that we went through with virtualization. Containers will happen, but they won’t kill virtualization – they work together. We’re at the beginning of the next 10 year journey with containers. Jeff says that cloud admins, hybrid admins, IT pros, must learn containerization.

Hybrid Application Platform

  • Nano Server just wasn’t right for virtualization: drivers, installation, patching, etc. So they switched the focus entirely to containers to make it faster to deploy/update, and to get higher levels of density & performance.
  • .NET Core 2.0 and SMB support was added for containers … allows containers to store data on SMB 3.0 storage.
  • Linux containers with Hyper-V Isolation enable a cross-platform way to run all kinds of containers securely (each container running a real Linux kernel in a Hyper-V child partition), plus Windows Subsystem for Linux. When Win10 added WSL, Microsoft wasn’t planning to do it for Windows Server. With Linux Containers, the case for Bash management on the host made this a viable option.

Telemetry shows that most people using Windows Server containers are choosing the Hyper-V model for security.

All of this is wrapped up in Modern Management.

Demo: Enabling Cloud Apps with Nano Server & Containers

This is the next generation P2V … moving applications (Docker Convert) from VMs to containers. In the demo, Jeff uses Docker to deploy the app in a Hyper-V container. It runs SQL Server & IIS. The Docker tools on GitHub converted the app to an image in less than 1 hour. Now the image is a container image which is easy to deploy. When running in a container, it uses a fraction of the resources that were used by VMs.

Next he deploys a Linux container image with Tomcat Server, on the same Windows Server host as the Windows container.

Nano Server

The base image for WS2016 Nano Server was 383 MB. In 1709 it is 78 MB. With .NET it went from 413 MB to 107 MB. Those are the compressed numbers.

Uncompressed: the base image went from 1.05 GB to 195 MB, and with .NET it went from 1.15 GB to 262 MB.

Management Re-Imagined

  • This is next-generation of “in-box” tooling.
  • Simplified, integrated and secure.
  • Extensible

Required for Server Core in the real world. The UI is HTML5 and touch friendly. It has to manage the h/w, the local VMs, and VMs in Azure.

Today we use Task Manager, MMC-based tools like Hyper-V Manager, Perfmon, Device Manager, etc., CMD.EXE, PowerShell, Server Manager, and so on. Jeff mentions lots more tools.

Project Honolulu

An HTML5-based touch-friendly UI. It’s running on Jeff’s laptop against 4 servers under his desk back in the office. He opens the Overview (Task Manager info). Computer name and domain join are there. Environment variables and RDP are here. Restart/shutdown are here.

Roles and Features is next. No more need for Server Manager (yay!). Roles & features easily installed remotely. Events shows all the event viewer info. Note that filtering UI is much better here than in the MMC. Files allows you to browse and edit the file system on a managed server. Virtual machines allows Hyper-V VM management.

The system is agentless. Honolulu is a 30 MB MSI download to a management node which you browse to. It even works on Safari on Mac.

Honolulu will be a free download when it goes GA.

Back to Erin

What’s Next For Project Honolulu

A peek into the pipeline … things they are exploring and experimenting with.

Azure Backup in Honolulu – a wizard to set up the Azure bits and start backing up items/system state. They show some mockups of it all being driven from Honolulu instead of the Azure Portal.

The Azure Connection

Chris comes on stage to talk about Hybrid scenarios.

He starts off by talking about Software Assurance. Highlighted features:

  • Required for Semi-Annual releases
  • Hybrid Use Benefit to move to Azure  – up to 40% savings on the cost of Windows Server Azure VMs

Premium Assurance add-on adds 6 years of support to the normal 5+5 model (16 years total) for applications that cannot stay up to date, but can continue to get security updates.

If you watch this session, please note that Chris over-simplifies (a lot) the Hybrid Use benefit. It’s actually quite complex, regarding moving & co-using licenses and core counts.

End of Support

W2008/R2 end of support is Jan 2020 – 1/3 of servers fall into this space. SQL 2008/R2 end of support is July 2019.  For larger companies, they should look at cloud and/or containerization, or even re-development in serverless cloud.

Questions

  • Honolulu can manage all the way back to WS2012
  • Not every app can/should be containerized – key thing is that you need remote management because containers don’t have a GUI.
  • Where is Honolulu installed? It can be on a PC, on the managed server, or on a centrally dedicated management server. Honolulu uses WMI and PowerShell to talk to the managed servers.

Ignite 2016 – Introducing Windows Server and System Center 2016

This session (original here) introduces WS2016 and SysCtr 2016 at a high level. The speakers were:

  • Mike Neil: Corporate VP, Enterprise Cloud Group at Microsoft
  • Erin Chapple: General Manager, Windows Server at Microsoft

A selection of other people will come on stage to do demos.

20 Years Old

Windows Server is 20 years old. Here’s how it has evolved:


The 2008 release brought us the first version of Hyper-V. Server 2012 brought us the same Hyper-V that was running in Azure. And Windows Server 2016 brings us the cloud on our terms.

The Foundation of Our Cloud

The investment that Microsoft made in Azure is being returned to us. Lots of what’s in WS2016 came from Azure, and combined with Azure Stack, we can run Azure on-prem or in hosted clouds.

There are over 100 data centers in Azure across 24 regions. Windows Server is the platform that is used for Azure across all that capacity.

IT is Being Pulled in Two Directions – Creating Stresses

  • Provide secure, controlled IT resources (on prem)
  • Support business agility and innovation (cloud / shadow IT)

By 2017, 50% of IT spending will be outside of the organization.

Stress points:

  • Security
  • Data centre efficiency
  • Modernizing applications

Microsoft’s solution is unified management across:

  • Advanced multi-layer security
  • Azure-inspired, software-defined infrastructure
  • Cloud-ready application platform

Security

Mike shows a number of security breach headlines. IT security is a CEO issue – costs to a business of a breach are shown. And S*1t rolls downhill.

Multi-layer security:

  • Protect identity
  • Secure virtual machines
  • Protect the OS on-prem or in the cloud

Challenges in Protecting Credentials

Attack vectors:

  1. Social engineering is the one they see the most
  2. Pass the hash
  3. Admin = unlimited rights. Too many rights given to too many people for too long.

To protect against compromised admin credentials:


  • Credential Guard will protect ID in the guest OS
  • JEA limits rights to just enough to get the job done
  • JITA limits the time that an admin can have those rights

The solution closes the door on admin ID vulnerabilities.

Ryan Puffer comes on stage to do a demo of JEA and JITA. The demo is based on PowerShell:

  1. He runs Enter-PSSession to log into a domain controller (DNS server). Local logon rights normally mean domain admin.
  2. He cannot connect to the DC, because his current logon doesn’t have DC rights, so it fails.
  3. He tries again, adding -ConfigurationName to pass a JEA configuration to Enter-PSSession, and he can get in (a minimal sketch of this follows the list). The JEA config was set up by a more trusted admin. The JEA authentication is done using a temporary virtual local account on the DC that resides nowhere else. This account exists only for the duration of the login session. Malware cannot use this account because it has limited rights (to this machine) and will disappear quickly.
  4. The JEA configuration has also limited rights – he can do DNS stuff but he cannot browse the file system, create users/groups, etc. His ISE session only shows DNS Get- cmdlets.
  5. He needs some modify rights. He browses to a Microsoft Identity Manager (MIM) portal and has some JITA roles that he can request – one of these will give his JEA temp account more rights so he can modify DNS (via a group membership). He selects one and has to enter details to justify the request. He puts in a time-out of 30 minutes – 31 minutes later he will return to having just DNS viewer rights. MFA via Azure can be used to verify the user, and manager approval can be required.
  6. He logs in again using Enter-PSSession with the JEA config. Now he has DNS modify rights. Note: you can whitelist and blacklist cmdlets in a role.
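
Here’s that minimal sketch of step 3 from the operator’s side (the server name and JEA endpoint name are hypothetical; the endpoint itself is registered beforehand by a more trusted admin):

# A plain remoting session fails – the operator has no admin rights on the DC
Enter-PSSession -ComputerName dc01

# Connecting to the constrained JEA endpoint succeeds
Enter-PSSession -ComputerName dc01 -ConfigurationName DnsOperatorJEA

# Inside the session, only the whitelisted cmdlets are available, e.g.
Get-DnsServerResourceRecord -ZoneName contoso.com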

Back to Mike.

Challenges Protecting Virtual Machines

VMs are files:

  • Easy to modify/copy
  • Too many admins have access

Someone can mount a VM’s disks or copy a VM to gain access to the data. Microsoft believes that attackers (internal and external) are interested in attacking the host OS to gain access to VMs, so they want to prevent this.

This is why Shielded Virtual Machines was invented – secure the guest OS by default:

  • The VM is encrypted at rest and in transit
  • The VM can only boot on authorised hosts

Azure-Inspired, Software-Defined

Erin Chapple comes on stage.

This is a journey that has been going on for several releases of Windows Server. Microsoft has learned a lot from Azure, and is bringing that learning to WS2016.

Increase Reliability with Cluster Enhancements

  • Cloud means more updates, with feature improvements. OS upgrades weren’t possible in a cluster. In WS2016, we get cluster rolling upgrades. This allows us to rebuild a cluster node within a cluster, and run the cluster temporarily in mixed-version mode. Now we can introduce changes without buying new cluster h/w or VM downtime. Risk isn’t an upgrade blocker.
  • VM resiliency deals with transient errors in storage, meaning a brief storage outage pauses a VM instead of crashing it.
  • Fault domain-aware clusters allow us to control how errors affect a cluster. You can spread a cluster across fault domains (racks) just like Azure does. This means your services can be spread across fault domains, so a rack outage doesn’t bring down a HA service (a rough PowerShell sketch of this and rolling upgrades follows this list).
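
Here’s that sketch – cluster, node, and rack names are made up, and these are just the in-box cmdlets rather than anything shown on stage:

# Commit a cluster rolling upgrade once every node is running the new OS
Update-ClusterFunctionalLevel

# Describe rack fault domains so placement and resiliency become rack-aware
New-ClusterFaultDomain -Name Rack1 -Type Rack
Set-ClusterFaultDomain -Name Node1 -Parent Rack1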


24 TB of RAM on a physical host and 12 TB RAM in a guest OS are supported. 512 physical LPs on a host, and 240 virtual processors in a VM. This is “driven by Azure” not by customer feedback.

Complete Software-Defined Storage Solution

Evolving Storage Spaces from WS2012/R2. Storage Spaces Direct (S2D) takes DAS and uses it as replicated/shared storage across servers in a cluster, that can either be:

  • Shared over SMB 3 with another tier of compute (Hyper-V) nodes
  • Used in a single tier (CSV, no SMB 3) of hyper-converged infrastructure (HCI)


Storage Replica introduces per-volume sync/async block-level beneath-the-file system replication to Windows Server, not caring about what the source/destination storage is/are (can be different in both sites) as long as it is cluster-supported.

Storage QoS guarantees an SLA with min and max rules, managed from a central point:

  • Tenant
  • VM
  • Disk
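
As a rough illustration of that central point of management (the policy name and numbers are invented), the WS2016 Storage QoS cmdlets look like this:

# Create a policy on the storage cluster with minimum and maximum IOPS
$gold = New-StorageQosPolicy -Name Gold -MinimumIops 500 -MaximumIops 5000

# Apply the policy to all of a VM's virtual hard disks
Get-VM -Name web01 | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $gold.PolicyId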

The owner of S2D, Claus Joergensen, comes on stage to do an S2D demo.

  1. The demo uses latest Intel CPUs and all-Intel flash storage on 16 nodes in a HCI configuration (compute and storage on a single cluster, shared across all nodes).
  2. There are 704 VMs run using an open source tool called VMFleet.
  3. They run a profile similar to Azure P10 storage (each VHD has 500 IOPS). That’s 350,000 IOPS – which is trivial for this system.
  4. They change this to Azure P20: now each disk has 2,300 IOPS, summing 1.6 million IOPS in the system – it’s 70% read and 30% write. Each S2D cluster node (all 16 of them) is hitting over 100,000 IOPS, which is about the max that most HCI solutions claim.
  5. Claus changes the QoS rules on the cluster to unlimited – each VM will take whatever IOPS the storage system can give it.
  6. Now we see a total of 2.7 million IOPS across the cluster, with each node hitting 157,000 to 182,000 IOPS, at least 50% more than the HCI vendors claim.

Note the CPU usage for the host, which is modest. That’s under 10% utilization per node to run the infrastructure at max speed! Thank Storage Spaces and SMB Direct (RDMA) for that!


  1. Now he switches the demo over to read IO only.
  2. The stress test hits 6.6 million read IOPS, with each node offering between 393,000 and 433,000 IOPS – that’s 16 servers, no SAN!
  3. The CPU still stays under 10% per node.
  4. Throughput numbers will be shown later in the week.

If you want to know where to get certified S2D hardware, then you can get DataON from MicroWarehouse in Dublin (www.mwh.ie).


Nano Server

Nano Server is not an edition – it is an installation option. You can install a deeply stripped down version of WS2016, that can only run a subset of roles, and has no UI of any kind, other than a very basic network troubleshooting console.

It consumes just 460 MB disk space, compared to 5.4 GB for Server Core (command prompt only). It boots in less than 10 seconds and has a smaller attack surface. Ideal scenario: born-in-the-cloud applications.

Nano Server is serviced via the Current Branch for Business model. If you install Nano Server, then you are forced into installing updates as Microsoft releases them, which they expect to do 2-3 times per year. Nano will be the basis of Microsoft’s cloud infrastructure going forward.

Azure-Inspired Software-Defined Networking

A lot of stuff from Azure here. The goal is that you can provision new networks in minutes instead of days, and have predictable/secure/stable platforms for connecting users/apps/data that can scale – the opposite of VLANs.

Three innovations:

  • Network Controller: From Azure, a fabric management solution
  • VXLAN support: Added to NVGRE, making the underlying transport less important and focusing more on the virtual networks
  • Virtual network functions: Also from Azure, getting firewall, load balancing and more built into the fabric (no, it’s not NLB or Windows Firewall – see what Azure does)

Greg Cusanza comes on stage – Greg has a history with SDN in SCVMM and WS2012/R2. He’s going to deploy the following:


The deployment is a virtual network with a private address space (NAT) with 3 subnets that can route, and an external connection for end-user access to a web application. Each tier of the service (file and web) has load balancers with VIPs, and AD in the back end will sync with Azure AD. This is all familiar if you’ve done networking in Azure Resource Manager (ARM).

  1. A bunch of VMs have been created with no network connections.
  2. He opens a PoSH script that will run against the network controller – note that you’ll use Azure Stack in the real world.
  3. The script runs in just over 29 seconds – everything described above is deployed, and the VMs are networked and have Internet connectivity. He can browse the net from a VM, and can browse the web app from the Internet – he proves that load balancing (virtual network function) is working.

Now an unexpected twist:

  1. Greg browses a site and enters a username and password – he has been phished by a hacker and now pretends to be the attacker.
  2. He has discovered that the application can be connected to using remote desktop and attempts to sign in using the phished credentials. He signs into one of the web VMs.
  3. He uploads a script to do stuff on the network. He browses shares on the domain network. He copies ntds.dit from a DC and uploads it to OneDrive for a brute force attack. Woops!

This leads us to dynamic security (network security groups or firewall rules) in SDN – more stuff that ARM admins will be familiar with. He’ll also add a network virtual appliance (a specialised VM that acts as a network device, such as an app-aware firewall) from a gallery – which we know that Microsoft Azure Stack will be able to syndicate from.


  1. Back in PoSH, he runs another script to configure network security groups, to filter traffic on a TCP/UDP port level.
  2. Now he repeats the attack – and it fails. He cannot RDP to the web servers, he couldn’t browse shared folders if he did, and he prevented outbound traffic from the web servers anyway (stateful inspection).

The virtual appliance is a network device that runs a customized Linux.

  1. He launches SCVMM.
  2. We can see the network in Network Service – so System Center is able to deploy/manage the Network Controller.

Erin finished by mentioning the free WS2016 Datacenter license offer for retiring vSphere hosts “a free Datacenter license for every vSphere host that is retired”, good until June 30, 2017 – see www.microsoft.com/vmwareshift

Cloud-Ready Application Platform

Back to Mike Neil. We now have a diverse set of infrastructure that we can run applications on.


WS2016 adds new capabilities for cloud-based applications. Containers were a huge thing for MSFT.

A container virtualizes the OS, not the machine. A single OS can run multiple Windows Server Containers – 1 container per app. So that’s a single shared kernel – that’s great for internal & trusted apps, similar to containers that are available on Linux. Deployment is fast and you can get great app density. But if you need security, you can deploy compatible Hyper-V Containers. The same container images can be used. Each container has a stripped down mini-kernel (see Nano) isolated by a Hyper-V partition, meaning that untrusted or external apps can be run safely, isolated from each other and the container host (either physical or a VM – we have nested Hyper-V now!). Another benefit of Hyper-V Containers is staggered servicing. Normal (Windows Server) Containers share the kernel with the container host – if you service the host then you have to service all of the containers at the same time. Because they are partitioned/isolated, you can stagger the servicing of Hyper-V Containers.

Taylor Brown (ex- of Hyper-V and now Principal Program Manager of Containers) comes on stage to do a demo.

(screenshot: the Dockerfile used in the demo)

  1. He has a VM running a simple website – a sample ASP.NET site in Visual Studio.
  2. In IIS Manager, he does a Deploy > Export Application, and exports a .ZIP.
  3. He copies that to a WS2016 machine, currently using 1.5 GB RAM.
  4. He shows us a “Docker File” (above) to configure a new container. Note how EXPOSE declares the container’s TCP ports – 80 (HTTP) and 8172 (management) – which are published when the container is run. A PowerShell snap-in will run webdeploy and it will restore the exported ZIP package.
  5. He runs docker build -t mysite … with the location of the Dockerfile.
  6. A few seconds later a new container is built.
  7. He starts the container and maps the ports.
  8. And the container is up and running in seconds – the .NET site takes a few seconds to compile (as it always does in IIS) and the thing can be browsed.
  9. He deploys another 2 instances of the container in seconds. Now there are 3 websites and only 0.5 GB of extra RAM is consumed.
  10. He uses docker run --isolation=hyperv to get an additional Hyper-V Container (the commands are consolidated in a sketch after this list). The same image is started … it takes an extra second or two because of “cloning technology that’s used to optimize deployment of Hyper-V Containers”.
  11. Two Hyper-V containers and 3 normal containers (that’s 5 unique instances of IIS) are running in a couple of minutes, and the machine has gone from using 1.5 GB RAM to 2.8 GB RAM.
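
Roughly, the commands behind that demo look like this (the image name and published ports are illustrative, not what was typed on stage):

# Build the container image from the Dockerfile in the current folder
docker build -t mysite .

# Run it as a normal Windows Server Container, publishing the web port
docker run -d -p 80:80 mysite

# Run the same image as a Hyper-V Container for stronger isolation
docker run -d -p 8080:80 --isolation=hyperv mysite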

Microsoft has been a significant contributor to the Docker open source project and one MS engineer is a maintainer of the project now. There’s a reminder that Docker’s enterprise management tools will be available to WS2016 customers free of charge.

On to management.

Enterprise-Class Data Centre Management

System Center 2016:

  • 1st choice for Windows Server 2016
  • Control across hybrid cloud with Azure integrations (see SCOM/OMS)

SCOM Monitoring:

  • Best of breed Windows monitoring and cross-platform support
  • N/w monitoring and cloud infrastructure health
  • Best-practice for workload configuration

Mahesh Narayanan, Principal Program Manager, comes on stage to do a demo of SCOM. IT pros struggle with alert noise. That’s the first thing he wants to show us – it’s really a way to find what needs to be overridden or customized.

  1. Tune Management Packs allows you to see how many alerts are coming from each management pack. You can filter this by time.
  2. He clicks the Tune Alerts action. We see the alerts, and a count of each. You can then do an override (object or group of objects).

Maintenance cycles create a lot of alerts. We expect monitoring to suppress these alerts – but it hasn’t yet! This is fixed in SCOM 2016:

  1. You can schedule maintenance in advance (yay!). You could match this to a patching cycle so WSUS/SCCM patch deployments don’t break your heart at 3am on a Saturday morning.
  2. Your objects/assets will automatically go into maintenance mode and have a not-monitored status according to your schedules.

All those MacGyver solutions we’ve cobbled together for stopping alerts while patching can be thrown out!

That was all for System Center? I am very surprised!

PowerShell

PowerShell is now open source.

  • DevOps-oriented tooling in PoSH 5.1 in WS2016
  • vNext Alpha on Windows, macOS, and Linux
  • Community supported releases

Joey Aiello, Program Manager, comes up to do a demo. I lose interest here. The session wraps up with a marketing video.

Russinovich on Hyper-V Containers

We’ve known since Ignite 2015 that Microsoft was going to have two kinds of containers in Windows Server 2016 (WS2016):

  • Windows Server Containers: Providing OS and resource virtualization and isolation.
  • Hyper-V Containers: The hypervisor adds security isolation on top of OS and resource isolation.

Beyond that general description, we knew almost nothing about Hyper-V Containers, other than expect them in preview during Q4 of 2015 – Technical Preview 4 (TPv4), and that it is the primary motivation for Microsoft to give us nested virtualization.

That also means that nested virtualization will come to Windows Server 2016 Hyper-V in TPv4.

We have remained in the dark since then, but Mark Russinovich appeared on Microsoft Mechanics (a YouTube webcast by Microsoft) and he explained a little more about Hyper-V Containers and he also did a short demo.

Some background first. Normally, a machine has a single user mode running on top of kernel mode. This is what restricts us to the “one app per OS” best practice/requirement, depending on the app. When you enable Containers on WS2016, an enlightenment in the kernel allows multiple user modes. This gives us isolation:

  • Namespace isolation: Each container sees its own file system and registry (the hives are in the container’s hosted files).
  • Resource isolation: How much processor and memory a container can use.

Kernel mode is already running when you start a new container, which improves the time to start up a container, and thus its service(s). This is great for deploying and scaling out apps because a containerised app can be deployed and started in seconds from a container image with no long term commitment, versus minutes for an app in a virtual machine with a longer term commitment.


But Russinovich goes on to say that while containers are great for some things that Microsoft wants to do in Azure, they also have to host “hostile multi-tenant code” – code uploaded by Microsoft customers that Microsoft cannot trust and that could be harmful or risky to other tenants. Windows Server Containers, like their Linux container cousins, do not provide security isolation.

In the past, Microsoft has placed such code into Hyper-V (Azure) virtual machines, but that comes with a management and direct cost overhead. Ideally, Microsoft wants to use lightweight containers with the security isolation of machine virtualization. And this is why Microsoft created Hyper-V Containers.

Hyper-V provides excellent security isolation (far fewer vulnerabilities found than vSphere) that leverages hardware isolation. DEP is a requirement. WS2016 is introducing IOMMU support, VSM, and Shielded Virtual Machines, with a newly hardened hypervisor architecture.

Hyper-V containers use the exact same code or container images as Windows Server Containers. That makes your code interchangeable – Russinovich shows a Windows Server Container being switched into a Hyper-V container by using PowerShell to change the run type (container attribute RuntimeType).

The big difference between the two types, other than the presence of Hyper-V, is that Hyper-V Containers get their own optimized instance of Windows running inside of them, as the host for the single container that they run.


The Hyper-V Container is not a virtual machine – Russinovich demonstrates this by searching for VMs with Get-VM. It is a container, and is manageable by the same commands as a Windows Server Container.

In his demos he switches a Windows Server Container to a Hyper-V Container by running:

Set-Container -Name <Container Name> -RuntimeType HyperV

And then he queries the container with:

Get-Container -Name <Container Name> | fl Name, State, RuntimeType

So the images and the commands are common across Hyper-V Containers and Windows Server Containers. Excellent.

It looked to me that starting this Hyper-V Container is a slower operation than starting a Windows Server Container. That would make sense because the Hyper-V container requires its own operating system.

I’m guessing that Hyper-V Containers either require or work best with Nano Server. And you can see why nested virtualization is required. A physical host will run many VM hosts. A VM host might need to run Hyper-V containers – therefore the VM Host needs to run Hyper-V and must have virtualized VT-x instructions.

Russinovich demonstrates the security isolation. Earlier in the video he queries the processes running in a Windows Server Container. There is a single CSRSS process in the container. He shows that this process instance is also visible on the VM host (same process ID). He then does the same test with a Hyper-V Container – the container’s CSRSS process is not visible on the VM host because it is contained and isolated by the child boundary of Hyper-V.

What about Azure? Microsoft wants Azure to be the best place to run containers – he didn’t limit this statement to Windows Server or Hyper-V, because Microsoft wants you to run Linux containers in Azure too. Microsoft announced the Azure Container Service, with investments in Docker and Mesosphere for deployment and automation of Linux, Windows Server, and Hyper-V containers. Russinovich mentions that Azure Automation and Machine Learning will leverage containers – this makes sense because it will allow Microsoft to scale out services very quickly, in a secure manner, but with less resource and management overhead.

That was a good video, and I recommend that you watch it.

 

Microsoft News – 30 September 2015

Microsoft announced a lot of stuff at AzureCon last night so there are lots of “launch” posts to describe the features. I also found a glut of 2012 R2 Hyper-V related KB articles & hotfixes from the last month or so.

Hyper-V

Windows Server

Azure

Office 365

EMS

Configuring Windows Server Containers To Use DHCP Instead Of NAT

Read on if you want to learn how to connect Windows Server containers to an external virtual switch so that you don’t use NAT, and the containers actually talk directly to the LAN via DHCP assigned addresses. You’ll also see why a DHCP enabled container fails to get an address and ends up with a 169.254.x.x APIPA IPv4 configuration.

If you use Microsoft’s setup scripts for Windows Server 2016 (WS2016) Technical Preview 3 (TPv3), the default configuration for container networking is that each VM host will have a virtual switch (in the VM), connected to the VM’s vNIC. The virtual switch works in NAT mode, and uses a private network range to dynamically address containers that connect to the virtual switch. This setup requires each container to have NAT rules on the VM host so that external clients can connect to the services running in the containers. That … could be messy. In some terms, it could allow for huge network scalability (with tens of thousands of possible ports per VM host) but in others, it could be a nightmare to orchestrate.
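
For example, publishing a web container through that default NAT setup means adding a static mapping on the VM host – a hedged sketch, with the NAT name and addresses as placeholders:

# Forward TCP 8080 on the VM host to TCP 80 inside one container
Add-NetNatStaticMapping -NatName ContainerNat -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 8080 -InternalIPAddress 172.16.0.2 -InternalPort 80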

What if you wanted your containers to talk directly on the LAN? In other words: no NAT. Yes, your containers can do this, and it’s known as a DHCP configuration – your containers are stateless so it’s pointless assigning them static IP addresses; instead the containers will get their addressing from DHCP services on the LAN.

Remember that there are two scripts that we can run to set up a VM host.

  • Method 1: You download New-ContainerHost.ps1 and run it. This downloads a bunch of stuff, creates a VM host, and then runs Install-ContainerHost.ps1. By default, this will configure the VM host with NAT networking.
  • Method 2: You create your own VM, download and run Install-ContainerHost.ps1. By default, you’ll get NAT networking.

But …

Install-ContainerHost.ps1 includes the option for a -UseDHCP flag.


If you use method 2 then you could run Install-ContainerHost in the new VM host with the -UseDHCP flag set to $true; the behaviour of the script will change. By default it creates the VM host’s virtual switch in NAT mode. But enabling this flag creates an external virtual switch.
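
In other words, method 2 becomes something like this (a sketch – run it inside the VM that will be the container host):

.\Install-ContainerHost.ps1 -UseDHCP:$true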

In my lab, I like to create my VM hosts using New-ContainerHost because it’s very quick (thanks to the use of differencing disks) and automates the entire setup. But New-ContainerHost doesn’t include the option for UseDHCP. You could edit any call of Install-ContainerHost from New-ContainerHost, but I do it another way.

Instead I edit Install-ContainerHost. One small change will do the trick. Not far from the top is where the parameters are set as script variables. Look for a line that reads:

$UseDHCP,

Modify this line so it reads:

$UseDHCP = $true,


Now every time I either run Install-ContainerHost or New-ContainerHost I’ll get the DHCP networking configuration instead of NATing.

So try this to create/configure a VM host, create a container, use Enter-PSSession to connect to the container, run IPConfig and … voila, you’ll have no DHCP address. Say what?

I was stumped. I tried it again. Nothing. I asked for help and by the time I got home, I got a tip from one of the folks in Redmond. It proved to be my “I’m a moron” moment of the day. If I’d thought about it, DHCP is all about broadcasts and MAC addresses. I have a single VLAN set up in the lab so broadcasts weren’t the issue. What’s going on with MACs? A VM host has a MAC for itself. And then each container on the VM host that connects to the virtual switch has its own MAC address … but the network sees only one interface. Have you figured it out yet?

By default, Hyper-V has MAC spoofing disabled on every virtual NIC – a virtual NIC can only have 1 MAC address. What I needed to do was, at the host level, run the following to enable MAC spoofing on the VM host’s virtual NIC:

Get-VMNetworkAdapter -VMName containers3 | Set-vmNetworkAdapter -MacAddressSpoofing On
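
You can confirm the change with something like:

Get-VMNetworkAdapter -VMName containers3 | Format-List Name, MacAddressSpoofing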

Now everything works!

Windows Server Containers – “Enter-PSSession : The term ‘Measure-Object’ Is Not Recognized”

If you’ve been working with Windows Server Containers in Windows Server 2016 (WS2016) Technical Preview 3 (TPv3) then you’ve probably experienced something like this:

  1. You create a new container
  2. Then start the container
  3. And try to create a PowerShell session into the container using Enter-PSSession

And then there’s lots of red on the screen:

enter-pssession : The term ‘Measure-Object’ is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.
At line:1 char:1
+ enter-pssession -ContainerId $container.ContainerId
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (Measure-Object:String) [Enter-PSSession], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException


Strangely, I have not been able to recreate this with Invoke-Command, so it appears to be unique to how Enter-PSSession sets up a session in the container.

So how do you solve the issue? It’s simple – you rushed from starting your container to trying to log into it. Wait a few seconds and then try again.
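
In other words, something like this hedged sketch, using the TPv3 container cmdlets from Microsoft’s setup scripts (the container name, image, and switch name are placeholders):

# Create and start the container
$container = New-Container -Name demo -ContainerImageName WindowsServerCore -SwitchName "Virtual Switch"
$container | Start-Container

# Give the container a few seconds to finish starting before connecting
Start-Sleep -Seconds 10
Enter-PSSession -ContainerId $container.ContainerId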