Generation 2 Virtual Machines Make Their First Public Appearance in Microsoft Azure

Microsoft has revealed that the new preview series of confidential computing virtual machines, the DC-Series, which went into public preview overnight, is based on Generation 2 (Gen 2) Hyper-V virtual machines. This is the first time that a non-Generation 1 (Gen 1) VM has been available in Azure.

Note that ASR allows you to migrate/replicate Generation 2 machines into Azure by converting them into Generation 1 at the time of failover.

These confidential compute VMs use hardware features of the Intel chipset to provide secure enclaves to isolate the processing of sensitive data.

The creation process for a DC-Series is a little different from usual – you have to look for Confidential Compute VM Deployment in the Marketplace and then work through a (legacy blade-based) customised deployment that is not as complete as a normal virtual machine deployment. In the end, a machine appears.
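
Before hunting in the Marketplace, you can check whether the DC-Series sizes are even visible in a region with a quick Azure CLI query. This is just a sketch; the size-name filter and the region are assumptions based on the preview.

  # List the VM sizes in a region and filter for the confidential compute (DC) sizes.
  # The region and the '_DC' filter are assumptions; adjust for your subscription.
  az vm list-sizes --location eastus \
    --query "[?contains(name, '_DC')].{Name:name, vCPUs:numberOfCores, MemoryMB:memoryInMb}" \
    --output table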

I’ve taken a screenshot from a normal Azure VM, showing Device Manager in Windows Server 2016 with the OS disk.

Note that both the OS disk and the Temp Drive are IDE drives on a Virtual HD ATA controller. This is typical of a Generation 1 virtual machine. Also note the IDE/ATA controller.

Now have a look at a DC-Series machine:

Note how the OS disk and the Temp Drive are listed as Microsoft Virtual Disk devices on SCSI controllers – definitely a Generation 2 virtual machine! Also note that the IDE/ATA controller is missing from the device listing. If you expand System Devices, you will find that the list is much smaller. For example, the Hyper-V S3 Cap PCI bus video controller (explained here by Didier Van Hoye) of Generation 1 is gone.

Did you Find This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Frankfurt on December 3-4, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Windows Server 2019 Did Not RTM – And Why That Matters

I will start this article by saying there is a lot in Windows Server 2019 to like. There are good reasons to want to upgrade to it or deploy it – if I was still in the on-premises server business I would have been downloading the bits as soon as they were shared.

As you probably know, Microsoft has changed the way that they develop software. It’s done in sprints, and the goal is to produce software and get it into the hands of customers quickly. It doesn’t matter if it’s Azure, Office 365, Windows 10, or Windows Server, the aim is the same.

This release of Windows Server is the very first to go through this process. When Microsoft announced the general availability of Windows Server 2019 on October 2nd, they shared those bits with everyone at the same time. Everyone – including hardware manufacturers. There was no “release to manufacturing” or RTM.

In the past, Microsoft would do something like this:

  1. Microsoft: Finish core development.
  2. Microsoft: RTM – share the bits privately with the manufacturers.
  3. Microsoft: Continue quality work on the bits.
  4. Manufacturing: Test & update drivers, firmware, and software.
  5. Microsoft & Manufacturing: Test & certify hardware, drivers & firmware for the Windows Server Catalog, aka the hardware compatibility list or HCL.
  6. Microsoft: 1-3 months after RTM, announce general availability (GA).
  7. Microsoft: Immediately release a quality update via Windows Update.

This year, Microsoft has gone straight to step 6 from the above to get the bits out to the application layer as quickly as possible. The OEMs got the bits the same day that you could have. This means that the Windows Server Catalog, the official listing of all certified hardware, is pretty empty. When I looked on the morning of October 3, there was not even an entry for Windows Server 2019 on it! Today (October 4th) there are a handful of certified components and one server from an OEM I don’t recognise.

So my advice is, sure, go ahead and download the bits to see what Microsoft has done. Try out the new pieces and see what they offer. But hold off on production deployments until your hardware appears on this list.

I want to be clear here – I am not bashing anyone. I want you to have a QUALITY Windows Server experience. Too often in the past, I have seen people blame Windows/Hyper-V for issues when the issues were caused by components – maybe some of you remember the year of blue screens that Emulex caused for blade server customers running Windows Server 2012 R2 because of bad handling of VMQ in their converged NIC drivers & firmware?

In fact, if you try out the software-defined features, Network Controller and Storage Spaces Direct (S2D), you will be told that you can’t try them out without opening a free support call to get a registry key – which someone will eventually share online. This is because those teams realize how dependent they are on hardware/driver/firmware quality and don’t want you judging their work by the problems of the hardware. The S2D team thinks the first wave of certified “WSSD” hardware will start arriving in January.

Note: VMware, etc, should be considered as hardware. Don’t go assuming that Windows Server 2019 is certified on it yet – wait for word from your hypervisor’s manufacturer.

Why would Microsoft do this? They want to get their software into application developers’ hands as quickly as possible. Container images based on Windows Server will be smaller than ever before – but those customers are probably on the semi-annual channel, so WS2019 doesn’t mean much to them. Really, this is for people running Windows Server in a cloud, to get them the best application platform there is. Don’t start the conspiracy theories – if Microsoft had done the above process, then none of us would be seeing any bits until maybe January! What they’ve effectively done is accelerate public availability while the Windows Server Catalog gets populated.

Have fun playing with the new bits, but be careful!

Microsoft Ignite 2018: Implement Cloud Backup & Disaster Recovery At Scale in Azure

Speakers: Trinadh Kotturu, Senthuran Sivananthan, & Rochak Mittal

Site Recovery At Scale

Senthuran Sivananthan

Real Solutions for Real Problems

Customer example: Finastra.

  1. BCP process: Define RPO/RTO. Document DR failover triggers and approvals.
  2. Access control: Assign clear roles and ownership. Leverage ASR built-in roles for RBAC. Use different Recovery Services vaults for different BUs/tenants. They deployed 1 RSV per app to do this.
  3. Plan your DR site: Leveraged region pairs – useful for matching GRS replication of storage. Site connectivity needs to be planned. Pick the primary/secondary regions to align service availability and quota availability – change the quotas now, not later when you invoke the BCP.
  4. Monitor: Monitor replication health. Track configuration changes in environment – might affect recovery plans or require replication changes.
  5. DR drills: Periodically do test failovers.

Journey to Scale

  • Automation: Do things at scale
  • Azure Policy: Ensure protection
  • Reporting: Holistic view and application breakdown
  • Pre- & Post- Scripts: Lower RTO as much as possible and eliminate human error

Demos – ASR

Rochak takes over for demos of recent features. Azure Policy support is coming soon.

The policy will assess whether VMs are being replicated or not and will display non-compliance.

Expanding the monitoring solution.

Demo – Azure Backup & Azure Policy

Trinadh creates an Azure Policy assignment and assigns it to a subscription. He picks the Azure Backup policy definition. He selects the vault’s resource group, the vault itself, and a backup policy from the vault. The result is that any VM within the scope of the policy will automatically be backed up to the selected RSV with the selected policy.
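
For reference, roughly the same assignment can be scripted with the Azure CLI. This is a hedged sketch: the definition search, the scope, and the parameters file are placeholders, since the built-in backup policy definition shown in the demo was still in preview.

  # Find the built-in backup policy definition (display-name search is a placeholder).
  az policy definition list \
    --query "[?contains(displayName, 'backup')].{name:name, displayName:displayName}" \
    --output table

  # Assign it at subscription scope; backup-params.json is a placeholder file holding
  # the vault and backup policy values that the portal wizard asks for.
  az policy assignment create \
    --name auto-backup-vms \
    --scope /subscriptions/<subscription-id> \
    --policy <definition-name-from-above> \
    --params @backup-params.json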

Azure Backup & Security

Supports Azure Disk Encryption. KEK and BEK are backed up automatically.

AES 256 protects the backup blobs.

Compliance

  • HIPAA
  • ISO
  • CSA
  • GDPR
  • PCI-DSS
  • Many more

Built-in Roles

Cumulative:

  • Backup Reader: view only
  • Backup Operator: enable backup & restore
  • Backup Contributor: policy management and delete/stop backup
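
Granting one of these at vault scope is a one-liner with the CLI; a minimal sketch, with the principal and vault names as placeholders:

  # Grant the Backup Operator role on a single Recovery Services vault (names are placeholders).
  az role assignment create \
    --assignee backup.admin@contoso.com \
    --role "Backup Operator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/prod-rg/providers/Microsoft.RecoveryServices/vaults/prod-rsv"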

Protect the Roles

PIM can be used to guard the roles – protect against rogue admins.

  • JIT access
  • MFA
  • Multi-user approval

Data Security

  • PIN protection for critical actions, e.g. delete
  • Alert: Notification on critical actions
  • Recovery: Data kept for 14 days after delete. Working on blob soft delete

Backup Center Demo

Being built at the moment. Starting with VMs now but will include all backup items eventually.

All RSVs in the tenant (doh!) managed in a central place.

Aimed at the large enterprise.

They also have Log Analytics monitoring if you like that sort of thing. I’m not a fan of LA – I much prefer Azure Monitor.

Reporting using Power BI

Trinadh demos a Power BI reporting solution that unifies backup data from multiple tenants into a single report.

Microsoft Ignite–Building Enterprise Grade Applications With Azure Networking’s Delivery Suite

Speakers: Daniel Grickholm & Amit Srivastava

I arrived late to this session after talking to some product group people in the expo hall.

Application Gateway Demo

We see the number of instances dynamically increase and cool down – I think there was an app on Kubernetes in the background.

Application Gateway

Application gateway ingress controller for AKS v2.

  • Attach WAG to AKS clusters.
  • Load balance from the Internet to pods
  • Supports features of the K8s ingress resource – TLS, multi-site, and path-based routing

Demo: we see a K8s container app published via the WAG. The backend pool is shown – the IPs of the containers. Deleting the app in K8s removes the backend pool registration from the WAG (this fails in the demo).

Web Application Firewall

Demo – WAF

One app is behind a firewall with no exclusion parameters; the backend pool is a simple PHP application. A second firewall uses the same backend VM as its backend pool, but a scan exclusion is set up to ignore any field that matches a “comments” string. The second one allows a comment post; the first one does not.
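
To give an idea of how that maps outside the portal, here is a hedged Azure CLI sketch that turns on the WAF in Prevention mode on an existing Application Gateway and adds a field exclusion; the gateway name and the exclusion selector are assumptions, and the --exclusion flag needs a reasonably recent CLI.

  # Enable the OWASP 3.0 rule set in Prevention mode on an existing gateway (placeholder names).
  az network application-gateway waf-config set \
    --resource-group demo-rg \
    --gateway-name demo-waf-gw \
    --enabled true \
    --firewall-mode Prevention \
    --rule-set-type OWASP \
    --rule-set-version 3.0 \
    --exclusion "RequestArgNames Equals comments"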

Azure Front Door: get performance closer to the customer. It runs in edge sites, not the Azure data centers.

Once you hit an edge site via Front Door, you are on the Azure WAN.

ADN = application delivery network

Big focus on SLA, HA, and performance. Built for Office.

5 years old and mature.

Can work in conjunction with WAG, even if there is some overlap, e.g. SSL termination.

What will be in the next demo:

He has an app for the USA in Central US and another for the UK deployed in UK South. He shows the Front Door creation – name/resource group; the configuration screen during creation is a bit different for Azure. You create a global CNAME and session affinity in frontend hosts. Then you create backends – App Service, gateways, etc. You can set up host headers for custom domains, port translation, priority for failover, and weight for load balancing. You can add health probes to the backend pools, with a URL path, HTTP/S, and the probe interval. Finally, you create a routing rule; this maps frontend hosts to backend pools. You can set whether it should accept HTTP and/or HTTPS.
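
The same kind of setup can be scripted with the Front Door CLI extension; a rough sketch, with the profile, resource group, and backend hostnames as placeholders:

  # Front Door commands live in a CLI extension.
  az extension add --name front-door

  # Create a Front Door with the US app as the first backend (placeholder names).
  az network front-door create \
    --resource-group demo-rg \
    --name demo-frontdoor \
    --backend-address usapp.azurewebsites.net

  # Add the UK app to the backend pool for failover/weighted load balancing.
  # The pool name assumes the default created above; check with 'az network front-door backend-pool list'.
  az network front-door backend-pool backend add \
    --resource-group demo-rg \
    --front-door-name demo-frontdoor \
    --pool-name DefaultBackendPool \
    --address ukapp.azurewebsites.net \
    --priority 1 \
    --weight 50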

He skips to one he created earlier. When he browses the two apps that are in it, he is sent to the closest instance – in Central US. You can set up rules to block certain countries.

You can implement rate limiting and policies for fairness.

You can implement URL rewrites to map to a different path on the web servers.

This is like Traffic Manager + WAG combined at the edges of the Azure WAN.

Front Door load balances between regions. WAG load balances inside the region – that’s why they work together.

Microsoft Ignite 2018: Office in Virtual Desktop Environments

Speakers: Gama Aguilar-Gamez & Sandeep Patnik

Goal: Make Office 365 Pro Plus a first class experience in virtualized environments.

Windows Virtual Desktop

  • The only multi-user Windows 10 experience – note that this is RDmi and it also supports session hosts.
  • Optimized for Office 365 Pro Plus
  • Deploy and scale in minutes

Windows 10 Enterprise Multi-User

  • Scalable multi-user modern Windows user experience with Windows 10 Enterprise security
  • Windows 10
  • Multiple users
  • Win32, UWP
  • Office 365 Pro Plus
  • Semi-Annual Channel

This is a middle ground between RDSH on Windows Server and VDI on Windows 10.

Demo

The presentation is actually being run from a WVD VM in the cloud. PowerPoint is a published application – we can see the little glyph in the taskbar icon.

User Profile Disks

High performance persistence of cached user profile data across all Office 365 apps and services.

  • Outlook OST/PST files – will be improved for GA of WVD. Support for UNC paths
  • OneDrive sync roots
  • OneNote notebook cache

Improving Outlook Start Up

  • Virtual environment friendly default settings
  • Sync Inbox before calendar for faster startup experience
  • Admin option to reduce calendar sync window
  • Reduce the number of folders that are synced by default
  • Windows Desktop Search is now per-user

See Exchange Account Settings to configure how much past email should be synced

Windows Desktop Search

  • Enables the full Outlook search experience that users expect
  • Per-user index files are stored in the user profile so they roam with the user
  • No impact to CPU usage at steady state, minimal impact at sign in

With 100 users on a machine signing in simultaneously, enabling Windows Search has a 0.02% impact on the CPU.

Demo

Desktop of the remote machine is stretched across multiple displays – this demo is with a published desktop hosted in Windows 10 multi-user. Windows Desktop search is enabled. Instant search results in Outlook. OneDrive sync is working in a non-persistent machine – fully functional enabling the full collaboration experience in O365. Selective Sync works here too. Sync is cloud-cloud so the performance is awesome. In Task Manager, we see 3 users signed into a single Windows 10 VM via RDS.

OneDrive

  • Co-authoring and collaborative capabilities in Word, Excel, and PowerPoint, powered by OneDrive.
  • OneDrive sync will run in non-persistent environments
  • Files on-demand capabilities
  • Automatically populate something

Support

  • Search products stay in sync with each other
  • Office 365 Pro Plus will always be supported with Win 10 SAC
  • Office 365 Pro Plus on Windows Server 2016 will be supported through October 2025

Best Practices

Outlook:

  • The OST file should be stored on local storage.
  • Outlook deployed with the primary mailbox in cached Exchange mode, with the OST file stored on network storage and an aggressive archiving strategy to an online archive mailbox.
  • Outlook deployed in cached Exchange mode with the sync slider set to one month.

Office 365:

  • Licensing token roaming: Office 365 Pro Plus 1704 or newer. You can configure the licensing token to roam with the user’s profile or be located on a shared folder on the network. This is especially helpful for non-persistent VDI scenarios.
  • SSO recommended. We recommend using SSO for good and consistent user experience. SSO reduces how often the users are prompted to sign in for activation. With SSO configured, Office activates with the credentials the user uses to sign into Windows if the user is also licensed for O365 Pro Plus.
  • If you don’t use SSO, consider using roaming profiles.

Preview

Sign up: https://aka.ms/wvdpreview

Public preview later in 2018.

GA early 2019.

Q&A

If you want to use RDSH on Windows Server 2019, then Office 365 Pro Plus is not supported. You would have to use the perpetual Office 2019, so you get a lesser product. The alternatives are RDSH on Windows Server 2016 or Windows 10 Multi-User (Azure).

Windows 10 Multi-User is only available in Azure via Windows Virtual Desktop.

A lot of the above optimizations, such as search indexing, rely on the user having a persistent profile on the latest version of Windows 10. So if that profile is a roaming profile or a UPD, then this works, whether in RDS or on physical machines.

Microsoft Ignite 2018–Functions Deep Dive

Functions v2.0 GA

  • New functions quick starts by language
  • Updated runtime built on .NET Core 2.1
  • .NET function loading changes
  • New extensibility model
  • Run code from a package
  • Tooling updates: CLI, Visual Studio and VS Code
  • Durable functions GA

Differences From v1.0

There is a long list online:

  • Moved from .NET Framework 4.7.1 to .NET Core 2.1
  • Added assembly isolation
  • Supports more Node.js versions
  • Languages are external to the host
  • Supports webhooks as triggers
  • Single language per function app instead of multiple
  • Use just Application Insights for observing code performance
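
To kick the tyres on the v2 runtime, creating a consumption-plan function app only takes a couple of commands; a minimal sketch with placeholder names:

  # Function apps need a storage account (all names are placeholders).
  az storage account create --name stfuncv2demo --resource-group demo-rg \
    --location westeurope --sku Standard_LRS

  # Create a consumption-plan function app on the .NET Core based v2 runtime.
  az functionapp create --name demo-func-v2 --resource-group demo-rg \
    --storage-account stfuncv2demo \
    --consumption-plan-location westeurope \
    --runtime dotnet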

Binding and Integrations

  • SDK functions: HTTP/Timer
  • Storage
  • Service Bus
  • Event Hubs
  • Cosmos DB
  • Event Grid
  • And more

And then there were lots of bullet points to explain the architecture that didn’t really explain it. A picture tells a thousand words.

Planning Network Security For Your Mission-Critical Workloads With Virtual Networks

Speakers: Anitha Adusumilli and Mario Lopez

Networking ensures that data remains in your private space in the cloud. So it’s not just a VM thing.

Understanding Cloud Challenges

  • Dynamic, scalable workloads – no fixed network perimeter
  • Attack vectors based on application access patterns
  • Risk of data exposure to exploits, with a mix of IaaS, PaaS, and SaaS services

Cloud network security is evolving as the apps change!

Planning Network Security in Azure

  • Similar controls as on-premises.
  • Pick your network security offerings
  • Layer and scale
  • More flexible than on-premises – faster to deploy/tear down
  • Azure offers managed services

You can build a VNet and add subnets as security boundaries. You can add peered VNets locally and in other regions. And you might have external connections via VPN/ExpressRoute.

There are a mixture of Azure-native and third-party security offerings.

Application Access Patterns

Use these to decide what network security solution to pick. Probably will be a mixture of the below.

  • Service endpoints
  • NSGs
  • ASGs
  • User-defined routes
  • DDoS Protection
  • WAF
  • Azure Firewall
  • NVAs

Security with Azure Services

VMs don’t need public IPs. However, when you use Azure services, e.g. Azure SQL, they have public IPs. This might require you to allow outbound connections that you might not have allowed before. With a default deployment, anyone with rights can access the service from anywhere. But if you add the services to the VNet via service endpoints and apply the service firewalls (e.g. Azure SQL’s), then you can restrict access to these platform services.
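
To make that concrete with Azure SQL, a minimal Azure CLI sketch (the server, VNet, and subnet names are placeholders):

  # Turn on the Microsoft.Sql service endpoint on the application subnet.
  az network vnet subnet update --resource-group demo-rg \
    --vnet-name demo-vnet --name app-subnet \
    --service-endpoints Microsoft.Sql

  # Allow that subnet through the Azure SQL logical server's firewall.
  az sql server vnet-rule create --resource-group demo-rg \
    --server demo-sqlserver01 --name allow-app-subnet \
    --vnet-name demo-vnet --subnet app-subnet

  # Optionally remove the "Allow Azure services" rule; the well-known rule name is assumed here.
  az sql server firewall-rule delete --resource-group demo-rg \
    --server demo-sqlserver01 --name AllowAllWindowsAzureIps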

Two patterns:

  • Add services to a VNet where the VNet is all that can access the service
  • Add services to a VNet to allow private access, but public access is also possible.

Pattern 1: Deploy services into VNet

For example, an App Service Environment (ASE) is deployed into a subnet.

Security:

  • NSGs
  • NVAs
  • User-defined routing can control the direction of traffic, e.g. a private deployment can only route via a gateway (forced tunnelling)
  • Services in Azure might require outbound access from your VNet. Use Service Tags to limit outbound traffic to local service.

New service tags are being added.

Azure Web Apps will be getting preview support soon – an alternative to P2S VPN.

Pattern 2: Service Endpoints

  • Extend VNet identity to the service
  • Secure your critical Azure resources to only your VNet
  • Traffic remains on the Microsoft backbone

How to Secure Your Resources Using Service Endpoints

Normal flow in new setup:

  1. Set the service endpoint on your subnet
  2. Lock your service resource to your subnet

One-Time Migration:

  1. Add the VNet rule without the endpoint
  2. Set the endpoint on the subnet
  3. Remove the public IP setting

All scenarios: Remove “Allow All Azure Services” or “Allow All” settings.
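
A rough Azure CLI version of that flow for a storage account (all names and the IP range are placeholders):

  # 1. Turn on the Microsoft.Storage service endpoint on the subnet.
  az network vnet subnet update --resource-group demo-rg \
    --vnet-name demo-vnet --name app-subnet \
    --service-endpoints Microsoft.Storage

  # 2. Lock the storage account to that subnet and deny everything else.
  az storage account network-rule add --resource-group demo-rg \
    --account-name demostorage01 --vnet-name demo-vnet --subnet app-subnet
  az storage account update --resource-group demo-rg \
    --name demostorage01 --default-action Deny

  # 3. If on-premises access is needed, add the public NAT IPs.
  az storage account network-rule add --resource-group demo-rg \
    --account-name demostorage01 --ip-address 203.0.113.10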

Service Endpoint: Scaling Security

  • Resource locked to a VNet: No access from other VNets, the Internet, or on-premises.
  • Permit more VNets: Turn on service endpoints on the VNets and add them under “Virtual networks” on the resource.
  • Permit on-premises: Add the on-prem NAT IPs under “Firewalls” on the resource.

Careful – locking network access down can break Azure services, such as backup. There are docs for the workarounds – ask Anitha Adusumilli.

Stitching Services Together

  • Secure Azure resources to managed service subnets with endpoints
  • More

Securing VNet traffic: Service Tags in NSGs

  • Restrict network access to just the Azure services you use.
  • Maintenance of IP addresses for each tag provided by Azure (Service Tags)
  • Support for global and regional tags (varies by service)
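
For example, an outbound rule pair that only allows traffic to regional Azure Storage and blocks other Internet-bound traffic; the names, priorities, and the region tag are placeholders:

  # Allow outbound traffic only to Azure Storage in one region, using a regional service tag.
  az network nsg rule create --resource-group demo-rg --nsg-name app-nsg \
    --name AllowStorageWestEurope --priority 100 --direction Outbound --access Allow \
    --protocol '*' --source-address-prefixes VirtualNetwork \
    --destination-address-prefixes Storage.WestEurope --destination-port-ranges '*'

  # Deny all other outbound Internet traffic.
  az network nsg rule create --resource-group demo-rg --nsg-name app-nsg \
    --name DenyInternetOutbound --priority 200 --direction Outbound --access Deny \
    --protocol '*' --source-address-prefixes VirtualNetwork \
    --destination-address-prefixes Internet --destination-port-ranges '*'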

Service endpoints: Data-Exfiltration Risk

  • NSG service tags not enough to prevent data exfiltration from VNet
  • Access to unauthorized accounts possible

Option 1: filtering with Azure Firewall or NVAs

  • Service endpoints bypass NVAs for service traffic, if set on originating subnet
  • Optionally, continue using NVAs for auditing/filtering service traffic
  • More

Service Endpoint Policies

  • Prevent unauthorized access to storage accounts
  • Restrict VNet access to specific Azure Storage accounts
  • Granular access control over service endpoints
  • In West Central US and West US 2 today

Demo: Service Endpoint Policies

She has a VNet with a subnet. A service endpoint is turned on for Storage (all accounts) on the subnet. She only wants to allow access to a single storage account, and adds that storage account to the subnet’s service endpoint configuration. She logs into a VM in the subnet and runs Storage Explorer: she can access files in the configured storage account, but another storage account can also be accessed. She goes to Service Endpoint Policies – a top-level resource, like NSGs. She adds a new policy, adds it to a resource group, and names it. She sets a scope – all storage accounts, all accounts in a resource group, or a specific storage account – and picks the allowed storage account. She associates the policy with the subnet – like an NSG. Now, in the VM, only the authorized storage account can be accessed in Storage Explorer.
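
A sketch of the same steps with the Azure CLI; the policy, VNet, and storage account names are placeholders, and the feature was only available in a couple of regions at the time:

  # Create a service endpoint policy and scope it to a single storage account.
  az network service-endpoint policy create --resource-group demo-rg --name storage-sep

  az network service-endpoint policy-definition create --resource-group demo-rg \
    --policy-name storage-sep --name allow-demostorage01 \
    --service Microsoft.Storage \
    --service-resources "/subscriptions/<subscription-id>/resourceGroups/demo-rg/providers/Microsoft.Storage/storageAccounts/demostorage01"

  # Associate the policy with the subnet that has the Storage service endpoint.
  az network vnet subnet update --resource-group demo-rg \
    --vnet-name demo-vnet --name app-subnet \
    --service-endpoints Microsoft.Storage \
    --service-endpoint-policy storage-sep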

Switch to Mario for part 2.

Securing Access From Internet

  • DDoS attacks
  • Web Application Vulnerabilities

New in DDoS Standard

  • Attack analysis
  • Rapid Response – Specialized rapid response team support during active attacks (via support ticket). Custom mitigation policy configuration.
  • Azure Security Center Integration – intelligent DDoS protection virtual network recommendation

New in WAF

They’re flattening the number of subnets using ASGs – tiers of the app in one subnet, but rules based on ASGs instead of subnets. Subnets are then deployed for Edge/DMZ and app. They use ASGs for micro-segmentation.
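
A minimal sketch of that ASG-based micro-segmentation with the Azure CLI; the ASG, NSG, NIC, and port choices are placeholders:

  # Create ASGs for the web and database tiers.
  az network asg create --resource-group demo-rg --name web-asg
  az network asg create --resource-group demo-rg --name db-asg

  # Allow SQL traffic from the web tier to the database tier, regardless of subnet.
  az network nsg rule create --resource-group demo-rg --nsg-name app-nsg \
    --name AllowWebToDb --priority 110 --direction Inbound --access Allow \
    --protocol Tcp --source-asgs web-asg --destination-asgs db-asg \
    --destination-port-ranges 1433

  # Join a VM's NIC to an ASG on its IP configuration.
  az network nic ip-config update --resource-group demo-rg \
    --nic-name web01-nic --name ipconfig1 \
    --application-security-groups web-asg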

Putting it All Together

Microsoft Ignite 2018–Azure Service Fabric Mesh: The Serverless Microservices Platform

Speakers: Chacko Daniel and Deep Kapur.

This is a true dev session … but I’m here and I haven’t written an original line of programming since 1998. Why am I here? Because Service Fabric is cool and it fascinates me. If I wrote code, Service Fabric (along with functions for atomic trigger/action pieces and app services for interface) would be my choice.

Introduction to Service Fabric

  • Mission critical workloads
  • Used for Azure SQL, Power BI, Cosmos DB, IoT Hub, Event Hub, Skype, Cortana, and more.

Offerings

  • Service Fabric on Windows/Linux – bring your own infrastructure
  • Azure Service Fabric – runs on dedicated VM scale sets
  • Azure Service Fabric Mesh – serverless

Future of Application Development

  • Polyglot services connected by L7 networks
  • Multi-OS environments
  • Deploy anything in a container
  • Bring your own network to connect to your other services
  • State management and other stuff

Service Fabric Mesh (Public Preview Currently)

  • Focus on applications
  • Microservice and container orchestration
  • Pay for only what you use
  • Intelligent traffic routing
  • Azure manages all infrastructure
  • Auto-scaling on demand
  • Security and compliance
  • Health and monitoring

Mesh Resource Provider Architecture

Inventory Manager takes your input. Cluster allocator finds resources to run your code.

What Can You Use It For?

Ideal for cloud-native applications

  • Any language, any framework
  • Libraries to integrate with your favourite languages
  • Easy H/A state storage with reliable collections
  • Intelligent traffic routing and connectivity

Enable app modernization:

  • Deploy anything and everything in a container
  • Bring your own network
  • More

Demo

An app runs on an SF cluster. Each app is made up of one or more services. A service can be made HA by running it on many nodes in the SF cluster (replicas or load balanced).

There is a mesh application resource. In the summary, we see the services that make up the app and how many replicas there are of each service. He opens one service and we see the replica(s), numbered normally as 0, 1, 2, etc. The status shows a summary of recent events. In Details, we see the physical consumption of the service, the ports (endpoints) it listens on, and environment variables. In Logs, we can see a screen output of app log data.

Service Fabric Resource Model

  • Applications and services
  • Networks
  • Gateways
  • Secrets
  • Volumes
  • Routing rules

Simple declarative way to define an application.

Applications and Services Resources

Services describe how a set of containers run:

  • Container image, environment variables, CPU/memory, etc.
  • And more

Gateway and Networks

Connecting two networks together:

  • L4: TCP
  • L7: HTTP/S

It’s a way of connecting the outside world, Internet or another network you own, to the isolated network of the SF cluster.

This is a service fabric gateway, not a VNet gateway.

Secrets Resource

Bad way: environment variable.

Better way: Use KeyVault.

Inline is in the public preview today, e.g. connection strings. Secrets by reference (key vault) is coming.

Volume Resource

General purpose file storage. The container can attach volumes. Read and write files using normal disk I/O file APIs. Backed by Azure Files storage or a Service Fabric volume disk. The SF volume disk is on the cluster and is faster – it is replicated to the nodes where your service has a replica (stateful service).

Demo Application Architecture

A cloud-based polyglot application demo that they have built. All built on Linux containers.

  • Front End – reactive.
  • Backend: .NET Core and Node.js.
  • Work gets dropped into a queue.
  • A Worker picks up the queue and stores data in persistent storage

That’s the overview done.

They show us a JSON that is used to deploy the SF mesh application: Microsoft.ServiceFabricMesh/applications. Azure Files is being used as file storage. Secrets are being stored inline. A volume disk is also being used for file storage and they define a mount path in the Linux containers of /app/data. There are front end (1), backend (2) and worker services (3) in the application.
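
Deploying a template like that is done with the mesh CLI extension; a minimal sketch, with the resource group, template file, and application/service names as placeholders:

  # The Service Fabric Mesh commands live in a CLI extension.
  az extension add --name mesh

  # Deploy the application described by the JSON template.
  az mesh deployment create --resource-group demo-rg \
    --template-file mesh_app.linux.json

  # Check the application and list a service's replicas.
  az mesh app show --resource-group demo-rg --name demo-mesh-app
  az mesh service-replica list --resource-group demo-rg \
    --app-name demo-mesh-app --service-name frontend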

Auto Scaling

Horizontal scaling of services based on:

  • CPU
  • Memory
  • Application provided custom metrics (later)

Application Upgrade

He uses on-PC Azure CLI (PowerShell also available) to push a code upgrade to the SF application.

Routing Rules Resource

  • Services talk to each other inside the application by hostname.
  • They do not implement platform-specific discovery APIs
  • They do not deal with network-level details.
  • Are unaware of the implementation details of other services

Intelligent traffic routing:

  • Done using “Envoy”
  • Advanced HTTP/S traffic routing with load balancing
  • Proxy handles partition resolution and key hashing

Diagnostics and Monitoring

  • Use your favourite APM platform to monitor apps inside containers, e.g. Azure Application Insights
  • Containers write out stdout/stderr logs to a data volume – can be sucked up by Application Insights
  • Azure Monitor for platform events and container metrics

Reliable Collections – Low Latency Storage

Reliable collections allow you to persist state with failover. Uses transactional storage. Storage on a network introduces a “cost”, e.g. latency. Low latency storage is often preferred.

Demo: Scale-Out

Dumps a load of pictures of cats & dogs. Worker numbers increase from 1 to 40 in seconds for 3 services (120 containers). The pictures are categorized and tagged on the fly.

Pricing

You pay for what you use. Container compute duration:

  • Cores per second
  • Memory in GB per second

Costs depend on the region. Container costs are the same in Azure, irrespective of the Azure offering you get them from. So you choose a container offering based on suitability, not price.

Stateful resources:

  • Volume disk: billed on disk size and max IOPS/throughput per disk. Paid for per month.

Reliable collections:

  • Billed per hour based on the size of the reliable collection and the amount of provisioned IOPS.

Recap

What they see: Gaming, media sharing, mission critical business SaaS, IoT data processing for millions of devices, low latency storage applications.

Roadmap

  • Managed service ID
  • Secrets from key vault
  • Routing rules to/from applications
  • Applications across availability zones
  • Persisted state via reliable collections and volume drives
  • Bring your own network to connect to other systems
  • Tooling integration

GA is planned for early next year – probably Build 2019. The preview is free to use.

Go live licenses will be given to early adopters.

Microsoft Ignite 2018–Azure Migrate

I arrived late for this session because I was in a meeting. They were doing a demo of Azure Migrate.

Azure Migrate for Discovery and Assessment

  • Agentless discovery
  • TCO calculation
  • Right-size and suitability
  • Azure Platform

They are “announcing” support for Hyper-V – it’s still in limited private preview.

Third Party Solutions

Cloudamize is just an assessment tool:

  • In-depth performance analysis
  • Right-size compute and storage options.
  • TCO calculations
  • Agentless
  • Assessments for migration to Azure SQL
  • Integrates into ASR to do the migration

Migration solutions:

  • ASR
  • Zerto
  • CloudEndure

Azure Site Recovery (ASR)

  • Easy to onboard – appliance wizard for VMware
  • Broad coverage for Windows and Linux
  • UEFI support for VMware and physical machines – converted to BIOS
  • W2008 32-bit support

They do a demo of Zerto for migrations. Then they demo CloudEndure.

Futures

They’re trying to simplify the process. Starting a limited private preview:

Assess > Migrate & modernize > optimize > secure & manage.

It’s going to use the new tabbed UI in the Azure Portal. You can import an assessment into a migration. Pick the ready machines that you want to migrate, and optionally apply HUB and override VM sizing, OS disk, and availability set membership. This migration experience will ideally be used by the 3rd parties too.

Microsoft Ignite 2018–Microsoft Information Protection

Speakers:

Questions to Microsoft

  • My data is scattered. I might not even know where it is.
  • I cannot create unified policies for my data security
  • How do I protect PII for GDPR, etc.

Microsoft Information Protection is a suite of solutions, designed from the ground up, to protect data no matter where it is.

750 regulatory bodies around the world making up to 200 new data security decisions every month.

By 2025, there will be 165 zettabytes of data to manage and secure.

Microsoft Information Protection

  • Discover
  • Classify
  • Protect
  • Monitor

Across:

  • Devices
  • Apps
  • Cloud services
  • On-premises

MS Solution

  • Unified solution to discover, classify and label
  • Automatically apply policy-based actions
  • Proactive monitoring to identify risks
  • Broad coverage across locations

The Way The MS Solution Was

Point solutions in market today:

  • O365 information protection
  • Windows information protection
  • Azure information protection

An incomplete solution because they are point solutions.

MIP unifies these solutions. A new unified UI.

Specialised Workspaces

  • Microsoft 365 Security Center: security.microsoft.com
  • Microsoft 365 Compliance Center: compliance.microsoft.com

Clients

Obvious support in Office on Windows. Now on Office for Mac, and coming to Office Mobile. Should be GA on all clients by the end of the year.

SharePoint Online will be showing labels, etc when creating sites/groups. Can apply retention labels in SharePoint Online too – auto-classification will determine if a retention policy should be applied.

Beyond Office 365

Windows Information Protection is a Win10 feature. It knows the difference between company and personal data and can apply rules to company data. Since 1809, WIP will understand MIP labels applied to a file. If you try to copy/paste info from a protected file to Twitter, for example, Windows 10 will prevent that. Or if you try to attach the file in personal Outlook, or Gmail, etc. It will also prevent a copy to USB – no more superglue!

Compatibility for Existing AIP Customers

  • New M365 E3 customers can configure labels using the SCC portal. They can try out the MIP-enabled AIP add-in on Windows. Support is coming to Mac and mobile.
  • M365 or existing AIP customer can use AIP portal.

Customers will be transitioned over time.

Azure Information Protection Scanner

Scan:

  • File server shares
  • SharePoint Server 2010, 2013, 2016

Can discover data and force labelling/protection of documents.

I got bored here – “demos” that were just screenshots on PowerPoint. Weak!