Microsoft Ignite 2018–Azure Compute

Speaker: Corey Sanders

95% of the Fortune 500 are building on Azure. Adobe is building on open source – one of the biggest PostgreSQL customers. NeuroInitiative is using GPUs to simulate drug tests for Alzheimer’s treatments.

There’s no one way to use Azure. Find the bits you want to use and deploy them in the way that suits you.

Infrastructure for Every Workload

54 announced regions. Availability Zones in US, Europe, and Asia, more regions coming soon.

New VM Portfolio

NDv2: 8 x NVIDIA Tesla V100 NVLINK GPUs, 40 Intel Skylake cores, 672 GB RAM. For AI, ML, and HPC workloads.

NVv2: NVIDIA Tesla M60 GPUs. Premium SSD support, up to 448 GB RAM. For CAD, gaming, and 3D design.

HB: Highest memory bandwidth in the cloud. 60 AMD EPYC cores, 100 Gbps InfiniBand. For memory bandwidth-intensive HPC workloads.

HC: Up to 3.7 GHz clock speed. 44 Intel Skylake cores, 100 Gbps InfiniBand. For CPU-intensive HPC workloads.

Storage

200 trillion objects. 160 trillion transactions per month.

Standard SSD is GA. Ultra SSD in preview – sub millisecond latency, up to 160,000 IOPS and 2,000 MB/s throughput.

A demo of Ultra SSD. He opens an E64s_v3 VM with an Ultra SSD and runs IOMETER, getting nearly 80,000 IOPS at 0.6 ms latency – from a single disk! Demo 2 uses a new VM type: IOMETER gets 161,000 IOPS on a single Ultra SSD without striping or caching – durable writes. Double the performance of the competition.

There will be a single VM SLA for VMs running Ultra SSD.

Networking

100,000 miles of fibre to connect the 54 regions with 130+ edge sites.

ExpressRoute Global Reach lets you link your ExpressRoute circuits together, so the Microsoft WAN becomes your WAN. Virtual WAN is GA. Front Door uses those edge sites as a globally available, secure entry point to web services in Azure. And Azure ExpressRoute Direct offers 100 Gbps connections to Azure.

SAP

24 TB RAM physical machines. 12 TB RAM VMs on the way. 20+ certified solution architectures on Azure.

Containers

Reasons:

  • Agility
  • Portability
  • Density
  • Rapid Scale

A new feature in Kubernetes (K8s) called Virtual Node allows burst capacity based on Azure Container Instances. The virtual node appears as a node in the cluster but is backed by ACI, so it can be loaded up with container instances when demand spikes. You get per-second billing to deal with unusual loads.

Hybrid

Microsoft offers the only truly consistent hybrid experience: Azure Stack, DevOps, data, AD, and security/management.

A key piece of this is Windows Server 2019, which has hybrid built in: Azure Backup, ASR, Storage Migration Service, and Azure Network Adapter.

Erin Chapple comes out to demo Windows Admin Center.

Windows Server 2008/R2

End of life comes in January 2020, and for SQL Server 2008 on July 9, 2019. If you migrate these workloads to Azure, you’ll get 3 years of free security fixes – you’ll have to pay if you stay on-premises.

Edge

Microsoft has announced availability of the first Azure Sphere dev kit.

Data Box Edge is also announced. You can pre-process data on-prem before moving it to the cloud. It has FPGAs built in.

Azure Stack will support more nodes in the coming weeks. Event hubs and Blockchain deployment coming in preview this year.

Security & Management

Starts with the physical and software security of Azure and extends out to the edge and on-premises. 1.2 billion devices and 750,000 user authentications offer a lot of data for analysis.

  • 85+ compliance offerings.
  • 40+ industry specific regulated offerings
  • Trusted, responsible, and inclusive cloud

New announcements:

  • Confidential computing is a new series of VMs – DC-Series. The data is protected even from Azure when being processed by the CPU.
  • Azure Firewall is GA.
  • Azure Security Center improvements.

Governance

Governance normally restricts and slows down. Azure Policy doesn’t slow you down. A new addition, Blueprints, plans out deployments that are known and trusted. DevOps can deploy a blueprint to stay within the guardrails. It’s ARM template + Policy, resource group(s), and RBAC.

In a demo, we see a new Azure Policy feature – the ability to remediate variance.

Migration

Gary Downy, CTO of JB Hunt, comes on stage. It’s a trucking company that also does last-mile and rail transport. They face disruptive technologies such as driverless vehicles, plus a shortage of drivers. Their on-prem systems wouldn’t scale with the business; now they use Azure DevOps, Git, and Kubernetes for most of their systems.

Start with assessment. Then migrate. Then optimize and transition into management & security (ownership).

Tools:

  • Azure Migrate now supports Hyper-V and VMware.
  • Azure Database Migration Service, which handles Azure SQL, MySQL, PostgreSQL, and MongoDB.

 

Microsoft Ignite 2018–Azure Keynote

I am live blogging this session. Please press refresh to see more.

Speakers: Scott Guthrie, Julia White, and probably a cast of others.

Julia White

She repeats some stuff we’ve already heard.

Scott Guthrie

The man in charge of Azure and Windows Server comes out. You can use entry-level, burstable machines; there are big machines for the high end, and hardware-enhanced machines.

Storage

New disk option, Ultra SSD, delivers twice the performance of the next competitor, and scales up to 64 TB per disk and 160,000 IOPS with sub-millisecond latency.

4500 peering locations and 10,000,000+ miles of fibre networking in the Microsoft WAN. ExpressRoute offers speeds of up to 100 Gbps, which is industry leading. Front Door is a new service that gives you a secure global entry point for globally distributed apps; it uses Azure’s intelligent routing and offers CDN. Azure Virtual WAN and Azure Firewall are also GA.

Azure Data Box, the secure ruggedized disk box, is GA and has some new options. Data Box Heavy expands from the normal 100 TB to 1 PB capacity. Data Box Edge is a new category of appliance for using data locally while it is stored in the cloud. It has offline capabilities.

There is purpose-built hardware in Azure for NetApp, Cray, and SAP. 24 TB RAM bare-metal machines for SAP. 12 TB RAM VMs coming soon.

Windows Server 2019

Windows Server 2019 is announced. Bits coming in October. Native hybrid cloud features. New security enhancements including Defender Threat Protection. Better container images and app compat. Lower-cost storage with Storage Spaces Direct.

Windows Virtual Desktop

Windows 10 multi-user support for this new MS-hosted Windows 10 solution. It appears to be licensed via O365 with Azure VM consumption. Reserved instance (RI) discounts can be applied. In the end, it’s just RDS.

Azure Stack

On-prem Azure, with the usual explanations. Available in 92 countries via the 5 partners. Video and demo of a mini ruggedized Azure Stack kit from Dell for disaster relief and similar tough-environment scenarios. It uses drones as recon devices with edge AI. Next up, an engineer with a HoloLens goes “on site”. A remote expert can see what she can see and help her do the work. It ties into Dynamics to find and recommend an expert.

Linux

Red Hat is one of their “deepest partners”. A joint OpenShift on Azure offering is coming to Azure Stack.

Security + Management

Azure Security Center is the central dashboard for managing security in Azure and in hybrid. A new Secure Score is being surfaced through ASC to measure your deployments.

Blueprints are being announced. It’s a combination of ARM template, resource group, RBAC, and Policy in one “template” that you associate with a management group.

Julia White came back out with a Walmart guy to do chat show time. Yawn.

Azure Migrate

It now supports Hyper-V.

Azure Learn

Free step-by-step tutorials. Hands-on learning and coding environments. Knowledge checks and achievements.

Serverless Based Computing

Built-in HA, security, and patching by MS.

Jeff Hollan comes out to do a demo of app migration/modernization. In Visual Studio, he clicks a menu to add Docker support to an existing set of code. He can choose Windows or Linux. VS builds a Docker container to run the application and runs it on his PC for live debugging. He sets a breakpoint and, while the container is running, steps through the code in VS. It can be published to K8s from VS too. A dashboard built from tens of thousands of data sources requires batch jobs. To make it live, he’s going to use Functions v2.0, GA today in Azure, which offers twice the performance of Functions v1.0. As serverless, it can scale to handle the tens of thousands of events that happen every second.

GitHub

Largest developer community in the world. 28 million unique developers and 85 million code repositories. Will be allowed to stay open and independent.

Azure DevOps

Next gen VSTS. Boards, Pipelines, Test Plans, Artifacts, Lab Services, and Repos – all in Azure.

Data & AI

SQL Server 2019 public preview available. Leverage hardware acceleration. Data classification and labelling, e.g. GDPR. Data masking and hardware encryption.

Azure SQL Managed Instances GA.

Cosmos DB is one of the fastest growing services in Azure. I am not surprised – the scale and HA are fantastic. All the APIs are GA. Multi-master write support (now GA) with anti-entropy makes it easier to build widely dispersed, planet-scale apps. Reserved Capacity will reduce costs by up to 65%. A new lower-cost entry option for smaller databases is announced.

A new analytics service, Azure Data Explorer, is announced for exploring and querying structured and unstructured data. It can query live data like log telemetry, and scales from GBs to PBs.

Two Weeks Of Learning Coming To An End

I’ve been on the road for the last 2 weeks. The first week I spent in Orlando at the Microsoft Ignite conference. The second week, I was one of 600 people to attend the “Intelligent Cloud Architect Boot Camp”, run by Microsoft in Bellevue, WA, in the US Pacific Northwest. It’s been tough to be away from my family for 2 straight weeks, not just on me, but more so on them, but we viewed it as an investment in our future.

I’ve made a career from learning, what Microsoft CEO Satya Nadella calls being a learn-it-all instead of a know-it-all. I tell people in my Azure VM training that I’ve never set up a VLAN, but I can network the sh1t out of Azure – except for BGP routing :) That’s because I’ve studied, tried, learned, and re-learned. In fact, in this cloud era, I think Nadella’s phrase should be modified to relearn-it-all. These two weeks have taught me so much, and it’s going to be information that makes a difference to my employer and our customers. The folks who sit back and don’t learn – well, they’re the walking bankrupts that outsource their services to their competitors disguised as their service providers, or who lose their customers to other more agile and aware companies. Times have changed. Sitting back and attending a briefing every 3-6 years won’t cut it. You have to learn not just to stay ahead, but even to keep up in this cloud era, and that’s just the way it is. We all need to adapt – I’ve never previously deployed a lot of the resource types that are used in this mostly-serverless web application that I got working in a hackathon:

image

I’m going through another shift in my career – no it’s not Azure. That happened over 3 years ago. No; I’m learning more from the Dev side. Last week I found myself learning about IoT, big data, and analytics. This week I was all in on containers, microservices, and serverless computing. I became aware of things like Mesos, Kubernetes, and Jenkins. I used Swagger to discover and test APIs for the first time.

One of the highlights of the past two weeks was talking to people who’ve read this blog, seen me speak, heard me on a podcast, or read my content on Petri.com. I get a bounce in my step when someone thanks me for something that I was able to help with, and some of the compliments were very flattering. Thank you! One of the “oh crap!” moments was when I was standing near some MS staff and I overheard one of them say “That’s the Aidan Finn?” – that’s when the fight or flight instinct kicks in! One of the cool aspects of this boot camp was the cross-learning that went on. We did group whiteboard sessions, which were the best things we did all week: a table of 6-8 people given a challenge to design something that no one person knows fully. Sometimes you lead, sometimes you learned. And I learned loads, so thank you to those people who taught through collaboration.

To be honest, between jet lag, learning, 8am-6pm classes followed by re-writing training until 11pm, and being away from home for so long, I’m exhausted. I can’t wait to get on my plane home, hopefully sleep a long sleep on the redeye, and finally get home to give my family a big hug.

On Monday, Azure training courses continue at the office, supplemented with new information. Then we dive into working with a new type of customer, and I cannot wait to show them the things that I have learned. And then there’s something else … something new … something that I’ll share soon :)

Understanding Big Data on Azure–Structured, Unstructured, and Streaming

These are my notes from the recording of this Ignite 2017 session, BRK2293.

Speaker: Nishant Thacker, Technical Product Manager – Big Data

This Level-200 overview session is a tour of big data in Azure, it explains why the services were created, and what is their purpose. It is a foundation for the rest of the related sessions at Ignite. Interestingly, only about 30% of the audience had done any big data work in the past – I fall into the other 70%.

What is Big Data?

  • Definition: A term for data sets that are so large or complex that traditional data processing application software is inadequate to deal with them. Nishant stresses “complex”.
  • Challenges: capturing (velocity of) data, data storage, data analysis, search, sharing, visualization, querying, updating, and information privacy. So … there are a few challenges :)

The Azure Data Landscape

This slide is referred to for quite a while:

image

Data Ingestion

He starts with the left-top corner:

  • Azure Data Factory
  • Azure Import/Export Service
  • Azure CLI
  • Azure SDK

The first problem we have is data ingestion into the cloud or any system. How do you manage that? Azure can manage ingestion of data.

Azure Data Factory is a scheduling, orchestration, and ingestion service. It allows us to create sophisticated data pipelines from the ingestion of the data, through to processing, through to storing, through to making it available to end users to access. It does not have compute power of its own; it taps into other Azure services to deliver any required compute.

The Azure Import/Export service can help bring incremental data on board. You can also use it to bulk-load into Azure. If you have terabytes of data to upload, bandwidth might not be enough, so you can securely courier data on disk to an Azure region.

The Azure CLI is designed for bulk uploads to happen in parallel. The SDKs can be built into your code, so you can generate the data in your application in the cloud instead of uploading it to the cloud.

Operational Database Services

  • Azure SQL DB
  • Azure Cosmos DB

The SQL database offerings cover SQL Server, MySQL, and PostgreSQL.

Cosmos DB is the more interesting one – it’s NoSQL and offers global storage. It also supports 4 programming models: MongoDB, Gremlin/Graph, SQL (DocumentDB), and Table. You have the flexibility to bring in data in its native form, and data can be accessed in an operational environment. Cosmos DB also plugs into other parts of Azure, such as Azure Functions or Spark (HDInsight), which makes it more than just an operational database.

Analytical Data Warehouse

  • Azure SQL Data Warehouse

When you want to do reporting and dashboards from data in operational databases then you will need an analytical data warehouse that aggregates data from many sources.

Traits of Azure SQL Data Warehouse:

  • Can grow, shrink, and pause in seconds – up to 1 petabyte
  • Full enterprise-class SQL Server – you can migrate databases and bring your scripts with you. Independent scaling of compute and storage in seconds
  • Seamless integration with Power BI, Azure Machine Learning, HDInsight, and Azure Data Factory

NoSQL Data

  • Azure Blob storage
  • Azure Data Lake Store

When your data doesn’t fit into the rows-and-columns structure of a traditional database, you need specialized big data stores – for capacity and for unstructured reading/sorting.

Unstructured Data Compute Engines

  • Azure Data Lake Analytics
  • Azure HDInsight (Spark / Hadoop): managed clusters of Hadoop and Spark with enterprise-level SLAs with lower TCO than on-premises deployment.

When you get data into a big unstructured store such as Blob or Data Lake, you need specialized compute engines for the complexity and volume of the data. This compute must be capable of scaling out because you cannot wait hours/days/months to analyse the data.

Ingest Streaming Data

  • Azure IoT Hub
  • Azure Event Hubs
  • Kafka on Azure HDInsight

How do you ingest this real-time data as it is generated? You can tap into event generators (e.g. devices) and buffer up data for your processing engines.

Stream Processing Engines

  • Azure Stream Analytics
  • Storm and Spark streaming on Azure HDInsight

These systems allow you to process streaming data on the fly. You have a choice of “easy” or “open source extensibility” with either of these solutions.

Reporting and Modelling

  • Azure Analysis Services
  • Power BI

You have cleansed and curated the data, but what do you do with it? Now you want some insights from it. Reporting & modelling is the first level of these insights.

Advanced Analytics

  • Azure Machine Learning
  • ML Server (R)

The basics of reporting and modelling are not new. Now we are getting into advanced analytics. Using data from these AI systems we can predict outcomes or prescribe recommended actions.

Deep Learning

  • Cognitive Services
  • Bot Service

Taking advanced analytics to a further level by using these toolkits.

Tracking Data

  • Azure Search
  • Azure Data Catalog

When you have such a large data estate you need ways to track what you have, and to be able to search it.

The Azure Platform

  • ExpressRoute
  • Azure AD
  • Network Security Groups
  • Azure Key Management Service
  • Operations Management Suite
  • Azure Functions (serverless compute)
  • Visual Studio

You need a platform that delivers these enterprise capabilities in the best way possible, in a compliant manner.

Big Data Services

Nishant says that the darker-shaded services are the ones usually being talked about when people talk about Big Data:

image

To understand what all these services are doing as a whole, and why Microsoft has gotten into Big Data, we have to step all the way back. There are 3 high-level trends that amount to a kind of industrial revolution, making data a commodity:

  • Cloud
  • Data
  • AI

We are on the cusp of an era where every action produces data.

The Modern Data Estate

image

There are 2 principles:

  • Data on-premises and
  • Data in the cloud

Few organizations are just 1 or the other; most span both locations. Data warehouses aggregate operational databases. Data Lakes store the data used for AI, and will be used to answer the questions that we don’t even know of today.

We need three capabilities for this AI functionality:

  • The ability to reason over this data from anywhere
  • You have the flexibility to choose – MS (simplicity & ease of use), open source (wider choice), programming models, etc.
  • Security & privacy, e.g. GDPR

Microsoft has offerings for both on-premises and in Azure, spanning MS code and open source, with AI built-in as a feature.

Evolution of the Data Warehouse

image

There are 3 core scenarios that use Big Data:

  • Modern DW: Modernizing the old concept of a DW to consume data from lots of sources, including complexity (big data)
  • Advanced Analytics: Make predictions from data using Deep Learning (AI)
  • IoT: Get real time insights from data produced by devices

Implementing Big Data & Data Warehousing in Azure

Here is a traditional DW

image

Data from operational databases are fed into a single DW. Some analysis is done and information is reported/visualized for users.

SQL Server Integration Services, which can run as part of Azure Data Factory, allows you to consume data from your multiple operational assets and aggregate it in a DW.

Azure Analysis Services allows you to build tabular models for your BI needs, and Power BI can be used to report on and visualize those models.

If you have existing huge repositories of data that you want to bring into a DW then you can use:

  • Azure CLI
  • Azure Data Factory
  • BCP Command Line Utility
  • SQL Server Integration Services

This traditional model breaks when some of your data is unstructured. For example:

image

Structured operational data is coming in from Azure SQL DB as before.

Log files and media files are coming into blob storage as unstructured data – the structure of queries is unknown and the capacity is enormous. That unstructured data breaks your old system, but you still need to ingest it because you know that there are insights in it.

Today, you might only know some questions that you’d like to ask of the unstructured data. But later on, you might have more queries that you’d like to create. The vast economy of scale of Azure storage makes this feasible.

ExpressRoute will be used to ingest data from an enterprise if:

  • You have security/compliance concerns
  • There is simply too much data for normal Internet connections

Back to the previous unstructured data scenario. If you are curating the data so it is filtered/clean/useful, then you can use PolyBase to ingest it into the DW. Normally, the task of cleaning/filtering/curating is too huge to do on the fly.

image

HDInsight can tap into the unstructured blob storage to clean/curate/process it before it is ingested into the DW.

What does HDInsight allow you to do? You forget whether the data was structured/unstructured/semi-structured. You forget the complexity of the analytical queries that you want to write. You forget the kinds of questions you would like to ask of the data. HDInsight allows you to add structure to the data using some of its tools. Once the data is structured, you can import it into the DW using PolyBase.

Another option is to use Azure Functions instead of HDInsight:

image

This serverless option can suit if the required manipulation of the unstructured data is very simple. This cannot be sophisticated – why re-invent the wheel of HDInsight?

Back to HDInsight:

image

Analytical dashboards can tap into some of the compute engines directly, e.g. tap into raw data to identify a trend or do ad-hoc analytics using queries/dashboards.

Facilitating Advanced Analytics

So you’ve got a modern DW that aggregates structured and unstructured data. You can write queries to look for information – but we want deeper insights.

image

The compute engines (HDInsight) enable you to use advanced analytics. Machine learning can only be as good as the quality and quantity of data that you provide to it – that’s the compute engine’s job. The more data machine learning has to learn from, the more accurate the analysis will be, and if the data is clean, garbage results won’t be produced. To do this with TBs or PBs of data, you need the scale-out compute engine (HDInsight) – a single VM just cannot do this.

Some organizations are so large or so specialized that they need even better engines to work with:

image

Azure Data Lake Store replaces blob storage at greater scales. Azure Data Lake Analytics replaces HDInsight and offers a developer-friendly T-SQL-like and C# environment (U-SQL). You can also write Python and R models. Azure Data Lake Analytics is serverless – there are no clusters as there are in HDInsight, so you can focus on your service instead of being distracted by monitoring.

Note that HDInsight works with interactive queries against streaming data, whereas Azure Data Lake Analytics is based on batch jobs.

image

You have the flexibility of choice for your big data compute engines:

image

Returning to the HDInsight scenario:

image

HDInsight, via Spark, can integrate with Cosmos DB. Data can be stored in Cosmos DB for users to consume. Also, data that users are generating and storing in Cosmos DB can be consumed by HDInsight for processing by advanced analytics, with learnings being stored back in Cosmos DB.

Demo

He opens an app on an iPhone. It’s a shoe sales app. The service is (in theory) using social media, fashion trends, weather, customer location, and more to make a prediction about what shoes the customer wants. Those shoes are presented to the customer, with the hope that this will simplify the shopping experience and lead to a sale on this app. When you pick a shoe style, the app predicts your favourite colour. If you view a shoe but don’t buy it, the app can automatically entice you with promotional offers – stock levels can be queried to see what kind of promotion is suitable, e.g. trying to shift less popular stock by giving you a discount for an in-store pickup where stock levels are too high and it would cost the company money to ship stock back to the warehouse. The customer might also be tempted to buy some more stuff when in the shop.

He then switches to the dashboard that the marketing manager of the shoe sales company would use. There’s lots of data visualization from the Modern DW, combining structured and unstructured data – the latter can come from social media sentiment, geo locations, etc. This sentiment can be tied to product category sales/profits. Machine learning can use the data to recommend promotional campaigns. In this demo, choosing one of these campaigns triggers a workflow in Dynamics to launch the campaign.

Here’s the solution architecture:

image

There are 3 data sources:

  • Unstructured data from monitoring the social and app environments – Azure Data Factory
  • Structured data from CRM (I think) – Azure Data Factory
  • Product & customer profile data from Cosmos DB (Service Fabric in front of it servicing the mobile apps).

HDInsight is consuming that data and applying machine learning using R Server. Data is being written back out to:

  • Blob storage
  • Cosmos DB – Spark integration

The DW consumes two data sources:

  • The data produced by HDInsight from blob storage
  • The transactional data from the sales transactions (Azure SQL DB)

Azure Analysis Services then provides the ability to consume the information in the DW for the Marketing Manager.

Enabling Real-Time Processing

This is when we start getting IoT data in, e.g. from sensors – another source of unstructured data that can come in big and fast. We need to capture the data, analyse it, derive insights, and potentially do machine learning analysis to take actions on those insights.

image

Event Hubs can ingest this data and forward it to HDInsight – stream analysis can be done using Spark Streaming or Storm. Data can be analysed by Machine Learning and reported in real time to users.

So the IoT data is:

  • Fed into HDInsight for structuring
  • Fed into Machine Learning for live reporting
  • Stored in Blob Storage
  • Consumed by the DW using PolyBase for BI

There are alternatives to this IoT design.

image

You should use Azure IoT Hub if you want:

  • Device registration policies
  • Metadata about your devices to be stored

If you have some custom operations to perform, Azure HDInsight (Kafka) can scale to millions of events per second. It can apply custom logic that cannot be done by Event Hubs or IoT Hub.

We also have flexibility of choice when it comes to processing.

image

Azure Stream Analytics gives you ease-of-use versus HDInsight. Instead of monitoring the health & performance of compute clusters, you can use Stream Analytics.

The Azure Platform

The platform of Azure wraps this package up:

  • ExpressRoute: Private SLA networking
  • Azure Data Factory: Orchestration of the data processing, not just ingestion.
  • Azure Key Vault: Securely storing secrets
  • Operations Management Suite: Monitoring & alerting

And now that your mind is warped, I’ll leave it there :) I thought it was an excellent overview session.

Azure IaaS Design & Performance Considerations–Best Practices & Learnings From The Field

Speaker: Daniel Neumann, TSP – Azure Infrastructure, Microsoft (ex-MVP).

Selecting the Best VM Size

Performance of each Azure VM vCPU/core is rated using ACU, based on 100 for the Standard A-Series. E.g. D_v2 offers 210-250 per vCPU; H offers 290-300. Note that the D_v3 has lower speeds than the D_v2 because it uses hyperthreading on the host – MS matched this by reducing costs accordingly. Probably not a big deal – DB workloads, which are common on the D-family, care more about thread count than GHz.

Network Performance

Documentation has been improved to show actual Gbps instead of low/medium/high. Higher-end machines can be created with Accelerated Networking (SR-IOV), which can offer very high speeds. Announced this week: the M128s VM can hit 30 Gbps.
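As a rough sketch (not from the session), this is how you could create a NIC with Accelerated Networking using AzureRM-era PowerShell – the resource group, virtual network, subnet, and location names are placeholders, and only supported VM sizes can use the feature:

    # Assumes an existing resource group, virtual network, and subnet; names are placeholders.
    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-demo" -Name "vnet-demo"
    $subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet1"

    # Accelerated Networking (SR-IOV) is enabled per NIC at creation time.
    New-AzureRmNetworkInterface -ResourceGroupName "rg-demo" -Name "nic-demo" `
        -Location "westeurope" -SubnetId $subnet.Id -EnableAcceleratedNetworking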

RSS

RSS is not always enabled by default for Windows VMs. It is on larger VMs, and it is for all Linux machines. It can greatly improve inbound data transfer performance for multi-core VMs.
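Inside the Windows guest OS you can check and enable RSS with the built-in NetAdapter cmdlets; a quick sketch (the adapter name is a placeholder and varies by driver):

    # Check the current RSS state of each NIC in the guest OS.
    Get-NetAdapterRss

    # Enable RSS on a specific adapter if it is disabled.
    Enable-NetAdapterRss -Name "Ethernet"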

Storage Throughput

Storage throughput is listed in the VM sizes. It varies between series and increases as you go up through the sizes. Watch out when using Premium Storage – lower-end machines might not be able to deliver the potential of larger disks or storage pools of disks, so you might need a larger VM size to achieve the performance potential of the disks/pool.

Daniel uses a tool called PerfInsights from MS Downloads to demo storage throughput.

Why Use Managed Disks

Storage accounts are limited to 50,000 IOPS (since 20/9/2017). That limits the number of disks that you can have in a single storage account. If you put too many disks in a single storage account, you cannot get the performance potential of each disk.

Lots of reasons to use managed disks. In short:

  • No more storage accounts
  • Lots more management features
  • FYI: no support yet for Azure-to-Azure Site Recovery (replication to other regions)

If you use unmanaged disks with availability sets, it can happen that all of the VMs’ storage accounts end up on the same storage fault domain. With managed disks, availability set alignment is mirrored by disk placement.
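For reference, a rough AzureRM-era PowerShell sketch of creating a Premium managed data disk and attaching it to an existing VM – names, sizes, and the region are placeholders, and parameter names differ slightly between module versions:

    # Create an empty Premium managed data disk.
    $diskConfig = New-AzureRmDiskConfig -Location "westeurope" -CreateOption Empty `
        -DiskSizeGB 1023 -SkuName Premium_LRS
    $disk = New-AzureRmDisk -ResourceGroupName "rg-demo" -DiskName "data1" -Disk $diskConfig

    # Attach it to an existing VM with read caching, then push the VM update.
    $vm = Get-AzureRmVM -ResourceGroupName "rg-demo" -Name "vm-demo"
    $vm = Add-AzureRmVMDataDisk -VM $vm -Name "data1" -ManagedDiskId $disk.Id `
        -Lun 0 -CreateOption Attach -Caching ReadOnly
    Update-AzureRmVM -ResourceGroupName "rg-demo" -VM $vm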

Storage Spaces

Do not use disk mirroring. Use simple virtual disks/LUNs.

Ensure that the column count = the number of disks for performance.

Daniel says to format the volume with a 64 KB allocation unit size. That’s true for almost everything, including normal transactional SQL Server databases – stick with 64 KB. For SQL Server data warehouses, go with a 256 KB allocation unit size – per the SQL Tiger team this week.
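A minimal sketch of that guest-OS configuration, using the standard Windows Server storage cmdlets (pool/volume names are placeholders; swap in 262144 for data warehouse volumes):

    # Pool all attachable data disks, create a simple (striped) virtual disk with one
    # column per disk, and format it with a 64 KB allocation unit size.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Windows Storage*" `
        -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" `
        -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize |
        Get-Disk | Initialize-Disk -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536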

Networking

Daniel doesn’t appear to be a fan of micro-segmentation of a subnet using an NVA. Maybe the preview DPDK feature for NVA performance might change that.

He shows the NSG Security Group View in Network Watcher. It allows you to understand how L4 firewall rules are being applied by NSGs. From a VM’s NIC you can also view effective routes and effective security rules.
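The same effective-rules data is available from PowerShell; a quick sketch (NIC and resource group names are placeholders, and the VM must be running):

    # Effective NSG rules and effective routes applied to a VM's NIC, via Network Watcher.
    Get-AzureRmEffectiveNetworkSecurityGroup -NetworkInterfaceName "nic-demo" -ResourceGroupName "rg-demo"
    Get-AzureRmEffectiveRouteTable -NetworkInterfaceName "nic-demo" -ResourceGroupName "rg-demo"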

Encryption Best Practices

Azure Disk Encryption requires that your key vault and VMs reside in the same Azure region and subscription.

Use the latest version of Azure PowerShell to configure Azure Disk Encryption.

You need an Azure AD Service Principal – the VM cannot talk directly to the key vault, so it goes via the service principal. Best practice is to have 1 service principal for each key vault.
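Something like this (an AzureRM-era sketch, not from the session; the app ID, secret, and names are placeholders) enables ADE via the service principal and key vault:

    # Key vault must be in the same region/subscription as the VM.
    $kv = Get-AzureRmKeyVault -VaultName "kv-ade-demo" -ResourceGroupName "rg-demo"

    # Enable Azure Disk Encryption on the VM using the AAD service principal.
    Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName "rg-demo" -VMName "vm-demo" `
        -AadClientID "<app-id>" -AadClientSecret "<app-secret>" `
        -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId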

Storage Service Encryption (managed disks) is easier. There is no BYOK at the moment so there’s no key vault function. The keys are managed by Azure and not visible to the customer.

The Test Tools Used In This Session

image

Comparing Performance with Encryption

There are lots of charts in this section, so it’s best to watch the video on Channel 9/Ignite/YouTube.

In short, ADE causes some throughput performance hit, depending on disk tier, size, and the block size of the data – around 3% CPU utilization overhead and no IOPS hit. SSE has no performance impact.

Azure Backup Best Practices

You need a recovery services vault in the same region/subscription as the VM you want to back up.
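For context, a minimal sketch of enabling Azure Backup for an IaaS VM against such a vault (AzureRM-era PowerShell; vault, policy, VM, and resource group names are placeholders):

    # Point the backup cmdlets at the vault, then protect the VM with an existing policy.
    $vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "rg-demo" -Name "rsv-demo"
    Set-AzureRmRecoveryServicesVaultContext -Vault $vault
    $policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
    Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name "vm-demo" -ResourceGroupName "rg-vm"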

VMs using ADE encryption must have a Key Encryption Key (KEK).

Best case performance of Azure Backup backups:

  • Initial backup: 20 Mbps.
  • Incremental backup: 80 Mbps.

Best practices:

  • Do not schedule more than 40 VMs to backup at the same time.
  • Make sure you have Python 2.7 in Linux VMs that you are backing up.

Protect Your Data With Microsoft Azure Backup

Speakers:

  • Vijay Tandra Sistla, Principal PM Manager
  • Aruna Somendra, Senior Program Manager

Aruna is first to speak. It’s a demo-packed session. There was another session on AB during the week – that’s probably worth watching as well.

All the attendees are from diverse backgrounds, and we have one common denominator: data. We need to protect that data.

Impact of Data Loss

  • The impact can be direct, e.g. WannaCry hammering the UK’s NHS and patients.
  • It can impact a brand
  • It can impact your career

Azure Backup was built to:

  • Make backups simple
  • Keep data safe
  • Reduce costs

Single Solution

Azure Backup covers on-premises and Azure. It is one solution, with 1 pricing system no matter what you protect: instance size + storage consumed.

Protecting Azure Resources

A demo will show this in action, plus new features coming this year. They’ve built a website with some content on Azure Web Apps – images in Azure Files and data in SQL Server in an IaaS VM. Vijay refreshes the site and the icons have been ransomwared.

Azure Backup can support:

  • Azure IaaS VMs – the entire VM, disks, or file level recovery
  • Azure Files via Storage account snapshots (NEW)
  • SQL in an Azure IaaS VM (NEW)

Discovery of databases is easy. An agent in the guest OS is queried, and all SQL VMs are discovered. Then all databases are shown, and you back them up based on full / incremental / transaction log backups, using typical AB retention.

For Azure File Share, pick the storage account, select the file share, and then choose the backup/retention policy. It keeps up to 120 days in the preview, but longer term retention will be possible at GA.

When you create a new VM, the Enable Backup option is in the Settings blade. So you can enable backup during VM creation instead of trying to remember to do it later – no longer an afterthought.

Conventional Backup Approaches

What happens behind the scenes in Azure Backup? Instead of using on-prem SQL and file servers, you’re starting to use Azure Files and SQL in VMs. Instead of hacking backups into Azure storage (which doesn’t scale, and is messy), you enable Azure Backup, which offers centralized management. In Azure, it is infrastructure-free. SQL is backed up using a backup extension; VMs are backed up using a backup extension.

image

Azure File Sync is supported too:

In preview, there is short-term retention using snapshots in the source storage account. After GA they will increase retention and enable backups to be stored in the RSV.

image

Linux

When you back up a Linux VM, you can run a pre-script, do the backup, and then run a post-script. This enables application-consistent backups of Linux VMs in Azure. Aruna logs into a Linux VM via SSH. There are Linux CLI commands in the guest OS, e.g. az backup. There is a JSON file that describes the pre- and post-scripts. There are some scripts by a company called CAPSIDE for MySQL. The pre-script creates database dumps and stops the databases.

image

az backup recoverypoint list and some flags can be used to list the recovery points for the currently logged-in VM. The results show whether they are app- or file-consistent.

az backup restore files and some parameters can be used to mount the recovery point – you then copy files from the recovery point, and unmount the recovery point when done.

image

Restore as a Service

image

On-Premises

Two-thirds of customers keep data on-premises.

Two solutions in AB for hybrid backup:

  • Microsoft Azure Backup Server (MABS) / DPM: Backup Hyper-V, VMware, SQL, SharePoint, Exchange, File Server & System State to local storage (short-term retention)  and to the cloud (long term retention)
  • MARS Agent: Files & Folders, and System State backed up directly to the cloud.

System State

Protects Active Directory, the IIS metabase, file server metadata, the registry, COM+, Certificate Services, and cluster services info.

Went live in MARS agent last month.

In a demo, Vijay deletes users from AD. He restores the system state files using MARS. Then you reboot the DC in AD restore mode and use the wbadmin tool to restore the system state: wbadmin start systemstaterecovery. You reboot again, and the users are restored.
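A rough sketch of those restore commands (run on the DC after booting into Directory Services Restore Mode; the version identifier is a placeholder that you would take from the output of the first command):

    # List available backup versions on the DC.
    wbadmin get versions

    # Restore system state from a chosen version, then reboot when it completes.
    wbadmin start systemstaterecovery -version:09/28/2017-14:30 -quiet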

Vijay shows MARS deployment, and shows the Project Honolulu implementation.

Next he talks about the ability to do an offline backup instead of an online full backup. This leverages the Azure storage Import service, which can leverage the new Azure Data Box – a tamper-proof storage solution of up to 100 TB.

Security

Using the cloud isolates backup data from the production data. Azure Backup includes a free multi-step approval process to protect against destructive operations on hybrid backups. All backup data is encrypted. RBAC offers governance and control over Azure Backup.

There are email alerts (if enabled) for destructive operations.

If data is deleted, it is retained for 14 days so you can still restore your data, just in case.

Hybrid Backup Encryption

Data is encrypted before it leaves the customer site.

Customers want:

  • To be able to change keys
  • Keep the key secret from MS

A passphrase is used to create the key. This is a key encryption key process, and MS never has your KEK.

Azure VM Disk Encryption

You still need to be able to back up your VMs. If a disk is encrypted using a KEK/BEK combination in the Key Vault, then Azure Backup includes the keys in the backup so you can restore from any point in time within your retention policy.

Isolation and Access Control

Two levels of authorization:

  • You can control access/roles to individual vaults for users.
  • There are permissions or roles within a vault that you can assign to users.

Monitoring & Reporting

Typical questions:

  • How much storage am I using?
  • Are my backups healthy?
  • Can I see the trends in my system?

Vijay does a tour of information in the RSV. Next he shows the new integration with OMS Log Analytics. This shows information from many RSVs in a single tenant. You can create alerts from events in Log Analytics – emails, webhooks, runbooks, or trigger an ITSM action. The OMS data model, for queries, is shared on docs.microsoft.com.

For longer-term reporting, you can export your tenant’s data to an Azure Backup content pack in Power BI – note that this is 1 tenant per content pack import, so a CSP reseller will need 100 imports of the content pack for 100 customers. Vijay shows a custom graphical report showing the trends of data sources over 3 months – it shows growth for all sources, except one which has gone down.

Power BI is free up to 1 GB of data, and then it’s a per-user monthly fee after that.

Roadmap

  • Backup of SQL in IaaS – preview
  • Backup of Azure file – preview
  • Azure CLI
  • Backup of encrypted VMs without KEK
  • Backup of VMs with storage ACLs
  • Backup of large disk VMs
  • Upgrade of classic Backup Vault to ARM RSV
  • Resource move across RG and subscription
  • Removal of vault limits
  • System State Backup

From IT Pros to IT Heroes–With Azure DevTest Labs

Claude Remillard, Group Program Manager

How IT pros can make devs very happy!

Reason to Exist

50% or more of infrastructure is used for non-production environments. In an old job of mine, we had dev, test, and production versions of every system. That’s a lot of money! The required life of dev & test goes up and down. The cloud offers on-demand capacity.

DevTest Labs

Solution for fast, easy, and agile dev-test environments in Azure:

  • Fast provisioning
  • Automation & self-service
  • Cost control and governance

Think of it as a controlled subset of Azure where the devs can roam free.

Test Environments

Typical pipeline:

  1. Check-in
  2. Build
  3. Test
  4. Release

You can pre-configure a lot of things to get a VM. Standardize images. Use an image factory.

Training / Education

A number of training companies are using DevTest environments. They can set a limit in the lab, and then let people do what they need to do in that lab.

Trials / Demos / Hackathons

Invite people in to try something out, experiment with designs/patterns, and do this in a short-lived and controlled environment.

Demo

DevTest Labs is just another Azure service. You create the lab, configure it, and assign users to it.

In Overview, you can see VMs you own, and VMs that you can claim. In Virtual Machines, you can see an environment alongside VMs; this is a collection of related resources. Claimable VMs are pre-created and shut down. An IT pro could take a s/w build, deploy it overnight, and let devs/testers claim the machines the following morning.

When he goes into a VM, it has a tiny subset of the usual VM features. It has other things, like Auto-Start and Auto-Shutdown to reduce costs. You can create a custom image from a VM, which includes optionally running Sysprep. That image is then available to everyone in the lab to create VMs from. Images can be shared between labs.

Everything in the lab can be automated with APIs, PowerShell (and, thus, Automation).
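Because DevTest Labs entities are ordinary ARM resources, generic Azure PowerShell can drive them; a rough sketch (resource names are placeholders, and this is my illustration rather than anything shown in the session):

    # List all DevTest Labs in the subscription, then the lab VMs in one resource group.
    Get-AzureRmResource -ResourceType "Microsoft.DevTestLab/labs" |
        Select-Object Name, ResourceGroupName
    Get-AzureRmResource -ResourceType "Microsoft.DevTestLab/labs/virtualmachines" `
        -ResourceGroupName "rg-lab" | Select-Object Name, Location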

He goes to create a VM. The new VM is built from a “base”. Bases can be custom/gallery images, ARM templates, or formulas. It sounds like the ARM template could be in a source control system, and you could have multiple labs subscribe to those templates or other artefacts.

If you select a VM base, there’s just one blade to create it. Name the machine, put in a guest OS username/password (can be saved as a reusable secret), choose disk type/size, select a VM series/size (restricted by the admin), add other artefacts (additional s/w added to the VM at creation time, e.g. Chrome via the Chocolatey package manager, join an AD domain, etc), optionally do some advanced settings (network options, IP config, auto-delete the VM, number of instances, make the VM claimable), and click Create.

You can export a lab as a file, and use that file to spin up new labs.

Back in the lab, he goes to Configuration & Policies. Cost Tracking shows trends and resource-specific costs. This is based on RRP costs – special deals with MS are not available to the DevTest Labs APIs. The goal here isn’t to do accounting – it’s to see spend trends and spikes.

Users: Devs should be “Lab Users”. You can share a lab with external users, e.g. consultants.

Policy Settings allows you to control:

  • Allowed virtual machines: You select which series/size can be deployed.
  • Virtual machines per user: You can limit the number of machines. You can limit the number of machines using Premium Disks. Enforced per user.
  • Virtual machines per lab: You can limit VMs and Premium VM disks per lab

Schedules:

  • Auto-Start
  • Auto-Stop

You can send emails and webhooks before auto-shutdown.

External Resources:

  • Repositories: Places where you pull artefacts from. Supports VSTS, GitHub, and Git. The azure-devtestlab GitHub repo has lots of sample artefacts, scripts, and templates. This is the best way to share things between labs.
  • Virtual Networks: What networks will be available – should be pre-created by IT pros. You set up a default virtual network for new VMs, optionally with S2S VPN/ExpressRoute. You can control whether a VM can have a public IP or not.

Virtual Machine Bases:

  • Marketplace Images: What is available from the Marketplace: nothing / all / subset.
  • Custom images:
  • Formulas:

At this point I asked if Azure DevTest Labs is available in CSP. The speaker had never heard of the primary method for selling Azure by MS Partners. That’s pretty awful, IMO.

Image Factories

A way to build images that can be reused. It’s a bit more than that, though – it’s a configuration that builds VMs with configurations automatically on a regular basis. This makes it possible to serve the latest VM images, with bits baked in, to your devs and testers.

That’s everything.

Application-Aware Disaster Recovery For VMware, Hyper-V, and Azure IaaS VMs with Azure Site Recovery

Speaker: Abhishek Hemrajani, Principal Lead Program Manager, Azure Site Recovery, Microsoft

There’s a session title!

The Impact of an Outage

The aviation industry has suffered massive outages over the last couple of years, costing millions to billions. Big sites like GitHub have gone down. Only 18% of DR investors feel prepared (Forrester, July 2017, The State of Business Technology Resiliency). Much of this is due to immature core planning and very limited testing.

Causes of Significant Disasters

  • Forrester says 56% of declared disasters are caused by h/w or s/w.
  • 38% are because of power failures.
  • Only 31% are caused by natural disasters.
  • 19% are because of cyber attacks.

Sourced from the above Forrester research.

Challenges to Business Continuity

  • Cost
  • Complexity
  • Compliance

How Can Azure Help?

The hyper-scale of Azure can help.

  • Reduced cost – OpEx utility computing and benefits of hyper-scale cloud.
  • Reduced complexity: Service-based solution that has weight of MS development behind it to simplify it.
  • Increased compliance: More certifications than anyone.

DR for Azure VMs

Something that AWS doesn’t have. Some mistakenly think that you don’t need DR in Azure – a region can go offline, and people can still make mistakes. MS does not replicate your VMs unless you enable/pay for ASR for selected VMs. It is highly certified for compliance, including PCI, EU Data Protection, ISO 27001, and many, many more.

  • Ensure compliance: No-impact DR testing. Test every quarter or, at least, every 6 months.
  • Meet RPO and RTO goals: Backup cannot do this.
  • Centralized monitoring and alerting

Cost effective:

  • “Infrastructure-less” DR sites.
  • Pay for what you consume.

Simple:

  • One-click replication
  • One-click application recovery (multiple VMs)

Demo: Typical SharePoint Application in Azure

3 tiers in availability sets:

  • SQL cluster – replicated to a SQL VM in a target region or DR site (async)
  • App – replicated by ASR – nothing running in DR site
  • Web – replicated by ASR – nothing running in DR site
  • Availability sets – built for you by ASR
  • Load balancers – built for you by ASR
  • Public IP & DNS – abstract DNS using Traffic Manager

One-Click Replication is new and announced this week. Disaster Recovery (Preview) is an option in the VM settings. All the prerequisites of the VM are presented in a GUI. You click Enable Replication and all the bits are built and the VM is replicated. You can pick any region in a “geo-cluster”, rather than being restricted to the paired region.

For more than one VM, you might enable replication in the recovery services vault (RSV) and multi-select the VMs for configuration. The replication policy includes recovery point retention and app-consistent snapshots.

New: Multi-VM consistent groups. In preview now, up to 8 VMs. 16 at GA. VMs in a group do their application consistent snapshots at the same time. No other public cloud offers this.

Recovery Plans

Orchestrate failover. VMs can be grouped, and groups are failed over in order. You can also require manual tasks to be done, and execute Azure Automation runbooks to do other things like creating load balancer NAT rules, re-configuring DNS abstraction in Traffic Manager, etc. You run the recovery plan to fail over … and to do test failovers.
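A skeleton of what such a runbook could look like – ASR passes a RecoveryPlanContext object to runbooks it invokes; the Traffic Manager endpoint switch, and all the names in it, are purely illustrative placeholders, not anything demoed in the session:

    param (
        [Object]$RecoveryPlanContext
    )

    # Authentication (e.g. via the Automation Run As account) is omitted for brevity.
    if ($RecoveryPlanContext.FailoverType -ne "Test") {
        # Illustrative post-failover action: switch traffic to the DR endpoint in Traffic Manager.
        Enable-AzureRmTrafficManagerEndpoint -Name "dr-endpoint" -Type AzureEndpoints `
            -ProfileName "tm-demo" -ResourceGroupName "rg-dns"
    }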

DR for Hyper-V

You install the Microsoft Azure Recovery Services (MARS) agent on each host. That connects you to the Azure RSV, and you can replicate any VM on that host to Azure. No on-prem infrastructure required. No connection broker required.

DR for VMware

You must deploy the ASR management appliance in the data centre. MS learned that the setup experience for this is complex – there were a lot of prerequisites and configurations to install it in a Windows VM. MS will deliver this appliance as an OVF template from now on – a familiar format for VMware admins – and the appliance is configured from the Azure Portal. From then on, you replicate Linux and Windows VMs to Azure, as with Hyper-V.

Demo: OVF-Based ASR Management Appliance for VMware

A web portal is used to onboard the downloaded appliance:

  1. Verify the connection to Azure.
  2. Select a NIC for outbound replication.
  3. Choose a recovery services vault from your subscription.
  4. Install any required third-party software, e.g. PowerCLI or MySQL.
  5. Validate the configuration.
  6. Configure vCenter/ESXi credentials – this is never sent to Azure, it stays local. The name of the credential that you choose might appear in the Azure portal.
  7. Then you enter credentials for your Windows/Linux guest OS. This is required to install a mobility service in each VMware VM. This is because VMware doesn’t use VHD/X, it uses VMDK. Again, not sent to MS, but the name of the credential will appear in the Azure Portal when enabling VM replication so you can select the right credentials.
  8. Finalize configuration.

This will start rolling out next month in all regions.

Comprehensive DR for VMware

Hyper-V can support all Linux distros supported by Azure. On VMware they’re close to all. They’ve added Windows Server 2016, Ubuntu 14.04 and 16.04, Debian 7/8, managed disks, and 4 TB disk support.

Achieve Near-Zero Application Data Loss

Tips:

  • Periodic DR testing of recovery plans – leverage Azure Automation.
  • Invoke your BCP before the disaster if you know one is coming, e.g. a hurricane.
  • Take the app offline before the event if it’s a planned failover – minimize risks.
  • Failover to Azure.
  • Resume the app and validate.

Achieve 5x Improvement in Downtime

Minimize downtime: https://aka.ms/asr_RTO

He shows a slide. One VM took 11 minutes to failover. Others took around/less than 2 minutes using the above guidance.

Demo: Broad OS Coverage, Azure Features, UEFI Support

He shows Ubuntu, CentOS, Windows Server, and Debian replicating from VMware to Azure. You can fail over from VMware to Azure with UEFI VMs now – but you CANNOT fail back. The process converts the VM to BIOS in Azure (Generation 1 VMs). That’s OK if there’s no intention to fail back, e.g. a migration to Azure.

Customer Success Story – Accenture

They deployed ASR. Increased availability. 53% reduction in infrastructure cost. 3x improvement in RPO. Savings in work and personal time. Simpler solution and they developed new cloud skills.

They get a lot of alerts at the weekend when there are any network glitches – it could be 500 email alerts.

Demo: New Dashboard & Comprehensive Monitoring

Brand new RSV experience for ASR. Lots more graphical info:

  • Replication health
  • Failover test success
  • Configuration issues
  • Recovery plans
  • Error summary
  • Graphical view of the infrastructure: Azure, VMware, Hyper-V. This shows the various pieces of the solution, and a line goes red when a connection has a failure.
  • Jobs summary

All of this is on one screen.

He clicks on an error and sees the hosts that are affected. He clicks on “Needs Attention” in one of the errors. A blade opens with much more information.

We can see replication charts for a VM and disk – useful to see if VM churn is too much for the bandwidth or the target storage (standard vs premium). The disk-level view might help you identify churn-heavy storage, like a page file, that can be excluded from replication.

A message digest will be sent out at the end of the day. This data can be fed into OMS.

Some guest speakers come up from Rackspace and CDW. I won’t be blogging this.

Questions

  • When are things coming out? News on the ASR blog in October.
  • The Hyper-V Planner is out this week, and new cost planners for Hyper-V and VMware are out this week.
  • Failback of managed disks is there for VMware and will be out by end of year for Hyper-V.

Running Tier 1 Workloads on SQL Server on Microsoft Azure Virtual Machines

Speaker: Ajay Jagannathan, Principal PM Manager, Microsoft Data Platform Group. He leads the @mssqltiger team.

I think that this is the first ever SQL Server session that I’ve attended in person at a TechEd/Ignite. I was going to go to a PaaS session instead, but I’ve got so many customers running SQL Server on Azure VMs that I thought this was important for me to see. I also thought it might be useful for a lot of readers.

Microsoft Data Platform

Starting with SQL Server 2016, the goal was to make the platform consistent on-premises, in Azure VMs, and in Azure SQL. With Azure, scaling is possible using VM features such as scale sets. You can offload database loads, so analytics can be on a different tier:

  • On-premises: SQL Server and SQL Server (DW) Reference architecture
  • IaaS: SQL Server in Azure VM with SQL Server (DW) in Azure VM.
  • PaaS: Azure SQL database with Azure SQL data warehouse

Common T-SQL surface area. Simple cloud migration. Single vendor for support. Develop once and deploy anywhere.

Azure VM

  • Azure load balancer routes traffic to the VM NIC.
  • The compute is separate from the storage.
  • The virtual machine issues operations to the storage.

SQL Server in Azure VM – Deployment Options

  • Microsoft gallery images: SQL Server 2008 R2 – 2017; SQL Web, Std, Ent, Dev, Express; Windows Server 2008 R2 – WS2016; RHEL and Ubuntu.
  • SQL licensing: PAYG based on number of cores and SQL edition. Pay per minute.
  • Bring your own license: Software Assurance is required to move/license SQL in the cloud if not doing PAYG.
  • Creates in ~10 minutes.
  • Connect via RDP, ADO.NET, OLEDB, JDBC, PHP …
  • Manage via Portal, SSMS, PowerShell, CLI, System Center …

It’s a VM so nothing really changes from on-premises VM in terms of management.

Every time there’s a critical update or service pack, they update the gallery images.

VM Sizes

They recommend the DS_v2- or FS-Series with Premium Storage. For larger loads, they recommend the GS- and LS-Series.

For other options, there’s the ES_v3 series (memory-optimized DS_v3), and the M-Series for huge RAM amounts.

VM Availability

Availability sets distribute VMs across fault and update domains in a single cluster/data centre. You get a 99.95% SLA on the service for valid configurations. Use this for SQL clusters.

Managed disks offer easier IOPS management, particularly compared with Premium disks in storage accounts (a storage account has a limit of 20,000 IOPS). Disks are distributed to different storage stamps when the VM is in an availability set – better isolation for SQL HA / AlwaysOn.
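A minimal sketch of creating a managed (“Aligned”) availability set so that managed disk placement follows the fault domains – names, location, and domain counts are placeholders:

    # "Aligned" = managed availability set; managed disks are spread with the fault domains.
    New-AzureRmAvailabilitySet -ResourceGroupName "rg-sql" -Name "as-sql" -Location "westeurope" `
        -Sku Aligned -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5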

High Availability

Provision a domain controller replica in a different availability set to your SQL VMs. This can be in the same domain as your on-prem domain (ExpressRoute or site-to-site VPN).

Use (Get-Cluster).SameSubnetThreshold = 20 to relax Windows Cluster failure detection for transient network failure.

Configure the cluster to ignore storage. They recommend AlwaysOn. There is no shared storage in Azure. New-Cluster –Name $ClusterName –NoStorage –Node $LocalMachineName

Configure an Azure load balancer and backend pool, and register the IP address of the listener with the cluster.
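The usual approach for that registration is to point the cluster IP resource for the availability group listener at the load balancer’s frontend IP and a probe port; a sketch where the resource/network names, IP, and probe port are all examples and must match your load balancer configuration:

    $clusterNetworkName = "Cluster Network 1"
    $ipResourceName = "SQLAG_listener_IP"
    $listenerIP = "10.0.0.10"          # Must match the internal load balancer frontend IP

    # ProbePort must match the load balancer health probe.
    Get-ClusterResource $ipResourceName | Set-ClusterParameter -Multiple @{
        Address = $listenerIP
        ProbePort = 59999
        SubnetMask = "255.255.255.255"
        Network = $clusterNetworkName
        EnableDhcp = 0
    }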

There are step-by-step instructions on MS documentation.

SQL Server Disaster Recovery

Store database backups in geo-replicated readable storage. Restore backups in a remote region (~30 min).

Availability group options:

  • Configure Azure as the remote region for on-premises
  • Configure on-prem as DR for Azure
  • Replicate to a remote Azure region – failover to the remote region in ~30s. Offload remote reads.

Automated Configuration

Some of these are provided by MS in the portal wizard:

  • Optimization to a target workload: OLTP/DW
  • Automated patching and shutdown – the latter is very new, and reduces costs for dev/test workloads at the end of the workday.
  • Automated backup to a storage account, including user and system databases. Useful for a few databases, but there’s another option coming for larger collections.

Storage Options

They recommend LRS only, to keep write performance at a maximum. GRS storage is slower, and could lead to database file writes being replicated before log writes.

Premium Storage: high IOPS and low latency. Use Storage Spaces to increase capacity and performance. Enable host-based read caching in data disks for better IOPS/latency.
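As an illustration of that caching recommendation, here is a small sketch (names are placeholders) of switching an existing data disk to read caching on a running SQL VM; the change is applied with a VM update:

    # Enable read caching on the data disk holding data files / TempDB.
    $vm = Get-AzureRmVM -ResourceGroupName "rg-sql" -Name "sql-vm01"
    Set-AzureRmVMDataDisk -VM $vm -Name "data1" -Caching ReadOnly | Out-Null
    Update-AzureRmVM -ResourceGroupName "rg-sql" -VM $vm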

Backup to Premium Storage is 6x faster. Restore is 30x faster.

Azure VM Connectivity

  • Over the Internet.
  • Over a site-to-site tunnel: VPN or ExpressRoute
  • Apps can connect transparently via a listener, e.g. Load Balancer.

Demo: Deployment

The speaker shows a PowerShell script. Not much point in blogging this. I prefer JSON anyway.
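
For reference, a minimal sketch of the JSON (ARM template) approach with AzureRM PowerShell – the template and parameter file names are my assumptions:

  New-AzureRmResourceGroup -Name "sql-rg" -Location "westeurope"
  New-AzureRmResourceGroupDeployment -ResourceGroupName "sql-rg" `
      -TemplateFile ".\sqlvm.json" -TemplateParameterFile ".\sqlvm.parameters.json"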

http://aka.ms/tigertoolbox is the script/tools/demos repository.

Security

  • Physical security of the datacenter
  • Infrastructure security: virtual network isolation, and storage encryption including bring-your-own-key self-service encryption with Key Vault. Best practices and monitoring by Security Center.
  • Many certifications
  • SQL Security: auto-patching, database/backup encryption, and more.

VM Configuration for SQL Server

  • Use D-Series or higher.
  • Use Storage Spaces for disk performance. Use Simple spaces: the number of columns should equal the number of disks. Use a 64 KB interleave for OLTP and 256 KB for data warehouse (see the sketch after this list).
  • Do not use the system drive.
  • Put TempDB, logs, and databases on different volumes because of their different write patterns.
  • 64K allocation unit size.
  • Enable read caching on disks for data files and TempDB.
  • Do not use GRS storage.
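
A hedged sketch of the Storage Spaces guidance above, run inside the guest – the pool, volume, and drive letter names are my assumptions; the interleave shown is the 64 KB OLTP value (use 262144 for data warehouse):

  # Pool all poolable data disks attached to the VM
  $disks = Get-PhysicalDisk -CanPool $true
  New-StoragePool -FriendlyName "SqlPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

  # Simple space: columns = number of disks, 64 KB interleave for OLTP
  New-VirtualDisk -StoragePoolFriendlyName "SqlPool" -FriendlyName "SqlData" `
      -ResiliencySettingName Simple -NumberOfColumns $disks.Count -Interleave 65536 -UseMaximumSize

  # Format with a 64 KB allocation unit size
  Get-VirtualDisk -FriendlyName "SqlData" | Get-Disk | Initialize-Disk -PassThru |
      New-Partition -UseMaximumSize -DriveLetter F |
      Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"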

SQL Configuration

  • Enable instant file initialization
  • Enable locked pages in memory
  • Enable data page compression
  • Disable auto-shrink for your databases
  • Backup to URL with compressed backups – useful for a few VMs/databases; SQL Server 2016 does this very quickly (see the sketch after this list).
  • Move all databases to data disks, including system databases (separate data and log). Use read caching.
  • Move SQL Server error log and trace file directories to data disks
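
A minimal sketch of backup to URL using the SqlServer PowerShell module – the instance, database, storage URL, and SQL credential name are my assumptions:

  Import-Module SqlServer   # SQLPS on older installs
  Backup-SqlDatabase -ServerInstance "localhost" -Database "MyDb" `
      -BackupFile "https://mystorageacct.blob.core.windows.net/backups/MyDb.bak" `
      -SqlCredential "AzureBackupCredential" -CompressionOption On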

Demo: Workload Performance of Standard Versus Premium Storage

A scripted demo. Two scripts do the same thing – one targets a DB on a Standard disk (up to 500 IOPS) and the other targets a DB on a Premium P30 disk (4,500 IOPS). There's table creation, 10,000 row inserts, more tables, etc. The scripts track the time required.

It takes a while – he has some stats from previous runs. There's only a 25% difference in the test. Honestly, that's not indicative of the differences. He needs a better demo.

An instant file initialization (IFI) test shows that the bigger the database file, the bigger the performance difference – which makes sense given the nature of flash storage.

Seamless Database Migration

There is a migration guide, and tools/services. http://datamigration.microsoft.com. One-stop shop for database migrations. Guidance to get from source to target. Recommended partners and case studies.

Tools:

  • Data Migration Assistant: An analysis tool to produce a report.
  • Azure Database Migration Service (a free service that runs in a VM): migrates Oracle, MySQL, and SQL Server sources to SQL Server, Azure SQL Database, and Azure SQL Managed Instance. It works by backing up the database on the source, moving the backup to the cloud, and restoring it.

Azure Backup

Today, SQL Server can back up from the SQL VM (Azure or on-prem) to a storage account in Azure. It's all managed from SQL Server. Very distributed, no centralized reporting, difficult/no long-term retention. Very cheap.

Azure Backup will offer centralized management of SQL backup in an Azure VM. In preview today. Managed from the Recovery Services Vault. You select the type of backup, and a discovery will detect all SQL instances in Azure VMs, and their databases. A service account is required for this and is included in the gallery images. You must add this service account for custom VMs. You then configure a backup policy for selected DBs. You can define full, incremental, and transaction log backup policies with the SQL backup compression option. The retention options are the familiar ones from Azure Backup (up to 99 years by the looks of it). The backup is scheduled, and you can do ad-hoc/manual backups as usual with Azure Backup.

You can restore databases too – there’s a nice GUI for selecting a restore date/time. It looks like quite a bit of work went into this. This will be the recommended solution for centralized backup of lots of databases, and for those wanting long term retention.

Backup Verification is not in this solution yet.

Azure AD Domain Services

 

Options when Moving to The Cloud

  • Switch to using SaaS versions of the s/w
  • Rewrite the app
  • Lift and shift: the focus today.

How Organizations Handle AD Requirements Today

  • They set up a site-to-site VPN and deploy additional domain controllers in the cloud.
  • They deploy another domain/forest in the cloud and provision a trust, e.g. ADFS.

Imagine a Simpler Alternative

  • Simpler
  • Compatible
  • Available
  • Cost-effective

Introducing Azure AD Domain Services

  1. You provision a VNet.
  2. Then you activate Azure AD Domain Services in Azure AD on that VNet
  3. You can manage the domain using RSAT.
  4. You can optionally sync your Windows Server AD with Azure AD to share accounts/groups.

Managed Domains

  • Domain controllers are patched automatically.
  • Secure locked-down domain, compliant with AD deployment best practices
  • You get two DCs, so it's fault tolerant
  • Automatic health detection and remediation. If a DC fails, a new one is provisioned.
  • Automatic backups for disaster recovery.
  • No need to monitor replication – done as part of the managed service.

Sync

If you deploy sync, e.g. Azure AD Connect, then it flows as follows: Windows Server AD <-> Azure AD <-> Azure AD Domain Services

Features

  • SIDs are reused. This means things like file servers can be lifted and shifted to Azure without re-ACLing your workloads.
  • OUs
  • DNS

Pricing

Based on the number of objects in the directory. Micro-pricing.

Decisions

(Photo of the slide captured with Office Lens – decision criteria not transcribed.)

New Features

  • Azure Portal AD Experience is GA
  • ARM virtual network join is GA

Demo

He creates an AADDS domain. There are two OUs by default:

  • AADDC Users
  • AADDC Computers

Back to the PowerPoint

Notes

  • You cannot deploy AADDS in the classic Azure portal any more.
  • The classic deployment model will be retired – the ARM deployment is better and more secure.
  • The classic VNet support is ending (for new domains) soon.
  • Existing deployments will continue to be supported

Questions

  • Is there GPO sync? No. This is a different domain, so there is no replication of GPO from on-prem to AADDS.
  • Can you add another DC to this domain? No. There will be (in the future) the ability to add more AADDS “DCs” in other VNets.
  • What happens if a region goes down? The entire domain goes down now – but when they have additional DC support this will solve the problem
  • Is there support in CSP? No, but it’s being worked on.

Manage Azure IaaS VMs

You can join these machines to AADDS. You can push GPO from AADDS. You’ll sign into the VMs using AADDS user accounts/passwords.
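
A minimal sketch of the domain join, run inside the VM – the managed domain name is my assumption:

  # The credential is an AADDS user account synchronised from Azure AD
  Add-Computer -DomainName "contoso.onmicrosoft.com" -Credential (Get-Credential) -Restart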

GPO

Members of AADDC Administrators can create OUs. You can target GPO to OUs.
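
A hedged sketch using the RSAT AD and Group Policy modules from a management VM – the OU and GPO names are my assumptions:

  # Create an OU, create a GPO, and link the GPO to the new OU
  New-ADOrganizationalUnit -Name "AppServers" -Path "DC=contoso,DC=onmicrosoft,DC=com"
  New-GPO -Name "App Server Baseline" |
      New-GPLink -Target "OU=AppServers,DC=contoso,DC=onmicrosoft,DC=com"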

Move Servers to the Cloud

Sync users/passwords/SIDs to the cloud, and then lift/shift applications/VMs to the cloud. The SIDs are in sync so you don't need to change permissions, and there's a domain already for the VMs to join without creating DC VMs.

LDAP over SSL

I missed most of this. I think you can access applications using LDAP over SSL via the Internet.

Move Server Applications To Azure

Use AADDS to provision and manage service accounts.

Kerberos Constrained Delegation

The old (account-based) delegation methods cannot work with AADDS – you don't have the privileges. The solution is to use PowerShell to implement resource-based KCD.
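
A hedged sketch of resource-based KCD with the ActiveDirectory module – the computer account names are my assumptions:

  # Allow the web front end's computer account to delegate to the back-end resource
  $frontend = Get-ADComputer -Identity "WEBAPP01"
  Set-ADComputer -Identity "SQLAPP01" -PrincipalsAllowedToDelegateToAccount $frontend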

Modernize Legacy Apps with Application Proxy

You can get users to sign in via AAD and MFA into legacy apps. A token is given to the app to authorize the user.

SharePoint Lift and Shift

A new group called AAD DC Service Accounts. Add the SharePoint Profile sync user account to this group.

Domain-Joined HDInsight Cluster

You can "Kerber-ize" an HDInsight cluster to increase security. This is in preview at the moment.

Remote Desktop Deployments

Domain-join the farm to AADDS. The licensing server is a problem at the moment – this will be fixed soon. Until then, it works, but you’ll get licensing warnings.

Questions

  • Schema extensions? Not supported but on the roadmap.
  • Logging? Everything is logged but you have to go through support to get at them at the moment. They want to work on self-service logging.
  • There is no trust option today. They are working on the concept of a resource domain – maybe before end of the year.
  • Data at rest, in ARM, is encrypted. The keys (1 set per domain) are managed by MS. MS has no admin credentials – there’s an audited process for them to obtain access for support. The NTLM hashes are encrypted.

Deciding When to DIY Your Own AD Deployment

(Photo of the slide captured with Office Lens – not transcribed.)

 

Features Being Considered

  • Cloud solution provider support – maybe early 2018.
  • Support for a single managed domain to span multiple virtual networks
  • Manage resource forests
  • Schema extensions – they’ll start with the common ones, and then add support for custom extensions.
  • Support for LDAP writes – some apps require this

Any questions/feedback to aaddsfb@microsoft.com