My Microsoft Ignite Strategy

Microsoft Ignite runs from Monday 24th until Friday 28th in Orlando, Florida, next week. Here's how I plan to consume content from this conference.

Why Am I Attending?

There are two answers to this question, depending on what you mean by the question.

Why would I care to consume content from Ignite? That's simple – Ignite is a cornerstone event in the Microsoft calendar for techies. If you work with business software from Microsoft, then this is when the big stuff gets announced, and this is the best opportunity to learn from the product groups. Even as an MVP, with a unique opportunity to interact with and learn from the product groups all year round, I find that they focus a huge amount of effort on this particular week. The breadth of content is huge – over 1,000 sessions covering almost every aspect of enterprise software from Microsoft. In this era of constant change, it's foolish not to try to keep up. The real question should be – why would I not want to learn at Ignite?

As for the second interpretation of the question: why attend Ignite when every session will be live streamed and available to download within 48 hours? The realities of life are that if I’m around at the office, or even working from home, the phone will ring, the email will ping, and I won’t get a chance to focus on the content. I have a young family, and at night, they come first. Attending the conference gives me a chance to focus. It’s a few days away, but the value carries over for at least the next year, and beyond.

Note Taking

I always take lots of notes at Ignite – long-time readers of my blog know this because my notes are posts on this site. I open Live Writer and start typing as the speakers are talking. You’d be amazed how often I end up googling my own articles!

If you're not a blogger, then I'd recommend opening OneNote and taking notes for each session. If work sent you, consider sharing the notebook with your colleagues. If you're part of a team that is attending, then create a shared notebook, split up and attend different sessions – you'll exponentially grow the organisational learning and value from the conference.

Sessions

I don’t get any real value from the opening keynote. It’s all too airy-fairy and marketing speak for the general news media. For me, the meat starts immediately after the opening keynote. For the last few years, there have been “breakout keynotes” straight after the Satya Nadella session. That’s when the likes of Jeff Woolsey (Windows Server) and Scott Guthrie (Azure) flood us with news and features. As with the last few years, I will be attending lots of Azure sessions. And if it’s like last year, almost every session will have additional announcements. There’s no “how to” learning here, it’s more of a “what’s possible” learning experience – I can figure out the “how to” at home once I know what to look for. To be honest, “how to” learning doesn’t work when there’s only 60-75 minutes and you cannot do hands-on.

I typically only attend the 75 minute breakout sessions. Scattered about the hallways and expo hall are the theatre sessions, which are where most of the non-Microsoft speakers are talking. These are typically 10 minute sessions. There’s some value here, but the nuggets are so small, and the timing doesn’t work for me – this is the sort of thing I can get from a blog post or a YouTube/Channel 9 video. But that’s not true for everyone – some of the theatre sessions had massive crowds last year – bigger than many of the breakouts.

Hands-On Labs

My calendar is filled out with breakout sessions, but I often change my planning based on my gut feel for what's being presented. Sometimes a track is dull, sometimes the same speakers are doing the same content 3-4 times but with different session titles, sometimes I hear of something exciting that I didn't expect, and sometimes I hear about a great session that filled up but is being repeated.

When I first attended TechEd Europe, one of the best learning experiences I had was in the hands-on labs (HOLs). This gives you a chance to try things out in a sandbox environment. I haven’t done this in years, but I could be tempted to try out some AI, data, or Kubernetes labs if there are any.

Social

I’ve got friends in this business that I only ever see at conferences. MVP Kevin Greene only lives 20-30 minutes from our house but I see him a handful of times per year – we have pretty full family/work lives. I enjoy meeting up with Kev, Damian Flynn, ex-MVP and now Azure CAT John McCabe, and a bunch of other MVP and Microsoft friends that I’ve met over the years, and even some folks that I know over social media. There’s plenty of opportunity to be social at Ignite. Tuesday is party night (watch out for invitations), but most evenings Microsoft has a “mini-party” in the expo hall – which is also a great place to learn. And of course, there’s the conference closing party on Thursday night in Universal – the Hogwarts ride is pretty cool, Spiderman is fun, and Hulk looks damned scary (it would make me puke but my eldest daughter did it 4 times in a row) – Rip Ride Rocket looks worse!

Say “Hi!”

I will be easy to identify. I’ll be wearing a Cloud Mechanix T-Shirt.


Be sure to say "hi"; I don't bite … often :D

Understanding Big Data on Azure–Structured, Unstructured, and Streaming

These are my notes from the recording of this Ignite 2017 session, BRK2293.

Speaker: Nishant Thacker, Technical Product Manager – Big Data

This Level-200 overview session is a tour of big data in Azure: it explains why the services were created and what their purpose is. It is a foundation for the rest of the related sessions at Ignite. Interestingly, only about 30% of the audience had done any big data work in the past – I fall into the other 70%.

What is Big Data?

  • Definition: A term for data sets that are so large or complex that traditional data processing s/w is inadequate to deal with them. Nishant stresses "complex".
  • Challenges: Capturing (velocity) data, data storage, data analysis, search, sharing, visualization, querying, updating, and information privacy. So … there are a few challenges :)

The Azure Data Landscape

This slide is referred to for quite a while:

[slide: the Azure data landscape]

Data Ingestion

He starts with the left-top corner:

  • Azure Data Factory
  • Azure Import/Export Service
  • Azure CLI
  • Azure SDK

The first problem we have is data ingestion into the cloud or any system. How do you manage that? Azure can manage ingestion of data.

Azure Data Factory is a scheduling, orchestration, and ingestion service. It allows us to create sophisticated data pipelines from the ingestion of the data, through to processing, through to storing, through to making it available for end users to access. It does not have compute power of its own; it taps into other Azure services to deliver any required compute.

The Azure Import/Export service can help bring incremental data on board. You can also use it to bulk load data into Azure. If you have terabytes of data to upload, bandwidth might not be enough. You can securely courier data via disk to an Azure region.

The Azure CLI is designed for bulk uploads to happen in parallel. The SDKs can be put into your code, so you can generate the data in your application in the cloud, instead of uploading to the cloud.
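
For illustration, here's a minimal PowerShell sketch of the same idea – bulk loading local files into a blob container. The account name, key variable, container, and path are all placeholders, and AzCopy or the CLI would parallelise this better:

# a rough sketch – assumes the Azure.Storage module is installed; names/keys are examples
$ctx = New-AzureStorageContext -StorageAccountName "ingestacct" -StorageAccountKey $storageKey
Get-ChildItem -Path "D:\Export" -File | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container "raw" -Blob $_.Name -Context $ctx
}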

Operational Database Services

  • Azure SQL DB
  • Azure Cosmos DB

The SQL database services offer SQL Server, MySQL, and PostgreSQL engines.

Cosmos DB is the more interesting one – it's NoSQL and offers global storage. It also supports 4 programming models: Mongo, Gremlin/Graph, SQL (DocumentDB), and Table. You have the flexibility to bring in data in its native form, and the data can be accessed in an operational environment. Cosmos DB also plugs into other aspects of Azure, such as Azure Functions or Spark (HDInsight), which make it more than just an operational database.

Analytical Data Warehouse

  • Azure SQL Data Warehouse

When you want to do reporting and dashboards from data in operational databases then you will need an analytical data warehouse that aggregates data from many sources.

Traits of Azure SQL Data Warehouse:

  • Can grow, shrink, and pause in seconds – up to 1 petabyte (see the PowerShell sketch after this list)
  • Full enterprise-class SQL Server – means you can migrate databases and bring your scripts with you. Compute and storage scale independently, in seconds
  • Seamless integration with Power BI, Azure Machine Learning, HDInsight, and Azure Data Factory
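
As an illustration of that grow/shrink/pause trait, this is roughly how you would drive it from the (2017-era) AzureRM PowerShell module – all resource names are hypothetical:

# pause the data warehouse so you stop paying for compute
Suspend-AzureRmSqlDatabase -ResourceGroupName "AnalyticsRG" -ServerName "dwserver01" -DatabaseName "SalesDW"
# resume it, and scale it to a bigger service level
Resume-AzureRmSqlDatabase -ResourceGroupName "AnalyticsRG" -ServerName "dwserver01" -DatabaseName "SalesDW"
Set-AzureRmSqlDatabase -ResourceGroupName "AnalyticsRG" -ServerName "dwserver01" -DatabaseName "SalesDW" -RequestedServiceObjectiveName "DW400"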

NoSQL Data

  • Azure Blob storage
  • Azure Data Lake Store

When your data doesn't fit into the rows-and-columns structure of a traditional database, you need specialized big data stores that offer huge capacity and unstructured storing/reading.

Unstructured Data Compute Engines

  • Azure Data Lake Analytics
  • Azure HDInsight (Spark / Hadoop): managed clusters of Hadoop and Spark with enterprise-level SLAs with lower TCO than on-premises deployment.

When you get data into big unstructured stores such as Blob or Data Lake, you need specialized compute engines to handle the complexity and volume of the data. This compute must be capable of scaling out because you cannot wait hours/days/months to analyse the data.

Ingest Streaming Data

  • Azure IoT Hub
  • Azure Event Hubs
  • Kafka on Azure HDInsight

How do you ingest this real-time data as it is generated? You can tap into event generators (e.g. devices) and buffer up data for your processing engines.

Stream Processing Engines

  • Azure Stream Analytics
  • Storm and Spark streaming on Azure HDInsight

These systems allow you to process streaming data on the fly. You have a choice between "easy" (Stream Analytics) and "open source extensibility" (Storm/Spark on HDInsight).

Reporting and Modelling

  • Azure Analysis Services
  • Power BI

You have cleansed and curated the data, but what do you do with it? Now you want some insights from it. Reporting & modelling is the first level of these insights.

Advanced Analytics

  • Azure Machine Learning
  • ML Server (R)

The basics of reporting and modelling are not new. Now we are getting into advanced analytics. Using these AI systems on the data, we can predict outcomes or prescribe recommended actions.

Deep Learning

  • Cognitive Services
  • Bot Service

Taking advanced analytics to a further level by using these toolkits.

Tracking Data

  • Azure Search
  • Azure Data Catalog

When you have such a large data estate you need ways to track what you have, and to be able to search it.

The Azure Platform

  • ExpressRoute
  • Azure AD
  • Network Security Groups
  • Azure Key Vault
  • Operations Management Suite
  • Azure Functions (serverless compute)
  • Visual Studio

You need a platform that delivers these enterprise capabilities – networking, identity, security, monitoring, and tooling – in a compliant manner.

Big Data Services

Nishant says that the darker-shaded services are the ones usually being talked about when people talk about Big Data:

[slide: the Azure data landscape with the big data services highlighted]

To understand what all these services are doing as a whole, and why Microsoft has gotten into Big Data, we have to step all the way back. There are 3 high-level trends that are a kind of an industrial revolution, making data a commodity:

  • Cloud
  • Data
  • AI

We are on the cusp of an era where every action produces data.

The Modern Data Estate

[slide: the modern data estate]

There are 2 principles:

  • Data on-premises and
  • Data in the cloud

Few organizations are just 1 or the other; most span both locations. Data warehouses aggregate operational databases. Data Lakes store the data used for AI, and will be used to answer the questions that we don’t even know of today.

We need three capabilities for this AI functionality:

  • The ability to reason over this data from anywhere
  • You have the flexibility to choose – MS (simplicity & ease of use), open source (wider choice), programming models, etc.
  • Security & privacy, e.g. GDPR

Microsoft has offerings for both on-premises and in Azure, spanning MS code and open source, with AI built-in as a feature.

Evolution of the Data Warehouse

[slide: the evolution of the data warehouse]

There are 3 core scenarios that use Big Data:

  • Modern DW: Modernizing the old concept of a DW to consume data from lots of sources, including complexity (big data)
  • Advanced Analytics: Make predictions from data using Deep Learning (AI)
  • IoT: Get real time insights from data produced by devices

Implementing Big Data & Data Warehousing in Azure

Here is a traditional DW

[slide: a traditional data warehouse]

Data from operational databases is fed into a single DW. Some analysis is done and information is reported/visualized for users.

SQL Server Integration Services, which can now run as part of Azure Data Factory, allows you to consume data from your multiple operational sources and aggregate it into a DW.

Azure Analysis Services allows you to build tabular models for your BI needs, and Power BI can be used to report on and visualize those models.

If you have existing huge repositories of data that you want to bring into a DW then you can use:

  • Azure CLI
  • Azure Data Factory
  • BCP Command Line Utility
  • SQL Server Integration Services

This traditional model breaks when some of your data is unstructured. For example:

[slide: adding unstructured data to the traditional data warehouse]

Structured operational data is coming in from Azure SQL DB as before.

Log files and media files are coming into blob storage as unstructured data – the structure of queries is unknown and the capacity is enormous.  That unstructured data breaks your old system but you still need to ingest it because you know that there are insights in it.

Today, you might only know some questions that you'd like to ask of the unstructured data. But later on, you might have more queries that you'd like to create. The vast economies of scale of Azure storage make this feasible.

ExpressRoute will be used to ingest data from an enterprise if:

  • You have security/compliance concerns
  • There is simply too much data for normal Internet connections

Back to the previous unstructured data scenario. If you are curating the data so it is filtered/clean/useful, then you can use Polybase to ingest it into the DW. Normally, that task of cleaning/filtering/curating is too huge for you to do on the fly.

[slide: HDInsight curating unstructured data before Polybase ingestion into the DW]

HDInsight can tap into the unstructured blob storage to clean/curate/process it before it is ingested into the DW.

What does HDInsight allow you to do? You forget that the data was structured/unstructured/semi-structured. You forget the complexity of the analytical queries that you want to write. You forget the kinds of questions you would like to ask of the data. HDInsight allows you to add structure to the data using some of its tools. Once the data is structured, you can import it into the DW using Polybase.
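
To make the Polybase step a little more concrete, here's a rough sketch of that import, wrapped in Invoke-Sqlcmd so it can be scripted. Every object name is hypothetical, and it assumes a database-scoped credential for the blob account already exists:

$polybase = @"
CREATE EXTERNAL DATA SOURCE RawBlob WITH (TYPE = HADOOP, LOCATION = 'wasbs://raw@ingestacct.blob.core.windows.net', CREDENTIAL = BlobCredential);
CREATE EXTERNAL FILE FORMAT CsvFormat WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = ','));
CREATE EXTERNAL TABLE ext.WebLogs (ClientIp VARCHAR(50), Url VARCHAR(500), HitTime DATETIME2)
    WITH (LOCATION = '/weblogs/', DATA_SOURCE = RawBlob, FILE_FORMAT = CsvFormat);
-- CTAS pulls the curated blob data into the data warehouse
CREATE TABLE dbo.WebLogs WITH (DISTRIBUTION = ROUND_ROBIN) AS SELECT * FROM ext.WebLogs;
"@
Invoke-Sqlcmd -ServerInstance "dwserver01.database.windows.net" -Database "SalesDW" -Username "loader" -Password $pwd -Query $polybase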

Another option is to use Azure Functions instead of HDInsight:

[slide: Azure Functions as a lightweight alternative to HDInsight]

This serverless option can suit you if the required manipulation of the unstructured data is very simple. It cannot be sophisticated – why re-invent the wheel of HDInsight?

Back to HDInsight:

[slide: HDInsight in the modern data warehouse]

Analytical dashboards can tap into some of the compute engines directly, e.g. tap into raw data to identify a trend or do ad-hoc analytics using queries/dashboards.

Facilitating Advanced Analytics

So you’ve got a modern DW that aggregates structured and unstructured data. You can write queries to look for information – but we want deeper insights.

[slide: adding Machine Learning to the modern data warehouse]

The compute engines (HDInsight) enable you to use advanced analytics. Machine Learning can only be as good as the quality and quantity of data that you provide to it – that's the compute engine's job. The more data machine learning has to learn from, the more accurate the analysis will be. If the data is clean, then garbage results won't be produced. To do this with TBs or PBs of data, you will need the scale-out compute engine (HDInsight) – a VM just cannot do this.

Some organizations are so large or so specialized that they need even better engines to work with:

[slide: Azure Data Lake Store and Azure Data Lake Analytics]

Azure Data Lake Store replaces blob storage at greater scales. Azure Data Lake Analytics replaces HDInsight and offers a developer-friendly T-SQL-like & C# environment (U-SQL). You can also write Python and R models. Azure Data Lake Analytics is serverless – there are no clusters as there are in HDInsight. You can focus on your service instead of being distracted by monitoring.

Note that HDInsight works with interactive queries and streaming data. Azure Data Lake Analytics is based on batch jobs.

[slide: HDInsight versus Azure Data Lake Analytics]

You have the flexibility of choice for your big data compute engines:

[slide: the choice of big data compute engines]

Returning to the HDInsight scenario:

[slide: HDInsight and Cosmos DB integration]

HDInsight, via Spark, can integrate with Cosmos DB. Data can be stored in Cosmos DB for users to consume. Also, data that users are generating and storing in Cosmos DB can be consumed by HDInsight for processing by advanced analytics, with learnings being stored back in Cosmos DB.

Demo

He opens an app on an iPhone. It's a shoe sales app. The service is (in theory) using social media, fashion trends, weather, customer location, and more to make a prediction about what shoes the customer wants. Those shoes are presented to the customer, with the hope that this will simplify the shopping experience and lead to a sale on this app. When you pick a shoe style, the app predicts your favourite colour. If you view a shoe but don't buy it, the app can automatically entice you with promotional offers – stock levels can be queried to see what kind of promotion is suitable – e.g. try to shift less popular stock by giving you a discount for an in-store pickup where stock levels are too high and it would cost the company money to ship stock back to the warehouse. The customer might also be tempted to buy some more stuff when in the shop.

He then switches to the dashboard that the marketing manager of the shoe sales company would use. There’s lots of data visualization from the Modern DW, combining structured and unstructured data – the latter can come from social media sentiment, geo locations, etc. This sentiment can be tied to product category sales/profits. Machine learning can use the data to recommend promotional campaigns. In this demo, choosing one of these campaigns triggers a workflow in Dynamics to launch the campaign.

Here’s the solution architecture:

[slide: the demo solution architecture]

There are 3 data sources:

  • Unstructured data from monitoring the social and app environments – Azure Data Factory
  • Structured data from CRM (I think) – Azure Data Factory
  • Product & customer profile data from Cosmos DB (Service Fabric in front of it servicing the mobile apps).

HDInsight is consuming that data and applying machine learning using R Server. Data is being written back out to:

  • Blob storage
  • Cosmos DB – Spark integration

The DW consumes two data sources:

  • The data produced by HDInsight from blob storage
  • The transactional data from the sales transactions (Azure SQL DB)

Azure Analysis Services then provides the ability to consume the information in the DW for the Marketing Manager.

Enabling Real-Time Processing

This is when we start getting IoT data in, e.g. from sensors – another source of unstructured data that can come in big and fast. We need to capture the data, analyse it, derive insights, and potentially do machine learning analysis to take actions on those insights.

[slide: the real-time processing architecture]

Event Hubs can ingest this data and forward it to HDInsight – stream analysis can be done using Spark Streaming or Storm. Data can be analysed by Machine Learning and reported in real-time to users.

So the IoT data is:

  • Fed into HDInsight for structuring
  • Fed into Machine Learning for live reporting
  • Stored in Blob Storage.
  • Consumed by the DW using Polybase for BI

There are alternatives to this IoT design.

[slide: alternatives for ingesting streaming data]

You should use Azure IoT Hub if you want:

  • Device registration policies
  • Metadata about your devices to be stored

If you have some custom operations to perform, Azure HDInsight (Kafka) can scale to millions of events per second. It can apply custom logic that cannot be done by Event Hubs or IoT Hub.

We also have flexibility of choice when it comes to processing.

[slide: stream processing options]

Azure Stream Analytics gives you ease-of-use versus HDInsight. Instead of monitoring the health & performance of compute clusters, you can use Stream Analytics.

The Azure Platform

The platform of Azure wraps this package up:

  • ExpressRoute: Private SLA networking
  • Azure Data Factory: Orchestration of the data processing, not just ingestion.
  • Azure Key Vault: Securely storing secrets
  • Operations Management Suite: Monitoring & alerting

And now that your mind is warped, I'll leave it there :) I thought it was an excellent overview session.

My Review of Microsoft Ignite 2017

Another week of Microsoft Ignite has come to an end. I'm sitting in my hotel room, thinking back on this week, and it's time to write a review – based on past years, emails, and phone calls, some people in MS HQ are sitting up a little straighter now :)

Putting Minds at Rest

Let’s let those tensed up people relax – Microsoft Ignite 2017 was an extremely well run show and the content was the best yet. Let’s dive a bit deeper.

Orlando

OK, Orlando in September is a bit of a gamble. That’s hurricane season in Florida, and Microsoft got lucky when Hurricane Irma swerved a little further west than it was originally projected to. There was a day or two when we were worried about the conference going ahead, but all was good. The venue, the OCCC, is a huge complex on International Drive, aka I-Drive. Two huge conference centres, North/South and West are connected together by a SkyBridge that passes the Hyatt Regency, which is also used. You can walk across the road, and there is also a shuttle service.


I heard that 30,000 people attended this conference, plus maybe 10,000 staff/vendors/sponsors. Imagine that crowd in one venue? At Chicago, there were 22,000 attendees and it sure felt like it. On day 1 in Orlando, it felt busy, but that's because there were fewer/larger sessions and the crowds felt oppressive. But once the keynotes were over, the crowds spread out and things were good – especially after I found a lesser-used path between West and the Hyatt :)

What makes a city? It’s the people. My favourite TechEd (Europe and North America) was in New Orleans – yeah, even with all the walking! The people were just so friendly and appreciative of us visiting their city. The staff in Orlando were almost as amazing – please take that as a compliment because it’s meant to be. There was always a hello when you passed by, and if you had a question they did their best (including a radio call) to find the answer.

Hotels & Buses

I-Drive is the main place of accommodation for anyone doing the Universal Parks/Disney thing in Florida. There's an abundance of hotels, bars, and restaurants along this road, especially within a 25-minute walk of the OCCC. My hotel, the Castle, was exactly 25 minutes walk away. I took the bus every morning, and was at the convention centre in around 10 minutes – a far cry from the 1 hour in Chicago! Every evening, bar one, I walked home – on the Friday I ordered an Uber and the 4+ star driver arrived in under 2 minutes, and I paid less than 7 dollars for the ride home – a far cry from the minimum of $60 that Chicago taxi drivers demanded!

The hotels were all close to the OCCC, and because of the nature of the area, all had plenty of services nearby. My hotel had an IHOP, a Denny's, many bars & restaurants, and plentiful tourist shops nearby. The hotels were all of a good quality.

The bus service was fast, and all the staff had a friendly hello. The driver on the last morning made sure to have a joke with us all after the Thursday night party, and made a big point of thanking us all for attending – she’d been driving buses for conferences for several years. These little things make a difference.

I used Uber for the first time ever in Orlando at those times/places when the conference bus service wasn't an option. Wow! Let's leave it there :)

Food

Conference food is never exactly a Michelin star experience, but I am a man of simple tastes when it comes to food. I was disappointed when the North/South hall ran out of food on the Monday – it was the venue for the keynotes so that was where most people would be. We were redirected to the West hall, but that was 20+ minutes away! I went hungry because I had a session to be at.

After that, the conference had 30-minute breaks for lunch – enough time to grab lunch to go, which was sandwiches/fruit/salad/dessert in a box. I was quite happy with that because I was here to learn, not to dine. One day my lunch went into my laptop bag for later; on other days it was scoffed down.

For breakfast, I have learned to pick up a bowl, plastic spoons, cereal, and milk (kept in the hotel room fridge) for the week. I did that and was happy, but no-one was complaining about the food in the halls. There were no turkey sausages – REPEAT – no turkey sausages. What is it with Microsoft and turkey sausages?!?! Yup, it was good ol’ pastry, eggs, and bacon.

The Content

This is the reason we attend Ignite. The main keynote, by Satya Nadella, was mostly the same thing that Nadella has presented since his rise to CEO. To be honest, I'm well bored of words I know, in sentences that mean little. The highlights were the keynote by a woman who was genuinely one of the most likeable & enthusiastic people I've ever seen out of Redmond (boo-yah!), and a panel of scientists working on Microsoft's quantum computing project that made us all feel stupid – in a good way.

I was here for the breakout sessions. My focus was on Azure, and I got lots of that. I also attended a pair of Windows Server sessions. For the most part, the session quality was excellent. I talked to loads of people during the week and they all said the same thing. In one conversation with a fellow MVP, he said "you know how you find yourself leaving a disappointing session, and try to find something else in that time slot …", and both he and I agreed that that hadn't happened to us this year. The only time I thought about leaving early was during "customer stories", which was nearly always a presentation by an Expo hall sponsor advertising their wares instead of talking about their experiences with the topic of the session. I really dislike being advertised to in a conference that I've (my employer, really) paid to attend. Luckily, that was only in a few sessions. Less of that please, Microsoft!

I didn't attend any theatre sessions. Boy, have they changed since Chicago! I presented in Chicago and the organization of theatre sessions was … unorganized. This year, some of those presentations were drawing bigger crowds than the official breakout sessions by Microsoft. It seems now that a theatre session covers a subject that can be handled in 20 minutes instead of the 75 that is normal for the breakout sessions. And Microsoft presented a bunch of them too, not just community members.

I do have a bit of a downer – Friday. Friday is a dead day. Microsoft staff mostly abandon the conference on Thursday afternoon. Unless things change, book your travel to leave on Thursday night/Friday morning. I would have loved to have left on Thursday night, to spend the weekend with my family before heading off again on Sunday. It felt like the weakest content was on Friday this year. Normally there were 30+ breakout sessions per time slot from Monday-Thursday, but on Friday it was 10-12, and not much got my attention. I ended up attending the first session, doing a podcast recording with a friend, and leaving early. That's time with my family that I lost – time that could have been put to good use, but was wasted. If Friday morning is dead, then I would prefer Microsoft to run a Monday-Thursday conference.

Overall Feeling

I got quite a lot out of this week. There was so much announced that it will take me weeks to digest it all. Between Monday-Thursday, there was so much that I wanted to attend that I will have to download sessions to get up to date. I came here wanting to learn lots of Azure PaaS, but so much was going on that I couldn’t attend everything – thankfully we have MyIgnite, Channel 9, and YouTube. I also missed out on the hands-on labs, but they will remain live for attendees for 6 months.

I was delighted to hear that the conference will return to Orlando next year. I’d heard a nasty rumour about Ignite merging with Microsoft’s internal MGX conference in Las Vegas, which would have been an unmitigated disaster. Orlando has been the best place so far for handling the huge audience. In my opinion, there’s not much that Microsoft has to do to improve Ignite – keep what’s there and rethink Friday. Oh – and invent a new way to absorb 4 sessions at once.

Wow, this review is sooo different to my TechEd Europe 2009 review :) *dodges more bullets*

Azure IaaS Design & Performance Considerations–Best Practices & Learnings From The Field

Speaker: Daniel Neumann, TSP – Azure Infrastructure, Microsoft (ex-MVP).

Selecting the Best VM Size

Performance of each Azure VM vCPU/core is rated using ACU, based on 100 for the Standard A-Series. E.g. D_v2 offers 210-250 per vCPU. H offers 290-300. Note that the D_v3 has lower speeds than D_v2 because it uses hyperthreading on the host – MS matched this by reducing costs accordingly. Probably not a big deal – DB workloads, which are common on the D-family, care more about thread count than GHz.
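
If you don't want to trawl the documentation for sizes, the AzureRM PowerShell module can list what's available in a region – the region and filter are just examples:

Get-AzureRmVMSize -Location "EastUS2" | Where-Object Name -like "Standard_D*_v3" | Sort-Object Name | Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount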

Network Performance

Documentation has been improved to show actual Gbps instead of low/medium/high. Higher-end machines can be created with Accelerated Networking (SR-IOV), which can offer very high speeds. Announced this week: the M128s VM can hit 30 Gbps.

RSS

RSS is not always enabled by default for Windows VMs. It is on larger VMs, and it is for all Linux machines. It can greatly improve inbound data transfer performance for multi-core VMs.
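
Inside a Windows guest, you can check and enable RSS yourself; a quick sketch (the adapter name is an example):

# see which adapters have RSS enabled
Get-NetAdapterRss | Format-Table Name, Enabled
# turn it on for a specific adapter if it is off
Enable-NetAdapterRss -Name "Ethernet"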

Storage Throughput

Listed in the VM sizes. This varies between series, and increases as you go up through the sizes. Watch out when using Premium Storage – lower end machines might not be able to offer the potential of larger disks or storage pools of disks, so you might need a larger VM size to achieve the performance potential of the disks/pool.

Daniel uses a tool called PerfInsights from MS Downloads to demo storage throughput.

Why Use Managed Disks

Storage accounts are limited to 50,000 IOPS since 20/9/2017. That limits the number of disks that you can have in a single storage account. If you put too many disks in a single storage account, you cannot get the performance potential of each disk.

Lots of reasons to use managed disks. In short:

  • No more storage accounts
  • Lots more management features
  • FYI: no support yet for Azure-to-Azure Site Recovery (replication to other regions)

If you use un-managed disks with availability sets, it can happen that the storage for all of your VMs ends up in the same storage fault domain. With managed disks, disk placement is aligned with the availability set.
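
Converting an existing VM from un-managed to managed disks is a short job in AzureRM PowerShell – a sketch with example names (the VM must be deallocated first, and a VM in an availability set needs the set converted to managed/aligned first):

Stop-AzureRmVM -ResourceGroupName "ProdRG" -Name "app01" -Force
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName "ProdRG" -VMName "app01"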

Storage Spaces

Do not use disk mirroring. Use simple virtual disks/LUNs.

Ensure that the column count = the number of disks for performance.

Daniel says to format the volume with a 64KB allocation unit size. True, for almost everything except SQL Server. For normal transactional databases, stick with a 64KB allocation unit size. For SQL Server data warehouses, go with a 256KB allocation unit size – from the SQL Tiger team this week.
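
A sketch of building that simple (non-mirrored) Storage Spaces volume inside the guest, with the column count matching the disk count and a 64KB interleave/allocation unit size – it assumes the data disks are already attached and raw:

$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" -ResiliencySettingName Simple -NumberOfColumns $disks.Count -Interleave 65536 -UseMaximumSize
Get-VirtualDisk -FriendlyName "DataDisk" | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -AllocationUnitSize 65536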

Networking

Daniel doesn’t appear to be a fan of micro-segmentation of a subnet using an NVA. Maybe the preview DPDK feature for NVA performance might change that.

He shows the NSG Security Group View in Network Watcher. It allows you to understand how L4 firewall rules are being applied by NSGs. In a VM you also have: effective routes and effective security rules.

Encryption Best Practices

Azure Disk Encryption requires that your key vault and VMs reside in the same Azure region and subscription.

Use the latest version of Azure PowerShell to configure Azure Disk Encryption.

You need an Azure AD Service Principal – the VM cannot talk directly to the key vault, so it goes via the service principal. Best practice is to have 1 service principal for each key vault.
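
A hedged sketch of that 2017-era (service principal) enablement – every name and variable here is a placeholder:

$kv = Get-AzureRmKeyVault -VaultName "ProdKeyVault" -ResourceGroupName "SecurityRG"
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName "ProdRG" -VMName "app01" -AadClientID $spAppId -AadClientSecret $spSecret -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId -VolumeType All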

Storage Service Encryption (managed disks) is easier. There is no BYOK at the moment so there’s no key vault function. The keys are managed by Azure and not visible to the customer.

The Test Tools Used In This Session

[photo: slide listing the test tools used in this session]

Comparing Performance with Encryption

There are lots of charts in this section, so it's best to watch the video on Channel 9/Ignite/YouTube.

In short, ADE encryption causes some throughput performance hits, depending on disk tier, size, and block size of data – CPU 3% utilization, no IOPS performance hit. SSE has no performance impact.

Azure Backup Best Practices

You need a recovery services vault in the same region/subscription as the VM you want to back up.

VMs using ADE encryption must have a Key Encryption Key (KEK).

Best case performance of Azure Backup backups:

  • Initial backup: 20 Mbps.
  • Incremental backup: 80 Mbps.

Best practices:

  • Do not schedule more than 40 VMs to back up at the same time.
  • Make sure you have Python 2.7 in Linux VMs that you are backing up.

Protect Your Data With Microsoft Azure Backup

Speakers:

  • Vijay Tandra Sistla, Principal PM Manager
  • Aruna Somendra, Senior Program Manager

Aruna is first to speak. It’s a demo-packed session. There was another session on AB during the week – that’s probably worth watching as well.

All the attendees are from diverse backgrounds, and we have one common denominator: data. We need to protect that data.

Impact of Data Loss

  • The impact can be direct, e.g. WannaCry hammering the UK’s NHS and patients.
  • It can impact a brand
  • It can impact your career

Azure Backup was built to:

  • Make backups simple
  • Keep data safe
  • Reduce costs

Single Solution

Azure Backup covers on-premises and Azure. It is one solution, with 1 pricing system no matter what you protect: instance size + storage consumed.

Protecting Azure Resources

A demo will show this in action, plus new features coming this year. They've built a website with some content on Azure Web Apps – images in Azure Files and data in SQL in an IaaS VM. Vijay refreshes the site and the icons are ransomwared.

Azure Backup can support:

  • Azure IaaS VMs – the entire VM, disks, or file level recovery
  • Azure Files via Storage account snapshots (NEW)
  • SQL in an Azure IaaS VM (NEW)

Discovery of databases is easy. An agent in the guest OS is queried, and all SQL VMs are discovered. Then all databases are shown, and you back them up based on full / incremental / transaction log backups, using typical AB retention.

For Azure File Share, pick the storage account, select the file share, and then choose the backup/retention policy. It keeps up to 120 days in the preview, but longer term retention will be possible at GA.

When you create a new VM, the Enable Backup option is in the Settings blade. So you can enable backup during VM creation instead of trying to remember to do it later – no longer an afterthought.
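
For VMs that already exist, the same protection can be scripted; a rough sketch with placeholder names:

$vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "BackupRG" -Name "ProdVault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name "app01" -ResourceGroupName "ProdRG"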

Conventional Backup Approaches

What happens behind the scenes in AB? Instead of using on-prem SQL and file servers, you're starting to use Azure Files and SQL in VMs. Instead of hacking backups into Azure storage (doesn't scale, and messy), you enable Azure Backup, which offers centralized management. In Azure, it is infrastructure-free. Both SQL and VMs are backed up using backup extensions.


Azure File Sync is supported too:

In preview, there is short-term retention using snapshots in the source storage account. After GA they will increase retention and enable backups to be stored in the RSV.


Linux

When you back up a Linux VM, you can run a pre-script, do the backup, and then run a post-script. This can enable application-consistent backups in Linux VMs in Azure. Aruna logs into a Linux VM via SSH. There are Azure CLI commands in the guest OS, e.g. az backup. There is a JSON file that describes the pre- and post-scripts. There are some scripts for MySQL by a company called capside. The pre-script creates database dumps and stops the databases.


az backup recoverypoint list and some flags can be used to list the recovery points for the currently logged in VM. The results show if they are app or file consistent.

az backup restore files and some parameters can be used to mount the recovery point – you then copy files from the recovery point, and unmount the recovery point when done.


Restore as a Service

[photo: restore-as-a-service slide]

On-Premises

2/3 of customers keep data on-premises.

Two solutions in AB for hybrid backup:

  • Microsoft Azure Backup Server (MABS) / DPM: Backup Hyper-V, VMware, SQL, SharePoint, Exchange, File Server & System State to local storage (short-term retention)  and to the cloud (long term retention)
  • MARS Agent: Files & Folders, and System State backed up directly to the cloud.

System State

Protects Active Directory, the registry, COM+, Certificate Services, cluster service info, file server metadata, and the IIS metabase.

Went live in MARS agent last month.

In a demo, Vijay deletes users from AD. He restores system state files using MARS. Then you reboot the DC in AD restore mode. And then use the wbadmin tool to restore the system state. wbadmin start systemstaterecovery. You reboot again, and the users are restored.
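
The wbadmin part looks roughly like this when run from an elevated prompt in Directory Services Restore Mode – the version identifier is an example; get the real one from the first command:

wbadmin get versions
wbadmin start systemstaterecovery -version:09/28/2017-02:00 -quiet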

Vijay shows MARS deployment, and shows the Project Honolulu implementation.

Next he talks about the ability to do an offline backup instead of an online full backup. This leverages the Azure storage import service, which can leverage the new Azure Data Box – a tamper proof storage solution of up to 100 TB.

Security

Using the cloud isolates backup data from the production data. AB includes a free multi-approval process to protect against destructive operations on hybrid backups. All backup data is encrypted. RBAC offers governance and control over Azure Backup.

There are email alerts (if enabled) for destructive operations.

If data is deleted, it is retained for 14 days so you can still restore your data, just in case.

Hybrid Backup Encryption

Data is encrypted before it leaves the customer site.

Customers want:

  • To be able to change keys
  • Keep the key secret from MS

A passphrase is used to create the key. This is a key encryption key process. And MS never has your KEK.

Azure VM Disk Encryption

You still need to be able to back up your VMs. If a disk is encrypted using a KEK/BEK combination in the Key Vault, then Azure Backup includes the keys in the backup, so you can restore from any point in time in your retention policy.

Isolation and Access Control

Two levels of authorization:

  • You can control access/roles to individual vaults for users.
  • There are permissions or roles within a vault that you can assign to users.

Monitoring & Reporting

Typical questions:

  • How much storage am I using?
  • Are my backups healthy?
  • Can I see the trends in my system?

Vijay does a tour of information in the RSV. Next he shows the new integration with OMS Log Analytics. This shows information from many RSVs in a single tenant. You can create alerts from events in Log Analytics – emails, webhooks, runbooks, or trigger an ITSM action. The OMS data model, for queries, is shared on docs.microsoft.com.

For longer term reporting, you can export your tenant’s data to an AB Content Pack in PowerBI – note that this is 1 tenant per content pack import, so a CSP reseller will need 100 imports of the content pack for 100 customers. Vijay shows a custom graphical report showing the trends of data sources over 3 months – it shows growth for all sources, except one which has gone down.

Power BI is free up to 1 GB of data, and then it’s a per-user monthly fee after that.

Roadmap

  • Backup of SQL in IaaS – preview
  • Backup of Azure file – preview
  • Azure CLI
  • Backup of encrypted VMs without KEK
  • Backup of VMs with storage ACLs
  • Backup of large disk VMs
  • Upgrade of classic Backup Vault to ARM RSV
  • Resource move across RG and subscription
  • Removal of vault limits
  • System State Backup

From IT Pros to IT Heroes – With Azure DevTest Labs

Claude Remillard, Group Program Manager

How IT pros can make devs very happy!

Reason to Exist

50% or more of infrastructure is used for non-production environments. In an old job of mine, we had dev, test, and production versions of every system. That's a lot of money! The required lifetime of dev & test environments goes up and down. The cloud offers on-demand capacity.

DevTest Labs

Solution for fast, easy, and agile dev-test environments in Azure:

  • Fast provisioning
  • Automation & self-service
  • Cost control and governance

Think of it as a controlled subset of Azure where the devs can roam free.

Test Environments

Typical pipeline:

  1. Check-in
  2. Build
  3. Test
  4. Release

You can pre-configure a lot of things to get a VM. Standardize images. Use an image factory.

Training / Education

A number of training companies are using DevTest environments. They can set a limit in the lab, and then let people do what they need to do in that lab.

Trials / Demos / Hackathons

Invite people in to try something out, experiment with designs/patterns, and do this in a short-lived and controlled environment.

Demo

DevTest Labs is just another Azure service. You create the lab, configure it, and assign users to it.

In Overview, you can see VMs you own, and VMs that you can claim. In virtual machines, you can see an environment alongside VMs; this is a collection of related resources. Claimable VMs are pre-created and shut down. An IT pro could take a s/w build, deploy it overnight, and let devs/testers claim the machines the following morning.

When he goes into a VM, it has a tiny subset of the usual VM features. It has other things, like Auto-Start and Auto-Shutdown to reduce costs. You can create a custom image from a VM, which includes optionally running Sysprep. That image is then available to everyone in the lab to create VMs from. Images can be shared between labs.

Everything in the lab can be automated with APIs, PowerShell (and, thus, Automation).
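
For example, a sketch using the generic ARM cmdlets – the lab and VM names are hypothetical:

# list the DevTest Labs in the subscription
Get-AzureRmResource -ResourceType "Microsoft.DevTestLab/labs" | Format-Table Name, ResourceGroupName, Location
# start a specific lab VM via its resource action (lab VM resources are named "<lab>/<vm>")
$labVm = Get-AzureRmResource -ResourceType "Microsoft.DevTestLab/labs/virtualmachines" | Where-Object Name -eq "MyLab/Dev01"
Invoke-AzureRmResourceAction -ResourceId $labVm.ResourceId -Action "start" -Force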

He goes to create a VM. The new VM is built from a "base". Bases can be custom/gallery images, ARM templates, or formulas. It sounds like the ARM template could be in a source control system and you could have multiple labs subscribe to those templates, or other artefacts.

If you select a VM base, there's just one blade to create it. Name the machine, put in a guest OS username/password (can be saved as a reusable secret), choose disk type/size, select a VM series/size (restricted by admin), add other artefacts (additional s/w you can add to the VM at the time of creation, e.g. Chrome using the Chocolatey package manager, join an AD domain, etc), optionally do some advanced settings (network options, IP config, auto-delete the VM, number of instances, make the VM claimable), and click Create.

You can export a lab as a file, and use that file to spin up new labs.

Back in the lab, he goes to Configuration & Policies. Cost Tracking shows trends and resource-specific costs. This is based on RRP costs – special deals with MS are not available to the DevTest Lab APIs. The goal here isn't to do accounting – it's to see spend trends and spikes.

Users: Devs should be “Lab Users”. You can share a lab with external users, e.g. consultants.

Policy Settings allows you to control:

  • Allowed virtual machines: You select which series/size can be deployed.
  • Virtual machines per user: You can limit the number of machines. You can limit the number of machines using Premium Disks. Enforced per user.
  • Virtual machines per lab: You can limit VMs and Premium VM disks per lab

Schedules:

  • Auto-Start
  • Auto-Stop

You can send emails and webhooks before auto-shutdown.

External Resources:

  • Repositories: Places where you pull artefacts from. Supports VSTS, GitHub and Git. The azure-devtestlab repo on GitHub has lots of sample artefacts, scripts, and templates. This is the best way to share things between labs.
  • Virtual Networks: What networks will be available – should be pre-created by IT pros. You set up a default virtual network for new VMs, optionally with S2S VPN/ExpressRoute. You can control whether a VM can have a public IP or not.

Virtual Machine Bases:

  • Marketplace Images: What is available from the Marketplace: nothing / all / subset.
  • Custom images:
  • Formulas:

At this point I asked if Azure DevTest Labs is available in CSP. The speaker had never heard of the primary method for selling Azure by MS Partners. That’s pretty awful, IMO.

Image Factories

A way to build images that can be reused. It's a bit more though – it's a configuration that builds VMs with configurations automatically on a regular basis. This makes it possible to deliver the latest VM images, with bits baked in, to your devs and testers.

That’s everything.

Application-Aware Disaster Recovery For VMware, Hyper-V, and Azure IaaS VMs with Azure Site Recovery

Speaker: Abhishek Hemrajani, Principal Lead Program Manager, Azure Site Recovery, Microsoft

There’s a session title!

The Impact of an Outage

The aviation industry has suffered massive outages over the last couple of years, costing millions to billions. Big sites like GitHub have gone down. Only 18% of DR investors feel prepared (Forrester, July 2017, The State of Business Technology Resiliency). Much of this is due to immature core planning and very limited testing.

Causes of Significant Disasters

  • Forrester says 56% of declared disasters are caused by h/w or s/w.
  • 38% are because of power failures.
  • Only 31% are caused by natural disasters.
  • 19% are because of cyber attacks.

Sourced from the above Forrester research.

Challenges to Business Continuity

  • Cost
  • Complexity
  • Compliance

How Can Azure Help?

The hyper-scale of Azure can help.

  • Reduced cost – OpEx utility computing and benefits of hyper-scale cloud.
  • Reduced complexity: Service-based solution that has weight of MS development behind it to simplify it.
  • Increased compliance: More certifications than anyone.

DR for Azure VMs

Something that AWS doesn't have. Some mistakenly think that you don't need DR in Azure. A region can go offline. People can still make mistakes. MS does not replicate your VMs unless you enable/pay for ASR for selected VMs. It is highly certified for compliance, including PCI, EU Data Protection, ISO 27001, and many, many more.

  • Ensure compliance: No-impact DR testing. Test every quarter or, at least, every 6 months.
  • Meet RPO and RTO goals: Backup cannot do this.
  • Centralized monitoring and alerting

Cost effective:

  • “Infrastructure-less” DR sites.
  • Pay for what you consume.

Simple:

  • One-click replication
  • One-click application recovery (multiple VMs)

Demo: Typical SharePoint Application in Azure

3 tiers in availability sets:

  • SQL cluster – replicated to a SQL VM in a target region or DR site (async)
  • App – replicated by ASR – nothing running in DR site
  • Web – replicated by ASR – nothing running in DR site
  • Availability sets – built for you by ASR
  • Load balancers – built for you by ASR
  • Public IP & DNS – abstract DNS using Traffic Manager

One-Click Replication is new and announced this week. Disaster Recovery (Preview) is an option in the VM settings. All the pre-requisites of the VM are presented in a GUI. You click Enable Replication and all the bits are built and the VM is replicated. You can pick any region in a "geo-cluster", rather than being restricted to the paired region.

For more than one VM, you might enable replication in the recovery services vault (RSV) and multi-select the VMs for configuration. The replication policy includes recovery point retention and app-consistent snapshots.

New: Multi-VM consistent groups. In preview now, up to 8 VMs. 16 at GA. VMs in a group do their application consistent snapshots at the same time. No other public cloud offers this.

Recovery Plans

Orchestrate failover. VMs can be grouped, and groups are failed over in order. You can also require manual tasks to be done, and execute Azure Automation runbooks to do other things like creating load balancer NAT rules, re-configuring DNS abstraction in Traffic Manager, etc. You run the recovery plan to fail over … and to do test failovers.
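
Test failovers can also be scripted with the newer ASR cmdlets; a rough sketch, where all of the names and the test network ID are placeholders (and a test failover cleanup job is run afterwards):

$vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "DrRG" -Name "AsrVault"
Set-AzureRmRecoveryServicesAsrVaultContext -Vault $vault
$plan = Get-AzureRmRecoveryServicesAsrRecoveryPlan -Name "SharePointRP"
Start-AzureRmRecoveryServicesAsrTestFailoverJob -RecoveryPlan $plan -Direction PrimaryToRecovery -AzureVMNetworkId $testNetworkId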

DR for Hyper-V

You install the Microsoft Azure Recovery Services (MARS) agent on each host. That connects you to the Azure RSV and you can replicate any VM on that host to Azure. No on-prem infrastructure required. No connection broker required.

DR for VMware

You must deploy the ASR management appliance in the data centre. MS learned that the setup experience for this is complex. There were a lot of pre-reqs and configurations to install this in a Windows VM. MS will deliver this appliance as an OVF template from now on – a familiar format for VMware admins – and the appliance is configured from the Azure Portal. From then on, you replicate Linux and Windows VMs to Azure, just as with Hyper-V.

Demo: OVF-Based ASR Management Appliance for VMware

A web portal is used to onboard the downloaded appliance:

  1. Verify the connection to Azure.
  2. Select a NIC for outbound replication.
  3. Choose a recovery services vault from your subscription.
  4. Install any required third-party software, e.g. PowerCLI or MySQL.
  5. Validate the configuration.
  6. Configure vCenter/ESXi credentials – this is never sent to Azure, it stays local. The name of the credential that you choose might appear in the Azure portal.
  7. Then you enter credentials for your Windows/Linux guest OS. This is required to install a mobility service in each VMware VM. This is because VMware doesn’t use VHD/X, it uses VMDK. Again, not sent to MS, but the name of the credential will appear in the Azure Portal when enabling VM replication so you can select the right credentials.
  8. Finalize configuration.

This will start rolling out next month in all regions.

Comprehensive DR for VMware

Hyper-V can support all Linux distros supported by Azure. On VMware they're close to all. They've added Windows Server 2016, Ubuntu 14.04 and 16.04, Debian 7/8, managed disks, and 4 TB disk support.

Achieve Near-Zero Application Data Loss

Tips:

  • Periodic DR testing of recovery plans – leverage Azure Automation.
  • Invoke BCP before disasters if you know it’s coming, e.g. hurricane.
  • Take the app offline before the event if it’s a planned failover – minimize risks.
  • Failover to Azure.
  • Resume the app and validate.

Achieve 5x Improvement in Downtime

Minimize downtime: https://aka.ms/asr_RTO

He shows a slide. One VM took 11 minutes to failover. Others took around/less than 2 minutes using the above guidance.

Demo: Broad OS Coverage, Azure Features, UEFI Support

He shows Ubuntu, CentOS, Windows Server, and Debian replicating from VMware to Azure. You can fail over from VMware to Azure with UEFI VMs now – but you CANNOT fail back. The process converts the VM to BIOS in Azure (Generation 1 VMs). OK if there's no intention to fail back, e.g. migration to Azure.

Customer Success Story – Accenture

They deployed ASR. Increased availability. 53% reduction in infrastructure cost. 3x improvement in RPO. Savings in work and personal time. Simpler solution and they developed new cloud skills.

They get a lot of alerts at the weekend when there are any network glitches. It could be 500 email alerts.

Demo: New Dashboard & Comprehensive Monitoring

Brand new RSV experience for ASR. Lots more graphical info:

  • Replication health
  • Failover test success
  • Configuration issues
  • Recovery plans
  • Error summary
  • Graphical view of the infrastructure: Azure, VMware, Hyper-V. This shows the various pieces of the solution, and a line goes red when a connection has a failure.
  • Jobs summary

All of this is on one screen.

He clicks on an error and sees the hosts that are affected. He clicks on “Needs Attention” in one of the errors. A blade opens with much more information.

We can see replication charts for a VM and disk – useful to see if VM change is too much for the bandwidth or the target storage (standard VS premium). The disk level view might help you ID churn-heavy storage like a page file that can be excluded from replication.

A message digest will be sent out at the end of the day. This data can be fed into OMS.

Some guest speakers come up from Rackspace and CDW. I won’t be blogging this.

Questions

  • When are things out: News on the ASR blog in October
  • The Hyper-V Planner is out this week, and new cost planners for Hyper-V and VMware are out this week.
  • Failback of managed disks is there for VMware and will be out by end of year for Hyper-V.

Running Tier 1 Workloads on SQL Server on Microsoft Azure Virtual Machines

Speaker: Ajay Jagannathan, Principal PM Manager, Microsoft Data Platform Group. He leads the @mssqltiger team.

I think that this is the first ever SQL Server session that I've attended in person at a TechEd/Ignite. I was going to a PaaS session instead, but I've got so many customers running SQL Server on Azure VMs that I thought that this was important for me to see. I also thought it might be useful for a lot of readers.

Microsoft Data Platform

Starting with SQL 2016, the goal was to make the platform consistent on-premises, with Azure VMs, or in Azure SQL. With Azure, scaling is possible using VM features such as scale sets. You can offload database loads, so analytics can be on a different tier:

  • On-premises: SQL Server and SQL Server (DW) Reference architecture
  • IaaS: SQL Server in Azure VM with SQL Server (DW) in Azure VM.
  • PaaS: Azure SQL database with Azure SQL data warehouse

Common T-SQL surface area. Simple cloud migration. Single vendor for support. Develop once and deploy anywhere.

Azure VM

  • Azure load balancer routes traffic to the VM NIC.
  • The compute is separate from the storage.
  • The virtual machine issues operations to the storage.

SQL Server in Azure VM – Deployment Options

  • Microsoft gallery images: SQL Server 2008 R2 – 2017, SQL Web, Std, Ent, Dev, Express. Windows Server 2008 R2 – WS2016. RHEL and Ubuntu.
  • SQL Licensing: PAYG based on number of cores and SQL edition. Pay per minute.
  • Bring your own license: Software Assurance required to move/license SQL to the cloud if not doing PAYG.
  • Creates in ~10 minutes.
  • Connect via RDP, ADO.NET, OLEDB, JDBC, PHP …
  • Manage via Portal, SSMS, PowerShell, CLI, System Center …

It’s a VM so nothing really changes from on-premises VM in terms of management.

Every time there's a critical update or service pack, they update the gallery images.

VM Sizes

They recommend the DS_v2- or FS-Series with Premium Storage. For larger loads, they recommend the GS- and LS-Series.

For other options, there's the ES_v3-Series (memory-optimized DS_v3), and the M-Series for huge RAM amounts.

VM Availability

Availability sets distribute VMs across fault and update domains in a single cluster/data centre. You get a 99.95% SLA on the service for valid configurations. Use this for SQL clusters.

Managed disks offer easier IOPS management, particularly with Premium Disks (storage account has a limit of 20,000 IOPS). Disks are distributed to different storage stamps when the VM is in an availability set – better isolation for SQL HA or AlwaysOn.

High Availability

Provision a domain controller replica in a different availability set to your SQL VMs. This can be in the same domain as your on-prem domain (ExpressRoute or site-to-site VPN).

Use (Get-Cluster).SameSubnetThreshold = 20 to relax Windows Cluster failure detection for transient network failure.

Configure the cluster to ignore storage. They recommend AlwaysOn. There is no shared storage in Azure. New-Cluster -Name $ClusterName -NoStorage -Node $LocalMachineName

Configure an Azure load balancer and backend pool. Register the IP address of the listener.
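
The load balancer piece looks something like this in AzureRM PowerShell – the IP address, probe port, and names are examples, and the probe port must match what the cluster role is configured to answer on:

$fe = New-AzureRmLoadBalancerFrontendIpConfig -Name "SqlListenerFE" -PrivateIpAddress "10.0.1.20" -SubnetId $subnetId
$be = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "SqlBackendPool"
$probe = New-AzureRmLoadBalancerProbeConfig -Name "SqlProbe" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
$rule = New-AzureRmLoadBalancerRuleConfig -Name "SqlListenerRule" -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -Protocol Tcp -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP
New-AzureRmLoadBalancer -ResourceGroupName "SqlRG" -Name "SqlILB" -Location "westeurope" -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -LoadBalancingRule $rule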

There are step-by-step instructions on MS documentation.

SQL Server Disaster Recovery

Store database backups in geo-replicated readable storage. Restore backups in a remote region (~30 min).

Availability group options:

  • Configure Azure as remote region for on-premise
  • Configure On-prem as DR for Azure
  • Replicate to a remote Azure region – failover to the remote region in ~30s. Offload remote reads.

Automated Configuration

Some of these are provided by MS in the portal wizard:

  • Optimization to a target workload: OLTP/DW
  • Automated patching and shutdown – the latter is very new, and reduces costs by shutting down dev/test workloads at the end of the workday.
  • Automated backup to a storage account, including user and system databases. Useful for a few databases, but there’s another option coming for larger collections.

Storage Options

They recommend LRS only, to keep write performance at a maximum. GRS storage is slower, and geo-replication could lead to the database file being replicated ahead of the log file.

Premium Storage: high IOPS and low latency. Use Storage Spaces to increase capacity and performance. Enable host-based read caching in data disks for better IOPS/latency.

Backup to Premium Storage is 6x faster. Restore is 30x faster.

Azure VM Connectivity

  • Over the Internet.
  • Over a site-to-site tunnel: VPN or ExpressRoute
  • Apps can connect transparently via a listener, e.g. Load Balancer.

Demo: Deployment

The speaker shows a PowerShell script. Not much point in blogging this. I prefer JSON anyway.

http://aka.ms/tigertoolbox is the script/tools/demos repository.

Security

  • Physical security of the datacenter
  • Infrastructure security: virtual network isolation, and storage encryption including bring-your-own-key self-service encryption with Key Vault. Best practices and monitoring by Security Center.
  • Many certifications
  • SQL Security: auto-patching, database/backup encryption, and more.

VM Configuration for SQL Server

  • Use D-Series or higher.
  • Use Storage Spaces to aggregate the performance of multiple disks. Use simple (striped) virtual disks: the number of columns should equal the number of disks. Use a 64 KB interleave for OLTP and 256 KB for data warehouse workloads (see the sketch after this list).
  • Do not use the system drive.
  • Put TempDB, logs, and databases on different volumes because of their different write patterns.
  • 64K allocation unit size.
  • Enable read caching on disks for data files and TempDB.
  • Do not use GRS storage.
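
Here’s a minimal Storage Spaces sketch along those lines – the pool, disk, and volume names are placeholders, and the interleave shown is the OLTP recommendation:

    # Pool all poolable data disks and stripe across every disk (columns = disk count), 64 KB interleave for OLTP
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "SqlPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "SqlPool" -FriendlyName "SqlData" `
        -ResiliencySettingName Simple -NumberOfColumns $disks.Count -Interleave 64KB -UseMaximumSize
    # Format with the 64 KB allocation unit size recommended above
    Get-VirtualDisk -FriendlyName "SqlData" | Get-Disk |
        Initialize-Disk -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"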

SQL Configuration

  • Enable instant file initialization
  • Enable locked pages in memory
  • Enable data page compression
  • Disable auto-shrink for your databases
  • Backup to URL with compressed backups – useful for a few VMs/databases; SQL Server 2016 does this very quickly (see the sketch after this list).
  • Move all databases to data disks, including system databases (separate data and log). Use read caching.
  • Move SQL Server error log and trace file directories to data disks
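
For the backup-to-URL item, a tiny sketch of what that looks like – the storage account, container, and database names are placeholders, and it assumes a SAS-based credential for the container has already been created (the SQL Server 2016 method):

    # Compressed backup straight to Azure blob storage
    $query = "BACKUP DATABASE [MyDatabase]
              TO URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/MyDatabase.bak'
              WITH COMPRESSION;"
    Invoke-Sqlcmd -ServerInstance "." -Query $query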

Demo: Workload Performance of Standard Versus Premium Storage

A scripted demo: two scripts doing the same thing – one targets a DB on a Standard disk (up to 500 IOPS) and the second targets a DB on a Premium P30 (4,500 IOPS) disk. There’s table creation, 10,000-row inserts, more tables, etc. The scripts track the time required.

It takes a while – he has some stats from previous runs. There’s only a 25% difference in the test. Honestly – that’s not indicative of the real difference. He needs a better demo.

An IFI (instant file initialization) test shows that the bigger the database file, the bigger the performance difference – which makes sense given the nature of flash storage.

Seamless Database Migration

There is a migration guide, and tools/services. http://datamigration.microsoft.com. One-stop shop for database migrations. Guidance to get from source to target. Recommended partners and case studies.

Tools:

  • Data Migration Assistant: An analysis tool to produce a report.
  • Azure Database Migration Service (a free service that runs in a VM): Works with Oracle, MySQL, and SQL Server as sources, targeting SQL Server, Azure SQL Database, and Azure SQL Managed Instance. It works by backing up the DB on the source, moving the backup to the cloud, and restoring the backup.

Azure Backup

Today, SQL Server can back up from the SQL VM (Azure or on-prem) to a storage account in Azure. It’s all managed from SQL Server: very distributed, no centralized reporting, and difficult (or no) long-term retention. Very cheap.

Azure Backup will offer centralized management of SQL backup in Azure VMs; it’s in preview today. It’s managed from the Recovery Services vault. You select the type of backup, and a discovery process detects all SQL instances in Azure VMs, and their databases. A service account is required for this; it’s included in the gallery images, but you must add it to custom VMs. You then configure a backup policy for the selected DBs. You can define full, incremental, and transaction log backup policies with the SQL backup compression option. The retention options are the familiar ones from Azure Backup (up to 99 years by the looks of it). The backup is scheduled, and you can do ad-hoc/manual backups as usual with Azure Backup.

You can restore databases too – there’s a nice GUI for selecting a restore date/time. It looks like quite a bit of work went into this. This will be the recommended solution for centralized backup of lots of databases, and for those wanting long term retention.

Backup Verification is not in this solution yet.

Azure AD Domain Services


Options When Moving to the Cloud

  • Switch to using SaaS versions of the software
  • Rewrite the app
  • Lift and shift: the focus today.

How Organizations Handle AD Requirements Today

  • They set up a site-to-site VPN and deploy additional domain controllers in the cloud.
  • They deploy another domain/forest in the cloud and provision a trust, e.g. ADFS.

Imagine a Simpler Alternative

  • Simpler
  • Compatible
  • Available
  • Cost-effective

Introducing Azure AD Domain Services

  1. You provision a VNet.
  2. Then you activate Azure AD Domain Services in Azure AD on that VNet.
  3. You can manage the domain using RSAT (see the sketch after this list).
  4. You can optionally sync your Windows Server AD with Azure AD to share accounts/groups.
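
For step 3, a minimal sketch of managing the managed domain from a domain-joined management VM – the domain name is a placeholder:

    # Install the AD and Group Policy management tools, then query the managed domain with the usual cmdlets
    Install-WindowsFeature RSAT-AD-Tools, GPMC
    Get-ADDomain -Server "aaddscontoso.com"
    Get-ADOrganizationalUnit -Filter * -Server "aaddscontoso.com" | Select-Object Name, DistinguishedName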

Managed Domains

  • Domain controllers are patched automatically.
  • Secure, locked-down domain, compliant with AD deployment best practices
  • You get two DCs, so it’s fault tolerant
  • Automatic health detection and remediation. If a DC fails, a new one is provisioned.
  • Automatic backups for disaster recovery.
  • No need to monitor replication – done as part of the managed service.

Sync

If you deploy sync, e.g. Azure AD Connect, then it flows as follows: Windows Server AD <-> Azure AD <-> Azure AD Domain Services

Features

  • SIDs are reused. This means things like file servers can be lifted and shifted to Azure without re-ACLing your workloads.
  • OUs
  • DNS

Pricing

Based on the number of objects in the directory. Micro-pricing.

Decisions

(Office Lens photo of the “Decisions” slide.)

New Features

  • Azure Portal AD Experience is GA
  • ARM virtual network join is GA

Demo

He creates an AADDS domain. There are two OUs by default:

  • AADDC Users
  • AADDC Computers

Back to the PowerPoint

Notes

  • You cannot deploy AADDS in the classic Azure portal any more.
  • The classic deployment model will be retired – the ARM deployment is better and more secure.
  • The classic VNet support is ending (for new domains) soon.
  • Existing deployments will continue to be supported

Questions

  • Is there GPO sync? No. This is a different domain, so there is no replication of GPO from on-prem to AADDS.
  • Can you add another DC to this domain? No. There will be (in the future) the ability to add more AADDS “DCs” in other VNets.
  • What happens if a region goes down? The entire domain goes down now – but when they have additional DC support this will solve the problem
  • Is there support in CSP? No, but it’s being worked on.

Manage Azure IaaS VMs

You can join these machines to AADDS. You can push GPO from AADDS. You’ll sign into the VMs using AADDS user accounts/passwords.
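
A minimal join sketch, run from an elevated session inside the IaaS VM – the domain and account names are placeholders, and the account needs rights to join computers to the managed domain:

    # Join the VM to the AADDS managed domain and reboot
    $cred = Get-Credential "AADDSCONTOSO\joinaccount"
    Add-Computer -DomainName "aaddscontoso.com" -Credential $cred -Restart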

GPO

Members of AADDC Administrators can create OUs. You can target GPO to OUs.
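
A small sketch of that, using the standard ActiveDirectory and GroupPolicy modules – the OU and GPO names are placeholders:

    # Create a custom OU in the managed domain, then create a GPO and link it to that OU
    New-ADOrganizationalUnit -Name "LiftedServers" -Path "DC=aaddscontoso,DC=com"
    New-GPO -Name "Lifted Servers Baseline"
    New-GPLink -Name "Lifted Servers Baseline" -Target "OU=LiftedServers,DC=aaddscontoso,DC=com"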

Move Servers to the Cloud

Sync users/passwords/SIDs to the cloud, and then lift/shift applications/VMs to the cloud. The SIDs are in sync, so you don’t need to change permissions, and there’s a domain already in place for the VMs to join without creating DC VMs.

LDAP over SSL

I missed most of this. I think you can access applications using LDAP over SSL via the Internet.

Move Server Applications To Azure

Use AADDS to provision and manage service accounts.

Kerberos Constrained Delegation

This cannot work with AADDS using the old methods – you don’t have the privileges. The solution is to use PowerShell to implement resource-based KCD.
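
The session didn’t show the exact commands, but resource-based KCD looks something like this with the ActiveDirectory module – the computer names are placeholders:

    # Allow the front-end computer account to delegate to the back-end (resource) computer account
    $frontEnd = Get-ADComputer -Identity "WEBAPP01"
    Set-ADComputer -Identity "SQLVM01" -PrincipalsAllowedToDelegateToAccount $frontEnd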

Modernize Legacy Apps with Application Proxy

You can get users to sign in to legacy apps via Azure AD and MFA. A token is given to the app to authorize the user.

SharePoint Lift and Shift

A new group called AAD DC Service Accounts. Add the SharePoint Profile sync user account to this group.
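
For example, something along these lines with the ActiveDirectory module – the service account name is a placeholder:

    # Add the SharePoint profile sync account to the AADDS service accounts group
    Add-ADGroupMember -Identity "AAD DC Service Accounts" -Members "svc-spprofilesync"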

Domain-Joined HDInsight Cluster

You can “Kerberize” an HDInsight cluster to increase security. This is in preview at the moment.

Remote Desktop Deployments

Domain-join the farm to AADDS. The licensing server is a problem at the moment – this will be fixed soon. Until then, it works, but you’ll get licensing warnings.

Questions

  • Schema extensions? Not supported but on the roadmap.
  • Logging? Everything is logged but you have to go through support to get at them at the moment. They want to work on self-service logging.
  • There is no trust option today. They are working on the concept of a resource domain – maybe before end of the year.
  • Data at rest, in ARM, is encrypted. The keys (1 set per domain) are managed by MS. MS has no admin credentials – there’s an audited process for them to obtain access for support. The NTLM hashes are encrypted.

Deciding When to DIY Your Own AD Deployment

(Office Lens photo of the slide on deciding when to DIY your own AD deployment.)

Features Being Considered

  • Cloud solution provider support – maybe early 2018.
  • Support for a single managed domain to span multiple virtual networks
  • Manage resource forests
  • Schema extensions – they’ll start with the common ones, and then add support for custom extensions.
  • Support for LDAP writes – some apps require this

Any questions/feedback to aaddsfb@microsoft.com