What Impact Will AMD EPYC Processors Have on You?

Microsoft has announced new HB_v2, Das_v3, and Eas_v3 virtual machines based on hosts with AMD EPYC processors. What does this mean for you, and when should you use these machines instead of the Intel Xeon alternatives?

A is for AMD

The nomenclature for Azure virtual machines is extensive, and it can be confusing for those unfamiliar with the meanings. When I discussed the A-Series, the oldest of the virtual machine series, I would tell people “A is the start of the alphabet” and describe these low-power machines. The A-Series was originally hosted on physical machines with AMD Opteron processors, a CPU that had lots of cores and required little electricity compared to the Intel Xeon competition. These days, an A-Series machine might actually run on hosts with Intel CPUs, but each virtual processor is throttled to offer similar performance to the older hosts.

Microsoft has added the AMD EPYC 7002 family of processors to their range of hosts, powering new machines:

  • HB_v2: A high performance compute machine with high bandwidth between the CPU and RAM.
  • Das_v3 (and Da_v3): A new variation on the Ds_v3 that offers fast disk performance, which is great for database virtual machines.
  • Eas_v3 (and Ea_v3): Basically the Das_v3 with extra memory.

EPYC Versus Xeon

The 7002 or “Rome” family of EPYC processors is AMD’s second generation of this type of processor. From everything I have read, this generation of the processor family firmly returns AMD to the data centre.

I am not a hardware expert, but some things really stand out about the EPYC. AMD claims that its focus on I/O is revolutionary, and I/O is pretty important for services such as databases (see the Ds_v3/Es_v3 core scenarios). EPYC uses PCIe Gen 4, which offers double the performance of the Gen 3 that Intel still uses. That’s double the bus to storage … great for disk performance. The EPYC also offers 45% faster RAM access than the Intel option … hence Microsoft’s choice for the HB_v2. If you want to get nerdy, there are fewer NUMA nodes per socket, which reduces context switches in complex RAM versus process placement scenarios.
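A quick back-of-the-envelope check on that “double the bus” claim, using the published PCIe transfer rates (both generations use 128b/130b encoding):

```python
# Approximate usable per-direction bandwidth of a 16-lane PCIe link.
def pcie_x16_gb_s(transfer_gt_s, encoding_efficiency=128 / 130):
    """GT/s per lane * encoding efficiency = usable Gb/s per lane; /8 for GB/s."""
    lanes = 16
    return lanes * transfer_gt_s * encoding_efficiency / 8

gen3 = pcie_x16_gb_s(8.0)   # PCIe 3.0: 8 GT/s per lane
gen4 = pcie_x16_gb_s(16.0)  # PCIe 4.0: 16 GT/s per lane
print(f"Gen 3 x16: {gen3:.1f} GB/s, Gen 4 x16: {gen4:.1f} GB/s")
```

Roughly 16 GB/s versus 32 GB/s per direction on a full x16 link: exactly double.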

Why AMD Now?

There have been rumours that Microsoft hasn’t been 100% happy with Intel for quite a while. Everything I heard was in the PC market (issues with 4th generation, battery performance, mobility, etc.). I have not heard any rumours of discontent between Azure and Intel – in fact, the DC-Series virtual machine exists because of cooperation between the two giant technology corporations on SGX. But three things are evident:

  • Competition is good
  • Everything you read about AMD’s EPYC makes it sound like a genuine Xeon killer. As AMD says, Xeon is a BMW 3-series and EPYC is a Tesla – I hope the AMD build quality is better than the American-built EV!
  • As is often the case, the AMD processor is more affordable to purchase and to power – both big deals for a hosting/cloud company.

Choosing Between AMD and Xeon

OK, it was already confusing which machine to choose when deploying in Azure … unless you’ve heard me explain the series and specialisation meanings. But now we must choose between AMD and Intel processors!

I was up at 5 am researching, so this next statement is either fuzzy or was dreamt up (I’m not kidding!): it appears that for multi-threaded applications, such as SQL Server, AMD-powered virtual machines are superior. However, even in this age of the cloud, single-threaded applications are still running corporations. In that case (this is where things might be fuzzy), an Intel Xeon-powered virtual machine might be best. You might think that single-threaded applications are a thing of the past, but I recently witnessed the negative effect on the performance of one of those – no matter what virtual/physical hardware was thrown at it.

The final element of the equation will be cost. I have no idea how the cost of the EPYC-powered machines will compare with the Xeon-powered ones. I do know that the AMD processor is cheaper and offers more threads per socket, and it should require less power. That should make it a cheaper machine to run, but higher consumption of IOs per machine might increase the cost to the hosting company (Azure). I guess we’ll know soon enough when the pricing pages are updated.

Webinar – Getting More Performance From Azure VMs

I will be doing a webinar later today for the European SharePoint Office 365 & Azure Community (from the like-named conference). The webinar is at 14:00 UK/Irish, 15:00 CET, and 09:00 EST. Registration is here.

Title: Getting More Performance from Azure Virtual Machines

Speaker: Aidan Finn, MVP, Ireland

Date and Time: Wed, May 1, 2019 3:00 PM – 4:00 PM CEST

Webinar Description:  You’ve deployed your shiny new application in the cloud, and all that pride crashes down when developers and users start to complain that it’s slow. How do you fix it? In this session you’ll learn to understand what Azure virtual machines can offer, how to pick the right ones for the right job, and how to design for the best possible performance, including networking, storage, processor, and GPU.

Key benefits of attending:
– Understand virtual machine design
– Optimise storage performance
– Get more from Azure networking

Azure Availability Zones in the Real World

I will discuss Azure’s availability zones feature in this post, sharing what they can offer for you and some of the things to be aware of.

Uptime Versus SLA

Noobs to hosting and cloud focus on three magic letters: S, L, A or service level agreement. This is a contractual promise that something will be running for a certain percentage of time in the billing period or the hosting/cloud vendor will credit or compensate the customer.

You’ll hear phrases like “three nines” or “four nines” to express the measure of uptime. The first is a 99.9% measure, and the second is a 99.99% measure. Either is quite a high level of uptime. Azure has SLAs for all sorts of things. For example, a service deployed in a valid virtual machine availability set has a connectivity (uptime) SLA of 99.95%.
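To put numbers on those nines, here is a quick sketch (assuming a ~730-hour billing month) of how much downtime each SLA level actually permits:

```python
# How much downtime does each "nines" level actually allow?
def max_downtime_minutes(sla_percent, period_hours=730):
    """Allowed downtime per billing period (~1 month by default), in minutes."""
    return (100 - sla_percent) / 100 * period_hours * 60

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}%: ~{max_downtime_minutes(sla):.1f} minutes per month")
```

That works out to roughly 44 minutes per month for three nines, and under five minutes for four nines.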

Why did I talk about noobs? Promises are easy to make. I once worked for a hosting company that offered a ridiculous 100% SLA for everything, including cheap-ass generic Pentium “servers” from eBay with single IDE disks. 100% is an unachievable target because … let’s be real here … things break. Even systems with redundant components have downtime. I prefer to see realistic SLAs and honest statements on what you must do to get that guarantee.

Azure gives us those sorts of SLAs. For virtual machines we have:

  • 99.9% for machines with just Premium SSD disks
  • 99.95% for services running in a valid availability set
  • 99.99% for services running in multiple availability zones

Ah… let’s talk about that last one!

Availability Sets

First, we must discuss availability sets and what they are before we move one step higher. An availability set is Azure’s implementation of anti-affinity, a feature also found in vSphere and in Hyper-V Failover Clustering (via PowerShell or SCVMM): a label on a virtual machine that instructs the compute cluster to spread the labelled virtual machines across different parts of the cluster. In Azure, virtual machines in the same availability set are placed into different:

  • Update domains: Avoiding downtime caused by (rare) host reboots for updates.
  • Fault domains: Enable services to remain operational despite hardware/software failure in a single rack.
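The spreading behaviour can be sketched as a toy round-robin placement – this is an illustration of the concept, not Azure’s actual allocator:

```python
# A toy sketch of what an availability set does: round-robin VM placement
# across fault domains (racks) and update domains (patch/reboot groups).
def place_vms(vm_names, fault_domains=3, update_domains=5):
    """Assign each VM a fault domain and an update domain, round-robin."""
    return {
        name: {"fault_domain": i % fault_domains,
               "update_domain": i % update_domains}
        for i, name in enumerate(vm_names)
    }

placement = place_vms(["web1", "web2", "web3", "web4"])
print(placement)
```

With three fault domains, the first three VMs land in different racks; the fourth shares a rack with the first, so a single rack failure affects at most two of the four machines.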

The above solution spreads your machines around a single compute (Hyper-V) cluster, in a single room, in a single building. That’s amazing for on-premises, but there can still be an issue. Last summer, a faulty humidity sensor brought down one such room and affected a “small subset” of customers. “Small subset” is OK, unless you are included and some mission critical system was down for several hours. At that point, SLAs are meaningless – a refund for the lost runtime cost of a pair of Linux VMs running network appliance software won’t compensate for thousands or millions of Euros of lost business!

Availability Zones

We can go one step further by instructing Azure to deploy virtual machines into different availability zones. A single region can be made up of different physical locations with independent power and networking. These locations might be close together, as is typically the case in North Europe or West Europe. Or they might be on opposite sides of a city, as is the case with some regions in North America. There is a low level of latency between the buildings, but it is still higher than that of a LAN connection.

A region that supports availability zones is split into four zones, but you see only three of them (assigned round-robin between customers), labelled as 1, 2, and 3. You can deploy many services across availability zones – and the list is improving:

  • VNet: Is software-defined so can cross all zones in a single region.
  • Virtual machines: Can connect to the same subnet/address space but be in different zones. They are not in availability sets but Azure still maintains service uptime during host patching/reboots.
  • Public IP Addresses: Standard IP supports anycast and can be used to NAT/load balance across zones in a single region.

Other network resources can work with availability zones in one of two ways:

  • Zonal: Instances are deployed to a specific zone, giving optimal latency performance within that zone, but can connect to all zones in the region.
  • Zone Redundant: Instances are spread across the zones for an active/active configuration.

Examples of the above are:

  • The zone-aware VNet gateways for VPN/ExpressRoute
  • Standard load balancer
  • WAGv2 / WAFv2

Considerations

There are some things to consider when looking at availability zones.

  • Regions: The list of regions that supports availability zones is increasing slowly but it is far from complete. Some regions will not offer this highest level of availability.
  • Catchup: Not every service in Azure is aware of availability zones, but this is changing.

Let me give you two examples. The first is VM Boot Diagnostics, a service that I consider critical for seeing the console of the VM and getting serial console access without a network connection to the virtual machine. Boot Diagnostics uses an agent in the VM to write to a storage account. That storage account can be:

  • LRS: 3 replicas reside in a single compute cluster, in a single room, in a single building (availability zone).
  • GRS: LRS plus 3 asynchronous replicas in the paired region, that are not available for write unless Microsoft declares a total disaster for the primary region.

So, if I have a VM in zone 1 and a VM in zone 2, and both write to a storage account that happens to be in zone 1 (I have no control over the storage account location), and zone 1 goes down, there will be issues with the VM in zone 2. The solution would be to use ZRS GPv2 storage for Boot Diagnostics; however, the agent does not support this type of storage configuration. Gotcha!

Azure Advisor will also be a pain in the ass. Noobs are told to rely on Advisor (it features in several questions in the new Azure infrastructure exams) for configuration and deployment advice. Advisor will see the above two VMs as not being highly available because they are not (and cannot be) in a common availability set, so you are advised to degrade their SLA by migrating them to a single zone for an availability set configuration – ignore that advice and be prepared to defend the decision from Azure noobs, such as management, auditors, and ill-informed consultants.

Opinion

Availability zones are important – I use them in an architecture pattern that I am working on with several customers. But you need to be aware of what they offer and how certain things do not understand them yet or do not support them yet.

 

Generation 2 Virtual Machines Make Their First Public Appearance in Microsoft Azure

Microsoft has revealed that the new preview series of confidential computing virtual machines, the DC-Series, which went into public preview overnight, is based on Generation 2 (Gen 2) Hyper-V virtual machines. This is the first time that a non-Generation 1 (Gen 1) VM has been available in Azure.

Note that ASR allows you to migrate/replicate Generation 2 machines into Azure by converting them into Generation 1 at the time of failover.

These confidential compute VMs use hardware features of the Intel chipset to provide secure enclaves to isolate the processing of sensitive data.

The creation process for a DC-Series is a little different than usual – you have to look for Confidential Compute VM Deployment in the Marketplace and then you work through a (legacy blade-based) customised deployment that is not as complete as a normal virtual machine deployment. In the end a machine appears.

I’ve taken a screenshot from a normal Azure VM including a view of Device Manager from Windows Server 2016 with the OS disk.

image

Note that both the OS disk and the Temp Drive are IDE drives on a Virtual HD ATA controller. This is typical of a Generation 1 virtual machine. Also note the IDE/ATA controller?

Now have a look at a DC-Series machine:

image

Note how the OS disk and the Temp Drive are listed as Microsoft Virtual Disk on SCSI controllers? Ah – definitely a Generation 2 virtual machine! Also do you see the IDE/ATA controller is missing from the device listing? If you expand System Devices you will find that the list is much smaller. For example, the Hyper-V S3 Cap PCI bus video controller (explained here by Didier Van Hoye) of Generation 1 is gone.

Did you Find This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Frankfurt on December 3-4, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Microsoft Ignite 2018: Implement Cloud Backup & Disaster Recovery At Scale in Azure

Speakers: Trinadh Kotturu, Senthuran Sivananthan, & Rochak Mittal

Site Recovery At Scale

Senthuran Sivananthan

WIN_20180927_14_18_30_Pro

Real Solutions for Real Problems

Customer example: Finastra.

  1. BCP process: Define RPO/RTO. Document DR failover triggers and approvals.
  2. Access control: Assign clear roles and ownership. Leverage ASR built-in roles for RBAC. Use different Recovery Services vaults for different BUs/tenants. They deployed 1 RSV per app to do this.
  3. Plan your DR site: Leveraged region pairs – useful for matching GRS replication of storage. Site connectivity needs to be planned. Pick the primary/secondary regions to align service availability and quota availability – change the quotas now, not later when you invoke the BCP.
  4. Monitor: Monitor replication health. Track configuration changes in environment – might affect recovery plans or require replication changes.
  5. DR drills: Periodically do test failovers.

Journey to Scale

  • Automation: Do things at scale
  • Azure Policy: Ensure protection
  • Reporting: Holistic view and application breakdown
  • Pre- & Post- Scripts: Lower RTO as much as possible and eliminate human error

Demos – ASR

Rochak for demos of recent features. Azure Policies coming soon.

WIN_20180927_14_33_20_Pro

Will assess if VMs are being replicated or not and display non-compliance.

Expanding the monitoring solution.

Demo – Azure Backup & Azure Policy

Trinadh creates an Azure Policy and assigns it to a subscription. He picks the Azure Backup policy definition. He selects a resource group of the vault, selects the vault, and selects the backup policy from the vault. The result is that any VM within the scope of the policy will automatically be backed up to the selected RSV with the selected policy.

Azure Backup & Security

Supports Azure Disk Encryption. KEK and BEK are backed up automatically.

AES 256 protects the backup blobs.

Compliance

  • HIPAA
  • ISO
  • CSA
  • GDPR
  • PCI-DSS
  • Many more

Built-in Roles

Cumulative:

  • Backup Reader: see only
  • Backup Operator: enable backup & restore
  • Backup Contributor: policy management and delete/stop backup

Protect the Roles

PIM can be used to guard the roles – protect against rogue admins.

  • JIT access
  • MFA
  • Multi-user approval

Data Security

  • PIN protection for critical actions, e.g. delete
  • Alert: Notification on critical actions
  • Recovery: Data kept for 14 days after delete. Working on blob soft delete

Backup Center Demo

Being built at the moment. Starting with VMs now but will include all backup items eventually.

WIN_20180927_15_06_47_Pro

All RSVs in the tenant (doh!) managed in a central place.

Aimed at the large enterprise.

They also have Log Analytics monitoring if you like that sort of thing. I’m not a fan of LA – I much prefer Azure Monitor.

Reporting using Power BI

Trinadh demos a Power BI reporting solution that unifies backup data from multiple tenants into a single report.

Backup Your Data With Microsoft Azure Backup

Speakers: Saurabh Sensharma & Shivam Garg

Saurabh starts. He shows a real ransomware email. The ransom was 1.7 bitcoins for 1 PC or 29 bitcoins for all PCs. Part of the process to restore was to send files to the attacker to prove decryption works. The two files the customer sent contained customer data! Stuff like this has GDPR, brand, and other implications.

Secure Backup is Your Last Line of Defense

Azure Backup – a built-in service. Lower and predictable TCO. Can be zero-infrastructure. And it offers trust-no-one encryption and secure backups.

Shivam comes up. He’s going to play the role of the customer in this session.

Question: Backup is decades old – what has changed?

Digital transformation. People using the cloud to transform on-prem IT, even if it stays on-prem.

Shivam: Backup should be like a checkbox. Customers want a seamless experience. Backup should not be a distraction.

Azure Backup releases you from the management of a backup infrastructure. Azure Backup is built on:

  • Scalability
  • Availability
  • Resilience

Shivam: What does this “built-in” mean if I have a three-tier .Net app running in the cloud?

We see a demo of restoring a SQL Server database in an Azure VM. We see the point-in-time restore will be an option because there are log backups. Saurabh shows the process to backup SQL Server in Azure VMs. He highlights “auto-protect” – if the instance is being protected then all the databases (even new ones that are created later) are backed up.

Saurabh demos creating a new VM. He highlights the option to enable backup during VM creation – something many didn’t know was possible before this option was added to the VM creation process. VMs are backed up using a snapshot in local storage. Seven of those are kept, and the incremental is sent to the Recovery Services vault. If you want to restore from a recent backup, you can restore very quickly from the snapshot.

A new restore option is coming soon – Replace Existing (virtual machine). They place the existing disks of the VM into a staging location – this gives them a rollback if something goes wrong. Then the disks of the VM are replaced from backup. So this solves the availability set issue.

Under the Covers – SQL

Anything that has a native backup engine is referred to as enlightened. Azure Backup talks to the SQL Backup Engine using native APIs via the Azure Backup plugin for SQL (a VM extension). It asks the SQL Backup Engine to create the backup. Data is temporarily stored in VM storage, and then there is an HTTPS transfer of incremental backups to the RSV, where they are encrypted at rest using SSE.

It’s all built-in. No manual agents, no backup servers, etc.

Non-Enlightened VM Workloads

E.g. MySQL in a VM. Azure Backup can call a pre-script. This can instruct MySQL to freeze transactions to disk. When you recover, there’s no need to do a fixup. A snapshot of the disks is taken, enabling a backup. And then a post-script is called and the database is thawed. Application providers typically share these on GitHub.

VM Backup

An extension is in every Azure VM. The extension associates itself with a backup policy that you select in the RSV. A command is sent to the backup extension, which executes a snapshot (VSS for Windows). It’s an Instant Recovery Snapshot in the VM storage, followed by an HTTPS transfer to SSE storage as incremental blocks.

Azure Disk Encryption

KEK and BEK keys are stored in Azure Key Vault. These are also protected when you back up the VM. This ensures that the files can be unlocked when restored.

Up to 1000 VMs can be protected in a single RSV now.

Azure VM Restore

VM restore options:

  • Files
  • Disks
  • VM
  • Replace Disks

Replace Disks (new):

  1. They snapshot-copy the VM’s disks to a staging location. This allows a rollback if the process breaks.
  2. They replace the disks by restore.

This (confirmed) is how restoring a VM will allow you to keep availability set membership.

Azure File Sync

The MS sync/tiering solution. Everything is stored in the cloud. So you can move on-prem backup for file servers to the cloud. Demo of deleting a file and restoring it. Saurabh clicks Manage Backups in the Azure File Share and clicks File Recovery and goes through the process.

When the backup API triggers a backup of Files, it pauses sync to create a snapshot. After the snapshot, the sync resumes. Now they have a means to a file-consistent backup.

On-Prem Resources

There is no Azure File Sync in this scenario, but they want to use cloud backup without a backup server. This scenario is Azure Backup MARS agent with Windows Admin Center. A demo of enabling Azure Backup protection via the WAC.

Deleting Backup

  1. Malware cannot delete your backups because this task requires you to manually generate a PIN in the Azure Portal (human authentication)
  2. If a human maliciously deletes a backup, Azure Backup retains backups for 14 days. And it will send an email to the registered notification address(es).

Security

  • Security PIN for critical tasks
  • Azure Disk Encryption support
  • SSE encryption with TLS 1.2
  • RBAC for roles
  • Alerts in the portal and via notifications
  • On-server encryption (on-prem) before transport to Azure

Rich Management

Questions:

  • What’s my storage consumption?
  • Are my backups healthy?
  • Can I get insights by looking at trends?

This is the sort of stuff that normally requires a lot of on-prem infrastructure. Azure leverages Azure features, such as a Storage Account. No infrastructure, enterprise-wide, and it uses an open data model (published online on docs.microsoft.com) that anyone can use (Kusto, etc). You can also integrate with Service Manager, ServiceNow, and more (ITSM).

Custom reports.

And ….. cross-tenant support! Yay! This is a big deal for partners. It’s a PowerBI solution. It’s a content pack that you can import. It ingests Azure reporting data from a storage account.

Once you set this up, it takes up to 24 hours to get data moving, and then it’s real-time after that.

Roadmap

Cloud resources:

  • Azure VM backup – Standard SSD, resource improvements, 16+ disks, cross-region support
  • Azure Files Backup: Premium Files, 5 TB+ shares, ACL, secondary backups.
  • Workloads: SAP Hana, SQL in Azure VM GA.

Availability Zones:

  • ZRS
  • Recovery from cross-zone backups

And more that I couldn’t grab in time.

Microsoft Ignite 2018–Azure Compute

Speaker: Corey Sanders

95% of Fortune 500 building on Azure. Adobe is building on open source – one of the biggest PostgreSQL customers. NeuroInitiative is using GPUs to simulate drug tests for treatments for Alzheimer’s.

There’s no one way to use Azure. Find the bits you want to use and deploy them in a good way that suits.

Infrastructure for Every Workload

54 announced regions. Availability Zones in US, Europe, and Asia, more regions coming soon.

New VM Portfolio

NDv2: 8 x NVIDIA Tesla V100 NVLINK GPUs, 40 Intel SkyLake cores, 672 GB RAM, AI, ML, and HPC workloads.

NVv2: Tesla M60 GPU. Premium SSD support, up to 448 GB RAM. CAD, gaming, 3D design.

HB: Highest memory bandwidth in the cloud. 60 AMD EPYC cores, 100 Gbps InfiniBand. Memory bandwidth-intensive HPC workloads.

HC: Up to 3.7 GHz clock speed. 44 Intel SkyLake cores, 100 Gbps InfiniBand. CPU-intensive HPC workloads.

Storage

200 trillion objects. 160 trillion transactions per month.

Standard SSD is GA. Ultra SSD in preview – sub millisecond latency, up to 160,000 IOPS and 2,000 MB/s throughput.

A demo of Ultra SSD. Corey opens up an E64s_v3 VM with Ultra SSD and runs IOMETER, getting nearly 80,000 IOPS and 0.6 millisecond latency. That’s a single disk! Now for demo 2 with a new VM type. He runs IOMETER again and gets 161,000 IOPS on a single Ultra SSD without striping or caching – durable writes. Double the performance of the competition.

There will be a single VM SLA for VMs running Ultra SSD.

Networking

100,000 miles of fibre to connect the 54 regions with 130+ edge sites.

ExpressRoute Global Reach allows you to connect your connections together to use the MS WAN as your WAN. Virtual WAN is GA. Front Door uses those edge sites as a globally available secure entry point to web services in Azure. And Azure ExpressRoute Direct offers 100 Gbps connections to Azure.

SAP

24 TB RAM physical machines. 12 TB RAM VMs on the way. 20+ certified solution architectures on Azure.

Containers

Reasons:

  • Agility
  • Portability
  • Density
  • Rapid Scale

A new feature in Kubernetes (K8s) to allow burst capacity based on Azure Container Instances called Virtual Node. The node is a VM that can be loaded up with ACIs when demand spikes. You get per-second billing to deal with unusual loads.

Hybrid

Microsoft offers the only true consistent hybrid experience. Azure Stack, DevOps, data, AD, and security/management.

A key piece of this is Windows Server 2019, which has hybrid built in. Hybrid: Azure Backup, ASR, Storage Migration Services, Azure Network Adapter

Erin Chapple comes out to demo Windows Admin Center.

Windows Server 2008/R2

End of life coming January 2020, and for SQL Server on July 9, 2019. If you migrate these to Azure, you’ll get 3 years of free security fixes – you’ll have to pay if you stay on-premises.

Edge

Microsoft has announced availability of the first Azure Sphere dev kit.

Data Box Edge is also announced. You can pre-process data on-prem before moving it to the cloud. It has FPGAs (or whatever) built in.

Azure Stack will support more nodes in the coming weeks. Event hubs and Blockchain deployment coming in preview this year.

Security & Management

Starts with the physical and software security of Azure and extends out to the edge and on-premises. 1.2 billion devices and 750,000 user authentications offer a lot of data for analysis.

  • 85+ compliance offerings.
  • 40+ industry specific regulated offerings
  • Trusted, responsible, and inclusive cloud

New announcements:

  • Confidential computing is a new series of VMs – DC-Series. The data is protected even from Azure when being processed by the CPU.
  • Azure Firewall is GA.
  • Azure Security Center improvements.

Governance

Governance normally restricts and slows down. Azure Policy doesn’t slow you down. A new addition, Blueprints, plans out deployments that are known and trusted. DevOps can deploy a blueprint to stay within the guardrails. It’s ARM template + Policy, resource group(s), and RBAC.

In a demo, we see a new Azure Policy feature – the ability to remediate variance.

Migration

Gary Downy, CTO of JB Hunt, comes on stage. JB Hunt is a trucking company that also does last-mile and rail transport. It is facing disruptive technologies, such as driverless vehicles, and a shortage of drivers. It had on-prem systems, but they wouldn’t scale with the business. Now it uses Azure DevOps, Git, and Kubernetes for most of its systems.

Start with assessment. Then migrate. Then optimize and transition into management & security (ownership).

Tools:

  • Azure Migrate now supports Hyper-V and VMware.
  • Azure Database Migration Service, which supports Azure SQL, MySQL, PostgreSQL, and MongoDB.

 

Cloud Mechanix – “Starting Azure Infrastructure” Training Coming To Frankfurt, Germany

I have great news. Today I got confirmation that our venue for the next Cloud Mechanix class has been confirmed. So on December 3-4, I will be teaching my Cloud Mechanix “Starting Azure Infrastructure” class in Frankfurt, Germany. Registration Link.

Buy Ticket

About The Event

This HANDS-ON theory + practical course is intended for IT professionals and developers that wish to start working with or improve their knowledge of Azure virtual machines. The course starts at the very beginning, explaining what Azure is (and isn’t), administrative concepts, and then works through the fundamentals of virtual machines before looking at more advanced topics such as security, high availability, storage engineering, backup, disaster recovery, management/alerting, and automation.

Aidan has been teaching and assisting Microsoft partners in Ireland about Microsoft Azure since 2014. Over this time he has learned what customers are doing in Azure, and how they best get results. Combined with his own learning, and membership of the Microsoft Most Valuable Professional (MVP) program for Microsoft Azure, Aidan has a great deal of knowledge to share.

We deliberately keep the class small (maximum of 20) to allow for a more intimate environment where attendees can feel free to interact and ask questions.

Agenda

This course spans two days, running on December 3-4, 2018. The agenda is below.

Day 1 (09:30 – 17:00):

  • Introducing Azure
  • Tenants & subscriptions
  • Azure administration
  • Admin tools
  • Intro to IaaS
  • Storage
  • Networking basics

Day 2 (09:30 – 17:00):

  • Virtual machines
  • Advanced networking
  • Backup
  • Disaster recovery
  • JSON
  • Diagnostics
  • Monitoring & alerting
  • Security Center

The Venue

The location is the Novotel Frankfurt City. This hotel:

  • Has very fast Wi-Fi – an essential requirement for hands-on cloud training!
  • Has reasonably priced accommodation.
  • Has car parking – which we are paying for.
  • Is near the Messe (conference centre) and is beside the Kuhwaldstraße tram station and the Frankfurt Main West train station and S-Bahn.
  • Is just a 25-minute walk or a 5-minute taxi from the Hauptbahnhof (central train station).
  • Was only 15-20 minutes by taxi to/from Frankfurt Airport when we visited the hotel to scout the location.

image

Costs

The regular cost for this course is €999 per person. If you are registering more than one person, then the regular price will be €849 per person. A limited number of early bird tickets are on sale for €659 each.

You can pay for the course by credit card (handled securely by Stripe) or PayPal on the official event site. You can also pay by invoice/bank transfer by emailing contact@cloudmechanix.com. Payment must be received within 21 days of registration – please allow 14 days for an international (to Ireland) bank transfer. We require the following information for invoice & bank transfer payment:

  • The name and contact details (email and phone) for the person attending the course.
  • Name & address of the company paying the course fee.
  • A Purchase Order (PO) number, if your company requires this for services & purchases.

The cost includes tea/coffee and lunch. Please inform us in advance if you have any dietary requirements.

Note: Cloud Mechanix is a registered education-only company in the Republic of Ireland and does not charge for or pay for VAT/sales tax.

See the event page for Terms and Conditions.

Buy Ticket

Azure Template DSC Never Starts

In this post, I’ll explain how I figured out a problem where I couldn’t get the Azure Resource Manager (ARM) JSON template DSC extension to execute. The problem below might explain why your DSC extension never appears to start, assuming that you have uploaded your DSC pack (zip file) to an accessible Internet location and entered the URL and module names correctly in your template.

In my scenario, I wanted to deploy a domain controller as a VM on a virtual network. Normally, when you do this you would configure the DNS settings of the VNet to point at the desired static IP of the DC. For example, you’d create a NIC for the DC, set that NIC to have a static IP (10.0.0.4 for example), and then edit the DNS settings of the VNet to use the static IP address of the DC’s NIC. In an ARM template, the resource dependencies would order that process as below:

[Image: FailedDcDscAzureJSON – the failing ARM template dependency order]
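A minimal sketch of that failing order follows. The resource names, API versions, and addresses here are illustrative, not taken from my actual template – the point is that the VNet’s DNS server is set to the DC’s IP before the DC exists as a DNS server:

```json
{
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2016-03-30",
      "name": "vnet",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
        "subnets": [ { "name": "subnet1", "properties": { "addressPrefix": "10.0.0.0/24" } } ],
        "dhcpOptions": { "dnsServers": [ "10.0.0.4" ] }
      }
    },
    {
      "type": "Microsoft.Network/networkInterfaces",
      "apiVersion": "2016-03-30",
      "name": "dc-nic",
      "location": "[resourceGroup().location]",
      "dependsOn": [ "[resourceId('Microsoft.Network/virtualNetworks', 'vnet')]" ],
      "properties": {
        "ipConfigurations": [ {
          "name": "ipconfig1",
          "properties": {
            "privateIPAllocationMethod": "Static",
            "privateIPAddress": "10.0.0.4",
            "subnet": { "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'vnet', 'subnet1')]" }
          }
        } ]
      }
    }
  ]
}
```

The DC VM and its DSC extension then depend on this NIC, so every DNS lookup the VM makes – including the one to download the DSC pack – goes to 10.0.0.4, a server that doesn’t answer yet.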

I configured my ARM template as above and everything was deploying … or so it appeared. The DSC extension appeared in the Portal and had a status of Created. However, when I used PowerShell to query things, I found it still had a status of Creating, and when I logged into the DC VM I found that nothing had happened. I don’t know how many hours I spent trying to figure out what I had done wrong. My emphasis on DNS above should give you a clue.

The virtual network had been configured to use the VM as its own DNS server, but the VM was not yet a DNS server because the DSC extension hadn’t added the roles or run the DCPROMO. So when the VM tried to download the DSC pack (zip file) from the Internet, the download failed – in fact, it couldn’t resolve any DNS names. I went looking at some of the sample ARM templates that do a DCPROMO and noticed a trend. They did the following using nested templates:

[Image: WorkingDcDscAzureJSON – the working dependency order using nested templates]

What changed? A nested template is used to deploy the virtual network using the default Azure DNS addresses (no configuration required). Now the new DC VM can access Internet resources via DNS names – and the DSC pack can be downloaded from the Internet and applied – adding the roles and executing the DCPROMO to make the machine a domain controller. The final step is to fix up the virtual network – so another nested template is executed to modify the VNet’s DNS settings to use the static IP address of the DC.
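That final fix-up step can be sketched as a nested deployment. Again, the names, IP address, and API version are illustrative assumptions – the key is that this resource depends on the DSC extension, so the VNet’s DNS settings only switch to the DC after the DCPROMO has completed:

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "name": "updateVNetDns",
  "dependsOn": [ "[resourceId('Microsoft.Compute/virtualMachines/extensions', 'dc-vm', 'dsc')]" ],
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": [ {
        "type": "Microsoft.Network/virtualNetworks",
        "apiVersion": "2016-03-30",
        "name": "vnet",
        "location": "[resourceGroup().location]",
        "properties": {
          "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
          "subnets": [ { "name": "subnet1", "properties": { "addressPrefix": "10.0.0.0/24" } } ],
          "dhcpOptions": { "dnsServers": [ "10.0.0.4" ] }
        }
      } ]
    }
  }
}
```

Because the nested template redeploys the VNet, it must restate the full VNet properties (address space and subnets), not just the DNS change.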

Did you Find This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in London on July 5-6, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Faster & Bigger Azure Backup for Azure VMs

Azure Backup recently rolled out an update to their service for protecting Azure VMs to improve backup speed, restore performance, and to add support for larger disks.

Support for Large Disks

Azure Backup didn’t support disks that were larger than 1 TiB (1 TB is the marketing measure, 1000 GB; 1 TiB is the computer science measure, 1024 GiB). Those large disks must be popular – I know people who couldn’t get their heads around the idea of a volume being spread across aggregated disks (they’d never heard of RAID, I guess) and wouldn’t touch Azure VMs because of this.
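For the unit pedantry above, a quick sanity check of how far the marketing and computer science measures drift apart at this scale:

```python
# Decimal (marketing) vs binary (computer science) storage units.
TB = 1000 ** 4   # 1 terabyte  = 1,000,000,000,000 bytes
TiB = 1024 ** 4  # 1 tebibyte  = 1,099,511,627,776 bytes

print(f"1 TiB = {TiB:,} bytes")
print(f"1 TiB = {TiB / TB:.4f} TB")  # roughly 1.0995 TB, a ~10% gap
```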

Today, once you upgrade it, Azure Backup supports the large disks (over 1 TiB) that Azure can offer.

Snapshot-Based Backup

People who deploy large VMs have seen that the traditional process of protecting their machines has been slow. Historically Azure Backup would:

  1. Create a snapshot of the virtual machine.
  2. Transfer the backup data from the storage cluster to the recovery services vault (standard tier block blob storage) over a network.

The snapshot was then dispensed with.

The backup was slow (calculating changes, the network transfer, and the write to standard storage), and restores were just as slow. It’s one thing for a backup to be slow, but when a restore is a 12 hour job, you’ve got a problem!

Azure made some changes, and now the process of a backup is:

  1. Create a snapshot of the virtual machine and keep 7 snapshots (7 backups).
  2. Use the previous snapshot to speed up the process of identifying changes.
  3. Transfer the backup data from the storage cluster to the recovery services vault (standard tier block blob storage) over a network.

Two things to note:

  • The differencing calculation is faster, speeding up the end-to-end process.
  • After you upgrade Azure Backup, you can do a restore as soon as the snapshot is complete, while the backup job (transfer) is still happening!


7 snapshots are kept, and you can restore a virtual machine from either:

  • A snapshot from the last 7 backups.
  • A recovery point in the recovery services vault – up to 9,999 recovery points and up to 99 years of retention, depending on your backup policy.

[Image: AzureVMBackupRestoreUsingSnapshot – restoring an Azure VM backup from a snapshot]

Restoring from a snapshot should be much quicker, and this will benefit large workloads, such as database servers, where a restore is usually from as recent a backup as possible.

Distributed Disks Restore

The last new feature is that when you restore a virtual machine with unmanaged disks (storage account disks), you can opt to distribute the disks across different storage accounts.

Accessing the Features

A one-time one-way upgrade must be done in each subscription to access the new Azure Backup for IaaS VM features. When you open a (single) recovery services vault, a banner will appear at the top. Click the banner, and then read the blade that opens. When you understand the process, click Upgrade. A quick task will complete and approximately two hours later, your entire subscription will be upgraded and able to take advantage of the features described above.

Was This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.