The Digital Intern – Early Experience with Microsoft Copilot

I will share my early experiences with Microsoft Copilot, the positives and negatives, clear up some false expectations, and explain why I think of Generative AI as a digital intern.

What is Generative AI?

The name gives it away. Generative AI generates or creates something from other known things. Examples are:

  • DALL-E: Creates images, and powers tools such as Bing Create.
  • ChatGPT: A text-based interface for finding things and generating text – the same category of tool as the Copilot brand from Microsoft.

Pre-Microsoft

There are lots of brands out there but the one that's grabbing most of the headlines is OpenAI because of ChatGPT, which is only one of their products. Like millions of others, I've played with ChatGPT. I've used it to create Terraform code. It was "OK" but I found:

  • Some of the code was out of date.
  • The structure wasn’t great.

I had to clean up that code to make it usable. But ChatGPT saved me time. I didn’t have to go googling. I was able to create a baseline and use my knowledge and ability to troubleshoot/edit to make the code usable.

I also “ChatGPTd” myself – don’t do it too often or you’ll go blind! Most of what ChatGPT wrote about me was correct. But there were some factual errors. Apparently, I’ve written two books on Azure. Factcheck: I have not published any books on Azure.

Some of the facts were also out of date. I have been “an Azure MVP for 2 years”. That was probably pulled from some online source. ChatGPT didn’t understand the fact (it’s just a calculated set of numbers) and therefore hadn’t the logic to use “2 years” and the publication date to recalculate – or maybe put a date in brackets with the fact.

Copilot

Microsoft has just launched Microsoft 365 Copilot and there is a lot of hoopla and hype which is helping Microsoft shares, even with a bit of a slump in the stock market in general.

I’ve been playing with it and trying things out. First up was PowerPoint. Yes, I can quickly create a presentation. I can add slides. I can change images. But the logic is limited. For example, I cannot change the theme after creating the slides.

The usual fact-checking issues are there too. I used Copilot to create a presentation for my wife on company X in Ireland. The name of company X is also used by companies in the UK and the USA. Even with precise instructions, Copilot tried to inject facts from the UK/USA companies.

However, Copilot did create a skeleton presentation and that saved some time. I played around with it in Word, and it’ll generate a doc nicely. For example, it will write a sales proposal in the style of Yoda. Copilot in Teams is handy – ask it to summarize a chat that you’ve just been added to. Outlook too does a nice job at drafting an email.

Drafting is a good choice of words, because the text is often just mumbo jumbo that has nothing to do with you or your organisation. It's filler. In the end, it's up to you to put in the real information that you want to push.

Bing Enterprise Chat is an option too. You can go into Bing Chat and select the M365 option. You can interrogate facts from "the graph" and M365. You can ask for your agenda for the day.

Don’t ask Copilot to tell you how many vacation days are in your calendar. It will search your chat/email history for discussions of vacation time. It does not look at items in your calendar. It will not do maths – more on this next.

Prompt Engineering

Go into Bing Create and ask it to create an image of a countryside scene. Expand the prompt in different ways:

  • Add a run-down building
  • Change the time of day
  • Alter the viewing point
  • Add a background
  • Place some birds in the sky
  • Add a person into the scene
  • Make the foreground more interesting
  • Change the style of image

The image changes gradually as you expand or change the prompt. This is called prompt engineering. Eventually, the final image is nothing like the first image from the basic prompt. What you ask for changes things. Think of the AI as lacking in the “I” part and be as clear and precise as you can be – like how one might instruct a toddler.

Custom Data

I decided to do a mini-recreation of something that I saw the folks from Prodata do with Power BI years ago for presentations. I downloaded publicly available residential property sale information for the Irish market and supplied it to Copilot.

“Tell me how many properties were sold in Dublin in 2023”. No answer because that information was not in the data. Each property sale including address, county, value, and description was in the data, but the “Y properties were sold” fact was not in the data. One would assume that an artificial intelligence would understand the question and know to list/count the items that match the search filter but that is not what happens.

I also found other logic issues. “What was the most expensive property sold in 2023” resulted in a house in Dublin for €1.55 million. I then asked it to list all houses costing more than €1 million. The €1.55m house was not included. I tried other prompts and then returned to my list question – and I got a different answer!

Don’t ask Copilot to do any maths – it won’t tell you averages, differences or sums – because that information was not in the “table” of supplied data.

Data Preparation

You cannot expect to just throw your data at Copilot and for magic to happen. Copilot needs data to be prepared, especially custom (non-Office) data. It needs to be in consumable chunks. You also need to understand what people might ask for – and include that information in the data.

I’m wandering outside of my expertise now, but let’s take my property example. I wanted to analyze property values, do summations, averages, and comparisons. The act of preparing this data for Copilot needs to do these calculations in advance and include the results in the data that is shared with Copilot.

Thoughts

I am not writing off ChatGPT/Copilot. There are problems but it is still very early days and things will be improved.

Right now, we need to understand what Copilot can do, and what it is good at/not good at, and match it up with what will assist the organization.

The most important thing is how we consider Copilot. The name choice by Microsoft was deliberate. They did not call it “Pilot”.

Generative AI is an assistant. It will handle repetitive tasks based on existing data. It has no intelligence to infer new data. It cannot connect two facts that we know are logically connected but are not written down as connected. And Generative AI makes mistakes.

Microsoft called it Copilot because the pilot is responsible for the plane. The user is the pilot. The intention is that Generative AI handles the dull stuff but we add the creativity (prompt engineering/editing) and fact-checking (review/editing).

If you think about it, Copilot is acting like a Digital Intern. How are interns used? You ask them to do the simple things: get lunch, research X and write a short report, write a draft document, and so on. Does the intern produce the final product for a customer/boss? No. Is the intern responsible for what comes out of your team/department? No.

The intern is fresh out of school and knows almost nothing. They will produce exactly what you tell them – if the prompt is too general they get lost in the possibilities. You take what the intern gives you and review/edit/improve it. Their work saves you time, but your knowledge, expertise, and creativity are still required.

I might sound like a downer – I’m not. I’m just not on board the hype train. I’m saying that the train is useful to get from A to B right now, but the line doesn’t go all the way to Z yet. It is still valuable but you have to understand that value and don’t get lost in the hype and the Hollywood-ing of IT.

Default Outbound Access For VMs In Azure Will Be Retired

Microsoft has announced that the default route, an implicit public IP address, is being deprecated on 30 September 2025.

Background

Let’s define “Internet” for the purposes of this post. The Internet includes:

  • The actual Internet.
  • Azure services, such as Azure SQL or Azure’s KMS for Windows VMs, that are shared with a public endpoint (IP address).

We have had ways to access those services, including:

  • Public IP address associated with a NIC of the virtual machine
  • Load Balancer with a public IP address with the virtual machine being a backend
  • A NAT Gateway
  • An appliance, such as a firewall NVA or Azure Firewall, being defined as the next hop to Internet prefixes, such as 0.0.0.0/0

If a virtual machine is deployed without having any of the above, it still needs to reach the Internet to do things like:

  • Activate a Windows license against KMS
  • Download packages for Ubuntu
  • Use Azure services such as Key Vault, Azure SQL, or storage accounts (diagnostics settings)

For that reason, all Azure virtual machines are able to reach the Internet using an implied public IP address. This is an address that is randomly assigned to SNAT the connection out from the virtual machine to the Internet. That address:

  • Is random and can change
  • Offers no control or security

Modern Threats

There are two things that we should have been designing networks to stop for years:

  • Malware command and control
  • Data exfiltration

The modern hack is a clever and gradual process. Ransomware is not some dumb bot that gets onto your network and goes wild. Some of the recent variants are manually controlled. The malware gets onto the network and attempts to call home to a “machine” on the Internet. From there, the controllers can explore the network and plan their attack. This is the command and control. This attempt to “call home” should be blocked by network/security designs that block outbound access to the Internet by default, opening only connections that are required for workloads to function.

The controller will discover more vulnerabilities and download more software, taking further advantage of vulnerable network/security designs. Backups are targeted for attack first, data is stolen, and systems are crippled and encrypted.

The data theft, or exfiltration, is to an IP address that a modern network/security design would block.

So you can see that a network design that relies on an implied public IP address is not good practice. This is a primary consideration for Microsoft in making its decision to end the future use of implied public IP addresses.

What Is Happening?

On September 30th, 2025, new virtual machines will no longer be able to use an implied public IP address. Existing virtual machines will be unaffected – but I want to drill into that because it's not as simple as one might think.

A virtual machine is a resource in Azure. It’s not some disks. It’s not your concept of “I have something called X” that is a virtual machine. It’s a resource that exists. At some point, that resource might be removed. At that point, the virtual machine no longer exists, even if you recreate it with the exact same disks and name.

So keep in mind:

  • Virtual networks with existing VMs: The existing VMs are unaffected, but new VMs in the VNet will be affected and won't have outbound Internet access.
  • Scale-out: Let's say you have a big workload with dozens of VMs and no public IP usage. You add more VMs and they don't work – it's because they don't have an implied public IP address, unlike their older siblings.
  • Restore from backup: You restore a VM to create a new VM. The new VM will not have an implied public IP address.

Is This a Money Grab?

No, this is not a money grab. This is an attempt by Microsoft to correct a "wrong" in the original design (one that was done to be helpful to cloud newcomers). Some of the mitigations are quite low-cost, even for small businesses. To be honest, what money could be made here is pennies compared to the much bigger money that is made elsewhere by Azure.

The goal here is to:

  • Be secure by default by controlling egress traffic to limit command & control and data exfiltration.
  • Provide more control over egress flows by selecting the appliance/IP address that is used.
  • Enable more visibility over public IP addresses, for example, what public address should I share with a partner for their firewall rules?
  • Drive better networking and security architectures by default.

What Is Your Mitigation?

There are several paths that you can choose.

  1. Assign a public IP address to a virtual machine: This is the lowest cost option but offers no egress security. It can get quite messy if multiple virtual machines require public IP addresses. Rate this as “better than nothing”.
  2. Use a NAT Gateway: This allows a single IP address (or a range from an Azure Public IP Address Prefix) to be shared across an entire subnet. Note that NAT Gateway gets messy if you span availability zones, requiring disruptive VNet and workload redesign. Again this is not a security option.
  3. Use a next hop: You can use an appliance (virtual machine or Marketplace network virtual appliance) or the Azure Firewall as a next hop to the Internet (0.0.0.0/0) or specific Internet IP prefixes. This is a security option – a firewall can block unwanted egress traffic. If you are budget-conscious, then consider Azure Firewall Basic. No matter what firewall/appliance you choose, there will be some subnet/VNet redesign and changes required to routing, which could affect VNet-integrated PaaS services such as API Management Premium. See the sketch after this list.
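
To make option 3 a little more concrete, below is a minimal Terraform (azurerm) sketch of the routing piece: a route table that sends all Internet-bound traffic to a firewall as the next hop, associated with a workload subnet. The names, the referenced resource group and subnet, and the firewall private IP (10.0.1.4) are placeholders for illustration, not a definitive design.

resource "azurerm_route_table" "egress" {
  name                = "rt-egress"
  location            = azurerm_resource_group.network.location
  resource_group_name = azurerm_resource_group.network.name

  route {
    name                   = "default-to-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = "10.0.1.4" # assumed private IP of the firewall
  }
}

resource "azurerm_subnet_route_table_association" "workload" {
  subnet_id      = azurerm_subnet.workload.id # assumed existing workload subnet
  route_table_id = azurerm_route_table.egress.id
}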

September 2025 is a long time away. But you have options to consider and potentially some network redesign work to do. Don’t sit around – start working.

In Summary

The implied route to the Internet for Azure VMs will stop being available to new VMs on September 30th, 2025. This is not a money grab – you can choose low-cost options to mitigate the effects if you wish. The hope is that you opt to choose better security, either from Microsoft or a partner. The deadline is a long time away. Do not assume that you are not affected – one day you will expand services or restore a VM from backup and be affected. So get started on your research & planning.

What is a Managed Private Endpoint?

Something new appeared in recent times: the “Managed Private Endpoint”. What the heck is it? Why would I use it? How is it different from a “Private Endpoint”?

Some Background

As you are probably aware, most PaaS services in Azure have a public endpoint by default. So if I use a Storage Account or Azure SQL, they have a public interface. If I have some security or compliance concerns, I can either:

  • Switch to a different resource type to solve the problem
  • Use a Private Endpoint

Private Endpoint is a way to interface with a PaaS resource from a subnet in a virtual network. The resource uses the Private Link service to receive connections and respond – this stateful service does not allow outbound connections, providing a form of protection against some data leakage vectors.

Say I want to make a Storage Account only accessible on a VNet. I can set up a Private Endpoint for the particular API that I care about, such as Blob. A Private Endpoint resource is created and a NIC is created. The NIC connects to my designated subnet and uses an IP configuration for that subnet. Name resolution (DNS) is updated and now connections from my VNet(s) will go to the private IP address instead of the public endpoint. To enforce this, I can close down the public endpoint.

The normal process is that this is done from the “target resource”. In the above case, I created the Private Endpoint from the storage account.
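
To make that a little more concrete, here is a minimal Terraform (azurerm) sketch of a Blob Private Endpoint. It assumes that the resource group, subnet, and storage account are defined elsewhere in the same configuration; the names are placeholders.

resource "azurerm_private_endpoint" "blob" {
  name                = "pe-storage-blob"
  location            = azurerm_resource_group.workload.location
  resource_group_name = azurerm_resource_group.workload.name
  subnet_id           = azurerm_subnet.private_endpoints.id # assumed existing subnet

  private_service_connection {
    name                           = "psc-storage-blob"
    private_connection_resource_id = azurerm_storage_account.example.id # the target resource
    subresource_names              = ["blob"] # the API being exposed privately
    is_manual_connection           = false
  }
}

DNS is the other half of the job: a Private DNS Zone (privatelink.blob.core.windows.net) is typically linked to the VNet so that the storage account name resolves to the private IP address.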

Managed Private Endpoint

This is a term I discovered a couple of months ago and, to be honest, it threw me. I had no idea what it was.

So far, Managed Private Endpoints are features of a handful of services, including Azure Data Factory and Azure Synapse Analytics.

The basic concept of a Managed Private Endpoint has not changed. It is used to connect to a PaaS resource, also referred to as the target resource (ah, there’s a clue!) over a private connection.

Microsoft: Azure Data Factory Integration Runtime connecting privately to other PaaS targets

What is different is that you create the Managed Private Endpoint from a client resource. Say, for example, I want Azure Synapse Analytics to connect privately to an Azure Cosmos DB resource. The Synapse Analytics resource doesn’t do normal networking so it needs something different. I can go to the Synapse Analytics resource and create a Managed Private Endpoint to the target Cosmos DB resource. This is a request – because the operator of the Cosmos DB resource must accept the Private Endpoint from their target resource.

Once done, Synapse Analytics will use the private Azure backbone instead of the public network to connect to the Cosmos DB resource.
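
For comparison with a regular Private Endpoint, here is a rough sketch of the Synapse-to-Cosmos DB example in Terraform. I am assuming the azurerm_synapse_managed_private_endpoint resource type and an existing Synapse workspace and Cosmos DB account defined elsewhere; check the provider documentation for the exact sub-resource name for your Cosmos DB API.

resource "azurerm_synapse_managed_private_endpoint" "cosmos" {
  name                 = "mpe-cosmos"
  synapse_workspace_id = azurerm_synapse_workspace.example.id # assumed existing workspace
  target_resource_id   = azurerm_cosmosdb_account.example.id  # the target resource
  subresource_name     = "Sql"                                # assumed Cosmos DB (SQL API) sub-resource
}

The connection then appears as a pending Private Endpoint on the Cosmos DB resource until its operator approves it.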

Managed Virtual Network

Is your head wrecked yet? A Managed Private Endpoint uses a Managed Virtual Network. As I said above, a resource like Synapse Analytics doesn’t do normal networking. But a Managed Private Endpoint is going to require a Virtual Network and a subnet to connect the Managed Private Endpoint and NIC.

These are PaaS resources so the goal is to push IaaS things like networking into the platform to be managed by Microsoft. That’s what happens here. When you want to use a Managed Private Endpoint, a Managed Virtual Network is created for you in the same region as the client resource (Synapse Analytics in my example). That means that data engineers don’t need to worry about VNets, subnets, route tables, peering, and all the stuff when creating integrations.

Azure Infrastructure Announcements – September 2023

September is a month of storms. There appears to have been lots of activity in the Azure cloud last month too. Everyone working on Azure should pay attention to the PAY ATTENTION! section.

PAY ATTENTION!

Default outbound access for VMs in Azure will be retired— transition to a new method of internet access

On 30 September 2025, default outbound access connectivity for virtual machines in Azure will be retired. After this date, all new VMs that require internet access will need to use explicit outbound connectivity methods such as Azure NAT Gateway, Azure Load Balancer outbound rules, or a directly attached Azure public IP address.

There will be more communications on this from Microsoft. But this is more than a “don’t worry about your existing VMs” situation. What happens when you add more VMs to an existing old network? What happens when you do a restore? What happens when you do an Azure Site Recovery failover? Those are all new VMs in old networks and they are affected. Everyone should do some work to see if they are affected and prepare remediations in advance – not on the day when they are stressed out by a restore or a Black Friday expansion.

App Service Environment version 1 and version 2 will be retired on 31 August 2024

After 31 August 2024, App Service Environment v1 and v2 will no longer be supported and these App Service Environments and the applications running on them will be deleted and any application data associated with them will be lost.

Oh yeah, you’d better start working on migrations now.

Azure Kubernetes Service

Application gateway for Containers vs Application Gateway Ingress Controller – What’s changed?

Application Gateway for Containers is a new application (layer 7) load balancing and dynamic traffic management product for workloads running in a Kubernetes cluster. At the time of writing this service is currently in public preview. In this article we will look at the differences between AGIC and Application Gateway for containers and some of the great new features available through this new offering. 

I know little about AKS but this subject seems to have excited some AKS users.

A Bucket Load Of Stuff

Too much for me to get into and I don't know enough about this stuff.

App Services

Announcing Public Preview of Free Hosting Plan for WordPress on App Service

We announced the General Availability of WordPress on App Service one year ago, in August 2022 with 3 paid hosting plans. We learnt that sometimes you might need to try out the service before you migrate your production applications. So, we are offering you a playground for a limited period – a free hosting plan to explore and experiment with WordPress on App Service. This will help you understand the offering better before you make a long-term investment.

They really want you to try this out – note that this plan is not for production workloads.

Hybrid

Announcing the General Availability of Jumpstart HCIBox

Almost one year ago the Jumpstart team released the public preview of HCIBox, our self-contained sandbox for exploring Azure Stack HCI capabilities without the need for physical hardware. Feedback from the community has been fantastic, with dozens of feature requests and issues submitted and resolved through our open-source community.

Today, the Jumpstart team is excited to announce the general availability of HCIBox!

It’s one thing to test out the software functionality of Azure Stack HCI. But the reality is that this is a hardware-centric solution and there is no simulating the performance, stability, or operations of something this complex.

Generally Available: Windows Server 2012 and 2012 R2 Extended Security Updates enabled by Azure Arc

Windows Server 2012 and 2012 R2 Extended Security Updates (ESUs) enabled by Azure Arc is now Generally Available. Windows Server 2012 and 2012 R2 are going End of Support on October 10, 2023. With ESUs, customers who are running Windows Server 2012 on-premises or in other clouds can get three more years of critical security updates from Microsoft to protect their End of Life infrastructure.

This is not free. This is tied into the news about Azure Update Manager (below).

Miscellaneous

Detailed CSP to EA Migration guidance and crucial considerations

In this blog, I’ve shared insights drawn from real-world migration experiences. This article can help you meticulously plan your own CSP to EA migration, ensuring a smoother transition while incorporating critical considerations into your migration strategy.

One really wishes that CSP, EA, etc. were just differences in billing and not Azure APIs. Changing billing should be like changing a phone plan.

Top 10 Considerations for running your workload successfully on Azure this Holiday Season

Black Friday, Small Business Saturday and Cyber Monday will test your app’s limits, and so it’s time for your Infrastructure and Application teams to ensure that your platforms delivers when it is needed the most. Be it shopping applications on the web and mobile or payment gateways or banking systems supporting payments or inventory systems or billing systems – anything and everything associated with the shopping season should be prepared to face the load for this holiday season.

The “holiday season” starts earlier every year. Tesco Ireland started in August. Amazon has a Prime Day next Tuesday (October 10). These events test systems harder than ever and monolithic on-prem designs will not handle it. It’s time to get ready – if it’s not already too late!

Ungated Public Preview: Azure API Center

We’re thrilled to share that Azure API Center is now open for everyone to try during our ungated public preview! Azure API Center is a new Azure service that is part of the Azure API Management platform. It is the central hub where you can effortlessly keep track of all your APIs company-wide, making them readily discoverable, reusable, and manageable.

Managing a catalog of APIs could be challenging. Tooling is welcome.

Generally available: Secure critical infrastructure from accidental deletions at scale with Policy

We are thrilled to announce the general availability of DenyAction, a new effect in Azure Policy! With the introduction of Deny Action, policy enforcement now expands into blocking request based on actions to the resource. These deny action policy assignments can safeguard critical infrastructure by blocking unwarranted delete calls.  

Can you believe that Azure was deliberately designed not to have a deny permission? Adding it afterwards is not easy. The idea here is that delete locks on resources/resource groups are too easy to remove – and are frequently removed. Something like a policy, which is enforced in the API (between you and the resources), is always applied, is not easy to remove, and can be deployed at scale.

Virtual Machines

Generally available: Azure Premium SSD v2 Disk Storage is now available in more regions

Azure Premium SSD v2 Disk Storage is now available in Australia East, France Central, Norway East and UAE North regions. This next-generation storage solution offers advanced general-purpose block storage with the best price performance, delivering sub-millisecond disk latencies for demanding IO-intensive workloads at a low cost.

Expanded region availability makes this more interesting. But Azure Backup support has been in very limited preview since the spring.

Announcing the general availability of new Azure burstable virtual machines

we are announcing the general availability of the latest generations of Azure Burstable virtual machine (VM) series – the new Bsv2, Basv2, and Bpsv2 VMs based on the Intel® Xeon® Platinum 8370C, AMD EPYC™ 7763v, and Ampere® Altra® Arm-based processors respectively. 

Faster and cheaper than the previous editions of B-Series VMs and they include ARM support too. The new virtual machines support all remote disk types such as Standard SSD, Standard HDD, Premium SSD and Ultra Disk storage.

Generally Available: Azure Update Manager

We are pleased to announce that Azure Update Manager, previously known as Update Management Center, is now generally available.

The controversial news is that Arc-managed machines will cost $5/month. I'm still not sold on this solution – it still feels like less than legacy solutions such as WSUS.

Announcing Public Preview of NVMe-enabled Ebsv5 VMs offering 400K IOPS and 10GBps throughput

Today, we are announcing a Public Preview of accelerated remote storage performance using Azure Premium SSD v2 or Ultra disk and selected sizes within the existing NVMe-enabled Ebsv5 family. The higher storage performance is offered on the E96bsv5 and E112ibsv5 VM sizes and delivers up to 400K IOPS (I/O operations per second) and 10GBps of remote disk storage throughput.

Even the largest SQL VM that I have worked with comes nowhere near these specs. The customer(s) that have justified this investment by Microsoft must be huge.

Azure savings plan for compute: How the benefit is applied

Organizations are benefiting from Azure savings plan for compute to save up to 65% on select compute services – and you could too. By committing to spending a fixed hourly amount for either one year or three years, you can save on plans tailored to your budget needs. But you may wonder how Azure applies this benefit.

It’s simple really. The system looks at your VMs, calculates the theoretical savings, and first applies your discount to the machines where you will save the most money, and then repeats until your discount is used.

General Availability: Share VM images publicly with community gallery – Azure Compute Gallery feature

With community gallery, a new feature of Azure Compute Gallery, you can now easily share your VM images with the wider Azure community. By setting up a ‘community gallery’, you can group your images and make them available to other Azure customers. As a result, any Azure customer can utilize images from the community gallery to create resources such as virtual machines (VMs) and VM scale sets.

This is a cool idea.

Trusted Launch for Azure VMware Solution virtual machines

Azure VMware Solution proudly introduces Public Preview of Trusted Launch for Virtual Machines. This advanced feature comprises Secure Boot, Virtual Trusted Platform Module (vTPM), and Virtualization-based Security (VBS), collectively forming a formidable defense against modern cyber threats.

A feature that was introduced in Windows Server 2016 Hyper-V.

Infrastructure-As-Code

Introduction to Azure DevOps Workload identity federation (OIDC) with Terraform

Workload identity federation is an OpenID Connect implementation for Azure DevOps that allow you to use short-lived credential free authentication to Azure without the need to provision self-hosted agents with managed identity. You configure a trust between your Azure DevOps organisation and an Azure service principal. Azure DevOps then provides a token that can be used to authenticate to the Azure API.

This looks like a more secure way to authenticate your pipelines. No secrets are stored, and a trust between your DevOps organisation and Azure enables short-lived authentication with the desired access rights/scopes.

Quickstart: Automate an existing load test with CI/CD

In this article, you learn how to automate an existing load test by creating a CI/CD pipeline in Azure Pipelines. Select your test in Azure Load Testing, and directly configure a pipeline in Azure DevOps that triggers your load test with every source code commit. Automate load tests with CI/CD to continuously validate your application performance and stability under load.

This is not something that I have played with but I suspect that you don’t want to do this against production systems!

General Availability: GitHub Advanced Security for Azure DevOps

Starting September 20th, 2023, the core scanning capabilities of GitHub Advanced Security for Azure DevOps can now be self-enabled within Azure DevOps and connect to Microsoft Defender for Cloud. Customers can automate security checks in the developer workflow using:

  • Code Scanning: locates vulnerabilities in source code and provides remediation guidance.
  • Secret Scanning: identifies high-confidence secrets and blocks developers from pushing secrets into code repositories.
  • Dependency Scanning: discovers vulnerabilities with open-source dependencies and automates update alerts for developers.

This seems like a good direction to go but I’m told it’s quite pricey.

Networking

General availability: Sensitive Data Protection for Application Gateway Web Application Firewall

WAF running on Application Gateway now supports sensitive data protection through log scrubbing. When a request matches the criteria of a rule, and triggers a WAF action, that event is captured within the WAF logs. WAF logs are stored as plain text for debuggability, and any matching patterns with sensitive customer data like IP address, passwords, and other personally identifiable information could potentially end up in logs as plain text. To help safeguard this sensitive data, you can now create log scrubbing rules that replace the sensitive data with “******”.

Sounds good to me!

General availability: Gateway Load Balancer IPv6 Support

Azure Gateway Load Balancer now supports IPv6 traffic, enabling you to distribute IPv6 traffic through Gateway Load Balancer before it reaches your dual-stack applications. 

With this support, you can now add IPv6 frontend IP addresses and backend pools to Gateway Load Balancer. This allows you to inspect, protect, or mirror both IPv4 and IPv6 traffic flows using third-party or custom network virtual appliances (NVAs). 

Useful for security architectures where NVAs are being used.

Azure Backup

Preview: Cross Region Restore (CRR) for Recovery Services Agent (MARS) using Azure Backup

We are announcing the support of Cross Region Restore for Recovery Services Agent (MARS) using Azure Backup.

This makes sense. Let’s say I back up my on-prem data, located in Virginia, to Azure East US, in Boydton Virginia. And then there’s a disaster in VA that wipes out my office and Azure East US. Now I can restore to a new location from the paired region replica.

Preview: Save Azure Backup Recovery Services Agent (MARS) passphrase to Azure Key Vault

Now, you can save your Azure Recovery Services Agent encryption passphrase in Azure Key Vault directly from the console, making the Recovery Services Agent installation seamless and secure.

This beats the old default option of saving it as a text file on the machine that you were backing up.

General availability: Selective Disk Backup and Restore in Enhanced Policy for Azure VM Backup

We are adding the “Selective Disk Backup and Restore” capability in Enhanced Policy of Azure VM Backup. 

Be careful out there!

Storage

General Availability: Malware Scanning in Defender for Storage

Malware Scanning in Defender for Storage will be generally available September 1, 2023.

Please make sure that you read up on how much this will cost you. The DfC plans changed recently, and the pricing model for Storage plans changed to include this feature.

Azure Monitor

Public preview: Alerts timeline view

Azure Monitor alerts is previewing a new timeline view that simplifies the consumption experience of fired alerts. The new view has the following advantages:

  • Shows fired alerts on a timeline
  • Helps identify co-occurrence of alerts
  • Displays alerts in the context of the resources they fired on
  • Focuses on showing counts of alerts to better understand impact
  • Supports viewing alerts by severity
  • Provides a more intuitive discovery and investigation path

This might be useful if you are getting a lot of alerts.

Azure Virtual Desktop

Announcing general availability of Azure Virtual Desktop Custom Image Templates

Custom image templates allow admins to build a custom “golden image” using the Azure Virtual Desktop management user interface. Leverage a variety of built-in customizations or add your own customization scripts to install applications or configurations.

Why are they not using Azure Image Builder like I do?

Experts Live Europe 2023

I spoke at Experts Live Europe last week and this post is a report of my experience at this independently run tech conference.

Experts Live

I cannot claim to be a historian on Experts Live Europe (I'll call it Experts Live after this) but it's a brand that I've known of for years. Many of the MVPs (Microsoft Most Valuable Professionals) and community experts that I know have attended and presented at this conference for as long as it has been running. It started off as a System Center-focused event and evolved as Microsoft has done, transitioning to a cloud-focused conference covering M365 and Azure.

Previously, I never got to speak at Experts Live. When it started, I had mostly fallen off the System Center track and didn’t feel qualified to apply to speak. Later, as the conference evolved and our interests aligned, I was always booked to be on vacation abroad when the conference was running so I didn’t apply. This was a sickener because the likes of Kevin Greene and Damian Flynn raved about how good this event was for speakers and attendees.

This year, that changed and I applied to speak. I was delighted to hear that I was accepted and was looking forward to attending.

The organisation changed a little, but the central organiser, Isidora Maurer, was still at the helm. I knew that this would be a quality event.

Experts Live is a brand that has expanded and now includes local events across Europe. I’ve been lucky to speak at a couple of those over the years.

Prague 2023

This year’s conference was hosted in Prague, a beautiful city. I’ve spoken in Prague before but it was my usual speaker experience: fly in – taxi to the hotel – speak – taxi to the airport – fly home. This time, because flights home were a little awkward, I was staying an extra night so I could experience the city a little bit.

The conference center is just outside the city centre and the hotels were just next door. Many of the speakers booked into the Corinthian Hotel, a nice place, which was a 2-minute walk across a bridge or through a train station.

Attending

I arrived at the conference center to register on the last day, about 40 minutes before I was due to speak in the second slot. I registered quickly and was told to go upstairs. I did – and the place was a ghost town. I was sure that something was wrong. Whenever you go to a tech event, there are always people in the hallways either on calls or filling time because they don’t like the current sessions. I found the speakers’ room and did my final prep. Then I went to the room I was speaking in next, and it was packed. All of the rooms were packed. Almost no one was “filling time”. I’ve never seen that and it says a lot about the schedule organisers, the sessions/speakers, and the attendees’ dedication.

Another observation – that my wife made afterward while looking at event photos on social media – there were a lot more women at this event than one will usually see at other technical events. The main organiser, Isidora, is a well-known advocate for women in IT and I suspect that her activities help to restore some levels of balance.

My Session

My session was called "Azure Firewall: The Legacy Firewall Killer". In the session, I compare & contrast Azure Firewall with third-party NVAs, teach a little about Azure Firewall features, and demonstrate a simple DevSecOps process using infrastructure-as-code.

Credit: Carsten Rachfahl, MVP

I had a full room which was pretty cool and there was lots of engagement after the session – throughout the day!

I attended sessions in all but one slot, catching the end of Carsten Rachfahl’s hybrid session, Didier Van Hoye’s session on QUIC, Damian Flynn’s Azure Policy session, and Eric Berg’s session on Azure networking native versus third-party options. All were excellent, as I expected.

It has been a long time since I’ve had the opportunity to attend technical sessions – the pandemic suspended in-person events for years, I can’t focus on digital events (for several reasons), and Microsoft Ignite is a marketing/vanity event now 🙁

Afterwards

The after-party featured some lovely snacks and drinks with some light-hearted entertainment. It was short – understandably – because many people were leaving straight away.

Entertainment for the evening was hosted for the speakers: we gathered at 19:00 and were taken on a riverboat tour where we had a few drinks and dinner while enjoying the city views in the warm autumn evening. It was quite enjoyable. And maybe, just maybe, many of the speakers continued on in various locations afterward!

Wrap Up

Experts Live is a very well-run event with lots of content spanning multiple expertise areas. I love that the sessions are technical – in fact, some of the speakers adjusted their content to suit the observed technical levels of the audience while at the event. In 2024, if you want to learn, then make sure you check out this conference and hopefully if I’m accepted, I’ll see you there!

Terrafying Azure – A Tale From The Dark Side

This post is a part of the Azure Back to School 2023 online event. In this post, I will discuss using Microsoft Azure Export for Terraform, also known as Aztfexport and previously known as Azure Terrafy (a great name!), to create Terraform code from existing Azure deployments, why you would do it, and share a few tips.

Terraform

Terraform is one of a few Infrastructure-as-Code (IaC) languages out there that support Microsoft Azure. You might wonder why I would use it when Azure has ARM and Bicep. I’ll do a quick introduction to Terraform and then explain my reasoning which you are free to disagree with 🙂

Terraform is a Hashicorp product that is free to use and supported with some paid-for services. Like other IaC languages, it describes a desired end result. The major feature that differs from the native Azure languages is the use of a state file – a file that describes what is deployed in Azure. The state file has a few nice use cases, including:

  • The outputs of a resource are documented, enabling effortless integration between resources in the same or even different files – with some effort, outputs from different deployments can be included in another deployment.
  • A true what-if engine that (mostly) works, unlike the native what-if in Azure, greatly reducing the time required for deployments and giving you the ability to plan (pre-review) a deployment's expected changes.

My first encounter with Terraform was a government project where the customer wanted to use Terraform over Bicep. Their reasoning was that elected politicians come and go, and suppliers come and go. If they were going to invest in an IaC skillset, they wanted the knowledge to be transferrable across clouds.

That’s the big advantage of Terraform. While the code itself is not cloud portable, the skill is. Terraform uses providers to be able to manage different resource types. Azure is a provider, written by Microsoft. Azure AD is a provider – ARM/Bicep still do not support Azure AD! AWS and GCP have providers. VMware has a provider. GitHub has a provider – the list goes on and on. If a provider does not exist, you can (in theory) write your own.
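
To show what that looks like, here is a tiny sketch with made-up names: the azurerm provider is declared, and the storage account picks up its resource group and location from the resource group resource, so Terraform records the relationship in the state file and orders the deployment accordingly.

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "demo" {
  name     = "rg-demo"
  location = "westeurope"
}

resource "azurerm_storage_account" "demo" {
  name                     = "stdemo123456" # must be globally unique
  resource_group_name      = azurerm_resource_group.demo.name     # an output of another resource
  location                 = azurerm_resource_group.demo.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

Running terraform plan against code like this gives you that what-if view before anything is changed.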

On that project, I was meant to be hands-off as an architect. But there were staffing and scheduling issues so I stepped up. Having never written a line of Terraform before, I had my first workload written in under a day, with some review help from a teammate. By the way, the same thing in Bicep took three days! Terraform is really well documented, with lots of examples, and the language makes sense.

Unlike Bicep, which is still beholden to a lot of the complexity of ARM. Doing simple things can involve stupidly complicated functions that only a C programmer (I used to be one) could enjoy (and I didn’t). I got hooked on Terraform and convinced my colleagues that it was a better path than Bicep, which was our original plan to replace ARM/JSON.

Aztfexport

Switching to Terraform creates a question – what do we do with our existing workloads, which were deployed using ClickOps (Portal), scripts, or ARM/Bicep?

Microsoft has created a tool called Azure Export for Terraform (Aztfexport) on GitHub. The purpose of this tool is to take an existing resource group/resource/Graph query string and export it as Terraform code.

The code that is produced is intended to be used in some other way. In other words, Microsoft is not exporting code that should be able to immediately deploy new resources. They say that the produced code should be able to pass a terraform plan where the existing resources are compared with the state file and the code and say “the code is clean and there are no changes required”.

The Terraform configurations generated by aztfexport are not meant to be comprehensive and do not ensure that the infrastructure can be fully reproduced from said generated configurations. For details, please see the limitations.

Azure/aztfexport (github.com)

Why Use Aztfexport?

If I can't use the code to deploy resources then what value is it? Hopefully you will see why aztfexport is a central part of my toolkit. I see it being useful in the following ways:

  • Learning Terraform: If you’ve not used Terraform before then it’s useful to see how the code can be produced, especially from resources that you are already familiar with.
  • Creating TF for an existing workload: You need to “terrafy” a resource/resource group and you want a starting point.
  • Azure-to-Azure migrations: You have a set of existing resources and you want to get a dump of all the settings and configurations.
  • Learning how a resource type/solution is coded: My favourite learning method is to follow the step-by-step and then inspect the resource(s) as code.
  • Understand how a resource type/solution works: This is a logical jump from the previous example, now including more resources as a whole solution.
  • Auditing: Comparing what is there with what should be there – or not there.
  • Documentation: The best form of resource documentation is IaC – why create lengthy documentation when the code is the resource?

I did use Aztfexport to learn more about Terraform. In my current project, I have used it again and again to do Azure-to-Azure migrations, taking legacy ClickOps deployments and rewriting them as new secure/governed deployments. I've saved countless hours capturing settings and configurations and re-using them as new code.

The Bad Stuff

Nothing is perfect, and Aztfexport has some thorns too. Notice that the expected usage is that the produced code should pass a terraform plan. That is because in many situations (like with ARM exports) the code is not usable to deploy resources. That can be because:

  • ARM APIs do not expose everything, so how can Terraform get those settings?
  • The tool or the providers being used do not export everything.

One example I've seen involves App Services configurations that do not include the code type details. Another recent one was with WAF Policies, where overridden WAF rules were not documented. In both cases, the code would pass a plan. But neither would reproduce the resources. I've learned that I do need to double-check things with a resource type that I've never worked with before – then I know what to go and manually grab, either from an ARM export or a visual inspection in the Portal.

Another thing is that the resources are named by a “machine” – there is no understanding of the role. Every resource is res-1, res-2, and so on, no matter the type or the role in the workload. That is a bit anonymous, but I find that useful when inspecting dependencies between resources.

A giant main.tf file is created, which I break up into many smaller files. I can find relationships based on those easy-to-track dependencies and logically group resources where it suits my coding style.

One feature of TF is the easy reuse of resource IDs. One can easily refer to resource_type.resource_name.id in a property and know that the resource ID of that resource will be used. Unfortunately, some Aztfexport code doesn't do that, so you get static resource IDs that should be replaced – that happens with other properties of resources too, so all of that should be cleaned up to make the code more reusable.
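
As an illustration (the names are invented, in the res-1/res-2 style that the tool produces), exported code often contains something like the first line below, and part of the clean-up is rewriting it as the second so that the dependency is tracked and the code is reusable:

# As exported: a hard-coded resource ID
subnet_id = "/subscriptions/xxxx/resourceGroups/rg-1/providers/Microsoft.Network/virtualNetworks/vnet-1/subnets/subnet-1"

# After clean-up: refer to the resource that is defined elsewhere in the code
subnet_id = azurerm_subnet.res-3.id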

Installing Aztfexport

You will need to install Terraform – I prefer to use a Package Manager for that – the online instructions for a manual installation are a mess. You will also require Azure CLI.

The full instructions for installing Aztfexport are shared on GitHub, covering Windows, MacOS and Linux. The Windows installation is easy:

winget install aztfexport

You will need to restart your terminal (Windows) to get an updated Path variable so the aztfexport binary can be found.

Before you use aztfexport, you will need to log in using Azure CLI:

Open your terminal

Login:
az login

Change subscription:
az account set --subscription <subscription ID>

Verify the correct subscription was selected by checking the resource groups:
az group list

Create an empty folder on your PC and navigate to that folder in your terminal. The aztfexport tool requires an empty folder, by default, to create an export including all the required provider files and the generated code.

If you want to create an export of a single resource then you can run:

aztfexport resource <resource ID>

If you want to create an export of a resource group, then you can run:

aztfexport resource-group -n <resource group name>

Note that the -n above means "don't bother me with manual confirmation of what resources to include in the export". In Terraform, sub-resources that can be managed as their own Terraform resources would otherwise need to be confirmed and that gets pretty tiresome pretty fast.

Tips

I've got to hammer on this one again: the produced code is not intended for deployment. Take the code, copy and paste it into new files, and clean it up.

If your goal is to take over an existing IaC/ClickOps deployment with Terraform then you are going to have some fun. The resources already exist and Terraform is going to be confused because there is no state file. You will have to produce a state file using terraform import for every resource definition in your code. That means knowing the resource IDs of everything, including Azure AD objects, role assignments, and sub-resources. You'll need to understand the format of those resource IDs – use an existing state file for that. Often the resource ID is the simple Azure resource ID, or a derivation of a parent resource ID that you can figure out from another state file. Sometimes you need to wander through Azure AD (look at assignments in scopes that you do have access to if you don't have direct Azure AD rights), use Azure CLI to "list" resources or items, or browse around using Resource Explorer in the Azure Portal.
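
As an example of that import step, the command for a resource group has this shape – the Terraform address and the Azure resource ID are placeholders that you would swap for your own:

terraform import azurerm_resource_group.workload /subscriptions/<subscription ID>/resourceGroups/rg-workload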

Do take some time to compare your code with any previous IaC code or with an ARM export. Look for things that are missing – Terraform has many defaults that won’t be included and that code is missing because it is not required. I often include that code because I know that they are settings that Devs/Ops might want to tune later.

If you have the misfortune of having to work with an existing Terraform module library then you will have to translate the exported code into parameter/variable files for the new code – I do not envy you 🙂

Summary

This post is an introduction to Microsoft Azure Export for Terraform and a quick how-to-get-started guide. There is much more to learn about, such as how to use a custom backend (if resource names in Terraform are not a big deal and to eliminate the terraform import task) or even how to use a resource map to identify resources to export across many resource groups.

The tool is not perfect but it has saved me countless hours over the last year or so, dating back to when it was called Azure Terrafy. It's one of the tools in my toolkit that I regularly break out to speed up my work. In my opinion, anyone starting to work with Terraform should install and use this tool.

Microsoft Ignite 2023 – I Will Not Be Attending

Microsoft Ignite 2023 has been announced as a hybrid event. Let me explain why I have no interest in attending in person or taking part digitally.

Technical Education

One of the reasons that I became a pretty regular attendee of Microsoft’s technical conferences was to learn. My first time to attend TechEd Europe was a real eye-opener. I took part in hands-on labs, tried out new products, and went to sessions where I learned a lot about products/features that I worked with or was interested in.

When a past manager asked me about my training budget/plan it was quite simple: I had no interest in traditional training because I knew all that I could learn in the necessary areas – I could often rewrite the courses with better content. But attending a conference where the creators of the product/feature stood on stage and got into deep technical detail – that was unmatched.

The TechEd brand was killed off years ago and replaced with the much larger Ignite conference. The immediate noticeable change was that the main breakouts were 99% reserved for Microsoft staff and sponsors – I avoid sponsor sessions because they are 100% advertising. The Microsoft sessions slowly changed away from technical Program Managers to managers, and then to corporate vice presidents (CVPs). That meant that the level of technical content was dropping and there was a shift to marketing.

Pandemic

As we all know, COVID-19 shut the world down and brought down conferences with it. Microsoft switched to a digital format for Ignite. In theory, this should have increased the audience and potentially the breadth & depth of content. However, Ignite "online" featured 30-minute sessions (because of "feedback") that offered only:

  • Bullet point announcements with no technical follow-up
  • Marketing by CVPs.

Sure, Ignite became a glossy, well-produced digital event but it was pointless. I don't care how many live streams they had – how many of those people were paying attention? I don't care how many downloads/non-live streams they had – how many of those people got more than 1/3 of the way through the session?

I can read bullet point announcements in the blog posts on day 1 of the conference much more easily than I can from a PowerPoint – and there will be links to more detailed information.

I have no interest in some CVP trying to be the next Stephen Elop-style failed techie celebrity, burning up time that would have been better with a program manager sharing knowledge on the new tech that they’ve been working on for months/years.

I remember a few years ago that one group in Microsoft staged their own “Ignite” outside of the official content/site in order to get their news out – that didn’t happen again. I guess somebody squashed that.

Why Attend?

I attended the last few TechEd North America conferences and all but the very first Microsoft Ignite events. I have been in a couple of conversations about attending this year and I’ve made it clear: I have no interest – and that seems to be a common opinion.

It costs a lot of money to travel to such an event. A flight is between €600-€1200. A hotel will clock in at over €2000. The early bird ticket price this year is $1,525 (around €1,424). Don’t forget local expenses like travel and food. If you’re a consultant like me then the company has lost revenue while you are away. And then there is the priceless time away from family and the impact on the partner who has to keep things running while you are far away. Attending a conference is an investment. I always saw attending Ignite as an investment in the following year: I would have knowledge that only a few others in my market had. If the return is near zero then Microsoft Ignite is a bad investment.

OK, can't I just watch it online? I think I have watched maybe 3 Ignite sessions from the Pandemic years. Last year there was supposed to be a deep dive in one area that I work in. I tuned in live, and it was a CVP in a digital marvel of marketing, uttering words that they probably have never used in that order before. Even the time to watch the online content is not worth the investment.

What Needs To Change?

I don’t think that any of this will happen – there are those in Microsoft who view Ignite as irrelevant (yuk! tech!), a distraction, or a cost. The switch to an online video brochure suits them. I think that sucks. I know that there is an in-person option, but check out the mostly pre-recorded content – are you going to pay to stream the same content as everyone else while sitting in a conference centre?

The presenters need to switch back to the program managers from the teams. These are people who have worked on the products/features since inception and are qualified to talk about the content at a technical level and are trained in public/customer interaction (it’s normally a part of the job description).

Session lengths need to return to either 60 minutes or 75 minutes. As a presenter, I can tell you that it is impossible to bring an audience through a progression from level 200 to level 300/400 in 30 minutes while doing all the necessary steps and delivering any meaningful amount of content. 60 minutes is the minimum. 75 minutes gives the presenter a real chance to drill deep – which a large part of the audience really wants.

Become an expert in automation and AI in 21 minutes during this breakout deepdive!

The content needs to include large amounts of technical sessions. Sure, go ahead and have those level 100-200 sessions for the C-suite or people getting into subjects for the first time. But give us techies a reason to participate, either in person or online.

Give Us TechEd!

The thing that is most missing today is knowledge. There is too much focus on introduction/bullet point announcements/blog posts, training to get a practically useless certification, and documentation that fails to explain the why’s and how’s.

We need technical content from the people who work on the product/features and really know them. I say this as a person who wants to learn but also as a person who witnesses the lack of knowledge or understanding in the market – the iPad generation is trying to use The Cloud without knowing why/how/what’s best/what’s secure because they’re limited to the next-next getting started docs that are the only technical information out there anymore.

Azure Infrastructure Announcements – August 2023

This post brings you a summary of the infrastructure announcements from Azure that were made during August 2023. There are lots of announcements from Storage and a few interesting notes for VMs, networking, and ASR.

Storage

Azure Managed Lustre: not your grandparents’ parallel file system

With a few clicks of a web interface or an Azure Resource Manager template, AMLFS lets you provision an all-flash Lustre file system in minutes. What’s different is that this Lustre file system is all yours. If someone else in Azure is running a job that creates a million files, you won’t ever know it because your Lustre servers and SSDs are exclusively yours.

Massively scaled and high performance file systems for HPC workloads.
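
Provisioning one in code is supposedly just as quick. Below is a minimal Terraform sketch; I have not deployed AMLFS myself, so the resource name (azurerm_managed_lustre_file_system), the SKU string, and the maintenance window arguments are assumptions from memory of the provider documentation – verify them before use.

# Sketch only: assumes a resource group and a dedicated subnet already exist in your configuration.
resource "azurerm_managed_lustre_file_system" "hpc" {
  name                   = "amlfs-demo"                  # hypothetical name
  resource_group_name    = azurerm_resource_group.hpc.name
  location               = "westeurope"
  sku_name               = "AMLFS-Durable-Premium-250"   # assumption: one of the published throughput SKUs
  storage_capacity_in_tb = 16
  subnet_id              = azurerm_subnet.lustre.id      # a subnet sized for the file system
  zones                  = ["1"]

  maintenance_window {
    day_of_week        = "Saturday"
    time_of_day_in_utc = "02:00"
  }
}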

General availability | Azure NetApp Files: SMB Continuous Availability (CA) shares

To enhance resiliency during storage service maintenance operations, SMB volumes used by Citrix App Layering, FSLogix user profile containers and Microsoft SQL Server on Microsoft Windows Server can be enabled with Continuous Availability.

SMB Transparent Failover means that clients should not notice maintenance operations.

Public preview: Azure Storage Mover support for SMB and Azure Files

Storage Mover is a fully managed migration service that enables you to migrate on-premises files and folders to Azure Storage while minimizing downtime for your workload. Azure Storage Mover can now migrate your SMB shares to Azure file shares.

To be honest, I’ve not encountered a “replace the file server with Azure Files” scenario yet. Third-party vendors often won’t support it for LOB apps. User data typically ends up in SharePoint/OneDrive. And wouldn’t most Citrix/RDS admins want to start with new profiles?

Generally available: Azure Blob Storage Cold Tier

Azure Blob Storage Cold Tier is now generally available. It is a new online access tier that is the most cost-effective Azure Blob offering for storing infrequently accessed data with long-term retention requirements, while providing instant access. The pricing of the cold tier storage option lies between the cool and archive tiers, and it follows a 90-day early deletion policy. You can seamlessly utilize the cold tier in the same way as the hot and cool tiers.

Cool – Cold. Tell me that isn’t confusing. The scenario is that you want to store data for a long time, but you need it immediately available. Archive requires a 15-hour restore (“rehydration”) that can be accelerated for a charge. Cold is one step up from Archive: instantly available, but not as cost-effective.
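
If you plan to use the tier, the obvious companion is a lifecycle management rule that ages blobs into it. A minimal Terraform sketch follows; it assumes the tier_to_cold_after_days_since_modification_greater_than argument (added alongside the Cold tier) exists in your azurerm provider version, so check before you rely on it.

# Sketch: age block blobs under a hypothetical "logs/" prefix into the Cold tier after 60 days.
resource "azurerm_storage_management_policy" "cold_after_60_days" {
  storage_account_id = azurerm_storage_account.example.id   # assumes the storage account is defined elsewhere

  rule {
    name    = "move-logs-to-cold"
    enabled = true

    filters {
      blob_types   = ["blockBlob"]
      prefix_match = ["logs/"]
    }

    actions {
      base_blob {
        # Assumption: this argument name matches the current provider schema.
        tier_to_cold_after_days_since_modification_greater_than = 60
      }
    }
  }
}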

Public Preview: Azure NetApp Files Cloud Backup for Virtual Machines

With Cloud Backup for Virtual Machines, you can now create VM consistent snapshot backups of VMs on Azure NetApp Files datastores. The associated virtual appliance installs in the Azure VMware Solution cluster and provides policy-based automated and consistent backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups) or complete datastores lowering RTO, RPO, and improving total cost of ownership.

General Availability: Incremental snapshots for Premium SSD v2 Disk and Ultra Disk Storage

You can now instantly restore Premium SSD v2 and Ultra Disks from snapshots and attach them to a running VM without waiting for any background copy of data. This new capability allows you to read and write data on disks immediately after creation from snapshots, enabling you to recover your data from accidental deletes or a disaster quickly.

I can see third-party backup making use of this.

Azure Elastic SAN updates: Private Endpoints & Shared Volumes

As we approach general availability of Azure Elastic SAN, we continue improving the service and adding features based on your feedback. Today, we are releasing private endpoint support and volume sharing support via SCSI (Small Computer System Interface) Persistent Reservation.

This sounds like the sort of feature maturity one will expect as the service approaches general availability. I wonder what the actual target market is for this service.

Azure Site Recovery

Private Preview – DR for Shared Disks – Azure Site Recovery

We are excited to announce the Private Preview of DR for Azure Shared Disks for workloads running Windows Server Failover Clusters (WSFC) on Azure VMs. Now you can protect, monitor, and recover your WSFC-clusters as a single unit across its DR Lifecycle, while also generating cluster-consistent recovery points – which are consistent across all the disks (including the Shared Disk) of the cluster.

This feature is long overdue for customers using shared virtual hard disks to create failover clusters.

Networking

Public preview: Support for new custom error pages in Application Gateway

In addition to the response codes 403 and 502, the Azure Application Gateway now lets you configure company-branded error pages for more response codes – 400, 405, 408, 500, 503, and 504. You can configure these error pages at a global level to apply to all the listeners on your gateway or individually for each listener. 

These pages can be hosted on any publicly accessible URI.
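
For reference, custom error pages have been configurable in Terraform for the 403/502 codes for a while via the custom_error_configuration block on azurerm_application_gateway; I assume the newer status codes will appear as additional HttpStatusXXX values once the preview reaches the provider, so treat the commented lines below as an assumption.

# Fragment: these blocks sit inside an existing azurerm_application_gateway resource.
custom_error_configuration {
  status_code           = "HttpStatus403"
  custom_error_page_url = "https://errors.example.com/403.html"   # hypothetical, publicly accessible URL
}

custom_error_configuration {
  status_code           = "HttpStatus502"
  custom_error_page_url = "https://errors.example.com/502.html"
}

# Assumption: the newly supported codes (400, 405, 408, 500, 503, 504) will be exposed
# as further HttpStatusXXX values when the preview lands in the provider, e.g.:
# custom_error_configuration {
#   status_code           = "HttpStatus503"
#   custom_error_page_url = "https://errors.example.com/503.html"
# }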

Azure Firewall: New Monitoring and Logging Updates

Notes:

  • (Preview) With the Azure Firewall Resource Health check, you can now view the health status of your Azure Firewall and address service problems that may affect your Azure Firewall resource. Resource Health allows IT teams to receive proactive notifications regarding potential health degradations and recommended mitigation actions for each health event type.
  • (Preview) The Azure Firewall Workbook presents a dynamic platform for analyzing Azure Firewall data. Within the Azure portal, you can utilize it to generate visually engaging reports.
  • (GA) The Latency Probe metric is designed to measure the overall latency of Azure Firewall and provide insight into the health of the service. IT administrators can use the metric for monitoring and alerting if there is observable latency and diagnosing if the Azure Firewall is the cause of latency in a network.

Resource health should make for a useful alert, especially when enabling DevSecOps – be aware of the dreaded “out of sync” error. I just tried the workbook in a production system – I noticed a couple of things that I might not have otherwise noticed because they didn’t trigger a human response (yet). The latency probe is interesting – I think it originated from customer network performance scenarios where it was suspected that the firewall was the root cause.
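
If you want to turn the new latency probe into an alert rather than something you notice after the fact, a metric alert is the simplest route. The Terraform sketch below assumes the underlying metric name is FirewallLatencyPng – that name is from memory, so confirm it in the portal before wiring up the alert.

# Sketch: alert when the Azure Firewall latency probe stays high. Assumes the firewall and
# an action group are already defined elsewhere in your configuration.
resource "azurerm_monitor_metric_alert" "azfw_latency" {
  name                = "alert-azfw-latency"          # hypothetical name
  resource_group_name = azurerm_resource_group.hub.name
  scopes              = [azurerm_firewall.hub.id]
  description         = "Azure Firewall latency probe above baseline."
  frequency           = "PT5M"
  window_size         = "PT15M"

  criteria {
    metric_namespace = "Microsoft.Network/azureFirewalls"
    metric_name      = "FirewallLatencyPng"           # assumption: the Latency Probe metric name
    aggregation      = "Average"
    operator         = "GreaterThan"
    threshold        = 30                             # milliseconds; tune to your environment
  }

  action {
    action_group_id = azurerm_monitor_action_group.ops.id
  }
}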

Virtual Machines

Public preview: Azure Mv3 Medium Memory (MM) Virtual Machines

Today we are announcing the public preview of the next generation Mv3 Medium Memory (MM) virtual machine series. Powered by the 4th Generation Intel® Xeon® Scalable Processor and DDR5 DRAM technology, the Mv3 medium memory (MM) virtual machines can scale for SAP workloads from 250GB to 4TB. With Azure Boost, Mv3 MM provides a ~25% improvement in network throughput and up to 1.5X improvement in remote storage throughput over the previous M-series families. 

These machines start at 12 vCPUs and 240 GB RAM, scaling up to 176 vCPUs and 2,794 GB RAM. That should just about be enough to run Teams.

Azure Infrastructure Announcements – July 2023

Many people in Europe take the month of July off for vacation, so they would have missed out on an unusually busy few weeks of announcements from Microsoft. This post summarises the infrastructure announcements from Microsoft Azure during July 2023.

Update: 01/09/2023. I’m not sure how this happened but I missed a bunch of interesting items from the second half of July. I guess that I got distracted while putting this list together (there’s a lot of task hopping during the day job) and thought that I’d completed the list. I have added some items today.

Networking

Public Preview: Default Rule Set 2.1 for Regional WAF with Application Gateway

DRS 2.1 is baselined off the Open Web Application Security Project (OWASP) Core Rule Set (CRS) 3.3.2 and extended to include additional proprietary protection rules developed by Microsoft.

Every improvement to the CRS has claimed to reduce false-positive detections, and I have never seen that happen in reality. I’m going to be skeptical about this one – a simple rules-based system will still trigger the same false positives that I continue to see daily.

Public preview: Azure Virtual Network Encryption

With Virtual Network encryption, customers can enable encryption of traffic between Virtual Machines and Virtual Machines Scale Sets within the same virtual network and between regionally and globally peered virtual networks.

This will be useful for customers in limited scenarios going forward. Too many networking features are limited to VMs. Legacy systems that are migrated to Azure, or niche solutions that run best on VMs, are fewer in number every day – customers that are already in the cloud normally choose PaaS first.
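
For those limited scenarios, enabling it should be a small change on the virtual network itself. The Terraform sketch below is based on my reading of the azurerm provider; the encryption block and the AllowUnencrypted enforcement value are assumptions to verify against the provider docs.

# Sketch: a VNet with encryption of VM-to-VM traffic enabled.
resource "azurerm_virtual_network" "encrypted" {
  name                = "vnet-encrypted-demo"   # hypothetical name
  resource_group_name = azurerm_resource_group.network.name
  location            = "westeurope"
  address_space       = ["10.10.0.0/16"]

  # Assumption: AllowUnencrypted lets unsupported VMs keep communicating in plain text;
  # a stricter enforcement value would drop that traffic instead.
  encryption {
    enforcement = "AllowUnencrypted"
  }
}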

General availability: ExpressRoute private peering support for BGP communities

ExpressRoute private peering now supports the use of custom Border Gateway Protocol (BGP) communities with virtual networks connected to your ExpressRoute circuits. Once you configure a custom BGP community for your virtual network, you can view the regional and custom community values on outbound traffic sent over ExpressRoute when originating from that virtual network.

This one could be useful for customers that have multiple ExpressRoute circuits in 1:M or N:M site-to-gateway scenarios.
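
Tagging a spoke with a custom community value is a one-line change in Terraform, assuming the bgp_community argument on azurerm_virtual_network (which expects Microsoft’s ASN, 12076, as a prefix); the value below is purely illustrative.

# Sketch: a spoke VNet tagged with a custom BGP community for ExpressRoute private peering.
resource "azurerm_virtual_network" "spoke1" {
  name                = "vnet-spoke1"            # hypothetical name
  resource_group_name = azurerm_resource_group.network.name
  location            = "northeurope"
  address_space       = ["10.20.0.0/16"]

  # Assumption: the format is "<Microsoft ASN>:<custom value>".
  bgp_community = "12076:20010"
}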

General availability: Always Serve for Azure Traffic Manager

Always Serve for Azure Traffic Manager (ATM) is now generally available. You can disable endpoint health checks from an ATM profile and always serve traffic to that given endpoint. You can also now choose to use 3rd party health check tools to determine endpoint health, and ATM native health checks can be disabled, allowing flexible health check setups.

Not much to say here 🙂

Public preview: Route Server Hub Routing Preference

When branch-to-branch is enabled and Route Server learns multiple routes across site-to-site (S2S) VPN, ExpressRoute, and SD-WAN NVAs, for the same on-premises destination route prefix, users can now configure connection preferences to influence Route Server route selection.

Azure Route Server is a great resource. It’s so simple to configure. I just wish there were native solutions where you could program routes into it when using only native Azure networking resources. Using BGP instead of UDRs in a hub & spoke would be so much more reliable and agile.

Azure’s Cross-Region Load Balancer is Now Generally Available

With cross-region Load Balancer, you can distribute traffic across multiple Azure regions with ultra-low latency and high performance.

This smells like one of those Azure resource types that was developed for other Azure or Microsoft cloud services (like telephony) and they released it to the public too.

Updated default TLS policy for Azure Application Gateway

We have updated the default TLS configuration for new deployments of the Application Gateway to Predefined AppGwSslPolicy20220101 policy to improve the default security. This recently introduced, generally available, predefined policy ensures better security with minimum TLS version 1.2 (up to TLS v1.3) and stronger cipher suites.

Those of you using older deployments or modular code for new deployments should consult your application owners and start a planning process to upgrade.
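
If your Application Gateway is deployed from code, the safest move is to pin the policy explicitly rather than inherit whatever the default happens to be. In Terraform that is the existing ssl_policy block – only the policy name is new here:

# Fragment: these lines sit inside your existing azurerm_application_gateway resource.
ssl_policy {
  policy_type = "Predefined"
  policy_name = "AppGwSslPolicy20220101"   # minimum TLS 1.2, stronger cipher suites
}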

Generally available: Cloud Next-Generation Firewall (NGFW) by Palo Alto Networks – an Azure Native ISV Service

Cloud NGFW by Palo Alto Networks is the first ISV next-generation firewall service natively integrated in Azure. Developed through a collaboration between Microsoft and Palo Alto Networks, this service delivers the cutting-edge security features of Palo Alto Network’s NGFW technology while also offering the simplicity and convenience of cloud-native scaling and management. 

If you really must stay with on-prem tech 😀

Azure Kubernetes Service

Public preview: Azure Application Gateway for Containers

Application Gateway for Containers is the next evolution of Application Gateway + Application Gateway Ingress Controller (AGIC), providing application (layer 7) load balancing and dynamic traffic management capabilities for workloads running in a Kubernetes cluster.

It sounds good, but AKS folks that I respect seem to prefer NGINX. That said, I know SFA about K8s.

Public preview: Network observability add-on for AKS

The new network observability add-on for AKS, now in public preview, provides complete observability into the network health and connectivity of your AKS cluster.

I’m surprised that something like this wasn’t already available. My current project might not include AKS, but monitoring network performance and health between services was critical. Doing the same between micro-services seems more important to me.

Public preview: Bring your own key on Ephemeral OS disk for AKS

BYOK support provides you the option to use your own customer managed keys (CMK) to encrypt your ephemeral OS Disks, providing you increased control over your encryption keys.

This sounds like one of those “a really big customer wanted it” features and it won’t be of interest to too many others.

Azure Virtual Desktop

Announcing Public Preview of Personal Desktop Autoscale on Azure Virtual Desktop

Personal Desktop Autoscale is Azure Virtual Desktop’s native scaling solution that automatically starts session host virtual machines according to schedule or using Start VM on Connect and then deallocates session host virtual machines based on the user session state (log off/disconnect).

This could be a real money saver for a very expensive solution – personal desktops in the cloud.

Announcing the General Availability of Private Link for Azure Virtual Desktop

Private Link for Azure Virtual Desktop is now generally available! With this feature, users can securely access their session hosts and workspaces using a private endpoint within their virtual network. Private Link enhances the security of your data by ensuring it stays within a trusted and secure private network environment.

I have encountered a customer scenario where the connection had to go over a “leased line”. Even if “Windows Virtual Desktop” had been ready at the time, the use of a public endpoint would have forced us to use Citrix instead. The use of a Private Endpoint forces the client to connect over a private network.

Azure Virtual Desktop Watermarking Support

We are announcing the general availability for Watermarking support on Azure Virtual Desktop, an optional protection feature to Screen Capture that acts as a deterrent for data leakage.

A QR code is watermarked onto the screen. The QR code can be scanned to obtain the connection ID of the session. Then admins can trace that session through Log Analytics. There are limitations.

Virtual Machines

Announcing General Availability of Confidential VMs in Azure Virtual Desktop

Azure confidential VMs (CVMs) offer VM memory encryption with integrity protection, which strengthens guest protections to deny the hypervisor and other host management components code access to the VM memory and state.

This might sound like overkill to most of us, but I have encountered one virtual desktop scenario where the nature of the data and the legal requirements might mandate the use of this technology.

Public Preview: Azure Dedicated Host – Resize

With Azure Dedicated Host’s new ‘resize’ feature, you can easily move your existing dedicated host to a new Azure Dedicated Host SKU (e.g., from Dsv3-Type1 to Dsv3-Type4). This new ‘resize’ feature minimizes the impact and effort involved in configuring VMs when you want to upgrade your underlying dedicated host system.

For you Hyper-V folks out there: yes, Live Migration will be used to keep the VMs running for all but a second or two (just like vMotion).

Dev-optimized, cloud-based workstations—Microsoft Dev Box is now generally available

Dev Box combines developer-optimized capabilities with the enterprise-ready management of Windows 365 and Microsoft Intune.

Think of this as the cousin of Windows 365, aimed at developers. For me, this has two use cases:

  • Supplying pay-as-you-go virtual machines to contract developers instead of purchasing hardware or trusting their hardware.
  • Providing a full development experience that is in a secured network and can be trusted to connect to Azure services.

Hotpatch is now generally available on Windows Server VMs on Azure with the Desktop Experience installation mode

Hotpatch is now available for Windows Server Azure Edition VMs with Desktop Experience installation mode using the newly released image.

Hmm, did someone say that Server Core is not widely popular? It’s about time.
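
For anyone deploying with Terraform, the moving parts are the Azure Edition image and the platform patch settings. The patch_mode and hotpatching_enabled arguments exist on azurerm_windows_virtual_machine today; the image SKU below is my assumption for the newly released Desktop Experience hotpatch image, so check it against the marketplace.

# Sketch: a Windows Server Azure Edition (Desktop Experience) VM with hotpatching enabled.
# Assumes a network interface and admin password variable are defined elsewhere.
resource "azurerm_windows_virtual_machine" "hotpatched" {
  name                  = "vm-hotpatch-demo"               # hypothetical name
  resource_group_name   = azurerm_resource_group.servers.name
  location              = "westeurope"
  size                  = "Standard_D2s_v5"
  admin_username        = "azureadmin"
  admin_password        = var.admin_password
  network_interface_ids = [azurerm_network_interface.demo.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2022-datacenter-azure-edition-hotpatch"   # assumption: the new Desktop Experience hotpatch image
    version   = "latest"
  }

  # Hotpatch requires platform-orchestrated patching.
  patch_mode          = "AutomaticByPlatform"
  hotpatching_enabled = true
}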

Announcing public preview of new burstable VMs – Bsv2, Basv2 and Bpsv2

The new additions to the B family consist of 3 new VM series – Bsv2, Basv2, and Bpsv2, each based on the Intel® Xeon® Platinum 8370C, AMD EPYC™ 7763v, and Ampere® Altra® Arm-based processors respectively. These new burstable v2-series virtual machines offer up to 15% better price-performance, up to 5X higher network bandwidth with accelerated networking, and 10X higher remote storage throughput when compared to the original B series.

This is easily the most popular series of VMs for any customer that I have gone near. It makes sense that new hardware is being introduced to enable continued growth.

Preview: Azure Boost

Azure Boost is a new system that offloads virtualization processes traditionally performed by the hypervisor and host OS onto purpose-built hardware and software … customers participating in the preview to achieve a 200 Gbps networking throughput and a leading remote storage throughput up to 10 GBps and 400K IOPS, enabling the fastest storage workloads available today.

Back when I was a Hyper-V MVP, this was the sort of feature that would have caught my attention and led to a bunch of really detailed blog posts. If you follow the links you can read:

“Azure Boost VMs in preview can achieve up to 200 Gbps networking throughput, marking a significant improvement with a doubling in performance over other existing Azure VMs … industry leading remote storage throughput and IOPS performance of 10 GBps and 400K IOPS with our memory optimized E112ibsv5 VM using NVMe-enabled Premium SSD v2 or Ultra Disk options.”

It doesn’t appear to be just the extreme spec VMs that get improved:

“Offloading storage data plane operations from the CPU to dedicated hardware results in accelerated and consistent storage performance, as customers are already experiencing on Ev5 and Dv5 VMs.  This also enhances existing storage capabilities such as disk caching for Azure Premium SSDs.”

“Azure Boost’s isolated architecture inherently improves security by running storage and networking processes separately on Azure Boost’s purpose-built hardware instead of running on the host server.” This might only be a Linux feature based on Security Enhanced Linux (SELinux).

I wish that Ben Armstrong was still doing tech presentations for Microsoft. He did an amazing job of sharing how things worked in Hyper-V (which Azure is built upon).

The Classic VMs retirement deadline is now September 6, 2023

The deadline to migrate your IaaS VMs from Azure Service Manager to Azure Resource Manager is now September 6, 2023. To avoid service disruption, we recommend that you complete your migration as soon as possible. We will not provide any additional extensions after September 6, 2023.

There won’t be too many pre-ARM virtual machines out there. But those that are out there are probably old and mostly untouched for years. It’s already late to get planning … so get planning!

Azure Migrate

Azure Migrate – Product & Partner Updates

A few notes:

  • Additional components in financial estimates through the “TCO/Business case” feature allow you to analyze cost more comprehensively before moving to the cloud.
  • Tanium’s (a partner) real-time operational data can be used by Azure Migrate for assessments and to generate a business case to move to Azure.
  • Azure Migrate will now support in-place upgrade of end-of-support (EOS) Windows Server 2012 and later operating systems (OS) during the move to Azure.

I have never been able to use Azure Migrate in 4+ years of migrating customers to Azure, for various reasons, so I cannot comment on the above.