Referencing Private Endpoint IP Addresses In Terraform

It is possible to dynamically retrieve the resulting IP address of an Azure Private Endpoint and use it in other resources in Terraform. This post will show you how.


You are building some PaaS resources using Private Endpoints. You have no idea what the IP addresses are going to be. But you need to use those IP addresses elsewhere in your Terraform code, for example in an NSG rule. How do you get the IP addresses?

Find The Properties

The trick for this is to use the terraform state command. In my case, I deployed a Cosmos DB resource using azurerm_private_endpoint.cosmosdb-account1. To view the state of the resource, I can run:

terraform state show azurerm_private_endpoint.cosmosdb-account1

That outputs a bunch of code:

Terraform state of a Cosmos DB resource

You can think of the exposed state as a description of the resource the moment after it was deployed. Everything in that state is addressable. A common use might be to refer to the resource ID (id) or resource name (name) properties. But you can also get other properties that you don’t know in advance.

The Solution

Take another look at the above state output. There is an array property called private_dns_zone_configs that has one item. We can address this property as azurerm_private_endpoint.cosmosdb-account1.private_dns_zone_configs[0].

Inside that, there is another array property, with two items, called record_sets. There is one record set per IP address created for this private endpoint. We can address these properties as azurerm_private_endpoint.cosmosdb-account1.private_dns_zone_configs[0].record_sets[0] and azurerm_private_endpoint.cosmosdb-account1.private_dns_zone_configs[0].record_sets[1].

Cosmos DB creates a private endpoint with multiple different IP addresses. I deliberately chose Cosmos DB for this example because it shows a more complex problem and solution, demonstrating a little bit more of the method.

Dig into record_sets and you’ll find an array property called ip_addresses with one item. If I want the two IP addresses of this private endpoint, then I will use: azurerm_private_endpoint.cosmosdb-account1.private_dns_zone_configs[0].record_sets[0].ip_addresses[0] and azurerm_private_endpoint.cosmosdb-account1.private_dns_zone_configs[0].record_sets[1].ip_addresses[0].

Using the Addresses

destination_address_prefixes = [
 azurerm_private_endpoint.cosmosdb-account1.private_dns_zone_configs[0].record_sets[0].ip_addresses[0], // Cosmos DB Private Endpoint IP 1
 azurerm_private_endpoint.cosmosdb-account1.private_dns_zone_configs[0].record_sets[1].ip_addresses[0]  // Cosmos DB Private Endpoint IP 2
]

And now I have code that will deploy an NSG rule with the correct destination IP address(es) of my private endpoint without knowing them. And even better, if something causes the IP address(es) to change, I can rerun my code without changing it, and the rules will automatically update.
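If you would rather not hard-code the number of record sets, a Terraform for expression can gather every IP address from the endpoint. This is a sketch; the local name cosmosdb_pe_ips is mine:

```hcl
locals {
  # Flatten every ip_addresses list from every record set into one list.
  cosmosdb_pe_ips = flatten([
    for rs in azurerm_private_endpoint.cosmosdb-account1.private_dns_zone_configs[0].record_sets :
    rs.ip_addresses
  ])
}

// The NSG rule can then reference the whole list:
// destination_address_prefixes = local.cosmosdb_pe_ips
```

This keeps the NSG rule correct even if the number of record sets changes.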

Avoiding Sticker Shock in Azure

In this post, I’m going to discuss the shock that switching from traditional CapEx spending to cloud/OpEx spending causes. I will discuss how to prepare yourself for what is to come, how to govern spending, and how to enforce restrictions.

The Switch

Most of you who will read this article have been working in IT for a while; that is, you are not a “cloud baby” (born in the cloud). You’ve likely been involved with the entire lifecycle of systems in organisations. You’ve specified some hardware, gone through a pricing/purchase process, owned that hardware, and replaced it 3-10 years later in a cyclical process. It’s really only during the pricing/purchase process, which happens only every 3-10 years in the life of a system, that you have cared about pricing. The accountants cared – they cared a lot about saving money and doing tax write-offs. But once that capital expenditure (CapEx) was done, you forgot all about the money. And you’re in IT, so you don’t care about the cost of electricity, water, floorspace, or all the other things that are taken care of by some other department such as Facilities.

Things are very different in The Cloud. Here, we get a reminder every month about the cost of doing business. Azure sends out an invoice and someone has gotta pay the piper. Cloud systems run on a “use it and pay for it” model, just like utilities such as electricity. The more you use, the more you pay. Conversely, the less you use, the less you pay.

Sticker Shock

Have you ever wandered around a shop, seen something you liked, had a look at the price tag, and felt a shock at the high price? That’s how the person who signs the cheques in your organisation starts to feel every month after your first build in, or migration into, Azure. Before an organisation starts up in The Cloud, their fears are about security, compliance, migration deadlines, and so on. But after the first system goes live, the attention of the business is on the cost of The Cloud.

There is a myth that The Cloud is cheaper. Sometimes, yes, but not always – large virtual machines and wasteful resource sizing stand out. In CapEx-based IT, you paid for hardware and software. Someone else in the business paid for all the other stuff that made the data centre or computer room possible. In The Cloud, the cost includes all those aspects, and you get the bill every month. This is why cost management becomes a number-one concern for Cloud customers.

I have seen the effect of sticker shock on an organisation. In one project that I was a lead on, the CTO questioned every cost soon after the bills started to arrive. The organisation was a non-profit and cash flow was intended for their needy clients. Every time something was needed to enable one of their workloads, the justification for the deployment was questioned.

In other scenarios, the necessary (for agility) self-service capability of The Cloud provides developers and operators with a spigot through which cash can leave the organisation. When I started working with Azure, I heard a story about a developer who wrote a bad Azure SQL query and left it to run over a long weekend. The IT department came in the following week to find three years of Azure budget spent in a few days.

Dev, Ops, And … Fin?

You’ve probably heard of DevOps, the mythical bringing together of eternal enemies, Developers and IT Operations. DevOps hopes to break down barriers and enable aligned agility that provides services to the business.

Now that we’ve all been successful at implementing DevOps (right?!?!) it’s time to forge those polar IT opposites with the folks in finance.

Finance needs to play a role:

  • Early in your cloud journey
  • During the lifecycle of each workload

The Cloud Journey

The process that an organisation goes through while adopting The Cloud is often called a cloud journey. Mid-large organisations should look at the Cloud Adoption Framework (a CAF exists for Azure, AWS, and Google Cloud) because of the structure that it provides to the cloud journey. Smaller organisations should take some inspiration from CAF, even though a lot of the concepts will be irrelevant at their scale.

A critical early step in a CAF is to work with the people that will be signing the cheques. The accountants need to learn:

  • Developers and operators will be free to deploy anything they want, within the constraints of organisation-implemented governance.
  • How the billing process is going to change to a monthly schedule based on past usage.
  • About the possibilities of monitoring and alerting on consumption.

The Lifecycle of Each Workload

In DevOps, Developers and Operators work together to design & operate the code and resources together, instead of the historical approach where square code is written and Ops try to squeeze it into round resources.

When we bring Finance into the equation, the prediction of cost and the management of cost should be designed with the workload and not be something that is tacked on later.

Architects must be aware that resource selection impacts costs. Picking a vCore Azure SQL database instead of a lower-cost DTU SKU “just to be safe” is safe from a technical perspective but can cost 1000% more. Designing an elastic army of ants, based on small compute instances that auto-scale while maintaining state, provides a system where the cost is a predictable percentage of revenue. Reserved instances and Azure Hybrid Benefit licensing can reduce the costs of several resource types (not just virtual machines) over one-to-three years.

A method of associating resources with workloads/projects/billing codes must be created. The typical method that is discussed is tagging – which, despite all the talk of Azure Policy, requires a human to apply values to the tags that may be deployed automatically. I prefer a different approach: using one subscription per workload and letting that natural billing boundary do the work for me.

The tool for managing cost is perfectly named: Azure Cost Management. Cost Management is not perfect – I seriously dislike how some features do not work with CSP offers – but the core features are essential. You can select any scope (tag, subscription, or resource group) and get an analysis of costs for that scope in many different dimensions, including a prediction for the final cost at the end of the billing period. A feature that I think is essential for each workload is a budget. You can use cost analysis to determine what the spend of a workload will be, and then create alerts that will trigger based on current spending and forecasted spending. Those alerts should be sent to the folks that own the workload and pay the bill – enabling them to crack some fingers should the agreed budget be broken.
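A budget with alert thresholds can be deployed as code too. The sketch below uses the azurerm_consumption_budget_subscription resource; the amount, date, and email address are placeholder values:

```hcl
resource "azurerm_consumption_budget_subscription" "workload" {
  name            = "workload-monthly-budget"
  subscription_id = data.azurerm_subscription.current.id

  amount     = 1000 # placeholder monthly amount
  time_grain = "Monthly"

  time_period {
    start_date = "2023-01-01T00:00:00Z"
  }

  # Alert the workload owners when forecasted spend will exceed 80% of the budget.
  notification {
    enabled        = true
    operator       = "GreaterThan"
    threshold      = 80
    threshold_type = "Forecasted"
    contact_emails = ["workload-owner@example.com"]
  }
}
```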


Wrap Up

Once the decision to go to The Cloud is made, there is a rush to get things moving. Afterward, there’s a panic when the bills start to come in. Sticker shock is not a necessity. Take the time to put cost management into the process. Bring the finance people and the workload owners into the process and educate them. Learn how resources are billed for and make careful resource and SKU selections. Use Azure Cost Management to track costs and generate alerts when budgets will be exceeded. You can take control, but control must be created.

Connecting To A Third-Party Network From Azure Using NAT

An unfortunately common scenario is where you must create a site-to-site network connection with a third-party network from your Azure network using NAT. This post will explain a few solutions.

The Scenario

There are those out there who think that every implementation in The Cloud is 100% under your control and is cloud-ready. But sometimes, you must fit in with other people’s designs and you can’t use cool integrations such as Private Link or APIs. Sometimes you need to connect your network to a third party and they dictate the terms of the connection.

The connection is typically a site-to-site connection, usually VPN but I have seen ExpressRoute used. VPN means there are messy bits – you can control that with your own on-premises firewalls but you have no control over the VPN configuration of an externally owned firewall.

Site-to-site connections with a service provider mean that there could be IP address overlap. The only way to handle that is to use NAT – and that is not always possible natively in the platform, or it’s really badly documented.

Solution 1: On-Premises Relay

In this scenario, the third-party will make a connection to your on-premises network. NAT is implemented on the on-premises network to translate your private Azure address to some “public address” (it is routed only over the private connections).

The connection between on-premises and Azure could be VPN or ExpressRoute.

This design is useful in two situations:

  1. You are using ExpressRoute – the ExpressRoute Gateway does not offer NAT functionality.
  2. The third-party insists that you use some kind of VPN configuration that is not supported in Azure, for example, GRE.

The downside of this design is that there might be additional latency between the third-party and your Azure network.

Solution 2: AWS Relay

Oh – did this post by an Azure MVP just mention AWS? Sure – there is a time and a place for everything.

This solution is similar to the on-premises relay solution but it replaces on-premises with AWS. This can be useful where:

  1. You want to minimise on-premises resources. AWS does support GRE so a VPN connection to a third-party that requires GRE can be handled in this way.
  2. You can use an AWS region that is close to either the third-party and/or your Azure region and minimise latency.

Note that the connection from AWS or Azure could be either VPN or ExpressRoute (with an ISP that supports Azure ExpressRoute and AWS Direct Connect).

The downside is that there is still “more stuff” and a requirement for skills that you might not have. On the plus side, it offers compatibility with reduced latency.

Solution 3: Azure Relay

In this design, the third-party makes a connection to your Azure network(s) using ExpressRoute. But as usual, you must implement a NAT rule. The ExpressRoute Gateway cannot natively implement NAT, so you must deploy “an appliance” (an NVA or a Linux VM with NAT tables).

In the above design, there is a route table associated with the GatewaySubnet of the ExpressRoute Gateway. A user-defined route, with a prefix of the translated address range, will forward to the appliance as the next hop. A user-defined route on the VM’s subnet, with a prefix of the third-party network(s), will use the appliance as the next hop.
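The GatewaySubnet side of that routing could be sketched in Terraform along these lines; the names, prefixes, and appliance IP are placeholders:

```hcl
resource "azurerm_route_table" "gateway" {
  name                = "rt-gatewaysubnet"
  location            = azurerm_resource_group.network.location
  resource_group_name = azurerm_resource_group.network.name

  # Send traffic destined for the translated ("public") range to the NVA.
  route {
    name                   = "to-nat-appliance"
    address_prefix         = "192.168.100.0/24" # placeholder translated range
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = "10.10.2.4" # the appliance's private IP
  }
}

resource "azurerm_subnet_route_table_association" "gateway" {
  subnet_id      = azurerm_subnet.gateway.id # the GatewaySubnet
  route_table_id = azurerm_route_table.gateway.id
}
```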

This design allows you to use ExpressRoute to connect to the third-party, but it also allows you to implement NAT.

Solution 4: VPN Gateway & NAT

Other than using some modern solution, such as authenticated API over HTTPS, this is probably “the best” scenario in terms of Azure resource simplicity.

The third-party connects to your Azure network using a site-to-site VPN. The connection is terminated in Azure using a VPN Gateway. The Azure VPN Gateway is capable of supporting NAT rules. Unfortunately, that’s where things begin to fall apart because of the documentation (quality and completeness).

This is a simple scenario where the third-party needs access to an IP address (a VM or otherwise) hosted in your Azure network. The internal address of your Azure resource must be translated to a different external IP address.

As long as your VPN Gateway is VpnGw2/VpnGw2Az or higher, then you can create NAT rules in the Gateway. The scenario that I have described requires a confusingly-named egress NAT rule – you are translating an internal IP address(es) to an external IP address(es) to abstract the internal address(es) for ingress traffic. An ingress NAT rule translates an external IP address(es) to an internal address(es) to abstract the external address(es) for ingress traffic.

The Terraform code for my scenario is shown below: I want to make an internal address in my Azure network available externally, on TCP 443, via a translated address:
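A minimal sketch of such a rule, using the azurerm_virtual_network_gateway_nat_rule resource; the names and addresses below are placeholders, not the ones from my deployment:

```hcl
resource "azurerm_virtual_network_gateway_nat_rule" "egress" {
  name                       = "egress-nat-rule"
  resource_group_name        = azurerm_resource_group.network.name
  virtual_network_gateway_id = azurerm_virtual_network_gateway.vpn.id

  mode = "EgressSnat" # translate our internal address for the third-party
  type = "Static"

  internal_mapping {
    address_space = "10.10.1.4/32" # placeholder internal address
  }

  external_mapping {
    address_space = "192.168.21.4/32" # placeholder translated address
  }
}
```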

Once you have the NAT rule, you will associate it with the Connection resource for the VPN.
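That association is done with the egress_nat_rule_ids argument on the connection resource. A sketch, with hypothetical resource names:

```hcl
resource "azurerm_virtual_network_gateway_connection" "third_party" {
  name                       = "cn-third-party"
  location                   = azurerm_resource_group.network.location
  resource_group_name        = azurerm_resource_group.network.name
  type                       = "IPsec"
  virtual_network_gateway_id = azurerm_virtual_network_gateway.vpn.id
  local_network_gateway_id   = azurerm_local_network_gateway.third_party.id
  shared_key                 = var.third_party_shared_key

  # Attach the NAT rule to this connection only.
  egress_nat_rule_ids = [azurerm_virtual_network_gateway_nat_rule.egress.id]
}
```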

And that’s it – the resource will be available on TCP 443, via the translated address, to the third-party. No other connection can use this NAT rule unless it is associated with it.

Solution 5 – NVA & NAT

This is almost the same as the previous example, but an NVA is used instead of the Azure VPN Gateway, maybe because you like their P2S VPN solution or you are using SD-WAN. The NAT rules are implemented in the NVA.

Get The Diagnostics Logs Names For An Azure Resource

This post will show you how to get the ARM (also for Bicep, Terraform, etc) names of the diagnostics logs for an Azure resource.


When you are deploying Azure resources as code, you might need to enable diagnostics logs. This might require you to know the name of each log. Here’s the issue: the names of the logs in the Azure Portal are usually different from the names that are used in the code. Sure, they’ll remove the spaces and use camel-case, but that’s predictable. Often, the logs have completely different names.

Sometimes the names are documented – thank you App Services! Sometimes you cannot find the log names – boo Azure SQL!


The tip that I’m going to share is useful – this is the second time in a few weeks that I’ve used this approach.

If you know what you are looking for, diagnostics logs in this case, then do a search online for something like “Azure Diagnostics Settings REST API”. This will bring you to a Microsoft page that shares different methods for the API.

I wanted to see what the log names are for an Azure SQL Database. So I manually created the diagnostic setting. After that, I grabbed the resource ID of the Azure SQL Database.

Then I did the above search, clicked the Get method, and clicked the Try It button. Put the name of the diagnostic setting (that you created) in name, put the resource ID of the Azure SQL Database in resourceID, and then click Run. A second later, the ARM JSON for the diagnostic setting is presented below, including all the diagnostics log names.
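Once you have the category names, they drop straight into your code. A sketch using azurerm_monitor_diagnostic_setting with two real Azure SQL Database log categories; the resource references are hypothetical:

```hcl
resource "azurerm_monitor_diagnostic_setting" "sqldb" {
  name                       = "diag-sqldb"
  target_resource_id         = azurerm_mssql_database.example.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id

  # Category names as returned by the REST API, not the portal display names.
  enabled_log {
    category = "Errors"
  }

  enabled_log {
    category = "Deadlocks"
  }

  metric {
    category = "Basic"
  }
}
```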

Importing Azure Resource To Terraform State After Timed Out Pipeline

This article will explain how to simply import a resource that was successfully deployed by Terraform from a GitHub action or DevOps pipeline that timed out into your state file.


I’m working a lot with Terraform these days. ARM doesn’t scale, and while I’d prefer to use a native toolset such as Bicep, it is just a prettier ARM and has most of the same issues – scale (big architectures) and support (Azure AD = helloooo!).

The Scenario

You are writing Terraform to deploy resources in Microsoft Azure. That code is run by a DevOps pipeline or a GitHub action. You add a resource, such as an App Service Environment v3 or an Azure SQL Managed Instance, that can take hours to deploy. A DevOps pipeline will time out after 1 hour.

As expected, the pipeline times out but the resource deploys. You try to run the pipeline again, but the pipeline will fail because the deployed resources don’t exist in the state file. Ouch! You do your due diligence and search, and you find nothing but noise, and that does not help you. That was my experience, anyway!

State File Lock

I use blob storage in secured Azure Storage Accounts to store state files. The timed-out pipeline locked the state file using a blob lease. Browse to the container, select the blob and release the lock.

The Fix

The fix is actually pretty simple. You’ve already done most of the work – defining the resource.

In my example, I have a resource definition that goes something like this:

resource "azurerm_app_service_environment_v3" "ase" {
  # Abbreviated - the referenced resources come from the wider configuration.
  name                = "my-ase"
  resource_group_name = azurerm_resource_group.ase.name
  subnet_id           = azurerm_subnet.ase.id
}

I made a copy of my pipeline file. Then I modified my pipeline yaml file so it would run a terraform import command instead of a terraform apply.

terraform import azurerm_app_service_environment_v3.ase /subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Web/hostingEnvironments/<resource name>

I used the Get-AzAppServiceEnvironment cmdlet in Cloud Shell to retrieve the resource ID of the ASE because it wasn’t shared in the Azure Portal.

I re-ran the pipeline and the state file was updated with the resource. Then I reset the pipeline file back to the way it was (back to terraform apply), and the pipeline ran clean.

Cannot Remove Subnet Because of App Service VNet Integration

This post explains how to unlock a subnet when you have deleted an App Service/Function App with Regional VNet Integration.



You have an Azure App Service or Function App that has Regional VNet Integration enabled to connect the PaaS resource to a subnet. You are doing some cleanup or redeployment work and want to remove the PaaS resources and the subnet. You delete the PaaS resources and then find that you cannot:

  • Delete the subnet
  • Disable subnet integration for Microsoft.Web/serverFarms

The error looks something like this:

Failed to delete resource group workload-network: Deletion of resource group ‘workload-network’ failed as resources with identifiers ‘Microsoft.Network/virtualNetworks/workload-network-vnet’ could not be deleted. The provisioning state of the resource group will be rolled back. The tracking Id is ‘iusyfiusdyfs’. Please check audit logs for more details. (Code: ResourceGroupDeletionBlocked) Subnet IntegrationSubnet is in use by /subscriptions/sdfsdf-sdfsdfsd-sdfsdfsdfsd-sdfs/resourceGroups/workload-network/providers/Microsoft.Network/virtualNetworks/workload-network-vnet/subnets/IntegrationSubnet/serviceAssociationLinks/AppServiceLink and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See (Code: InUseSubnetCannotBeDeleted, Target: /subscriptions/sdfsdf-sdfsdfsd-sdfsdfsdfsd-sdfs/resourceGroups/workload-network/providers/Microsoft.Network/virtualNetworks/workload-network-vnet)

It turns out that deleting the PaaS resource leaves you in a situation where you cannot disable the integration. You have lost permission to access the platform mechanism.

In my situation, Regional VNet integration was not cleanly disabling so I did the next logical thing (in a non-production environment): started to delete resources, which I could quickly redeploy using IaC … but I couldn’t because the subnet was effectively locked.


There are two solutions:

  1. Call support.
  2. Recreate the PaaS resources.

Option 1 is a last resort because that’s days of pain – being frankly honest. That leaves you with Option 2. Recreate the PaaS resources exactly as they were before with Regional VNet Integration Enabled. Then disable the integration (go into the PaaS resource, go into Networking, and disconnect the integration).

That process cleans things up and now you can disable the Microsoft.Web/serverFarms delegation and/or delete the subnet.
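For reference, the delegation in question looks like this in Terraform; the subnet name, prefix, and resource references are placeholders:

```hcl
resource "azurerm_subnet" "integration" {
  name                 = "IntegrationSubnet"
  resource_group_name  = azurerm_resource_group.network.name
  virtual_network_name = azurerm_virtual_network.workload.name
  address_prefixes     = ["10.10.3.0/24"]

  # The delegation that Regional VNet Integration requires - this is what
  # cannot be removed while the service association link is orphaned.
  delegation {
    name = "appservice"
    service_delegation {
      name = "Microsoft.Web/serverFarms"
    }
  }
}
```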

SignalR Disconnects On Azure Application Gateway

I will explain a recent situation where an application that uses SignalR/WebSockets disconnected when routed through the Azure Application Gateway during listener configuration changes.


My team and I are working with a client, migrating their on-premises workloads to Microsoft Azure. Some of the workloads are built using SignalR, which provides optimal communication for in-sequence data over WebSockets. The users of the applications expect a reliable stream of data over a long period of time.

Our design features an Azure Application Gateway with the Web Application Firewall. The public DNS records for the applications point to the AppGw, which inspects the traffic and proxies to the backend pools which host the applications.

As one can imagine, there has been a lot of testing, debugging, and improvement. That means there have been many configuration changes to the application configurations in the AppGw: listeners, HTTP settings, and backend pools.

The Problem

We had stable connections from test clients to the applications, but the developers saw something. Every now and then, all clients would lose their connection. The developers observed the times and noticed a correlation with when we ran our DevOps pipelines to apply changes. In short: every time we updated the AppGw, the clients were disconnected.

I reached out to Microsoft (thank you to Ashutosh who was very helpful!). Ashutosh dug into the platform logs and explained the issue to me.

The WebSocket sessions were handled by the “data plane” of the AppGw resource. Every time a new configuration is applied, a new data plane is created. The old data plane is maintained for a short period of time – 30 seconds by default – before being dropped. That means that when we applied a change, the handling for existing WebSocket connections was dropped 30 seconds later.


The timeout for the data plane can be adjusted up to 1 hour (3600 seconds) from the default of 30 seconds. This would not solve our issue – 1 minute or 1 hour just delays the disconnect instead of avoiding it.

The solution we have come up with is to isolate the “production” workloads into a stable WAF while unstable workloads are migrated to a “pre-staging” WAF. Any changes to the “production” WAF must be done out of hours unless there is an emergency that demands a change and we acknowledge that disconnects will happen.

The Azure IaaS Book Of News – December 2022

Here’s all the news that I thought was interesting for Ops and Security folks working with Azure IaaS from December 2022.

Azure VMware Solution

  • Azure VMware Solution Advanced Monitoring: This solution add-on deploys a virtual machine running Telegraf in Azure with a managed identity that has contributor and metrics publisher access to the Azure VMware Solution private cloud object. Telegraf then connects to vCenter Server and NSX-T Manager via API and provides responses to API metric requests from the Azure portal.

Azure Kubernetes Service

  • Microsoft and Isovalent partner to bring next generation eBPF dataplane for cloud-native applications in Azure: Microsoft announces the strategic partnership with Isovalent to bring Cilium’s eBPF-powered networking data plane and enhanced features for Kubernetes and cloud-native infrastructure. Azure Kubernetes Services (AKS) will now be deployed with Cilium open-source data plane and natively integrated with Azure Container Networking Interface (CNI). Microsoft and Isovalent will enable Isovalent Cilium Enterprise as a Kubernetes container app offering in the Azure Container Marketplace. This will provide a one-click deployment solution to Azure Kubernetes clusters with Isovalent Cilium Enterprise advanced features.
  • Generally Available: Kubernetes 1.25 support in AKS: AKS support for Kubernetes release 1.25 is now generally available. Kubernetes 1.25 delivers 40 enhancements. This release includes new changes such as the removal of PodSecurityPolicy.

Virtual Machines

  • Public preview: New Memory Optimized VM sizes – E96bsv5 and E112ibsv5: The new E96bsv5 and E112ibsv5 VM sizes, part of the Azure Ebsv5 VM series, offer the highest remote storage performance of any Azure VMs to date. The new VMs can now achieve even higher VM-to-disk throughput and IOPS performance with up to 8,000 MBps and 260,000 IOPS.
  • Generally Available: Azure Dedicated Host – Restart: Azure Dedicated Host gives you more control over the hosts you deployed by giving you the option to restart any host. When undergoing a restart, the host and its associated VMs will restart while staying on the same underlying physical hardware.


Cost Management

  • Public preview: Use tag inheritance for cost management: You no longer need to ensure that every resource is tagged or rely on resource providers to support and emit tags in their billing pipeline for cost management. Aidan’s Note – Restricted to EA/MCA … which unreasonably sucks. The latest example of “cost management” excluding other customers.


Azure Site Recovery

  • Public Preview: Azure Site Recovery Higher Churn Support: Azure Site Recovery (ASR) has increased its data churn limit by approximately 2.5x to 50 MB/s per disk. With this, you can configure disaster recovery (DR) for Azure VMs having data churn up to 100 MB/s. This helps you to enable DR for more IO intensive workloads.



Your Azure Migration Project Is Doomed To Fail

That heading is a positive way to get your holiday spirit going, eh? I’m afraid I’m going to be your Festive Tech Calendar 2022 Grinch and rain doom and gloom on you for the next while. But hang on – The Grinch story ends with a happy-ever-after.

In this article, I want to explain why a typical “Azure migration” project is doomed to fail to deliver what the business ultimately expects: an agile environment that delivers on the promises of The Cloud. The customers that are affected are typically larger: those with IT operations and one or more development teams that create software for the organisation. If you don’t fall into that market, then keep reading because this might still be interesting to you.

My experience is Microsoft-focused, but the tale that I will tell is just as applicable to AWS, GCP, and other alternatives to Microsoft Azure.

Migration Versus Adoption

I have been working with mid-large organisations entering The Cloud for the last 4 years. The vast majority of these projects start with the same objectives:

  1. Migrate existing workloads into Azure, usually targeting migration to virtual machines because of timelines and complexities with legacy or vendor-supplied software.
  2. Build new workloads and, maybe, rearchitect some migrated workloads.

Those two objectives are two different projects:

  1. Migration: Get started in Azure and migrate the old stuff
  2. Adoption: Start to use Azure to build new things

Getting Started

The Azure journey typically starts in the IT Operations department. The manager that runs infrastructure architecture/operations will kick off the project either in-house or with a consulting firm. Depending on who is running things, there will be some kind of process to figure out how to prepare the landing zone(s) (the infrastructure) in Azure, some form of workload assessment will be performed, and existing workloads will be migrated to The Cloud. That project, absent (typical organisational) issues, should end well:

  • The Azure platform is secure and stable
  • The old workloads are running well in their landing zone(s)

The Migration project is over, there is a party, people are happy, and things are … great?

Adoption Failure

Adoption starts after Migration ends. Operations has built their field of dreams and now the developers will come – cars laden with Visual Studio, containers, code, and company aspirations will be lining up through the corn fields of The Cloud, eager to get going … or not. In reality, if Migration has happened like I wrote above, things have already gone wrong and it is already probably too late to change the path forward. That’s been my experience. It might take 1 or 2 years, but eventually, there will be a meeting where someone will ask: why do the developers not want to use Azure?

The first case of this that I saw happened with a large multinational. Two years after we delivered a successful Azure deployment – the customer was very happy at the time – we were approached by another department in the company: the developers. They loved what we built in Azure, but they hated how it was run by IT. IT had taken all their ITIL processes, ticketing systems, and centralised control that were used for on-premises and applied them to what we delivered in Azure. The developers were keen to use agile techniques, tools, and resource types in Azure, but found that the legacy operations prevented this from working. They knew who built the Azure environment, so they came to us and asked us to build a shadow IT environment in Azure just for them, outside of the control of IT Operations!

That sounds crazy, right? But this is not a unique story. We’ve seen it again and again. I hear stories of “can we create custom roles to lock down what developers can do?” and then learn that adoption is failing. Or “developers find it easier to work elsewhere”. What has gone wrong?

Take a Drink!

Adoption is when the developers adopt what is delivered in Azure and start to build and innovate. They expect to find a self-service environment that enables cool things like GitOps, Kubernetes, and all those nerdy cool things that will make their job easier. But their adoption will not happen because they do not get that. They get a locked-down environment where permissions are restricted, access to platform resources might be banned because Operations only know & support Windows, and the ability to create firewall rules for Internet access or to get a VM is hidden behind a ticketing system with a 5-day turnaround.

Some of us in the community joke about taking a drink every time some Microsoft presenter says a certain phrase at a conference. One of those phrases in the early days of Microsoft Azure was “cloud is not where you work, it is how you work”. That phrase started reappearing in Microsoft events earlier this year. And it’s a phrase that I have been using regularly for nearly 18 months.

The problem that these organisations have is that they didn’t understand that how IT is delivered needs to change in order to provide the organisation with the agile platform that the Cloud can be. The Cloud is not magically agile; it requires that you:

  • Use the right tools & technology, which involves skills acquisition/development
  • Reorganise your people
  • Implement new processes

Let’s turn that into the Cloud Equation:

Cloud = Process(People + Tools)

Aidan Finn, failed mathematician and self-admitted cloud Grinch
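That joke equation carries a real point, and a trivial sketch makes it literal (the numbers here are invented purely for illustration): process acts as a multiplier, so leaving legacy processes in place wipes out even a big investment in people and tools.

```python
# A tongue-in-cheek rendering of the Cloud Equation: Cloud = Process(People + Tools).
# Process is a multiplier; people and tools merely add. All numbers are made up.

def cloud_value(people: float, tools: float, process_multiplier: float) -> float:
    """Cloud = Process(People + Tools)."""
    return process_multiplier * (people + tools)

# Big spend on skills and tooling, but ITIL-era processes left untouched:
print(cloud_value(people=80, tools=90, process_multiplier=0.25))  # 42.5

# Modest spend, but processes redesigned for The Cloud:
print(cloud_value(people=40, tools=50, process_multiplier=1.0))   # 90.0
```

Skills add; process multiplies – which is why the skills shortage turns out to be the easiest part of the equation to fix.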

With the current skills shortage, one might think that the tools/tech/skills part is the hard part of the Cloud Equation. Yes, developing or acquiring skills is not easy, but it is actually the easiest part of the Cloud Equation.

Our business has had a split that has been there since the days of the first computer. It is worse than Linux versus Windows. It is worse than Microsoft versus Apple. It is worse than ex-Twitter employees versus Elon. The split that I talk of is developers (Devs) versus operations (Ops). I’ve been there. I have been the Ba*tard Operator From Hell – I took great joy in making the life of a developer miserable so that “my” systems were stable and secure (what I mistakenly viewed as my priority). I know that I was not unique – I see it still happens in lots of organisations.

People must be reorganised to work together, plan together, build and secure together, to improve workloads together … together. Enabling that “together” requires that old processes are cast aside with the physical data centre, and new processes that are suitable for The Cloud are used.

Why can’t all this be done after Migration has been completed? I can tell you that for these organisations, the failure happened a long time before Migration started.

The Root Cause

Think back to the earlier part of this article: who initiated the cloud journey? It was a manager in IT Operations. Who owns the old processes? The manager in IT Operations. Who is not responsible for the developers? The manager in IT Operations. Who is not empowered, and maybe not motivated, to make the required Process(People + Tools) changes that can enable an agile Cloud? The manager in IT Operations.

In my experience, a cloud journey project will fail at the Adoption phase if the entire journey does not start at the board level. Someone in the C-suite needs to kick that project off – and remain involved.

A Solution

Microsoft Cloud Adoption Framework 2022

The Microsoft Cloud Adoption Framework (AKA the CAF; AWS and GCP have their own versions) recognises that a Cloud Journey must start at the top. The first thing that should be done is to sit down with organisation leadership to discuss and document business motivations. The conversation(s) should produce:

  1. A documented cloud strategy plan
  2. A clear communication to the business: this is how things will be, with the implied “or else”

Only the CEO or CTO (or equivalent) can give the order that Operations and Development must change how things are done: DevOps. Ideally, Security is put into that mix, considering all the threats out there and that Security should be a business priority: DevSecOps. Only when all this is documented and clearly communicated should the rest of the process start.

The Plan Phase of the CAF has some important deliverables:

  • Initial organisation alignment: Organise staff with their new roles in DevSecOps – The People part of the Cloud Equation.
  • Skills Readiness: Create the skills for the Tools part of the Cloud Equation.

Engineering starts in the Ready phase; this is when things get turned on and the techies (should) start to enjoy the project. The very first step of the Ready phase in the recently updated CAF is the Operating Model; this is a slight change that recognises the need for process change. This is when the final part of the Cloud Equation can be implemented: the new processes multiply the effect of tools/skills and organisation. With that in place, we should be ready, as a unified organisation, to build things for the organisation – not for IT Operations or the Developers – with agility and security.

I have skipped over all the governance, management, compliance, and finance stuff that should also happen to highlight the necessary changes that are usually missing.

Whether the teams are real or virtual, in the org chart or not, each workload will have a team made up of operations, security, and development skills. That team will share a single backlog, starting with a mission statement or epic containing a user story that says something like “the company requires something that does XYZ”. The team will work together to build out that backlog with features and tasks (swap in whatever agile terminology you want) as the project progresses, planning tasks across all the required skill sets in the team and with external providers. This working together means that there are no surprises: Devs aren’t “dumping” code on IT, IT isn’t locking down Devs, and Security is built-in and not a hope-for-the-best afterthought.

Final Thoughts

The problem I have described is not unique. I have seen it repeatedly; it has happened, and is happening, all around the world. The fix is there. I am not saying that it is easy. Getting access to the C-level is hard for various reasons. Changing momentum that has been there for decades is a monumental mission. Even when everyone is aligned, the change will be hard and take time to get right, but it will be rewarding. Bon voyage!

Ignite 2022 IaaS Blog Post Of News

This post is my alternative to the Microsoft Ignite “Book of News”.

You’ve probably heard of or even read the Ignite Book of News. This is a PDF that is sent out to those under NDA (media, MVPs, and so on) before Microsoft Ignite starts. After the kickoff, the document is shared publicly. The Book of News is heavily shaped by Marketing, focusing on highlights and the “message” of the conference. The Book of News is not complete, despite all claims by those who are poorly informed – over the years, I’ve found countless announcements from sessions and product group blog posts that were not in the Book of News.

I’m taking part in an “Ignite After Party” to discuss the Book of News. The organiser has encouraged going “off book” so I’ve summarised all the IaaS stuff that I could find (and a little PaaS) – most of this stuff was not in the Book of News. Here you will find all the announcements in that space from Ignite and the time since then (I stopped at November 30th when I wrote this post).

Ignite News

App Services

Go available on App Service

We are happy to announce that App Service now supports apps targeting Go 1.18 and 1.19 across all public regions on Linux App Service Plans through the App Service Early Access feature. By introducing native support for Go on App Services, we are making one of the top 10 best loved web app development languages available for our developers.

In development: Larger SKUs for App Service Environment v3

New Isolated v2 SKUs of 64 GB/128 GB/256 GB provide compelling value to organizations that need a dedicated tenant to run their most sensitive and demanding applications. This is expected to be available in production in Q4 CY2022.

Public preview: Planned maintenance feature for App Service Environment v3

With planned maintenance notifications for App Service Environment v3, you can get a notification 15 days ahead of planned automatic maintenance and start the maintenance when it is convenient for you.


Announcing Jumpstart ArcBox for DataOps

ArcBox for DataOps is our road-tested automation, providing our customers a way to get hands-on with the Azure Arc-enabled SQL Managed Instance set of capabilities and features.

Announcing Jumpstart HCIBox

HCIBox is a turnkey solution that provides a complete sandbox for exploring Azure Stack HCI capabilities and hybrid cloud integration in a virtualized environment. HCIBox is designed to be completely self-contained within a single Azure subscription and resource group, which will make it easy for a user to get hands-on with Azure Stack HCI and Azure Arc technology without the need for physical hardware.


Announcing Landing Zone Accelerator for Azure Arc-enabled SQL Managed Instance

A proven set of guidance designed by subject matter experts across Microsoft to help customers create and implement the business and technology strategies necessary to succeed in the cloud, as well as a way to automate a fully deployed Azure Arc-enabled SQL Managed Instance environment, making implementation faster.


Announcing general availability of support for Azure availability zones in the host pool deployment

I am pleased to announce that you can now automatically distribute your session hosts across any number of availability zones.

New ways to optimize flexibility, improve security, and reduce costs with Azure Virtual Desktop

With the public preview of new integrations with Azure Active Directory, you can use single sign-on and passwordless authentication, leveraging FIDO2 standards and Windows Hello for Business to securely streamline the authentication experience for today’s remote and hybrid workforce.

Now in public preview, customers can use cloud storage to host FSLogix and modern Azure Active Directory authentication for their session hosts (more on that later).

Public preview for confidential virtual machine options for Azure Virtual Desktop is also available now—specifically for Windows 11 virtual machines—with Windows 10 support planned in the future.

Customers who require their information to remain on trusted private networks will have the option to use Private Link to enable access to their session hosts and workspaces over a private endpoint in their virtual network.

Cost Management

Optimize and maximize cloud investment with Azure savings plan for compute

Today, we are announcing Azure savings plan for compute. With this new pricing offer, customers will have an easy and flexible way to save up to 65%* on compute costs, compared to pay-as-you-go pricing, in addition to existing offers in market including Azure Hybrid Benefit and Reservations.


General availability: Azure Premium SSD v2 Disk Storage

In summary, Premium SSD v2 offers the following key benefits:

  • Ability to increase disk storage capacity in 1 GiB increments.
  • The capability to separately provision IOPS, throughput, and disk storage capacity.
  • Consistent sub-millisecond latency.
  • Easier maintenance with scaling performance up and down without downtime.
  • Up to 64TiBs, 80,000 IOPS and 1200 MB/s on a single disk.

Public preview: Azure Elastic SAN

With Elastic SAN, you can deploy, manage, and host workloads on Azure with an end-to-end experience similar to on-premises SAN. The solution also enables bulk provisioning of block storage that can achieve massive scale, up to millions of IOPS, double-digit GB/s of throughput, and low single-digit millisecond latencies with built-in resiliency to minimize downtime.


Generally available: Azure Automanage for Azure Virtual Machines and Arc-enabled servers

Azure Automanage is a service that automates configuration of virtual machines (VMs) to best-practice Azure services, as well as continuous security and management operations across the entire lifecycle of VMs in Azure or hybrid environments enabled through Azure Arc. This allows you to save time, reduce risk, and improve workload uptime by automating day-to-day configuration and management tasks– all with point-and-click simplicity, at scale.

Generally available: Azure Monitor agent support for Windows clients

The Azure Monitor agent and data collection rules now support Windows 10 and 11 client devices via the new Windows MSI installer. Extend the use of the same agent for telemetry and security management (using Sentinel) across your service and device landscape.

Generally available: Azure Monitor agent migration tools

Per earlier communication, you must migrate from log analytics agent (MMA or OMS agents) to this agent before August 2024. You can use agent migration tools now generally available to make this process easier for you.

Public preview: Azure Monitor Logs – create granular level RBAC for custom tables

The Log Analytics product team added two additional capabilities to enable workspace admins to manage more granular data access, supporting read permission at the table level for both Azure tables and custom tables.

Cost-effective solution for high-volume verbose logs

Basic Logs is a new flavor of logs that enables a lower-cost collection of high-volume verbose logs that you use for debugging and troubleshooting, but not for analytics and alerts. This data, which might have been historically stored outside of Azure Monitor Logs, can now be available inside your Log Analytics workspace, enabling one solution for all your log data.

Low-cost long-term storage of your log data

Log Archive is an in-place solution to store your data for long-term retention of up to seven years at a cost-effective price point. This lets you store all your data in Azure Monitor Logs, without having to manage an external data store for archival purposes, and query or import data in and out of Azure Monitor Logs. You can access archived data by running a search job or restoring it for a limited time for investigation, as detailed below. 

Search through large volumes of log data

A search job can run from a few minutes to hours, scanning log data and fetching the relevant records into a new persistent search job results table. The search job results table supports the full set of analytics capabilities to enable further analysis and investigation of these records.

Investigate archived logs

Restore is another tool for investigating your archived data. Unlike the search job, which accesses data based on specific criteria, restore makes a given time range of the data in a table available for high-performance queries. Restore is a powerful operation, with a relatively high cost, so it should be used in extreme cases when you need direct access to your archived data with the full interactive range of analytics capabilities.

Generally available: Windows Admin Center for Azure Virtual Machines

Windows Admin Center lets you manage the Windows Server Operating System of your Azure Virtual Machines, natively in the Azure Portal. You can perform maintenance and troubleshooting tasks such as managing your files, viewing your events, monitoring your performance, getting an in-browser RDP and PowerShell session, and much more, all within Azure.

Set up alerts faster with our new and simplified alerting experience (in preview)

Recommended alert rules provides customers with an easy way to enable a set of best practice alert rules on their Azure resources. This feature, which previously supported only virtual machines, is now being extended to also support AKS and Log Analytics Workspace resources.

Azure VMware Solution

Public preview: Customer-managed keys for Azure VMware Solution

Customer-managed keys (CMK) for Azure VMware Solution (AVS) provides you with control over your encrypted vSAN data on Azure VMware Solution. With this feature, you can use Azure Key Vault to generate customer-managed keys as well as centralize and streamline the key management process.

Public preview: Stretched clusters for Azure VMware Solution

Stretched clusters provide 99.99% uptime for mission-critical applications. Stretched cluster benefits:

  • Improve application availability.
  • Provide a zero-recovery point objective (RPO) capability for enterprise applications without needing to redesign or deploy expensive disaster recovery (DR) solutions.
  • A private cloud with stretched clusters is designed to provide 99.99% availability due to its resilience to availability zone failures.
  • Enables you to focus on core application requirements and features, instead of infrastructure availability.


Generally available: Azure Hybrid Benefit for AKS and Azure Stack HCI

At Ignite, we are expanding Azure Hybrid Benefit to further reduce costs for on-premises and edge locations. Customers with Windows Server Software Assurance (SA) can use Azure Hybrid Benefit for Azure Kubernetes Service (AKS) and Azure Stack HCI to:

  • Run AKS on Windows Server and Azure Stack HCI at no additional cost in datacenter and edge locations. With this, you can deploy and manage containerized Linux and Windows applications from cloud to edge with a consistent, managed Kubernetes service. This applies to Windows Server Datacenter and Standard Software Assurance and Cloud Solution Provider (CSP) customers.
  • Use first-party Arc-enabled infrastructure, Azure Stack HCI, at no additional cost. Windows Server Datacenter Software Assurance customers can modernize their existing datacenter and edge infrastructure to run their VM and container-based applications on modern infrastructure with industry-leading price-performance and built-in connectivity to Azure.

Public preview: Azure Kubernetes Service hybrid deployment options

Azure Kubernetes Service (AKS) on Azure Stack HCI, Windows Server 2019, and Windows Server 2022 Datacenter can be provisioned from the Azure Portal/CLI. Additionally, AKS is now in public preview on Windows devices and Windows IoT for lightweight Kubernetes orchestration.

Generally available: 5,000 node scale in AKS

Azure Kubernetes Service is increasing the maximum node limit per cluster from 1,000 nodes to 5,000 nodes for customers using the uptime-SLA feature.

Generally available: Windows server 2022 host support in AKS

With this generally available feature, Windows Server 2022 is now supported on AKS. Among other improvements related to security, Windows Server 2022 also provides several platform improvements for Windows Containers and Kubernetes. Windows Server 2022 is available for Kubernetes v1.23 and higher.

Public preview: Kubernetes apps on Azure Marketplace

You can now browse the catalog of solutions specialized for Kubernetes platforms under Kubernetes apps offer in marketplace and select a solution for click through deployment to Azure Kubernetes Service (AKS) with automated Azure billing.

Public preview: Azure CNI Overlay mode in Azure Kubernetes Service

Azure CNI Overlay mode is a new CNI network plugin that allocates pod IPs from an overlay network space, rather than from the virtual network IP space.

General availability: AMD-based confidential VMs for Azure Kubernetes Service

With the general availability of confidential virtual machines featuring AMD 3rd Gen EPYC™ processors, with Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) security features, organizations get VMs with isolated, encrypted memory, and genuine confidentiality attestation rooted to the hardware.

Public preview: Rules for Azure Kubernetes Service and Log Analytic workspace resources

Enable a set of best practice alert rules on an unmonitored AKS and Log Analytics workspace resource with just a few clicks.

Public preview: Azure Monitor managed service for Prometheus

The new fully managed Prometheus compatible service from Azure Monitor delivers the best of what you like about the open-source ecosystem while automating complex tasks such as scaling, high-availability, and long-term data retention. It is available to use as a standalone service from Azure Monitor or as an integrated component of Container Insights and Azure Managed Grafana.

Generally available: ARM64 support in AKS

Announcing the general availability of ARM64 node pool support in AKS. ARM64 provides better price-performance due to its lower power utilization.


Public preview: IP Protection SKU for Azure DDoS Protection

Instead of enabling DDoS protection on a per virtual network basis, including all public IP resources associated with resources in those virtual networks, you now have the flexibility to enable DDoS protection on an individual public IP.

General availability: Azure DNS Private Resolver – hybrid name resolution and conditional forwarding

Azure DNS Private Resolver is a cloud-native, highly available, and DevOps-friendly service. It provides a simple, zero-maintenance, reliable, and secure DNS service to resolve and conditionally forward DNS queries from a virtual network, on-premises, and to other target DNS servers without the need to create and manage a custom DNS solution.

WordPress on Azure App Service supports Azure Front Door Integration

We are happy to announce the preview of WordPress on Azure App Service powered by Azure Front Door which enables faster page loads, enhanced security, and increased reliability for your global apps with no configuration or additional code required.

General availability: Custom network interface name configurations of private endpoints

This feature allows you to define your own name for the network interface that is deployed with the private endpoint at creation time.

General availability: Static IP configurations of private endpoints

This feature allows you to add customizations to your deployments. Leverage already reserved IP addresses and allocate them to your private endpoint without relying on the randomness of Azure’s dynamic IP allocation.

Public preview: ExpressRoute Traffic Collector

ExpressRoute Traffic Collector enables sampling of network flows sent over your ExpressRoute Direct circuits. Flow logs get sent to a Log Analytics workspace where you can create your own log queries for further analysis, or export the data to any visualization tool or SIEM (Security Information and Event Management) of your choice.

In development: Introducing ExpressRoute Metro

ExpressRoute Metro offers you the ability to create private connections via an ExpressRoute circuit with dual connections from a service provider (AT&T, Equinix, Verizon, etc.) or connecting directly with ExpressRoute Direct over dual 10 Gbps or 100 Gbps physical ports in two different Microsoft edge locations in a metropolitan area, offering higher redundancy and resiliency.

Virtual Machines

General availability: New Azure proximity placement groups feature

With the addition of the new optional parameter, intent, you can now specify the VM sizes intended to be part of a proximity placement group when it is created. An optional zone parameter can be used to specify where you want to create the proximity placement group. This capability allows the proximity placement group allocation scope (datacenter) to be optimally defined for the intended VM sizes, reducing deployment failures of compute resources due to capacity unavailability.

General availability: Confidential VM option for SQL Server on Azure Virtual Machines

With the confidential VM option for SQL Server on Azure Virtual Machines, you can now run your SQL Server workloads on the latest AMD-backed confidential virtual machines.

General availability: AMD confidential VM guest attestation

It lets you do the following:

  • Use the guest attestation feature to verify that a confidential VM is running on a hardware-based trusted execution environment (TEE) with security features (isolation, integrity, secure boot) enabled.
  • Allow application deployment decisions (whether to launch a sensitive workload) based on the hardware state returned by the library call.
  • Use remote attestation artifacts (token and claims) received from another system (on a confidential VM) to enable relying parties to gain trust to make transactions with the other system.
  • Receive recommendations and alerts of unhealthy confidential VMs in Microsoft Defender for Cloud.

Announcing the new Ebsv5 VM sizes offering 2X remote storage performance with NVMe-Public Preview

Today, we are announcing the Public Preview of two additional Virtual Machine (VM) sizes, E96bsv5 and E112ibsv5, to the Ebsv5 VM family. The two new sizes are developed with the NVMe protocol and provide exceptional remote storage performance offering up to 260,000 IOPS and 8,000 MBps throughput.

General availability: Azure Monitor predictive autoscale for Azure Virtual Machine Scale Sets

Predictive autoscale uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts overall CPU load to your virtual machine scale set based on your historical CPU usage patterns. By observing and learning from historical usage, it predicts the overall CPU load ensuring scale-out occurs in time to meet demand.
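The core idea – learn a cyclical usage pattern from history and scale out before the load arrives – can be sketched in a few lines of Python. This is a toy illustration of the concept only, not Azure Monitor's actual model; the function names and numbers are invented.

```python
# Toy sketch of predictive autoscale: forecast CPU for each hour of the day
# by averaging the same hour across previous days, then pre-provision capacity.
# This illustrates the concept only; it is not Azure Monitor's algorithm.
from statistics import mean

def forecast_by_hour(history):
    """history: iterable of (hour_of_day, total_cpu_percent) samples."""
    by_hour = {}
    for hour, cpu in history:
        by_hour.setdefault(hour, []).append(cpu)
    return {hour: mean(samples) for hour, samples in by_hour.items()}

def instances_needed(predicted_cpu, target_per_instance=50):
    """Scale out so the predicted load divides across instances below target."""
    return max(1, -(-round(predicted_cpu) // target_per_instance))  # ceiling division

# Two days of a cyclical workload: quiet at 03:00, busy at 09:00.
history = [(3, 10), (9, 160), (3, 12), (9, 180)]
prediction = forecast_by_hour(history)
print(instances_needed(prediction[9]))  # 4 instances before the 09:00 peak
print(instances_needed(prediction[3]))  # 1 instance is enough overnight
```

The point of the prediction is timing: reactive autoscale only adds instances after CPU is already high, while a forecast lets the scale-out complete before the recurring peak hits.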


Public preview: Microsoft Azure Deployment Environments

Azure Deployment Environments has entered public preview. Azure Deployment Environments help dev teams create and manage all types of environments throughout the application lifecycle with features such as:

  • On-demand environments enable developers to spin up environments with each feature branch to enable higher quality code reviews and ensure devs can view and test their changes in a prod-like environment.
  • Sandbox environments can be used as greenfield environments for experimentation and research.
  • CI/CD pipeline environments integrate with your CI/CD deployment pipeline to automatically create dev, test (regression, load, integration), staging and production environments at specified points in the development lifecycle.
  • Environment types enable dev infra and IT teams to create preconfigured mappings that automatically apply the right subscriptions, permissions and identities to environments deployed by developers based on their current stage of development.
  • Template catalogues housed in a code repo that can be accessed and edited by developers and IT admins to propagate best practices while maintaining security and governance.

Generally available: Azure Site Recovery update rollup 64 – October 2022

Modernized VMware to Azure DR is now generally available. Added support for:

  • Protecting physical machines using the modernized experience.
  • Enabling modernized experience with managed identity and private endpoint turned on.

Azure PowerShell Ignite 2022 announcements

  • General availability of Azure PowerShell modules version 9
  • Added 12 modules supporting new services and more than 500 cmdlets
  • With Az 9, we are providing an actionable error message that indicates why a cmdlet is not found
  • With Az Config, you can centrally configure Azure PowerShell settings

Active Directory Connector (ADC) for Arc-enabled SQL Managed Instance is now generally available!

Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). The Arc-enabled SQL Managed Instance uses an existing on-premises Active Directory (AD) domain for authentication.

Azure Backup

Public preview: Immutable vaults for Azure Backup

With immutable vaults, Azure Backup provides you an option to ensure that recovery points, once created, cannot be deleted before their intended expiry time.

Public preview: Multi-user authorization for Backup vaults

Multi-user authorization (MUA) for Backup adds an additional layer of protection for critical operations on your Backup vaults, providing greater security for your backups. To provide multi-user authorization, Backup uses a resource guard to ensure critical operations are performed with proper authorization.

Public preview: Enhanced soft delete for Azure Backup

With enhanced soft delete, you get the ability to make soft delete irreversible, which protects soft delete from being disabled by any malicious actors. Hence, enhanced soft delete provides better protection for your backups against various threats.

General availability: Zone-redundant storage support by Azure Backup

With the general availability of this feature, you have a broader set of redundancy or storage replication options to choose from for your backup data. Based on your data residency, data resiliency and total cost of ownership (TCO) requirements, you can select either locally redundant storage (LRS), zone-redundant storage (ZRS) or geo-redundant storage (GRS).

After Ignite – Up To November 30th

Cost Management

General availability: Azure savings plan for compute

The savings plan unlocks lower prices on select compute services when customers commit to spend a fixed hourly amount for one or three years. Choose whether to pay all up front or monthly at no extra cost.

General availability: Virtual Machine software reservations

You can now save on Virtual Machine software from third-party publishers by purchasing software reservations.


Generally available: Auto Extension upgrade for Arc enabled Servers

Automatic Extension upgrade is now generally available for Arc-enabled servers using eligible VM extensions. With this release, we are adding support for the Azure Portal, PowerShell, CLI, and automatic rollback of failed upgrades.


Visualize and monitor Azure & hybrid networks with Azure Network Watcher

Azure Network Watcher provides an entire suite of tools to visualize, monitor, diagnose, and troubleshoot network issues across Azure and Hybrid cloud environments.

Azure Virtual WAN simplifies networking needs

  • Multipool user group support preview
  • Secure hub routing intent preview
  • Hub routing preference (HRP) is generally available
  • Bypass next hop IP for workloads within a spoke VNet connected to the virtual WAN hub generally available
  • Border Gateway Protocol (BGP) Peering with a virtual hub is generally available
  • BGP dashboard is now generally available
  • Virtual Network Gateway VPN over ExpressRoute private peering (AZ and non-AZ regions) is generally available
  • Custom traffic selectors (portal)–generally available
  • High availability for Azure VPN client using secondary profile is generally available
  • ExpressRoute circuit with visibility of Virtual WAN connection
  • Fortinet SDWAN is generally available
  • Aruba EdgeConnect Enterprise SDWAN preview
  • Checkpoint NG Firewall preview

Generally available: Block domain fronting behavior on newly created customer resources

Beginning November 8, 2022, all newly created Azure Front Door, Azure Front Door (classic), or Azure CDN Standard from Microsoft (classic) resources will block any HTTP request that exhibits domain fronting behavior.

General availability: Default Rule Set 2.1 for Azure Web Application Firewall

Increase your security posture and reduce false positives with Default Rule Set 2.1, now generally available on Azure’s global Web Application Firewall running on Azure Front Door.

Evolving networking with a DPU-powered edge

SmartNICs or Data Processing Units (DPUs) bring an opportunity to double down on the benefits of a software-defined infrastructure without sacrificing the host resources needed by your line-of-business apps in your (virtual machines) VMs or containers. With a DPU, we can enable SR-IOV usage removing the host CPU consumption incurred by the synthetic datapath, alongside the SDN benefits.

Public preview: Azure Front Door zero downtime migration

You can use this feature to migrate Azure Front Door (classic) to Azure Front Door Standard and Premium with zero downtime.

Public preview: Azure Front Door integration with managed identities

Azure Front Door Standard and Premium supports enabling managed identities for Azure Front Door to access Azure Key Vault.

Public preview: Upgrade from Azure Front Door Standard to Premium tier

You can now use this feature to upgrade your Azure Front Door Standard profile to Premium tier without downtime.

General availability: Per Rule Actions on regional Web Application Firewall

Azure’s regional Web Application Firewall (WAF) with Application Gateway running the Bot Protection rule set and Core Rule Set (CRS) 3.2 or higher now supports setting actions on a rule-by-rule basis.

General availability: TLS 1.3 with Application Gateway

Start using the new policies with TLS 1.3 for your Azure Application Gateway to improve security and performance.
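As a hedged sketch, a predefined SSL policy that includes TLS 1.3 can be applied with the Azure CLI; the resource group and gateway names below are placeholders:

```shell
# Apply a predefined SSL policy that includes TLS 1.3 support
# (my-rg and my-appgw are placeholder names)
az network application-gateway ssl-policy set \
  --resource-group my-rg \
  --gateway-name my-appgw \
  --policy-type Predefined \
  --policy-name AppGwSslPolicy20220101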

Announcing new capabilities for Azure Firewall

  • New GA regions in Qatar Central, China East, and China North
  • IDPS Private IP ranges now generally available
  • Single-click upgrade/downgrade now in preview
  • Enhanced Threat Intelligence now in preview
  • Key Vault with zero internet exposure now in preview


Dapr v1.9.0 now available in the Dapr extension for AKS and Arc-enabled Kubernetes

The Dapr v1.9.0 release offers several new features, including pluggable components, resiliency metrics, and app health checks, as well as many fixes in the core runtime and components.

Generally available: Premium SSD v2 disks available on Azure Disk CSI driver

Premium SSD v2 support is now generally available on AKS.

Public preview: AKS image cleaner

You can now more easily remove unused and vulnerable images stored on AKS nodes.

Public preview: IPVS load balancer support in AKS

You can now use the IP Virtual Server (IPVS) load balancer with AKS, with configurable connection scheduling and TCP/UDP timeouts.

Public preview: Azure CNI Powered by Cilium

Leverage a next-generation eBPF dataplane for pod networking, Kubernetes network policies, and service load balancing.

Public preview: Rotate SSH keys on existing AKS nodepools

You can now update SSH keys on existing AKS nodepools post deployment.
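A minimal sketch of the rotation flow with the Azure CLI, assuming the documented `az aks update --ssh-key-value` parameter; the resource group, cluster name, and key path below are placeholders:

```shell
# Rotate the SSH public key used by nodes in an existing AKS cluster
# (my-rg, my-aks-cluster, and the key path are placeholders)
az aks update \
  --resource-group my-rg \
  --name my-aks-cluster \
  --ssh-key-value ~/.ssh/id_rsa.pub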

Azure VMware Solution

Generally available: New node sizing for Azure VMware Solution

Optimize workloads with new node sizes, AV52, and AV36P, now generally available in Azure VMware Solution.

Generally available: Azure NetApp Files datastores for Azure VMware Solution

Azure NetApp Files datastores are now generally available to run your storage-intensive workloads on Azure VMware Solution (AVS).

Virtual Machines

General availability: Ephemeral OS disk support for confidential virtual machines

Create confidential VMs using Ephemeral OS disks for your stateless workloads.

General availability: New cost recommendations for Virtual Machine Scale Sets

Azure Advisor has expanded its recommendations to include cost optimization recommendations for Virtual Machine Scale Sets.

Microsoft Intune user scope configuration for Azure Virtual Desktop multi-session VMs is now GA

This new update enables you to configure user scope policies using settings catalog, configure user certificates, and configure PowerShell scripts in user context.

Generally available: Encrypt managed disks with cross-tenant customer-managed keys

Many service providers building Software as a Service (SaaS) offerings on Azure want to give their customers the option of managing their own encryption keys.

General availability: Bot Manager Rule Set 1.0 on regional Web Application Firewall

This rule set provides enhanced protection against bots and granular control over bot traffic detected by WAF, categorizing it as good, bad, or unknown.

Public preview: Azure Bastion now supports shareable links

Shareable links allow users to connect to target resources via Azure Bastion without access to the Azure portal.


Generally available: SFTP support for Azure Blob Storage

Azure Blob Storage now supports provisioning an SFTP endpoint with just one click.
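Once a local user is provisioned on the storage account, any standard SFTP client can connect. A hedged sketch, with placeholder account and user names, assuming the documented username format `<storage-account>.<local-user>` (or `<storage-account>.<container>.<local-user>` when the home directory is scoped to a container):

```shell
# Connect to the Blob Storage SFTP endpoint
# (mystorageaccount and myuser are placeholder names)
sftp mystorageaccount.myuser@mystorageaccount.blob.core.windows.net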

Public preview: Availability zone volume placement for Azure NetApp Files

Deploy new Azure NetApp Files volumes in Azure availability zones (AZs) of your choice to support workloads across multiple availability zones.

App Services

App Service Environment version 1 and version 2 will be retired on 31 August 2024

Migrate to App Service Environment version 3 by 31 August 2024

Generally available: Azure Static Web Apps now fully supports .NET 7

Azure Static Web Apps now supports building and deploying full-stack .NET 7.0 isolated applications.

Public preview: Azure Static Web Apps now supports Node 18

Azure Static Web Apps now supports building and deploying full-stack Node 18 applications.

Generally available: Static Web Apps support for skipping API builds

Azure Static Web Apps provides the option to skip the default API builds via GitHub Actions and Azure Pipelines. While setting up the YAML build configuration, you can set the skip_api_build flag to true in order to skip building the APIs.
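In a GitHub Actions workflow, the flag goes on the deploy step's inputs; a sketch assuming the `Azure/static-web-apps-deploy` action, with placeholder secret name and paths:

```yaml
# Deploy step that skips the default API build
# (secret name and app/api paths are placeholders)
- uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    action: "upload"
    app_location: "/"
    api_location: "api"
    skip_api_build: true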

Generally available: Static Web Apps support for stable URLs for preview environments

Use stable URLs with Azure Static Web Apps preview environments.

Generally available: Static Web Apps support for GitLab and Bitbucket

Deploy Static Web Apps using GitLab and Bitbucket as CI/CD providers.

Generally available: Static Web Apps support for preview environments in Azure DevOps

Deploy applications to staging environments using Azure DevOps.

Public preview: Go language support on Azure App Service

Go language (v1.18 and v1.19) is natively supported on Azure App Service, helping developers innovate faster using the best fully managed app platform for cloud-centric web apps. The language support is available as an experimental language release on Linux App Service in November 2022.

Generally available: Day 0 support for .NET 7.0 on App Service

Developers are immediately unblocked to try, test, and deploy .NET apps targeting the latest version of .NET, accelerating time-to-market on the platform they know and use today. It is expected to be available in Q2 FY23.


Secure your digital payment system in the cloud with Azure Payment HSM—now generally available

Azure Payment HSM, a BareMetal infrastructure-as-a-service (IaaS) offering that gives customers native access to payment HSMs in the Azure cloud, is now generally available. With Azure Payment HSM, customers can seamlessly migrate PCI workloads to Azure and meet the stringent security, audit compliance, low-latency, and high-performance requirements of the Payment Card Industry (PCI).

Automated Key Rotation Generally Available on Azure Key Vault Managed HSM

The feature allows you to set up an auto-rotation policy that automatically generates a new key version of the customer-managed key (CMK) stored in the HSM at a specified frequency.
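As a hedged sketch, a rotation policy is expressed as JSON following the Key Vault rotation-policy schema; the 90-day trigger and 2-year expiry below are arbitrary example values:

```json
{
  "lifetimeActions": [
    {
      "trigger": { "timeAfterCreate": "P90D" },
      "action": { "type": "Rotate" }
    }
  ],
  "attributes": { "expiryTime": "P2Y" }
}
```

The policy automatically generates a new key version 90 days after each key's creation.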

General availability: Azure Automation supports Availability zones

Azure Automation now supports Availability zones to provide improved resiliency and reliability to the service, runbooks and other automation assets.

Public preview: Microsoft Azure Managed HSM TLS Offload Library

Azure Managed HSM now supports SSL/TLS Offload for F5 and Nginx.

Generally available: Additional Always Free Services for Azure Free Account and PAYG

With an Azure free account, you can explore free amounts of 55+ always-free services.


Announcing general availability of FSLogix profiles for Azure AD-joined VMs in Azure Virtual Desktop

By leveraging Azure AD Kerberos with Azure Files, you can seamlessly access file shares from Azure AD-joined VMs and use them to store your FSLogix profile containers.


General availability: Manage your Log Analytics Tables in Azure Portal

A new experience for managing Azure Log Analytics table metadata in the Azure Portal is now generally available. With this new UI, you can view and edit table properties directly in the Log Analytics workspaces experience in the Azure Portal.

New Project Flash Update: Advancing Azure Virtual Machine availability monitoring

  • General availability of VM availability information in Azure Resource Graph
  • Preview of a VM availability metric in Azure Monitor
  • Preview of VM availability status change events via Azure Event Grid

General availability: Azure Monitor agent custom and IIS logs

This new capability enables customers to collect text-based logs generated by their service or application. Likewise, Internet Information Services (IIS) logs for a customer's service can be collected and transferred into a Log Analytics workspace table for analysis.

General availability: Azure Monitor Logs, custom log API and ingestion-time transformations

With these new features, you can add a custom ingestion-time transformation to data flowing into Azure Monitor Logs. These transformations can be used for ingestion-time extraction of fields, parsing of complex logs, obfuscation of sensitive data, removal of unneeded fields, or even dropping full events for cost control, among many more advanced possibilities.
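Transformations are written as KQL against a virtual `source` table. A sketch of the idea, with hypothetical column names, dropping debug events and removing a sensitive column:

```kql
// Hypothetical ingestion-time transformation:
// drop debug events, remove a sensitive column, extract a field
source
| where Level != "Debug"
| project-away ClientIPAddress
| extend AppName = tostring(split(RawData, "|")[0])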

Announcing GA of revamped Custom Logs features

  • GA of the Log Ingestion API
  • GA of the Ingestion-time Transformations feature
  • A nominal fee per GB dropped will be charged for any data dropped beyond 50% of incoming data, calculated daily

Azure Backup

Limited preview: Azure Backup support for confidential VMs using Platform Managed Keys

You can use this feature to back up confidential VMs using Platform Managed Keys.

Public preview: Cross Subscription Restore for Azure Virtual Machines

Cross Subscription Restore allows you to restore Azure Virtual Machines, either by creating a new VM or by restoring disks, to any subscription (honoring RBAC) from the restore point created by Azure Backup.