Network Rules Versus Application Rules for Internal Traffic

This post is about using either Network Rules or Application Rules in Azure Firewall for internal traffic. I’m going to discuss a common scenario, a “problem” that can occur, and how you can deal with that problem.

The Rules Types

There are three kinds of rules in Azure Firewall:

  • DNAT Rules: Control traffic originating from the Internet, directed to a public IP address attached to Azure Firewall, and translated to a private IP Address/port in an Azure virtual network. This is implicitly applied as a Network Rule. I rarely use DNAT Rules – most modern applications are HTTP/S and enter the virtual network(s) via an Application Gateway/WAF.
  • Application Rules: Control traffic going to HTTP, HTTPS, or MSSQL (including Azure database services).
  • Network Rules: Control anything going anywhere.
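As a sketch of the difference, this is how the two rule types look with the classic Az PowerShell rule cmdlets (the names, addresses, and ports below are placeholders, not from the scenario):

```powershell
# An Application Rule matches on FQDN and a limited protocol set (HTTP/HTTPS/MSSQL)
$appRule = New-AzFirewallApplicationRule -Name "Allow-Web" `
    -SourceAddress "10.10.1.0/24" `
    -TargetFqdn "www.example.com" `
    -Protocol "Https:443"

# A Network Rule matches on IP address, port, and protocol - anything going anywhere
$netRule = New-AzFirewallNetworkRule -Name "Allow-SQL" `
    -Protocol "TCP" `
    -SourceAddress "10.10.1.0/24" `
    -DestinationAddress "10.20.1.4" `
    -DestinationPort 1433
```

The rules are then added to rule collections on the firewall or, more commonly today, defined in a Firewall Policy.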

The Scenario

We have an internal client, which could be:

  • On-premises over private networking
  • Connecting via point-to-site VPN
  • Connected to a virtual network, either the same as the Azure Firewall or to a peered virtual network.

The client needs to connect, with SSL/TLS authentication, to a server. The server is connected to another virtual network/subnet. The route to the server goes through the Azure Firewall. I’ll complicate things by saying that the server is a PaaS resource with a Private Endpoint – this doesn’t affect the core problem but it makes troubleshooting more difficult πŸ™‚

NSG rules and firewall rules have been accounted for and created. The essential connection is either HTTPS or MSSQL and is implemented as an Application Rule.

The Problem

The client attempts a connection to the server. You end up with some type of application error stating that there was either a timeout or a problem with SSL/TLS authentication.

You begin to troubleshoot:

  • Azure Firewall shows the traffic is allowed.
  • NSG Flow Logs show nothing – you panic until you remember/read that Private Endpoints do not generate flow logs – I told you that I’d complicate things πŸ™‚ You can consider VNet Flow Logs to try to capture this data and then you might discover the cause.

You experiment and discover two things:

  • If you disconnect the NSG from the subnet then the connection is allowed. Hmm – the rules are correct so the traffic should be allowed: traffic from the client prefix(es) is permitted to the server IP address/port. The NSG rules match the firewall rules.
  • You change the Application Rule to a Network Rule (with the NSG still associated and unchanged) and the connection is allowed.

So, something is going on with the Application Rules.

The Root Cause

In this case, the Application Rule is SNATing the connection. In other words, when the connection is relayed from the Azure Firewall instance to the server, the source IP is no longer that of the client; the source IP address is a compute instance in the AzureFirewallSubnet.

That is why:

  • The connection works when you remove the NSG
  • The connection works when you use a Network Rule with the NSG – the Network Rule does not SNAT the connection.

Solutions

There are two solutions to the problem:

Using Application Rules

If you want to continue to use Application Rules then the fix is to modify the NSG rule. Change the source IP prefix(es) to the AzureFirewallSubnet prefix.
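For example, if AzureFirewallSubnet were 10.0.1.0/26, the inbound NSG rule would end up looking something like this sketch (all names and addresses are placeholders):

```powershell
# Allow the firewall's SNATed traffic instead of the original client prefixes
$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-Https-From-Firewall" `
    -Priority 100 -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceAddressPrefix "10.0.1.0/26" -SourcePortRange * `
    -DestinationAddressPrefix "10.20.1.4" -DestinationPortRange 443
```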

The downsides to this are:

  • The NSG rules are inconsistent with the Azure Firewall rules.
  • The NSG rules are no longer restricting traffic to documented approved clients.

Using Network Rules

My preference is to use Network Rules for all inbound and east-west traffic. Yes, we lose some of the “Layer-7” features but we still have core features, including IDPS in the Premium SKU.

Contrary to using Application Rules:

  • The NSG rules are consistent with the Azure Firewall rules.
  • The NSG rules restrict traffic to the documented approved clients.

When To Use Application Rules?

In my sessions/classes, I teach:

  • Use DNAT rules for the rare occasion where Internet clients will connect to Azure resources via the public IP address of Azure Firewall.
  • Use Application Rules for outbound connections to Internet, including Azure resources via public endpoints, through the Azure Firewall.
  • Use Network Rules for everything else.

This approach limits “weird sh*t” errors like the one described above and means that NSG rules are effectively clones of the Azure Firewall rules, with some additional rules to control traffic inside of the Virtual Network/subnet.

Azure Virtual Network Manager – Routing Configuration Preview

Microsoft recently announced a public preview of User-Defined Route (UDR) management using Azure Virtual Network Manager. I’ve taken some time to play with it, and here are my thoughts.

Azure Virtual Network Manager (AVNM)

AVNM has been around for a while but I have mostly ignored it up to now because:

  • The connectivity configuration feature (centrally manage VNet connections) was pointless to me without route management – what’s the point of a hub & spoke in a business setting without a firewall?
  • I liked the Security Admin Rule configuration (the same tech as NSG rules, implemented in the Hyper-V switch port, but processed before NSG rules) but the pricing of AVNM was too much – more on this later.

Connectivity was missing something – the ability to deploy UDRs or BGP routes from a central policy that would force a next hop to a routing/firewall appliance.

AVNM is deployed centrally but can operate potentially across all virtual networks in your tenant (defined by a scope at the time of deployment) and even across other tenants via mutually agreed guest access – the latter would be useful in acquisition or managed services scenarios.

Routing Configuration Preview

Routing Configuration was introduced as a preview on May 2nd. Immediately I was drawn to it. But I needed to spend some time with it – to dig a little deeper and not just start spouting off without really understanding what was happening. I spent quite a bit of time reading and playing last week and now I feel happier about it.

Network Groups

Network Groups power everything in AVNM. A Network Group is a listing of either subnets or virtual networks. It can be a static list that you define or it can be a dynamic query.

At first, dynamic query looks cool. You can build up a dynamic query using one or a number of parameters:

  • Name
  • Id
  • Tags
  • Location
  • Subscription Name
  • Subscription ID
  • Subscription Tags
  • Resource Group Name
  • Resource Group Id

When you add members via a query, an Azure Policy is created.

When that policy (re)evaluates, a notification is sent to AVNM and any configurations that target the updated network group are applied to the group members. That creates a possible negative scenario:

  • You build a workload in code from VNet all the way through to resource/code
  • You deploy the workload IaC
  • The VNet is deployed, without any peering/routing configurations because that’s the job of AVNM
  • The workload components that rely on routing/peering fail and the deployment fails
  • Azure Policy runs some time later and then you can run your code.

Ick! You don’t want to code peering/routing if it’s being deployed by AVNM – you could end up with a mess when code runs and then AVNM runs and so on.

What do you do? AVNM has a very nice code structure under the covers. The AVNM resource is simple – all the configurations, rule collections, rules, and groups are defined as sub-resources. One could build the group membership using static membership and place the sub-resource with the workload code. That would mean that the app registration used by the pipeline requires rights in the central AVNM – that could be an issue because AVNM is supposed to be a governance tool.

Ideally, Azure Policy would trigger much faster than it does (not scientific, but it was taking 15-ish minutes in my tests) and update the group membership with less latency. Once the membership is updated, configurations are deployed nearly instantly – faster than I could measure it.

Routing Configuration

I like how AVNM has structured the configurations for Security Admin Rules and Routing Configurations. It reminds me of how Azure Firewall has handled things.

Rule Collection

A Routing Configuration is deployed to a scope. The configuration is like a bucket – it has little in the way of features – all that happens in the Routing Rule Collections. The configuration contains one or more Rule Collections. Each collection targets a specific group. So I could have three groups defined:

  • Production
  • Dev
  • Secure

Each would have a different set of routes (the rules) defined. I have only one deployment (the configuration), which automatically applies to the correct VNets/subnets based on the group memberships. If I am using dynamic group membership, I can use governance features like tags (which can be controlled from management groups, subscriptions, resource groups or at the resource level) for large-scale automation and control.

There are 3 kinds of Local Route Setting – this configures:

  • How many Route Tables are deployed per VNet and how they are associated
  • Whether or not a route to the prefix of the target resource is created
| | None Specified | Direct Routing Within Virtual Network | Direct Routing Within Subnet |
| --- | --- | --- | --- |
| How many Route Tables? | One per VNet | One per VNet | One per subnet |
| Association | All subnets in the VNet | All subnets in the VNet | With the subnet |
| Local_0 Route | N/A | Yes > VNet Prefix | Yes > Subnet Prefix |

Local Route Setting in a Rule Collection

The Route Tables are created in an AVNM-managed resource group in the target subscription. If you choose one of the “Direct …” Local Route Setting options then a UDR is created for the target prefix:

| | Direct Routing Within Virtual Network | Direct Routing Within Subnet |
| --- | --- | --- |
| Address Prefix | The VNet Prefix | The subnet prefix |
| Next Hop Type | Virtual Network | Virtual Network |

Using the Direct Routing options

The concept is that you can force routing to stay within the target VNet/Subnet if the destination is local, while routing via a different next hop when leaving the target. For example, force traffic to the local VNet via the firewall while staying in the subnet (Direct Routing Within Subnet). Note that the Default rules for VNet via Virtual Network are not deactivated by default; localRoute_0 is created by AVNM to implement the “Direct …” option.

You have the option to control BGP propagation – which is important when using a firewall to isolate site-to-site connections from your Azure services.

Some Notes

AVNM isn’t meant to be the “I’ll manage all the routes centrally” solution. It manages what is important to the organisation – the governance of the network security model. You have the ability to edit routes in the resulting Route Table. So if I need to create custom routes for PaaS services or for a special network design then I can do that. The resulting Route Tables are just regular Azure Route Tables so I can add/edit/remove routes as I desire.

If you manually create a route in the Route Table and AVNM then tries to create a route to the same destination, AVNM will skip its route – it’s a “what’s the point?” situation.

If someone updates an AVNM-managed rule then AVNM will not correct it until there is a change to the Rule Collection. I do not like this. I deem this to be a failure in the application of governance.

Pricing

This is the graveyard of AVNM. If you run Azure like a small business then you lump lots of workloads into a few subscriptions. If you, like I started doing years ago, have a “1 workload per subscription” model (just like in the Azure Cloud Adoption Framework) then AVNM is going to be pricey!

AVNM costs $0.10/subscription/hour. At 730 hours per average month, AVNM for a single subscription will cost $73/month. Let’s say that I have 100 workloads. That will cost me $7300/month! Azure Firewall Premium (compute only) costs $1277.50/month so how could some policy tool cost nearly 6 times more!?!?!

Quite honestly, I would have started to use AVNM last year for a customer when we wanted to roll out “NSG rules” to every subnet in Azure. I didn’t want to do an IaC edit and a DevOps pull request for every workload – that would have taken days (and it did take days). I could have rolled out the change using AVNM in minutes. But the cost/benefit wasn’t worth it – so I spent days doing code and pull requests.

I hear it again and again. AVNM is not perfect, but it’s usable (feature improvements will come). But the pricing kills it before customer evaluation can even happen.

Conclusion

If a better triggering system for dynamic member Network Groups can be created then I think the routing solution is awesome. But with the pricing structure that is there today, the product is dead to me, which makes me sad. Come on Microsoft, don’t make me sad!

Why Are There So Many Default Routes In Azure?

Have you wondered why an Azure subnet with no route table has so many default routes? What the heck is 25.176.0.0/13? Or what is 198.18.0.0/15? And why are they routing to None?

The Scenario

You have deployed a virtual machine. The virtual machine is connected to a subnet with no Route Table. You open the NIC of the VM and view Effective Routes. You expect to see a few routes for the RFC1918 ranges (10.0.0.0/8, 172.16.0.0/12, etc.) and “quad zero” (0.0.0.0/0), but instead you find a screen full of routes.

What in the nelly is all that? I know I was pretty freaked out when I first saw it some time ago. Here are the weird addresses in text, excluding quad zero and the virtual network prefix:

10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
100.64.0.0/10
104.146.0.0/17
104.147.0.0/16
127.0.0.0/8
157.59.0.0/16
198.18.0.0/15
20.35.252.0/22
23.103.0.0/18
25.148.0.0/15
25.150.0.0/16
25.152.0.0/14
25.156.0.0/16
25.159.0.0/16
25.176.0.0/13
25.184.0.0/14
25.4.0.0/14
40.108.0.0/17
40.109.0.0/16
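If you prefer text to the portal, the same list can be dumped with PowerShell (this needs an authenticated Az session; the NIC and resource group names are placeholders):

```powershell
# List the effective routes for a VM's NIC, including the default system routes
Get-AzEffectiveRouteTable -NetworkInterfaceName "myVm-nic" -ResourceGroupName "myWorkload" |
    Format-Table AddressPrefix, NextHopType, NextHopIpAddress
```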

Next Hop = None

The first thing that you might notice is the next hop, which is set to None.

Remember that there is no “router” by default in Azure. The network is software-defined so routing is enacted by the Azure NIC/the fabric. When a packet is leaving the VM (and everything, including “serverless”, is a VM in the end unless it is physical) the Azure NIC figures out the next hop/route.

When traffic hits a NIC, the best route is selected. If that route has a next hop set to None then the traffic is dropped like it disappeared into a black hole. We can use this feature as a form of “firewall” – we don’t want the traffic, so “Abracadabra – make it go away”.

A Microsoft page (and some googling) gives us some more clues.

RFC-1918 Private Addresses

We know these well-known addresses, even if we don’t necessarily know the RFC number:

  • 10.0.0.0/8
  • 172.16.0.0/12
  • 192.168.0.0/16

These addresses are intended to be used privately. But why is traffic to them dropped? If your network doesn’t have a deliberate route to other address spaces then there is no reason to enable routing to them. So Azure takes a “secure by default” stance and drops the traffic.

Remember that if you do use a subset of one of those spaces in your VNet or peered VNets, then the default routes for those prefixes will be selected ahead of the more general routes that drop the traffic.
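Route selection works on longest-prefix match. This pure PowerShell sketch (nothing here touches Azure; the prefixes are examples) shows why a more specific VNet route beats the wider drop route:

```powershell
# Does an IP address fall within a CIDR prefix?
function Test-PrefixMatch {
    param ([string]$Ip, [string]$Cidr)
    $network, $length = $Cidr.Split("/")
    $toInt = {
        param ($address)
        $bytes = [System.Net.IPAddress]::Parse($address).GetAddressBytes()
        [Array]::Reverse($bytes)   # big-endian address bytes to host order
        [BitConverter]::ToUInt32($bytes, 0)
    }
    $mask = [uint32]([math]::Pow(2, 32) - [math]::Pow(2, 32 - [int]$length))
    ((& $toInt $Ip) -band $mask) -eq ((& $toInt $network) -band $mask)
}

# Both routes match 10.1.2.3, but the /16 is longer, so the VNet route wins
$routes = @(
    @{ Prefix = "10.0.0.0/8";  NextHop = "None" },
    @{ Prefix = "10.1.0.0/16"; NextHop = "Virtual network" }
)
$best = $routes |
    Where-Object { Test-PrefixMatch -Ip "10.1.2.3" -Cidr $_.Prefix } |
    Sort-Object { [int]($_.Prefix.Split("/")[1]) } -Descending |
    Select-Object -First 1
$best.NextHop   # "Virtual network" - the traffic is not dropped
```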

RFC-6598 Carrier Grade NAT

The prefix, 100.64.0.0/10, is defined as being used for carrier-grade NAT. This block of addresses is specifically meant to be used by Internet service providers (ISPs) that implement carrier-grade NAT, to connect their customer-premises equipment (CPE) to their core routers. Therefore we want nothing to do with it – so traffic to it is dropped.

Microsoft Prefixes

20.35.252.0/22 is registered in Redmond, Washington, the location of Microsoft HQ. Other prefixes in 20.35 are used by Exchange Online for the US Government. That might give us a clue … maybe Microsoft is firewalling sensitive online prefixes from Azure? It’s possible someone could hack a tenant, fire up lots of machines to act as bots and then attack sensitive online services that Microsoft operates. This kind of “route to None” approach would protect those prefixes unless someone took the time to override the routes.

104.146.0.0/17 is a block that is owned by Microsoft with a location registered as Boydton, Virginia, the home of the East US region. I do not know why it is dropped by default. The zone that resolves names is hosted on Azure Public DNS. It appears to be used by Office 365, maybe with sharepoint.com.

104.147.0.0/16 is also owned by Microsoft and is also registered in Boydton, Virginia. This prefix is even more mysterious.

Doing a google search for 157.59.0.0/16 on the Microsoft.com domain results in the fabled “google whack”: a single result with no adverts. That links to a whitepaper on Microsoft.com which is written in Russian. The single mention translates to “Redirecting MPI messages of the MyApp.exe application to the cluster subnet with addresses 157.59.x.x/255.255.0.0”. This address is also in Redmond.

23.103.0.0/18 has more clues in the public domain. This prefix appears to be split and used by different parts of Exchange Online, both public and US Government.

The following block is odd:

  • 25.148.0.0/15
  • 25.150.0.0/16
  • 25.152.0.0/14
  • 25.156.0.0/16
  • 25.159.0.0/16
  • 25.176.0.0/13
  • 25.184.0.0/14
  • 25.4.0.0/14

They are all registered to Microsoft in London and I can find nothing about them. But … I have a sneaky tin (aluminum) foil suspicion that I know what they are for.

40.108.0.0/17 and 40.109.0.0/16 both appear to be used by SharePoint Online and OneDrive.

Other Special Purpose Subnets

RFC-5735 specifies some prefixes so they are pretty well documented.

127.0.0.0/8 is the loopback address. The RFC says “addresses within the entire 127.0.0.0/8 block do not legitimately appear on any network anywhere” so it makes sense to drop this traffic.

198.18.0.0/15 “has been allocated for use in benchmark tests of network interconnect devices … Packets with source addresses from this range are not meant to be forwarded across the Internet”.

Adding User-Defined Routes (UDRs)

Something interesting happens if you start to play with User-Defined Routes. Add a table to the subnet. Now add a UDR:

  • Prefix: 0.0.0.0/0
  • Next Hop: Internet

When you check Effective Routes, the default route to 0.0.0.0/0 is deactivated (as expected) and the UDR takes over. All the other routes are still in place.

If you modify that UDR just a little, something different happens:

  • Prefix: 0.0.0.0/0
  • Next Hop: Virtual Appliance
  • Next Hop IP Address: {Firewall private IP address}
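In PowerShell, that route would be created something like this sketch (the firewall IP address and resource names are placeholders):

```powershell
$rt = New-AzRouteTable -Name "myRouteTable" -ResourceGroupName "myWorkload" -Location "westeurope"

# Send everything to the firewall as the next hop
Add-AzRouteConfig -RouteTable $rt -Name "Default-Via-Firewall" `
    -AddressPrefix "0.0.0.0/0" -NextHopType "VirtualAppliance" -NextHopIpAddress "10.0.1.4" |
    Set-AzRouteTable
```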

All the mysterious default routes are dropped. My guess is that the Microsoft logic is “This is a managed network – the customer put in a firewall and that will block the bad stuff”.

The magic appears only to happen if you use the prefix 0.0.0.0/0 – try a different prefix and all the default routes re-appear.

Script – Document All Azure Private DNS Zones

This post contains a script that will find all Azure Private DNS Zones in a tenant and export information on screen and as markdown in a file.

I found myself in a situation where I needed to document a lot of Azure Private DNS Zones. I needed the following information:

  • Name of the zone
  • Subscription name
  • Resource group name
  • Name of associated virtual networks

The list was long so a copy and paste from the Azure Portal was going to take too long. Instead, I put a few minutes into a script to do the job – it even writes the content as a Markdown table in a .md file, making it super simple to copy/paste the entire piece of text into my documentation in VS Code.

Clear-Host

$subs = Get-AzSubscription

$outString = "| Zone Name | Subscription Name | Resource Group Name | Linked Virtual Network |"
Write-Host $outString
$outString | Out-File "dnsLinks.md"

$outString = "| --------- | ----------------- | ------------------- | ---------------------- |"
Write-Host $outString
$outString | Out-File "dnsLinks.md" -Append

foreach ($sub in $subs)
{
    # Skip the hub subscriptions - their zones are documented elsewhere
    if ($sub.Name -eq "connectivity" -or $sub.Name -eq "connectivity-canary")
    {
        continue
    }

    try
    {
        $context = Set-AzContext -Subscription $sub.Id

        $zones = Get-AzPrivateDnsZone

        foreach ($zone in $zones)
        {
            try
            {
                $links = Get-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $zone.ResourceGroupName -ZoneName $zone.Name

                foreach ($link in $links)
                {
                    # The virtual network name is the last segment of the resource ID
                    $vnetName = ($link.VirtualNetworkId.Split("/")) | Select-Object -Last 1

                    $outString = "| " + $zone.Name + " | " + $context.Subscription.Name + " | " + $zone.ResourceGroupName + " | " + $vnetName + " |"
                    Write-Host $outString
                    $outString | Out-File "dnsLinks.md" -Append
                }
            }
            catch
            {
                # Ignore zones where the links cannot be read
            }
        }
    }
    catch
    {
        # Ignore subscriptions that cannot be accessed
    }
}

It probably wouldn’t take a whole lot more work to add any DNS records if you needed that information too.
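If you want to go that far, record sets can be pulled per zone with the same module – a sketch that would slot inside the zone loop:

```powershell
# List every record set in the zone - name, record type, and TTL
$records = Get-AzPrivateDnsRecordSet -ResourceGroupName $zone.ResourceGroupName -ZoneName $zone.Name
foreach ($record in $records)
{
    Write-Host ($record.Name + " " + $record.RecordType + " " + $record.Ttl)
}
```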

Designing Network Security To Combat Modern Threats

In this post, I want to discuss how one should design network security in Microsoft Azure, dispensing with past patterns and combatting threats that are crippling businesses today.

The Past

Network security did not change much for a very long time. The classic network design is focused on an edge firewall. “All the bad guys are trying to penetrate our network from the Internet,” so we’ll put up a very strong wall at the edge. With that approach, you’ll commonly find the “DMZ” network; a place where things like web proxies and DNS proxies isolate interior users and services from the Internet.

The internal network might be made up of two/more VLANs. For example, one or more client device VLANs and a server VLAN. While the traffic between those VLANs might pass through the firewall, it probably didn’t; it really “routed” through a smart core switch stack and there was limited to no firewall isolation between those VLANs.

This network design is fertile soil for malware. Ports usually are not left open to attack on the edge firewall. Hackers aren’t normally going to brute force their way through a firewall. There are easier ways in, such as:

  • Send an “invoice” PDF to the accounting department that delivers a trojan horse.
  • Impersonate someone, ideally someone that travels and shouts a lot, to convince a helpful IT person to reset a password.
  • Target users via phishing or spear phishing.
  • Compromise some upstream dependency that developers use and use it to attack from the servers.
  • Use a SQL injection attack to open a command prompt on an internal server.
  • And on and on and …

In each of those cases, the attack comes from within, and its spread is unfettered. The blast area (a term used to describe the spread of an attack) is the entire network.

Secure Zones To The Rescue!

Government agencies love a nice secure zone architecture. This is a design where sensitive systems, such as those holding GDPR data or secrets, are placed on an isolated network.

Some agencies will even create a whole duplicate network that is isolated, forcing users to have two PCs – one “regular” PC on the Internet-connected network and a “secure” PC that is wired onto an isolated network with limited secret services.

Realistically, that isolated network is of little value to most, but if you have that extreme a need – then good luck. By the way, that won’t work in The Cloud πŸ™‚ Back to the more regular secure zone …

A special VLAN will be deployed and firewall rules will block all traffic into and out of that secure zone. The user experience might be to use Citrix desktops, hosted in the secure zone, to access services and data in that secure zone. But then reality starts cracking holes in the firewall’s deny all rules. No line of business app lives alone. They all require data from somewhere. Or there are integrations. Printers must be used. Scanners need to scan and share data. And legacy apps often use:

  • Domain (ADDS) credentials (how many ports do you need for that!!!)
  • SMB (TCP 445) for data transfer and integration

Over time, “deny all” becomes a long list of allow * from X to *, and so on, with absolutely no help from the app vendors.

The theory is that if an attack is commenced, then the blast area will be limited to the client network and, if it reaches the servers, it will be limited to the Internal network. But this design fails to understand that:

  • An attack can come from within. Consider the scenario where compromised runtimes are used or a SQL injection attack breaks out from a database server.
  • All the required integrations open up holes between the secure zone and the other networks, including those legacy protocols that things like ransomware live on.
  • If one workload in the secure zone is compromised, they all are because there is no network segmentation inside of the VLAN.

And eventually, the “secure zone” is no more secure than the Internal network.

Don’t Block The Internet!!!

I’m amazed how many organisations do not block outbound access to the Internet. It’s just such hard work to open up firewall rules for all these applications that have Internet dependencies. I can understand that for a client VLAN. But the server VLAN should be a controlled space – if it’s not known & controlled (i.e. governed) then it should not be permitted.

A modern attack, an advanced persistent threat (APT), isn’t just some dumb blast, grab, and run. It is a sneaky process of:

  1. Penetration
  2. Discovery, often manually controlled
  3. Spread, often manually controlled
  4. Steal
  5. Destroy/encrypt/etc

Once an APT gets in, it usually wants to call home to pull instructions down from a rogue IP address or compromised bot. When the APT wants to steal data, to be used as blackmail and/or to be sold on the Darknet, the malware will seek to upload data to the Internet. Both of these actions are taking advantage of the all-too-common open access to the Internet.

Azure is Different

Years of working with clients has taught me that there are three kinds of people when it comes to Azure networking:

  1. Those who managed on-premises networks: These folks struggle with Azure networking.
  2. Those who didn’t do on-premises networking, but knew what to ask for: These folks take to Azure networking quite quickly.
  3. Everyone else: Irrelevant to this topic

What makes Azure networking so difficult for the network admins? There is no cabling in the fabric – obviously there is cabling in the data centres but it’s all abstracted by the VXLAN software-defined networks. Packets are encapsulated on the source virtual machine’s host, transmitted over the physical network, decapsulated on the destination virtual machine’s host, and presented to the destination virtual machine’s NIC. In short, packets leave the source NIC and magically arrive on the destination NIC with no hops in between – this is why traceroute is pointless in Azure and why the default gateway doesn’t really exist.

I’m not going to use virtual machines, Aidan. I’m doing PaaS and serverless computing. In Azure, everything is based on virtual machines, unless it is explicitly hosted on physical hosts (Azure VMware Solution and some SAP stuff, for example). Even Functions run on a VM somewhere hidden in the platform. Serverless means that you don’t need to manage it.

The software-defined thing is why:

  • Partitioned subnets for a firewall appliance (front, back, VPN, and management) offer nothing from a security perspective in Azure.
  • ICMP isn’t as useful as you’d imagine in Azure.
  • The concept of partitioning workloads for security using subnets is not as useful as you might think – it’s actually counter-productive over time.

Transformation

I like to remind people during a presentation or a project kickoff that going on a cloud journey is supposed to result in transformation. You now re-evaluate everything and find better ways to do old things using cloud-native concepts. And that applies to network security designs too.

Micro-Segmentation Is The Word

Forget “Grease”, get on board with what you need to counter today’s threats: micro-segmentation. This is a concept where:

  • We protect the edge, inbound and outbound, permitting only required traffic.
  • We apply network isolation within the workload, permitting only required traffic.
  • We route traffic between workloads through the edge firewall, permitting only required traffic.

Yes, more work will be required when you migrate existing workloads to Azure. I’d suggest using Azure Migrate to map network flows. I never get to do that – I always get the “messy migration projects” where Azure Migrate isn’t an option – so testing, and understanding NSG Traffic Analytics and the Azure Firewall/firewall logs via KQL, is a necessary skill.

Security Classification

Every workload should go through a security classification process. You need to weigh risk versus complexity. If you max out the security, you will increase costs and difficulty for otherwise simple operations. For example, a dev won’t be able to connect Visual Studio straight to an App Service if you deploy that App Service on a private or isolated App Service Plan. You will also have to host your own DevOps agents/GitHub runners because the Microsoft-hosted containers won’t be able to reach your SCM endpoints.

Every piece of compute is a potential attack vector: a VM, an App Service, a Function, a Container, a Logic App. The question is, if it is compromised, will the attacker be able to jump to something else? Will the data that is accessible be secret, subject to regulation, or a source of reputational damage?

This measurement process will determine if a workload should use resources that:

  • Have public endpoints (cheapest and easiest).
  • Use private endpoints (medium levels of cost, complexity, and security).
  • Use full VNet integration, such as an App Service Environment or a virtual machine (highest cost/complexity but most secure).

The Virtual Network & Subnet

Imagine you are building a 3-tier workload that will be isolated from the Internet using Azure virtual networking:

  • Web servers on the Internet
  • Middle tier
  • Databases

Not that long ago, we would have deployed that workload on 3 subnets, one for each tier. Then we would have built isolation using Network Security Groups (NSGs), one for each subnet. But you just learned that a software-defined network routes packets directly from NIC to NIC. An NSG is a Hyper-V Port ACL that is implemented at the NIC, even if applied at the subnet level. We can create all the isolation we want using an NSG within the subnet. That means we can flatten the network design for the workload to one subnet. A subnet-associated NSG will restrict communications between the tiers – and ideally between nodes within the same tier. That level of isolation should block everything … should πŸ™‚
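As a sketch of that flattened design, Application Security Groups can stand in for the old per-tier subnets (all names here are placeholders, and I’m assuming the web and data tiers share one subnet):

```powershell
$webAsg  = New-AzApplicationSecurityGroup -Name "asg-web"  -ResourceGroupName "myWorkload" -Location "westeurope"
$dataAsg = New-AzApplicationSecurityGroup -Name "asg-data" -ResourceGroupName "myWorkload" -Location "westeurope"

# Only the web tier may reach the data tier, even though both sit in one subnet
$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-Web-To-Sql" `
    -Priority 200 -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceApplicationSecurityGroup $webAsg -SourcePortRange * `
    -DestinationApplicationSecurityGroup $dataAsg -DestinationPortRange 1433
```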

Tips for virtual networks and subnets:

  • Deploy 1 virtual network per workload: Not only will this follow Azure Cloud Adoption Framework concepts, but it will help your overall security and governance design. Each workload is placed into a spoke virtual network and peered with a hub. The hub is used only for external connectivity, the firewall, and Azure Bastion (assuming this is not a vWAN hub).
  • Assign a single prefix to your hub & spoke: Firewall and NSG rules will be easier.
  • Keep the virtual networks small: Don’t waste your address space.
  • Flatten your subnets: Only deploy subnets when there is a technical need, for example VMs and private endpoints are in one subnet, VNet integration for an App Services plan is in another, and a SQL Managed Instance is in a third.

Resource Firewalls

It’s sad to see how many people disable operating system firewalls. For example, Group Policy is used to disable Windows Firewall. Don’t you know that Windows and Linux added those firewalls to protect machines from internal attacks? Those firewalls should remain operational and only permit required traffic.

Many Azure resources also offer firewalls. App Services have firewalls. Azure SQL has a firewall. Use them! The one messy resource is the storage account. The endpoints for storage clusters are located in a weird place – and this causes interesting situations. For example, a Logic App’s storage account with a configured firewall will prevent workflows from being created/working correctly.

Network Security Groups

Take a look at the default inbound rules in an NSG. You’ll find there is a Deny All rule with the lowest possible priority. Just up from that rule is a built-in rule to allow traffic from VirtualNetwork. VirtualNetwork includes the subnet, the virtual network, and all routed networks, including peers and site-to-site connections. So all traffic from internal networks is … permitted! This is why every NSG that I create has a custom DenyAll rule with a priority of 4000. Higher priority rules are created to permit required traffic and only that required traffic.
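A hedged Bicep sketch of that pattern – a custom DenyAll rule at priority 4000, with a higher-priority rule permitting only the required traffic. The NSG name, addresses, and port are illustrative examples:

```bicep
// Hypothetical NSG: allow only what the workload needs, then deny the rest.
resource workloadNsg 'Microsoft.Network/networkSecurityGroups@2023-09-01' = {
  name: 'workload-a-nsg'
  location: resourceGroup().location
  properties: {
    securityRules: [
      {
        name: 'AllowWebToSql' // as accurate as possible: source A to destination B
        properties: {
          priority: 1000
          direction: 'Inbound'
          access: 'Allow'
          protocol: 'Tcp'
          sourceAddressPrefix: '10.10.0.4'
          sourcePortRange: '*'
          destinationAddressPrefix: '10.10.0.5'
          destinationPortRange: '1433'
        }
      }
      {
        name: 'DenyAll' // overrides the built-in AllowVnetInBound rule
        properties: {
          priority: 4000
          direction: 'Inbound'
          access: 'Deny'
          protocol: '*'
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '*'
        }
      }
    ]
  }
}
```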

Tips with your NSGs:

  • Use 1 NSG per subnet: Where the subnet resources will support an NSG. You will reduce your overall complexity and make troubleshooting easier. Remember, all NSG rules are actually applied at the source (outbound rules) or target (inbound rules) NIC.
  • Limit the use of “any”: Rules should be as accurate as possible. For example: Allow TCP 445 from source A to destination B.
  • Consider the use of Application Security Groups: You can abstract IP addresses with an Application Security Group (ASG) in an NSG rule. ASGs can be used with NICs – virtual machines and private endpoints.
  • Enable NSG Flow Logs & Traffic Analytics: Great for troubleshooting networking (not just firewall stuff) and for feeding data to a SIEM. VNet Flow Logs will be a superior replacement when it is ready for GA.

The Hub

As I’ve implied already, you should employ a hub & spoke design. The hub should be simple, small and free of compute. The hub:

  • Makes site-to-site connections using SD-WAN, VPN, and/or ExpressRoute.
  • Hosts the firewall. The firewall blocks everything in every direction by default.
  • Hosts Azure Bastion, unless you are running Azure Virtual WAN – then deploy it to a spoke.
  • Is the “Public IP” for egress traffic for workloads trying to reach the Internet. All egress traffic is via the firewall. Azure Policy should be used to restrict Public IP Addresses to just those resources that require them – things like Azure Bastion require a public IP and you should create a policy override for each required resource ID.
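The public IP restriction could be sketched as a simple custom Azure Policy definition – a minimal, hypothetical example; a real deployment would add exemptions/overrides for resources like Azure Bastion:

```bicep
targetScope = 'subscription'

// Hypothetical policy: deny the creation of public IP addresses.
// Required resources (e.g. Azure Bastion) get policy exemptions.
resource denyPublicIp 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
  name: 'deny-public-ip-addresses'
  properties: {
    displayName: 'Deny public IP addresses'
    policyType: 'Custom'
    mode: 'All'
    policyRule: {
      if: {
        field: 'type'
        equals: 'Microsoft.Network/publicIPAddresses'
      }
      then: {
        effect: 'deny'
      }
    }
  }
}
```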

My preference is to use Azure Firewall. That’s a long conversation so let’s move on to another topic: Azure Bastion.

Most folks will go into Azure thinking that they will RDP/SSH straight to their VMs. RDP and SSH are not perfect. This is something that the secure zone concept recognised. It was not unusual for admins/operators to use a bastion host to hop via RDP or SSH from their PC to the required server via another server. RDP/SSH were not open directly to the protected machines.

Azure Bastion should offer the same isolation. Your NSG rules should only permit RDP/SSH from:

  • The AzureBastionSubnet
  • Any other bastion hosts that might be employed, typically by developers who will deploy specialist tools.

Azure Bastion requires:

  • An Entra ID sign-in, ideally protected by features such as conditional access and MFA, to access the bastion service.
  • The destination machine’s credentials.

Routing

Now we get to one of my favourite topics in Azure. In the on-prem world we can control how packets get from A to B using cables. But as you’ve learned, we cannot run cables in Azure. We can, however, control the next hop of a packet.

We want to control flows:

  • Ingress from site-to-site networking to flow through the hub firewall: A route in the GatewaySubnet to use the hub firewall as the next hop.
  • All traffic leaving a spoke (workload virtual network) to flow through the hub firewall: A route to 0.0.0.0/0 using the firewall backend/private IP as the next hop.
  • All traffic between hub & spoke deployments to flow through the remote hub firewall: A route to the remote hub & spoke IP prefix (see above tip) with a next hop of the remote hub firewall.
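The spoke flow above boils down to a route table with a single user-defined route. A minimal sketch – the table name and the firewall’s private IP are made-up examples:

```bicep
// Hypothetical route table for a spoke subnet: everything leaving the
// spoke uses the hub firewall's private IP as the next hop.
resource spokeRouteTable 'Microsoft.Network/routeTables@2023-09-01' = {
  name: 'workload-a-route-table'
  location: resourceGroup().location
  properties: {
    disableBgpRoutePropagation: true // don't let learned routes bypass the firewall
    routes: [
      {
        name: 'EverythingViaHubFirewall'
        properties: {
          addressPrefix: '0.0.0.0/0'
          nextHopType: 'VirtualAppliance'
          nextHopIpAddress: '10.0.1.4' // example firewall backend/private IP
        }
      }
    ]
  }
}
```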

If you follow my tips, especially with the simple hub, then the routing is actually quite easy to implement and maintain.

Tips:

  • Keep the hub free of compute.
  • NSG Traffic Analytics helps to troubleshoot.

Web Application Firewall

The hub firewall should not be used to present web applications to the Internet. If a web app is classified as requiring network security, then it should be reverse proxied using a Web Application Firewall (WAF). This specialised firewall inspects traffic at the application layer and can block threats.

The WAF will produce a lot of false positives. Heavy traffic applications can flood your logs with them; in the case of Log Analytics, the ingestion charge can be huge, so get to optimising those false positives as quickly as you can.

My preference is to route the WAF through the hub firewall to the backend applications. The WAF is a form of compute, even the Azure WAF. If you do not need end-to-end TLS, then the firewall could be used to inspect the HTTP traffic from the WAF to the backend using Intrusion Detection and Prevention System (IDPS), offering another layer of protection.

Azure offers a couple of WAF options. Front Door with WAF is architecturally interesting, but the default design is that the backend has a public endpoint that limits access to your Front Door instance at the application layer. What if the backend is network connected for max protection? Then you get into complexities with Private Link/Private Endpoint.

A regional WAF is network connected and offers simpler networking, but it sacrifices the performance boosts from Front Door. You can combine Front Door with a regional WAF, but there are more costs with this.

Third-party solutions are possible. Services such as Cloudflare offer performance and security features. One could argue that Cloudflare offers more features. From the performance perspective, keep in mind that Cloudflare has only a few peering locations with the Microsoft WAN, so a remote user might have to take a detour to get to your Azure resources, increasing latency.

You can seek out WAF solutions from the likes of F5 and Citrix in the Azure Marketplace. Keep in mind that NVAs can perpetuate skills challenges by siloing the skill – native cloud skills are easier to develop and contract/hire.

Summary

I was going to type something like “this post gives you a quick tour of the micro-segmentation approach/features that you can use in Azure” but then I realised that I’ve had keyboard diarrhea and this post is quite Sinofskian. What I’ve tried to explain is that the ways of the past:

  • Don’t do much for security anymore
  • Are actually more complex in architecture than Azure-native patterns and solutions that will work.

If you implement security at three layers, assuming that a breach will happen and could happen anywhere, then you limit the blast area of a threat:

  • The edge, using the firewall and a WAF
  • The NIC, using a Network Security Group
  • The resource, using a guest OS/resource firewall

This trust-no-one approach that denies all but the minimum required traffic will make life much harder for an attacker. Adding logging and a well-configured SIEM will create trip wires that an attacker must trip over when attempting an expansion. You will make their expansion harder & slower, and make it easier to detect them. You will also limit how much they can spread and how much damage the attack can create. Furthermore, you will be following the guidance that the likes of the FBI are recommending.

There is so much more to consider when it comes to security, but I’ve focused on micro-segmentation in a network context. People do think about Entra ID and management solutions (such as Defender for Cloud and/or SIEM) but they rarely think through the network design by assuming that what they did on-prem will still be fine. It won’t because on-prem isn’t fine right now! So take my advice, transform your network, and protect your assets, shareholders, and your career.

What is a Managed Private Endpoint?

Something new appeared in recent times: the “Managed Private Endpoint”. What the heck is it? Why would I use it? How is it different from a “Private Endpoint”?

Some Background

As you are probably aware, most PaaS services in Azure have a public endpoint by default. So if I use a Storage Account or Azure SQL, they have a public interface. If I have some security or compliance concerns, I can either:

  • Switch to a different resource type to solve the problem
  • Use a Private Endpoint

Private Endpoint is a way to interface with a PaaS resource from a subnet in a virtual network. The resource uses the Private Link service to receive connections and respond – this stateful service does not allow outbound connections, providing a form of protection against some data leakage vectors.

Say I want to make a Storage Account only accessible on a VNet. I can set up a Private Endpoint for the particular API that I care about, such as Blob. A Private Endpoint resource is created and a NIC is created. The NIC connects to my designated subnet and uses an IP configuration for that subnet. Name resolution (DNS) is updated and now connections from my VNet(s) will go to the private IP address instead of the public endpoint. To enforce this, I can close down the public endpoint.
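The Blob example above could look something like this in Bicep – the resource IDs are parameters and the names are made up:

```bicep
param storageAccountId string // resource ID of the storage account (the target resource)
param subnetId string         // subnet that the private endpoint NIC will join

// Hypothetical private endpoint for the Blob API of a storage account.
resource blobPrivateEndpoint 'Microsoft.Network/privateEndpoints@2023-09-01' = {
  name: 'stg-blob-pe'
  location: resourceGroup().location
  properties: {
    subnet: {
      id: subnetId // the NIC takes an IP configuration from this subnet
    }
    privateLinkServiceConnections: [
      {
        name: 'stg-blob-connection'
        properties: {
          privateLinkServiceId: storageAccountId
          groupIds: [
            'blob' // the particular API/sub-resource being exposed
          ]
        }
      }
    ]
  }
}
```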

The normal process is that this is done from the “target resource”. In the above case, I created the Private Endpoint from the storage account.

Managed Private Endpoint

This is a term I discovered a couple of months ago and, to be honest, it threw me. I had no idea what it was.

So far, Managed Private Endpoints are features of:

The basic concept of a Managed Private Endpoint has not changed. It is used to connect to a PaaS resource, also referred to as the target resource (ah, there’s a clue!) over a private connection.

Microsoft: Azure Data Factory Integration Runtime connecting privately to other PaaS targets

What is different is that you create the Managed Private Endpoint from a client resource. Say, for example, I want Azure Synapse Analytics to connect privately to an Azure Cosmos DB resource. The Synapse Analytics resource doesn’t do normal networking so it needs something different. I can go to the Synapse Analytics resource and create a Managed Private Endpoint to the target Cosmos DB resource. This is a request – because the operator of the Cosmos DB resource must accept the Private Endpoint from their target resource.

Once done, Synapse Analytics will use the private Azure backbone instead of the public network to connect to the Cosmos DB resource.

Managed Virtual Network

Is your head wrecked yet? A Managed Private Endpoint uses a Managed Virtual Network. As I said above, a resource like Synapse Analytics doesn’t do normal networking. But a Managed Private Endpoint is going to require a Virtual Network and a subnet to connect the Managed Private Endpoint and NIC.

These are PaaS resources so the goal is to push IaaS things like networking into the platform to be managed by Microsoft. That’s what happens here. When you want to use a Managed Private Endpoint, a Managed Virtual Network is created for you in the same region as the client resource (Synapse Analytics in my example). That means that data engineers don’t need to worry about VNets, subnets, route tables, peering, and all the stuff when creating integrations.

Cosmos DB Replicas With Private Endpoint

This post explains how to make Cosmos DB replicas available using Private Endpoint.

The Problem

A lot of (most) Azure documentation and community content assumes that PaaS resources will be deployed using public endpoints. Some customers have the common sense not to use public endpoints – who wants to be a zero-day target for well-armed attackers?!

Cosmos DB is a commonly considered database for born-in-the-cloud workloads. One of the cool things about Cosmos DB is the ability to use any number of globally dispersed read-only or write replicas with pretty low replication latency.

But there is a question – what happens if you use Private Endpoint? The Cosmos DB account is created in a “primary” region. That Private Endpoint connects to a virtual network in the primary region. If the primary region goes offline (it does happen!) then how will clients redirect to another replica? Or if you are building a workload that will exist in many regions, how will a remote footprint connect to the local Cosmos DB replica?

I googled and landed on a Microsoft forum post that asked such a question. The answer was (in short) “The database will be available, how you connect to it is your and Azure Network’s problem”. Very helpful!

Logically, what we want is:

What I Figured Out

I’ve deployed some Cosmos DB using Private Endpoint as code (Terraform) in the recent past. I noticed that the DNS configuration was a little more complex than you usually find – I needed to create a Private DNS Zone for:

  • The Cosmos DB service type
  • Each Azure region that the replica exists in for that service type

I fired up a lab to simulate the scenario. I created a Cosmos DB account in North Europe. I replicated the Cosmos DB account to East US. I created a VNet in North Europe and connected the account to the VNet using a Private Endpoint.

Here’s what the VNet connected devices looks like:

As you can see, the clients in/peered with the North Europe VNet can access their local replica and the East US replica via the local Private Endpoint.

I created a second VNet in East US. Now the important bit: I connected the same Cosmos DB account to the VNet in East US. When you check out the connected devices in the East US VNet, you can see that clients in/peered to the East US VNet can connect to the local and remote replica via the local Private Endpoint:

DNS

Let’s have a look at the DNS configurations in Private Endpoints. Here is the one in North Europe:

If we enable the DNS zone configuration feature to auto-register the Private Endpoint in Azure Private DNS, then each of the above FQDNs will be registered and they will resolve to the North Europe NIC. Sounds OK.

Here is the one in East US:

If we enable the DNS zone configuration feature to auto-register the Private Endpoint in Azure Private DNS, then each of the above FQDNs will be registered and they will resolve to the East US NIC. Hmm.

If each region has its own Private DNS Zones then all is fine. If you use Private DNS zones per workload or per region then you can stop reading now.

But what if you have more than just this workload and you want to enable full name resolution across workloads and across regions? In that case, you probably (like me) run central Private DNS Zones that all Private Endpoints register with no matter what region they are deployed into. What happens now?

Here I have set up a DNS zone configuration for the North Europe Private Endpoint:

Now we will attempt to add the East US Private Endpoint:

Uh-oh! The records are already registered and cannot be registered again.

WARNING: I am not a Cosmos DB expert!

It seems to me that using the DNS Zone configuration feature will not work for you in the globally shared Private DNS Zone scenario. You are going to have to configure DNS as follows:

  • The global account FQDN will resolve to your primary region.
  • The North Europe FQDN will resolve to the North Europe Private Endpoint. Clients in North Europe will use the North Europe FQDN.
  • The East US FQDN will resolve to the East US Private Endpoint. Clients in East US will use the East US FQDN.

This means that you must manage the DNS record registrations, either manually or as code:

  1. Register the account record with the “primary” resource/Private Endpoint IP address: 10.1.0.4.
  2. Register the North Europe record with the North Europe Private Endpoint IP: 10.1.0.5.
  3. Register the East US record with the East US Private Endpoint IP: 10.2.0.6.
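Managed as code, the three records might be sketched like this in Bicep – the zone name assumes the Cosmos DB SQL API, and the account name is hypothetical; the IPs are those from the list above:

```bicep
// Hypothetical A records in a centrally shared Private DNS Zone for
// Cosmos DB (SQL API). The account record points at the primary region.
resource cosmosZone 'Microsoft.Network/privateDnsZones@2020-06-01' existing = {
  name: 'privatelink.documents.azure.com'
}

resource accountRecord 'Microsoft.Network/privateDnsZones/A@2020-06-01' = {
  parent: cosmosZone
  name: 'myaccount' // global account FQDN -> primary region
  properties: {
    ttl: 10
    aRecords: [
      { ipv4Address: '10.1.0.4' }
    ]
  }
}

resource northEuropeRecord 'Microsoft.Network/privateDnsZones/A@2020-06-01' = {
  parent: cosmosZone
  name: 'myaccount-northeurope' // used by North Europe clients
  properties: {
    ttl: 10
    aRecords: [
      { ipv4Address: '10.1.0.5' }
    ]
  }
}

resource eastUsRecord 'Microsoft.Network/privateDnsZones/A@2020-06-01' = {
  parent: cosmosZone
  name: 'myaccount-eastus' // used by East US clients
  properties: {
    ttl: 10
    aRecords: [
      { ipv4Address: '10.2.0.6' }
    ]
  }
}
```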

This will mean that clients in one region that try to access another region (via failover) will require global VNet peering and NSG/firewall access to the remote Private Endpoint.

Cannot Remove Subnet Because of App Service VNet Integration

This post explains how to unlock a subnet when you have deleted an App Service/Function App with Regional VNet Integration.

Here I will describe how you can deal with an issue where you cannot delete a subnet from a VNet after deleting an Azure App Service or Function App with Regional VNet Integration.

Scenario

You have an Azure App Service or Function App that has Regional VNet Integration enabled to connect the PaaS resource to a subnet. You are doing some cleanup or redeployment work and want to remove the PaaS resources and the subnet. You delete the PaaS resources and then find that you cannot:

  • Delete the subnet
  • Disable subnet integration for Microsoft.Web/serverFarms

The error looks something like this:

Failed to delete resource group workload-network: Deletion of resource group ‘workload-network’ failed as resources with identifiers ‘Microsoft.Network/virtualNetworks/workload-network-vnet’ could not be deleted. The provisioning state of the resource group will be rolled back. The tracking Id is ‘iusyfiusdyfs’. Please check audit logs for more details. (Code: ResourceGroupDeletionBlocked) Subnet IntegrationSubnet is in use by /subscriptions/sdfsdf-sdfsdfsd-sdfsdfsdfsd-sdfs/resourceGroups/workload-network/providers/Microsoft.Network/virtualNetworks/workload-network-vnet/subnets/IntegrationSubnet/serviceAssociationLinks/AppServiceLink and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See aka.ms/deletesubnet. (Code: InUseSubnetCannotBeDeleted, Target: /subscriptions/sdfsdf-sdfsdfsd-sdfsdfsdfsd-sdfs/resourceGroups/workload-network/providers/Microsoft.Network/virtualNetworks/workload-network-vnet)

It turns out that deleting the PaaS resource leaves you in a situation where you cannot disable the integration. You have lost permission to access the platform mechanism.

In my situation, Regional VNet integration was not cleanly disabling so I did the next logical thing (in a non-production environment): started to delete resources, which I could quickly redeploy using IaC … but I couldn’t because the subnet was effectively locked.

Solutions

There are 2 solutions:

  1. Call support.
  2. Recreate the PaaS resources.

Option 1 is a last resort because that’s days of pain – being frankly honest. That leaves you with Option 2: recreate the PaaS resources exactly as they were before, with Regional VNet Integration enabled. Then disable the integration (go into the PaaS resource, go into Networking, and disconnect the integration).

That process cleans things up and now you can disable the Microsoft.Web/serverFarms delegation and/or delete the subnet.

Enabling DevSecOps with Azure Firewall

In this post, I will share how you can implement DevSecOps with Azure Firewall, with links to a bunch of working Bicep files to deploy the infrastructure-as-code (IaC) templates.

This example uses a “legacy” hub and spoke – one where the hub is VNet-based and not based on Azure Virtual WAN Hub. I’ll try to find some time to work on the code for that one.

The Concept

Hold on, because there’s a bunch of things to understand!

DevSecOps

The DevSecOps methodology is more than just IaC. It’s a combination of people, processes, and technology to enable a fail-fast agile delivery of workloads/applications to the business. I discussed here how DevSecOps can be used to remove the friction of IT to deliver on the promises of the Cloud.

The Azure features that this design is based on are discussed in concept here. The idea is that we want to enable Devs/Ops/Security to manage firewall rules in the workload’s Git repository (repo). This breaks the traditional model where the rules are located in a central location. The important thing is not the location of the rules, but the processes that manage the rules (change control through Git repo pull request reviews) and who manages them (the reviewers, including the architects, firewall admins, security admins, etc.).

So what we are doing is taking the firewall rules for the workload and placing them in with the workload’s code. NSG rules are probably already there. Now, we’re putting the Azure Firewall rules for the workload in the workload repo too. This is all made possible thanks to changes that were made to Azure Firewall Policy (Azure Firewall Manager) Rules Collection Groups – I use one Rules Collection Group for each workload and all the rules that enable that workload are placed in that Rules Collection Group. No changes will make it to the trunk branch (deployment action/pipelines look for changes here to trigger a deployment) without approval by all the necessary parties – this means that the firewall admins are still in control, but they don’t necessarily need to write the rules themselves … and the devs/operators might even write the rules, subject to review!
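A hedged sketch of a per-workload Rules Collection Group living in the workload repo – the policy and collection names, priorities, and addresses are all examples, not from the linked repo:

```bicep
// The workload repo owns only its own rule collection group; the firewall
// policy itself is deployed from the hub repo.
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2023-09-01' existing = {
  name: 'hub-firewall-policy'
}

resource workloadRules 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2023-09-01' = {
  parent: firewallPolicy
  name: 'workload-a-rules'
  properties: {
    priority: 2000 // must be unique per workload's rule collection group
    ruleCollections: [
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        name: 'workload-a-allow'
        priority: 100
        action: {
          type: 'Allow'
        }
        rules: [
          {
            ruleType: 'NetworkRule'
            name: 'AllowWebToSql'
            ipProtocols: [ 'TCP' ]
            sourceAddresses: [ '10.10.0.0/26' ]
            destinationAddresses: [ '10.10.0.68' ]
            destinationPorts: [ '1433' ]
          }
        ]
      }
    ]
  }
}
```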

This is the killer reason to choose Azure Firewall over NVAs – the ability to not only deploy the firewall resource, but to manage the entire configuration and rule sets as code, and to break that all out in a controlled way to make the enterprise more agile.

Other Bits

If you’ve read my posts on Azure routing (How to Troubleshoot Azure Routing? and BGP with Microsoft Azure Virtual Networks & Firewalls) then you’ll understand that there’s more going on than just firewall rules. Packets won’t magically flow through your firewall just because it’s in the middle of your diagram!

The spoke or workload will also need to deploy:

  • A peering connection to the hub, enabling connectivity with the hub and the firewall. All traffic leaving the spoke will route through the firewall thanks to a user-defined route in the spoke subnet route table. Peering is a two-way connection. The workload will include some Bicep to deploy the spoke-hub and the hub-spoke connections.
  • A route for the GatewaySubnet route table in the hub. This is required to route traffic to the spoke address prefix(es) through the Azure Firewall so on-premises>spoke traffic is correctly inspected and filtered by the firewall.
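A rough sketch of those two pieces – the VNet/route table names, prefixes, and IPs are hypothetical, and in practice the hub-side resources would be deployed from modules targeting the hub resource group:

```bicep
param hubVnetId string // resource ID of the hub VNet

resource spokeVnet 'Microsoft.Network/virtualNetworks@2023-09-01' existing = {
  name: 'workload-a-vnet'
}

// Spoke-to-hub peering; a matching hub-to-spoke peering is also required.
resource spokeToHub 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-09-01' = {
  parent: spokeVnet
  name: 'spoke-to-hub'
  properties: {
    remoteVirtualNetwork: {
      id: hubVnetId
    }
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: true
    useRemoteGateways: true // use the hub's VPN/ExpressRoute gateway
  }
}

// In the hub: send on-premises>spoke traffic through the firewall.
resource gatewayRouteTable 'Microsoft.Network/routeTables@2023-09-01' existing = {
  name: 'gatewaysubnet-route-table'
}

resource spokeRoute 'Microsoft.Network/routeTables/routes@2023-09-01' = {
  parent: gatewayRouteTable
  name: 'workload-a-via-firewall'
  properties: {
    addressPrefix: '10.10.0.0/24' // the spoke's address prefix
    nextHopType: 'VirtualAppliance'
    nextHopIpAddress: '10.0.1.4'  // example firewall private IP
  }
}
```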

The IaC

In this section, I’ll explain the code layout and placement.

My Code

You can find my public repo, containing all the Bicep code here. Please feel free to download and use.

The Git Repo Design

You will have two Git repos:

  1. The first repo is for the hub. This repo will contain the code for the hub, including:
    • The hub VNet.
    • The Hub VNet Gateway.
    • The GatewaySubnet Route Table.
    • The Azure Firewall.
    • The Azure Firewall Policy that manages the Azure Firewall.
  2. The second repo is for the spoke. This skeleton example workload contains:

Action/Pipeline Permissions

I have written a more detailed update on this section, which can be found here.

Each Git repo needs to authenticate with Azure to deploy/modify resources. Each repo should have a service principal in Azure AD. That service principal will be used to authenticate the deployment, executed by a GitHub action or a DevOps pipeline. You should restrict the service principal to only the rights that it requires. I haven’t worked out the exact minimum permissions, but the high-level requirements are documented below:

 

Trunk Branch Protection & Pull Requests

Some of you might be worried now – what’s to stop a developer/operator working on Workload A from accidentally creating rules that affect Workload X?

This is exactly why you implement standard practices on the Git repos:

  • Protect the Trunk branch: This means that no one can just update the version of the code that is deployed to your firewall or hub. If you want to make an update, you have to create a branch of the trunk, make your edits in that branch, and submit the changes to be merged into the trunk as a pull request.
  • Enable pull request reviews: Select a panel of people that will review changes that are submitted as pull requests to the trunk. In our scenario, this should include the firewall admin(s), security admin(s), network admin(s), and maybe the platform & workload architects.

Now, I can only submit a suggested set of rules (and route/peering) changes that must be approved by the necessary people. I can still create my code without delay, but a change control and rollback process has taken control. Obviously, this means that there should be SLAs on the review/approval process and guidance on pull request, approval, and rejection actions.

And There You Have It

Now you have the design and the Bicep code to enable DevSecOps with Azure Firewall.

Azure App Service, Private Endpoint, and Application Gateway/WAF

In this post, I will share how to configure an Azure Web App (or App Service) with Private Endpoint, and securely share that HTTP/S service using the Azure Application Gateway, with the optional Web Application Firewall (WAF) feature. Whew! That’s lots of feature names!

Background

Azure Application (App) Services or Web Apps allow you to create and host a web site or web application in Azure without (directly) dealing with virtual machines. This platform service makes HTTP/S services easy. By default, App Services are shared behind public, load-balanced frontends with public IP addresses.

Earlier this year, Microsoft released Private Link, a service that enables an Azure platform resource (or service shared using a Standard Tier Load Balancer) to be connected to a virtual network subnet. The resource is referred to as the linked resource. The linked resource connects to the subnet using a Private Endpoint. There is a Private Endpoint resource and a special NIC; it’s this NIC that shares the resource with a private IP address, obtained from the address space of the subnet. You can then connect to the linked resource using the Private Endpoint IPv4 address. Note that the Private Endpoint can connect to many different “subresources” or services (referred to as serviceGroup in ARM) that the linked resource can offer. For example, a storage account has serviceGroups such as file, blob, and web.

Notes: Private Link is generally available. Private Endpoint for App Services is still in preview. App Services Premium V2 is required for Private Endpoint.

The Application Gateway allows you to share/load balance a HTTP/S service at the application layer with external (virtual network, WAN, Internet) clients. This reverse proxy also offers an optional Web Application Firewall (WAF), at extra cost, to protect the HTTP/S service with the OWASP rule set and bot protection. With the Standard Tier of DDoS protection enabled on the Application Gateway virtual network, the WAF extends this protection to Layer-7.

Design Goal

The goal of this design is to ensure that all HTTP/S (HTTPS in this example) traffic to the Web App must:

  • Go through the WAF.
  • Reverse proxy to the App Service via the Private Endpoint private IPv4 address only.

The design will result in:

  • Layer-4 protection by an NSG associated with the WAF subnet. NSG Traffic Analytics will send the data to Log Analytics (and optionally Azure Sentinel for SIEM) for logging, classification, and reporting.
  • Layer-7 protection by the WAF. If the Standard Tier of DDoS protection is enabled, then the protection will be at Layer-4 (Application Gateway Public IP Address) and Layer-7 (WAF). Logging data will be sent to Log Analytics (and optionally Azure Sentinel for SIEM) for logging and reporting.
  • Connections directly to the web app will fail with a “HTTP Error 403 – Forbidden” error.

Note: If you want to completely prevent TCP connections to the web app then you need to consider App Service Environment/Isolated Tier or a different Azure platform/IaaS solution.

Design

Here is the design – you will want to see the original image:

There are a number of elements to the design:

Private DNS Zone

You must be able to resolve the FQDNs of your services using the per-resource type domain names. App Services use a private DNS zone called privatelink.azurewebsites.net. There are hacks to get this to work. The best solution is to create a central Azure Private DNS Zone called privatelink.azurewebsites.net.

If you have DNS servers configured on your virtual network(s), associate the Private DNS Zone with your DNS servers’ virtual network(s). Create a conditional forwarder on the DNS servers to forward all requests to privatelink.azurewebsites.net to 168.63.129.16 (https://docs.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16). This will result in:

  1. A network client sending a DNS resolution request to your DNS servers for *.privatelink.azurewebsites.net.
  2. The DNS servers forwarding the requests for *.privatelink.azurewebsites.net to 168.63.129.16.
  3. The Azure Private DNS Zone will receive the forwarded request and respond to the DNS servers.
  4. The DNS servers will respond to the client with the answer.
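The central zone and its link to the DNS servers’ VNet could be sketched as follows – the VNet ID is a parameter and the link name is made up:

```bicep
param dnsVnetId string // VNet where your DNS servers live

// Central Private DNS Zone for App Service private endpoints.
resource webZone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
  name: 'privatelink.azurewebsites.net'
  location: 'global'
}

// Associate the zone with the DNS servers' VNet so that requests
// forwarded to 168.63.129.16 are answered from this zone.
resource zoneLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2020-06-01' = {
  parent: webZone
  name: 'dns-servers-vnet-link'
  location: 'global'
  properties: {
    registrationEnabled: false // records are managed explicitly
    virtualNetwork: {
      id: dnsVnetId
    }
  }
}
```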

App Service

As stated before the App Service must be hosted on a Premium v2 tier App Service Plan. In my example, the app is called myapp with a default URI of https://myapp.azurewebsites.net. A virtual network access rule is added to the App Service to permit access from the subnet of the Application Gateway. Don’t forget to figure out what to do with the SCM URI for DevOps/GitHub integration.

Private Endpoint

A Private Endpoint was added to the App Service. The service/subresource/serviceGroup is sites. Automatically, Microsoft will update their DNS to modify the name resolution of myapp.azurewebsites.net to resolve to myapp.privatelink.azurewebsites.net. In the above example, the NIC for the Private Endpoint gets an IP address of 10.0.64.68 from the AppSubnet that the App Service is now connected to.

Add an A record to the Private DNS Zone for the App Service, resolving to the IPv4 address of the Private Endpoint NIC. In my case, myapp.privatelink.azurewebsites.net will resolve to 10.0.64.68. This in turn means that myapp.azurewebsites.net > myapp.privatelink.azurewebsites.net > 10.0.64.68.
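In Bicep, that record might look like this – the zone is assumed to already exist, and the IP is the one from the example above:

```bicep
resource webZone 'Microsoft.Network/privateDnsZones@2020-06-01' existing = {
  name: 'privatelink.azurewebsites.net'
}

// myapp.privatelink.azurewebsites.net -> Private Endpoint NIC IP.
resource myappRecord 'Microsoft.Network/privateDnsZones/A@2020-06-01' = {
  parent: webZone
  name: 'myapp'
  properties: {
    ttl: 3600
    aRecords: [
      { ipv4Address: '10.0.64.68' }
    ]
  }
}
```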

Application Gateway/WAF

  1. Add a new Backend Pool with the IPv4 address of the Private Endpoint NIC, which is 10.0.64.68 in my example.
  2. Create a multisite HTTPS:443 listener for the required public URI, which will be myapp.joeelway.com in my example, adding the certificate, ideally from an Azure Key Vault. Use the public IP address (in my example) as the frontend.
  3. Set up a Custom Probe to test https://myapp.azurewebsites.net:443 (using the hostname option) with acceptable responses of 200-399.
  4. Create an HTTP Setting (the reverse proxy) to forward traffic to https://myapp.azurewebsites.net:443 (using the hostname option) using a well-known certificate (accepting the default cert of the App Service) for end-to-end encryption.
  5. Bind all of the above together with a routing rule.

Public DNS

Now you need to get traffic for https://myapp.joeelway.com to go to the (public, in my example) frontend IP address of the Application Gateway/WAF. There are lots of ways to do this, including Azure Front Door, Azure Traffic Manager, and third-party solutions. The easy way is to add an A record to your public DNS zone (joeelway.com, in my example) that resolves to the public IP address of the Application Gateway.

The Result

  1. A client browses https://myapp.joeelway.com.
  2. The client name resolution goes to public DNS which resolves myapp.joeelway.com to the public IP address of the Application Gateway.
  3. The client connects to the Application Gateway, requesting https://myapp.joeelway.com.
  4. The Listener on the Application Gateway receives the connection.
    • Any WAF functionality inspects and accepts/rejects the connection request.
  5. The Routing Rule in the Application Gateway associates the request to https://myapp.joeelway.com with the HTTP Setting and Custom Probe for https://myapp.azurewebsites.net.
  6. The Application Gateway routes the request for https://myapp.joeelway.com to https://myapp.azurewebsites.net at the IPv4 address of the Private Endpoint (documented in the Application Gateway Backend Pool).
  7. The App Service receives and accepts the request for https://myapp.azurewebsites.net and responds to the Application Gateway.
  8. The Application Gateway reverse-proxies the response to the client.

For Good Measure

If you really want to secure things:

  • Deploy the Application Gateway as WAFv2 and store SSL certs in a Key Vault with limited Access Policies
  • The NSG on the WAF subnet must be configured correctly and only permit the minimum traffic to the WAF.
  • All resources will send all logs to Log Analytics.
  • Azure Sentinel is associated with the Log Analytics workspace.
  • Azure Security Center Standard Tier is enabled on the subscription and the Log Analytics Workspace.
  • If you can justify the cost, DDoS Standard Tier is enabled on the virtual network with the public IP address(es).

And that’s just the beginning πŸ™‚