Azure Firewall DevSecOps in Azure DevOps

In this post, I will share the details for granting the least-privilege permissions to GitHub action/DevOps pipeline service principals for a DevSecOps continuous deployment of Azure Firewall.

Quick Refresh

I wrote about the design of the solution and shared the code in my post, Enabling DevSecOps with Azure Firewall. There I explained how you could break out the code for the rules of a workload and manage that code in the repo for the workload. Realistically, you would also need to break out the gateway subnet route table user-defined route (legacy VNet-based hub) and the VNet peering connection. All the code for this is shared on GitHub – I did update the repo with some structure and with working DevOps pipelines.

This Update

There were two things I wanted to add to the design:

  • Detailed permissions for the service principal used by the workload DevOps pipeline, limiting the scope of change that is possible in the hub.
  • DevOps pipelines so I could test the above.

The Code

You’ll find 3 folders in the Bicep code now:

  • hub: This deploys a (legacy) VNet-based hub with Azure Firewall.
  • customRoles: 4 Azure custom roles are defined. This should be deployed after the hub.
  • spoke1: This contains the code to deploy a skeleton VNet-based (spoke) workload with updates that are required in the hub to connect the VNet and route ingress on-prem traffic through the firewall.

DevOps Pipelines

The hub and spoke1 folders each contain a folder called .pipelines. There you will find a .yml file to create a DevOps pipeline.

The DevOps pipeline uses Azure CLI tasks to:

  • Select the correct Azure subscription & create the resource group
  • Deploy each .bicep file.

My design uses 1 subscription for the hub and 1 subscription for the workload. You are not glued to this, but you would need to modify how you configure the service principal permissions (below).

To use the code:

  1. Create a repo in DevOps for the hub (1 repo) and for spoke1 (1 repo), and copy in the required code.
  2. Create service principals in Azure AD.
  3. Grant the hub service principal owner rights to the hub subscription.
  4. Grant the spoke service principal owner rights to the spoke subscription.
  5. Create ARM service connections in DevOps settings that use the service principals. Note that the names for these service connections are referred to by azureServiceConnection in the pipeline files.
  6. Update the variables in the pipeline files with subscription IDs.
  7. Create the pipelines using the .yml files in the repos.

Don’t do anything just yet!

Service Principal Permissions

The hub service principal is simple – grant it owner rights to the hub subscription (or resource group).

The workload is where the magic happens with this DevSecOps design. The workload pipeline updates the hub using code, stored in the workload repo, that affects only that workload:

  • Ingress route from on-prem to the workload in the hub GatewaySubnet.
  • The firewall rules for the workload in the hub Azure Firewall (policy) using a rules collection group.
  • The VNet peering connection between the hub VNet and the workload VNet.

That could be deployed by the workload DevOps pipeline that is authenticated using the workload’s service principal. So that means the workload service principal must have rights over the hub.

The quick solution would be to grant contributor rights over the hub and say “we’ll manage what is done through code reviews”. However, a better practice is to limit what can be done as much as possible. That’s what I have done with the customRoles folder in my GitHub share.

Those custom roles should be modified to change the possible scope to the subscription ID (or even the resource group ID) of the hub deployment. There are 4 custom roles – a Bicep sketch of one follows the list:

  • customRole-ArmValidateActionOperator.json: Adds the CUSTOM – ARM Deployment Operator role, allowing the ARM deployment to be monitored and updated.
  • customRole-PeeringAdmin.json: Adds the CUSTOM – Virtual Network Peering Administrator role, allowing a VNet peering connection to be created from the hub VNet.
  • customRole-RoutesAdmin.json: Adds the CUSTOM – Azure Route Table Routes Administrator role, allowing a route to be added to the GatewaySubnet route table.
  • customRole-RuleCollectionGroupsAdmin.json: Adds the CUSTOM – Azure Firewall Policy Rule Collection Group Administrator role, allowing a rules collection group to be added to an Azure Firewall Policy.
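
To make this concrete, here is a minimal Bicep sketch of what one of these custom roles could look like – the routes administrator. The role name matches the list above, but the actions list and the API version are my assumptions for illustration, not a copy of the JSON files in the repo, so treat the repo as the source of truth.

targetScope = 'subscription'

// Hypothetical custom role: lets a principal manage routes inside an existing
// route table without being able to modify or delete the route table itself.
resource routesAdminRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
  name: guid(subscription().id, 'CUSTOM - Azure Route Table Routes Administrator')
  properties: {
    roleName: 'CUSTOM - Azure Route Table Routes Administrator'
    description: 'Can create, update, and delete routes in an existing route table.'
    type: 'CustomRole'
    permissions: [
      {
        actions: [
          'Microsoft.Network/routeTables/read'
          'Microsoft.Network/routeTables/routes/read'
          'Microsoft.Network/routeTables/routes/write'
          'Microsoft.Network/routeTables/routes/delete'
        ]
        notActions: []
      }
    ]
    // Lock the possible assignment scope to the hub subscription.
    assignableScopes: [
      subscription().id
    ]
  }
}

Deploying this at the scope of the hub subscription (for example, with az deployment sub create) keeps the assignable scope limited to the hub.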

Deploy The Hub

The hub is deployed first – it must exist before you can grant the permissions that the workload’s service principal requires.

Grant Rights To Workload Service Principals

The service principals for all workloads will be added to an Azure AD group (Workloads Pipeline Service Principals in the above diagram). That group is nested into 4 other AAD security groups – a role assignment sketch follows the list:

  • Resource Group ARM Operations: This is granted the CUSTOM – ARM Deployment Operator role on the hub resource group.
  • Hub Firewall Policy: This is granted the CUSTOM – Azure Firewall Policy Rule Collection Group Administrator role on the Azure Firewall Policy that is associated with the hub Azure Firewall.
  • Hub Routes: This is granted the CUSTOM – Azure Route Table Routes Administrator role on the GatewaySubnet route table.
  • Hub Peering: This is granted the CUSTOM – Virtual Network Peering Administrator role on the hub virtual network.
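
As promised above, here is a hedged Bicep sketch of one of those four assignments – granting the routes role to the Hub Routes group on the GatewaySubnet route table. The route table name, the group object ID, and the role definition ID are placeholders for illustration.

param hubRoutesGroupObjectId string       // object ID of the 'Hub Routes' AAD group
param routesAdminRoleDefinitionId string  // resource ID of the custom routes role

// Existing GatewaySubnet route table in the hub (name is an assumption)
resource gwRouteTable 'Microsoft.Network/routeTables@2021-08-01' existing = {
  name: 'rt-gatewaysubnet'
}

// Scope the assignment to the route table so the group can touch nothing else
resource routesAdminAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(gwRouteTable.id, hubRoutesGroupObjectId, routesAdminRoleDefinitionId)
  scope: gwRouteTable
  properties: {
    principalId: hubRoutesGroupObjectId
    roleDefinitionId: routesAdminRoleDefinitionId
    principalType: 'Group'
  }
}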

Deploy The Workload

The workload’s service principal now has the required permissions to deploy the workload and to make the modifications in the hub that connect the workload to the outside world.

Enabling DevSecOps with Azure Firewall

In this post, I will share how you can implement DevSecOps with Azure Firewall, with links to a bunch of working Bicep files to deploy the infrastructure-as-code (IaC) templates.

This example uses a “legacy” hub and spoke – one where the hub is VNet-based and not based on Azure Virtual WAN Hub. I’ll try to find some time to work on the code for that one.

The Concept

Hold on, because there’s a bunch of things to understand!

DevSecOps

The DevSecOps methodology is more than just IaC. It’s a combination of people, processes, and technology to enable a fail-fast agile delivery of workloads/applications to the business. I discussed here how DevSecOps can be used to remove the friction of IT to deliver on the promises of the Cloud.

The Azure features that this design is based on are discussed in concept here. The idea is that we want to enable Devs/Ops/Security to manage firewall rules in the workload’s Git repository (repo). This breaks the traditional model where the rules are located in a central location. The important thing is not the location of the rules, but the processes that manage the rules (change control through Git repo pull request reviews) and who manages them (the reviewers, including the architects, firewall admins, security admins, etc.).

So what we are doing is taking the firewall rules for the workload and placing them in with the workload’s code. NSG rules are probably already there. Now, we’re putting the Azure Firewall rules for the workload in the workload repo too. This is all made possible thanks to changes that were made to Azure Firewall Policy (Azure Firewall Manager) Rules Collection Groups – I use one Rules Collection Group for each workload and all the rules that enable that workload are placed in that Rules Collection Group. No changes will make it to the trunk branch (deployment action/pipelines look for changes here to trigger a deployment) without approval by all the necessary parties – this means that the firewall admins are still in control, but they don’t necessarily need to write the rules themselves … and the devs/operators might even write the rules, subject to review!

This is the killer reason to choose Azure Firewall over NVAs – the ability to not only deploy the firewall resource, but to manage the entire configuration and rule sets as code, and to break that all out in a controlled way to make the enterprise more agile.

Other Bits

If you’ve read my posts on Azure routing (How to Troubleshoot Azure Routing? and BGP with Microsoft Azure Virtual Networks & Firewalls) then you’ll understand that there’s more going on than just firewall rules. Packets won’t magically flow through your firewall just because it’s in the middle of your diagram!

The spoke or workload will also need to deploy the following – a Bicep sketch of the hub-side changes follows this list:

  • A peering connection to the hub, enabling connectivity with the hub and the firewall. All traffic leaving the spoke will route through the firewall thanks to a user-defined route in the spoke subnet route table. Peering is a two-way connection. The workload will include some bicep to deploy the spoke-hub and the hub-spoke connections.
  • A route for the GatewaySubnet route table in the hub. This is required to route traffic to the spoke address prefix(es) through the Azure Firewall so on-premises>spoke traffic is correctly inspected and filtered by the firewall.
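
Here is the promised Bicep sketch of those hub-side changes. The names, address prefixes, and firewall IP address are placeholders; the real repo parameterises these values.

param spokeAddressPrefix string = '10.1.2.0/24'   // spoke CIDR (assumption)
param firewallPrivateIp string = '10.0.0.4'       // hub firewall private IP (assumption)
param spokeVnetId string                          // resource ID of the spoke VNet

// Existing hub resources (names are assumptions)
resource hubVnet 'Microsoft.Network/virtualNetworks@2021-08-01' existing = {
  name: 'vnet-hub'
}
resource gwRouteTable 'Microsoft.Network/routeTables@2021-08-01' existing = {
  name: 'rt-gatewaysubnet'
}

// Route on-premises>spoke traffic through the hub firewall
resource spokeRoute 'Microsoft.Network/routeTables/routes@2021-08-01' = {
  parent: gwRouteTable
  name: 'udr-to-spoke1'
  properties: {
    addressPrefix: spokeAddressPrefix
    nextHopType: 'VirtualAppliance'
    nextHopIpAddress: firewallPrivateIp
  }
}

// Hub side of the peering; the spoke side is a similar resource in the spoke VNet
resource hubToSpokePeering 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2021-08-01' = {
  parent: hubVnet
  name: 'peer-hub-to-spoke1'
  properties: {
    remoteVirtualNetwork: {
      id: spokeVnetId
    }
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: true
    allowGatewayTransit: true   // the hub shares its VPN/ExpressRoute gateway
    useRemoteGateways: false
  }
}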

The IaC

In this section, I’ll explain the code layout and placement.

My Code

You can find my public repo, containing all the Bicep code here. Please feel free to download and use.

The Git Repo Design

You will have two Git repos:

  1. The first repo is for the hub. This repo will contain the code for the hub, including:
    • The hub VNet.
    • The Hub VNet Gateway.
    • The GatewaySubnet Route Table.
    • The Azure Firewall.
    • The Azure Firewall Policy that manages the Azure Firewall.
  2. The second repo is for the spoke. This skeleton example workload contains:
    • The spoke VNet.
    • The VNet peering connections between the hub and the spoke.
    • The route for the hub GatewaySubnet Route Table.
    • The Rules Collection Group for the workload in the hub Azure Firewall Policy.

Action/Pipeline Permissions

I have written a more detailed update on this section, which can be found here.

Each Git repo needs to authenticate with Azure to deploy/modify resources. Each repo should have a service principal in Azure AD. That service principal will be used to authenticate the deployment, executed by a GitHub action or a DevOps pipeline. You should restrict the rights that the service principal requires. I haven’t worked out the exact minimum permissions, but the high-level requirements are documented in the more detailed update linked above.

Trunk Branch Protection & Pull Request

Some of you might be worried now – what’s to stop a developer/operator working on Workload A from accidentally creating rules that affect Workload X?

This is exactly why you implement standard practices on the Git repos:

  • Protect the Trunk branch: This means that no one can just update the version of the code that is deployed to your firewall or hub. If you want to create an update, you have to create a branch of the trunk, make your edits in that branch, and submit the changes to be merged back into the trunk as a pull request.
  • Enable pull request reviews: Select a panel of people that will review changes that are submitted as pull requests to the trunk. In our scenario, this should include the firewall admin(s), security admin(s), network admin(s), and maybe the platform & workload architects.

Now, I can only submit a suggested set of rules (and route/peering) changes that must be approved by the necessary people. I can still create my code without delay, but a change control and rollback process has taken control. Obviously, this means that there should be SLAs on the review/approval process and guidance on pull request, approval, and rejection actions.

And There You Have It

Now you have the design and the Bicep code to enable DevSecOps with Azure Firewall.

Testing Azure Firewall IDPS

In this post, I will show you how to test IDPS in Azure Firewall Premium, including test exploits and how to search the logs for alerts.

Azure Firewall Setup

You are going to need a few things:

  • Ideally a hub and spoke deployment of some kind, with a virtual machine in each of two different spokes. My lab is Azure Virtual WAN, using a VNet as the “compromised on-premises” and a second VNet as the target.
  • Azure Firewall Premium SKU with logging enabled to a Log Analytics Workspace.
  • Azure Firewall Policy Premium SKU, with IDPS enabled for Alert & Deny.

Make sure that you have firewall rules and NSG rules open to allow your “attacks” – the point of IDPS is to detect and stop malicious traffic that uses legitimate protocols/ports.

Compromised On-Premises Machine

One can use Kali Linux from the Azure Marketplace but I prefer to work in Windows. So I deployed a Windows Server VM and downloaded/deployed Metasploit Opensource, which is installed into C:\metasploit-framework.

The console that you’ll use to run the commands is C:\metasploit-framework\bin\msfconsole.bat.

If you want to try something simpler, then all you will need is the normal Windows Command Prompt.

The Exploit Test

If you are using Metasploit, in the console, run the following to search for “coldfusion” tests:

search coldfusion

Select a test:

use auxiliary/scanner/http/coldfusion_locale_traversal

Set the RHOST (remote host to target) option:

set RHOST <IP address to target>

Verify that all required options are set:

show options

Execute the test:

run

Otherwise, you can run the following CURL command in Windows Command Prompt for a simpler test to do a web request to your target IP using the well-known Blacksun user agent:

curl -A "BlackSun" <IP address to target>

Check Your Logs

It can take a little time for data to appear in your logs. Give it a few minutes and then run this query in Log Analytics:

AzureDiagnostics
| where ResourceType == "AZUREFIREWALLS"
| where OperationName == "AzureFirewallIDSLog"
| parse msg_s with Protocol " request from" SourceIP ":" SourcePort " to " TargetIP ":" TargetPort ". Action:" Action ". Signature: " Signature ". IDS:" Reason
| project TimeGenerated, Protocol, SourceIP, SourcePort, TargetIP, TargetPort, Action, Signature, Reason
| sort by TimeGenerated

That should highlight anything that IDPS alerted on & denied – and can also be useful for creating incidents in Azure Sentinel.

Service Definitions in Azure Firewall

In this post, I will discuss how you can use Rules Collection Groups in Azure Firewall to aggregate your Rules Collections and Rules to be aligned with a service definition or workload definition.

Workload and Service Definitions

In an organised environment, every workload (or service) has a definition. That definition describes all of the components that make up and facilitate the workload. That includes your firewall rules.

So it would make sense if you had a way of grouping firewall rules together if those rules are used to make a workload possible. For example, if I was running an Azure Virtual Desktop pool, I might treat that pool as a workload, document it as a workload, and want all of the rules that make that pool possible to be grouped together and managed as a unit.

Challenges

The challenges I have faced with Azure Firewall and aligning rules along the model of a workload definition have been:

Realisation & Conceptualisation

I’ve always used some kind of workload-based approach but my approach wasn’t perfect. I decided to try to align rules with NSGs, placing inbound rules with the workload that was the destination. But then some workloads, such as Windows Admin Center or ADDS, require more reach, and then things get messy. And what if you have a dozen or more workloads that are atomic units but are also extremely integrated? Where do the rules go?

I’ve come to realise that rules should go with the service that they empower, regardless of destination. What made me think like that? Documentation of workloads for a forced-ad-hoc migration project that I’ve been working on for the last year. We didn’t get the chance to assess and plan in detail, so everything was an on-the-fly discovery. Our method of “place the rule with the target workload” has created very complicated ARM templates for Azure Firewall; defining a workload from a firewall perspective is very hard with that approach.

If we said that “all network rules that empower a workload go with the workload’s network Rules Collection” then things would get a bit better – a bit.

Rules Collection Groups Limitations

It is a year since Firewall Policy became generally available. Just like last year, I had some hours to experiment on Azure Firewall. I tried out the new tier, Rules Collection Groups. A reminder:

  1. Rules > Rules Collections (typed, based on DNAT, Network, or Application)
  2. Rules Collections > Rules Collection Groups

But at the time, the new tech was immature. Mixing Rules Collection types between Rules Collection Groups was a disaster. The advice I got from the product group was “don’t do it, it’s not ready, stick with the default groups for now”.

So I did just that. That means that the DNAT Rules Collection, the Network Rules Collection, and the Application Rules Collection for any workload were split into 3 deployments, the default Rules Collection Groups for:

  • DNAT
  • Network
  • Application

Each of those deployments could have dozens or hundreds of rules collections for each workload. And when you combine that with my previous approach to rules placement:

  1. It’s a mess
  2. Deploying a rule change required re-deploying all Rules Collections of a type for all workloads using that type of Rules Collection.

A Better Approach

I have had some time to play and things are better.

Rules Alignment With Workload

I’ve discussed this already – place rules that empower a workload with the workload, not the destination workload.

If Workload X requires TCP 445 access to Workload Y, then I will create a Network Rule to Workload Y on TCP 445 in a Network Rules Collection for Workload X. The result is that all rules that make Workload X function will be in rules collections for Workload X. That makes documentation easier and makes the next step work.

Rules Collection Groups For Workload

This is the big change in Azure from this time last year. I can now create lots of Rules Collection Groups, each with a priority (for processing order). I will create 1 Rules Collection Group per workload. Workload X will get 1 Rules Collection Group.

All rules that make Workload X work go into the Rules Collection Group for Workload X. I might have, depending on rules requirements, up to 6 Rules Collections in that group:

  • DNAT-Allow-WorkloadX
  • Network-Allow-WorkloadX
  • Application-Allow-WorkloadX
  • DNAT-Deny-WorkloadX
  • Network-Deny-WorkloadX
  • Application-Deny-WorkloadX

The Rules Collection Group is its own Deployment from an ARM perspective. If I’m managing the firewall as code (I do) then I can have 1 template (or parameters file) that defines the Rules Collection Group (and contained Rules Collections and Rules) for the entire workload and just the workload. Each workload will have its own template or parameter file. A change to a workload definition will affect 1 file only, and require 1 deployment only.
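
For illustration, a minimal Bicep sketch of such a per-workload Rules Collection Group is below. The policy name, priorities, addresses, and the single rule are invented placeholders – the real, fuller templates are in my GitHub repo.

// Existing Azure Firewall Policy (name is an assumption)
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2021-08-01' existing = {
  name: 'afwp-hub'
}

// One Rules Collection Group per workload – its own deployment
resource workloadXRcg 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2021-08-01' = {
  parent: firewallPolicy
  name: 'rcg-workloadx'
  properties: {
    priority: 10100
    ruleCollections: [
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        name: 'Network-Allow-WorkloadX'
        priority: 100
        action: {
          type: 'Allow'
        }
        rules: [
          {
            ruleType: 'NetworkRule'
            name: 'WorkloadX-to-WorkloadY-SMB'
            ipProtocols: [ 'TCP' ]
            sourceAddresses: [ '10.1.2.0/24' ]
            destinationAddresses: [ '10.1.3.0/24' ]
            destinationPorts: [ '445' ]
          }
        ]
      }
    ]
  }
}

A change to Workload X means editing this one file and redeploying this one Rules Collection Group, leaving every other workload’s rules untouched.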

If you want to see an ARM template for deploying one of the workloads in my screenshot, then head on over to my GitHub.

Wrap Up

This approach should leave the firewall much better organised, easier to manage in smaller chunks if using infrastructure-as-code, easier to document, and more suitable for organisations that like to create/maintain service definitions.

Defending Against Supply Chain Attacks

In this post, I will discuss the concepts of supply chain attacks and some thoughts around defending against them.

What Is A Supply Chain Attack?

The recent ransomware attack through Kaseya made the news but the concept of a supply chain attack isn’t new at all. Without doing any research I can think of two other examples:

  • SolarWinds: In December 2020, attackers used compromised code in SolarWinds monitoring solutions to compromise customers of SolarWinds.
  • RSA: In 2011, the Chinese PLA (or hackers sponsored by them) compromised RSA and used that access to attack customers of RSA.

What is a supply chain attack? It’s pretty hard to break into a network, especially one that has hardened itself. Users can be educated – ok, some will never be educated! Networks can be hardened and micro-segmented. Identity protections such as MFA and threat detection can be put in place.

But there remains a weakness – or several of them. There’s always a way into a network – the third party! Even the most secure network deployments require some kind of monitoring system – something where a piece of software is deployed onto “every VM”.  Or there’s some software vendor that’s deep into your network that has openings all over the place. Those are your threats. If an attacker compromises the software from one of those vendors then they will get into your network during your next update and they will use the existing firewall holes & permissions that are required by the software to probe, spread, and attack.

Protection

You still need to have your first lines of defense, ideally using tools that are designed for protection against advanced persistent threats – not your regular AV package, dummy:

  1. Identity
  2. Email
  3. Firewall
  4. Backup with isolated offline storage protected by MFA

That’s a start, but a supply chain attack bypasses all of that by using existing channels to enter your network as if it came from a trusted source – because the attack is embedded in the code of a trusted source.

Micro-Segmentation

The first step should be micro-segmentation (AKA multi-segmentation). No two nodes on your network should be able to communicate unless:

  1. They have to
  2. They are restricted to the required directions, protocols, and ports.
  3. That traffic passes through a firewall – and ideally several firewalls.

In Microsoft Azure, that means using:

  • A central firewall, in the form of a network firewall and/or web application firewall (Azure or NVA). This firewall controls connections between the outside world and your workloads, between your workloads, and importantly from your workloads to the outside world (prevents malware from talking to its human controller).
  • Network Security Groups at the subnet level that protect the subnet and even isolate nodes inside the subnet (use a custom Deny All rule, sketched below, because the default Deny All rule is useless when you understand the logic of how it works).
  • Resource firewalls – that’s the guest OS firewall and Azure resource firewalls.
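
As an example of that custom deny rule, here is a hedged Bicep sketch of an NSG that allows one specific flow and then explicitly denies everything else at a low priority. The names, prefixes, and port are placeholders.

resource workloadNsg 'Microsoft.Network/networkSecurityGroups@2021-08-01' = {
  name: 'nsg-workloadx'
  location: resourceGroup().location
  properties: {
    securityRules: [
      {
        name: 'Allow-Web-From-Firewall'
        properties: {
          priority: 100
          direction: 'Inbound'
          access: 'Allow'
          protocol: 'Tcp'
          sourceAddressPrefix: '10.0.0.4'          // hub firewall private IP (assumption)
          sourcePortRange: '*'
          destinationAddressPrefix: '10.1.2.0/24'  // workload subnet (assumption)
          destinationPortRange: '443'
        }
      }
      {
        // Explicit deny – the default DenyAllInBound rule sits below the very
        // permissive AllowVnetInBound default rule, so add your own deny.
        name: 'Deny-All-Inbound'
        properties: {
          priority: 4000
          direction: 'Inbound'
          access: 'Deny'
          protocol: '*'
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '*'
        }
      }
    ]
  }
}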

If you have a Windows ADDS domain, use Group Policy to force the use of Windows Firewall – lazy admins and those very same vendors that will be the channel of attack will be the first to attempt to disable the firewall on machines that they are working on.

For Azure resources, consider the use of Azure Policy to force/audit the use of the firewalls in your resources and a default route to 0.0.0.0/0 via your central firewall.

An infrastructure-as-code approach to the central firewall (Azure Firewall) and NSGs brings documentation, change control, and rollback to network security.

Security Monitoring

This is where most organisations fail, and even where IT security officers really don’t get it.

Syslog is not security monitoring. Your AV is not security monitoring. You need something bigger, something that is automated and can filter through the noise – I regularly use the term “be your Neo to read the Matrix”. That’s because even in a small network, there is a lot of noise. Something needs to filter through that noise and identify the threats.

For example, there are a lot of TCP 445 connection attempts coming from one IP address. Or there are lots of failed attempts to sign in as a user from one IP address. Or there are lots of failed connections logged by NSG rules. Or even better – all of the above. These are the sorts of things that malware attempting to spread will do. This is the sort of work that Azure Sentinel is perfect for – Sentinel connects to many data sources and pulls that data to a central place where complex queries can be run to look for threats in a way that no human could manage. Threats can create incidents, incidents can trigger automated flows to eliminate the noise, and the remaining incidents can create alerts that humans will act upon.

But some malware is clever and not so noisy. The malware that hit the HSE (the Irish national health service) uses a lot of manual control to quietly spread over a very long time. Restricting outbound access to the Internet to just the required connections for business needs will cripple this control mechanism. But there’s still an automated element to this malware.

Other things to implement in Azure will include:

  • IDPS: An intrusion detection & prevention system in the firewall, for example Azure Firewall Premium. When known malware/attack flows pass through the firewall, the firewall can log an alert or alert & deny the flows.
  • Security Center: Enabling Security Center “Azure Defender” (the tier previously known as the Azure Security Center Standard) provides you with oodles of new features, including some endpoint protections that are very confusingly packaged and licensed by Microsoft.

Managed Services Providers

MSPs are a part of the supply chain for their customers. MSP staff typically have credentials that allow them into many customer networks/services. That makes the identities of those staff very valuable.

A managed service provider should be a leader in identity security process, tooling, and governance. In the Microsoft world, that means using Azure AD Premium with MFA enabled for all staff. In the Azure world, Lighthouse should be used to gain access to customers’ cloud implementations. And that access should be zero-trust, powered by Privileged Identity Management (PIM).

Oh Cr@p!

These attackers are not script kiddies. They are professional organisations with big budgets, very skilled programmers and operators, and a lot of time and will. They know that with some persistent effort targeting a vendor, they can enter a lot of networks with ease. Hitting a systems management company, or more scarily, a security vendor, reaps BIG rewards because we invest in these products to secure our entire networks. The other big worry is those vendors that are deeply embedded with certain verticals such as finance or government. Imagine a vendor that is in every branch of a national government – one successful attack could bring down that entire government after  a wave of upgrades! Or hitting a well known payment vendor could open up every bank in the EU.

Enable FQDN-Based Network Rules In Azure Firewall

In this post, I will discuss how the DNS features, DNS Servers and DNS Proxy, can be used to enable FQDN-based rules in the Azure Firewall.

Can’t Azure Firewall Already Do FQDN-based Rules?

Yes – and no. One of the rules types is Application Rules, which control outbound access to services based on an FQDN (a DNS name) for HTTP/S and SQL (including SQL Server, Azure SQL, etc.) services. But this feature is not much use if:

  • You have some service in one of your VNets that needs to make an outbound connection on TCP 25 to something like smtp.office365.com.
  • You need to allow an inbound connection to an FQDN, such as a platform resource that is network-connected using Private Endpoint.

FQDN-Based Network Rules

Network rules allow you to control flows in/out from source to destinations on a particular protocol and source/destination port. Originally this was, and out of the box this is, done using IPv4 addresses/CIDRs. But what if I need to have some network-connected service reach out to smtp.office365.com to send an email? What IP address is that? Well, it’s lots of addresses:

nslookup smtp.office365.com

Non-authoritative answer:
Name: DUB-efz.ms-acdc.office.com
Addresses: 2603:1026:c02:301e::2
2603:1026:c02:2860::2
2603:1026:6:29::2
2603:1026:c02:4010::2
52.97.183.130
40.101.72.162
40.101.72.242
40.101.72.130
Aliases: smtp.office365.com
outlook.office365.com
outlook.ha.office365.com
outlook.ms-acdc.office.com

And that list of addresses probably changes all of the time – so do you want to manage that in your firewall rules and in the code/configuration of your application? It would be better to use the abstraction provided by the FQDN.

Network Rules allow you to do this now, but you must first enable DNS in the firewall.

Azure Firewall DNS

With this feature enabled, the Azure Firewall can support FQDNs in the Network Rules, opening up the possibility of using any of the supported protocol/port combinations, expanding your name-based rules beyond just HTTP/S and SQL.

By default, the Azure Firewall will use Azure DNS. That’s “OK” for traffic that will only ever be outbound and simple. But life is not normally that simple unless you host a relatively simple point solution behind your firewall. In reality:

  • You might want to let on-premises/remote locations connect to Private Endpoint-enabled PaaS services via site-to-site networking.
  • You might hit an interesting issue, which I will explain in a moment.

Before I move on, for the Private Endpoint scenario:

  1. Configure DNS servers (VMs) in your VNet configuration.
  2. Configure conditional forwarders for each Private Endpoint DNS Zone to forward to Azure Private DNS via 168.63.129.16, the virtual public IP address that is used to facilitate a communication channel to Azure platform resources.
  3. Set the Azure Firewall DNS Server settings to point at these DNS servers.
  4. Route traffic from your site-to-site gateway(s) to the firewall.

Split DNS Results

If two different machines attempt to resolve smtp.office365.com they will end up with different IP addresses – as you can see in the below diagram.

The result is that the client attempts to connect to smtp.office365.com on IP address A, and the firewall is permitting access on IP address B, so the connection attempt is denied.

DNS Proxy

To overcome this split DNS result (the Firewall and client getting two different resolved IP addresses for the FQDN) we can use DNS Proxy.

The implementation is actually pretty simple:

  1. Your firewall is set up to use your preferred DNS server(s).
  2. You enable DNS Proxy (a simple on/off setting).
  3. You configure your VNet/resources to use the Azure Firewall as their DNS server.

What happens now? Every time your resource attempts to resolve a DNS FQDN, it will send the request to the Azure Firewall. The Azure Firewall proxies/relays the request to your DNS server(s) and waits for the result which is cached. Now when your resource attempts to reach smtp.office365.com, it will use the IP address that the firewall has already cached. Simples!
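
To tie the DNS settings and an FQDN-based Network Rule together, here is a hedged Bicep sketch against an Azure Firewall Policy. The DNS server addresses, names, priorities, and the rule values are placeholders, and the API version is simply a recent one – check the documentation for the current version.

resource firewallPolicy 'Microsoft.Network/firewallPolicies@2021-08-01' = {
  name: 'afwp-hub'
  location: resourceGroup().location
  properties: {
    dnsSettings: {
      servers: [
        '10.0.1.4'        // your DNS server(s) in the VNet (assumption)
        '10.0.1.5'
      ]
      enableProxy: true   // clients then use the firewall as their DNS server
    }
  }
}

// A Network Rule that targets an FQDN instead of IP addresses
resource smtpRcg 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2021-08-01' = {
  parent: firewallPolicy
  name: 'rcg-smtp-example'
  properties: {
    priority: 20000
    ruleCollections: [
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        name: 'Network-Allow-Smtp'
        priority: 100
        action: { type: 'Allow' }
        rules: [
          {
            ruleType: 'NetworkRule'
            name: 'Allow-SMTP-to-Office365'
            ipProtocols: [ 'TCP' ]
            sourceAddresses: [ '10.1.2.0/24' ]
            destinationFqdns: [ 'smtp.office365.com' ]
            destinationPorts: [ '25' ]
          }
        ]
      }
    ]
  }
}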

And yes, this works perfectly well with Active Directory Domain Controllers running as DNS servers in a VNet or spoke VNet as long as your NSG rules allow the firewall to be a DNS client.

Azure Virtual WAN ARM – The Resources

In this post, I will explain the types of resources used in Azure Virtual WAN and the nature of their relationships.

Note, I have not included any content on the recently announced preview of third-party NVAs. I have not seen any materials on this yet to base such a post on and, being honest, I don’t have any use-cases for third-party NVAs.

As you can see – there are quite a few resources involved … and some that you won’t see listed at all because of the “appliance-like” nature of the deployment. I have not included any detail on spokes or “branch offices”, which would require further resources. The below diagram is enough to get a hub operational and connected to on-premises locations and spoke virtual networks.

The Virtual WAN – Microsoft.Network/virtualWans

You need at least one Virtual WAN to be deployed. This is what the hub will connect to, and you can connect many hubs to a common Virtual WAN to get automated any-to-any connectivity across the Microsoft physical WAN.

Surprisingly, the resource is deployed to an Azure region and not as a global resource, unlike other global resources such as Traffic Manager or Azure DNS.

The Virtual Hub – Microsoft.Network/virtualHubs

Also known as the hub, the Virtual Hub is deployed once, and once only, per Azure region where you need a hub. This hub replaces the old hub virtual network (plus gateway(s), plus firewall, plus route tables) deployment you might be used to. The hub is deployed as a hidden resource, managed through the Virtual WAN in the Azure Portal or via scripting/ARM.

The hub is associated with the Virtual WAN through a virtualWAN property that references the resource ID of the virtualWans resource.
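
A hedged Bicep sketch of that relationship is below; the names, region, address prefix, and API version are placeholders of my own, not values from any particular deployment.

resource wan 'Microsoft.Network/virtualWans@2021-08-01' = {
  name: 'wan-contoso'
  location: 'westeurope'
  properties: {
    type: 'Standard'
  }
}

resource hub 'Microsoft.Network/virtualHubs@2021-08-01' = {
  name: 'hub-westeurope'
  location: 'westeurope'
  properties: {
    addressPrefix: '10.100.0.0/23'   // the hub's own address space
    virtualWan: {
      id: wan.id                     // links the hub to the Virtual WAN
    }
  }
}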

In a previous post, I referred to a chicken & egg scenario with the virtualHubs resource. The hub has properties that point to the resource IDs of each deployed gateway:

  • vpnGateway: For site-to-site VPN.
  • expressRouteGateway: For ExpressRoute circuit connectivity.
  • p2sVpnGateway: For end-user/device tunnels.

If you choose to deploy a “Secured Virtual Hub” there will also be a property called azureFirewall that will point to the resource ID of an Azure Firewall with the AZFW_Hub SKU.

Note, the restriction of 1 hub per Azure region does introduce a bottleneck. Under the covers of the platform, there is actually a virtual network. The only clue to this network will be in the peering properties of your spoke virtual networks. A single virtual network can have, today, a maximum of 500 spokes. So that means you will have a maximum of 500 spokes per Azure region.

Routing Tables – Microsoft.Network/virtualHubs/hubRouteTables & Microsoft.Network/virtualHubs/routeTables

These resources are used in custom routing, a recently announced GA feature that won’t be live until August 3rd, according to the Azure Portal. These resources control the flow of traffic in your hub and spoke architecture. They are child resources of the virtualHubs resource, so no references to hub resource IDs are required.

Azure Firewall – Microsoft.Network/azureFirewalls

This is an optional resource that is deployed when you want a “Secured Virtual Hub”. Today, this is the only way to put a firewall into the hub, although a new preview program should make it possible for third-parties to join the hub. Alternatively, you can use custom routing to force north-south and east-west traffic through an NVA that is running in a spoke, although that will double peering costs.

The Azure Firewall is deployed with the AZFW_Hub SKU. The firewall is not a hidden resource. To manage the firewall, you must use an Azure Firewall Policy (aka Azure Firewall Manager). The firewall has a property called firewallPolicy that points to the resource ID of a firewallPolicies resource.

Azure Firewall Policy – Microsoft.Network/firewallPolicies

This is a resource that allows you to manage an Azure Firewall, in this case, an AZFW_Hub SKU of Azure Firewall. Although not shown here, you can deploy a parent/child configuration of policies to manage firewall configurations and rules in a global/local way.

VPN Gateway – Microsoft.Network/vpnGateways

This is one of 3 ways (one, two or all three at once) that you can connect on-premises (branch) sites to the hub and your Azure deployment(s). This gateway provides you with site-to-site connectivity using VPN. The VPN Gateway uses a property called virtualHub to point at the resource ID of the associated hub or virtualHubs resource. This is a hidden resource.

Note that the virtualHubs resource must also point at the resource ID of the VPN Gateway using a property called vpnGateway.

ExpressRoute Gateway – Microsoft.Network/expressRouteGateways

This is one of 3 ways (one, two or all three at once) that you can connect on-premises (branch) sites to the hub and your Azure deployment(s). This gateway provides you with site-to-site connectivity using ExpressRoute. The ExpressRoute Gateway uses a property called virtualHub to point at the resource ID of the associated hub or virtualHubs resource. This is a hidden resource.

Note that the virtualHubs resource must also point at the resource ID of the ExpressRoute Gateway using a property called expressRouteGateway.

Point-to-Site Gateway – Microsoft.Network/p2sVpnGateways

This is one of 3 ways (one, two or all three at once) that you can connect on-premises (branch) sites to the hub and your Azure deployment(s). This gateway provides users/devices with connectivity using VPN tunnels. The Point-to-Site Gateway uses a property called virtualHub to point at the resource ID of the associated hub or virtualHubs resource. This is a hidden resource.

The Point-to-Site Gateway inherits a VPN configuration from a VPN configuration resource based on Microsoft.Network/vpnServerConfigurations, referring to the configuration resource by its resource ID using a property called vpnServerConfiguration.

Note that the virtualHubs resource must also point at the resource ID of the Point-to-Site Gateway using a property called p2sVpnGateway.

VPN Server Configuration – Microsoft.Network/vpnServerConfigurations

This configuration for Point-to-Site VPN gateways can be seen in the Azure WAN and is intended as a shared configuration that is reusable with more than one Point-to-Site VPN Gateway. To be honest, I can see myself using it as a per-region configuration because of some values like DNS servers and RADIUS servers that will probably be placed per-region for performance and resilience reasons. This is a hidden resource.

The following resources were added on 22nd July 2020:

VPN Sites – Microsoft.Network/vpnSites

This resource has a similar purpose to a Local Network Gateway for site-to-site VPN connections; it describes the on-premises location, AKA “branch office”.  A VPN site can be associated with one or many hubs, so it is actually connected to the Virtual WAN resource ID using a property called virtualWan. This is a hidden resource.

An array property called vpnSiteLinks describes possible connections to on-premises firewall devices.

VPN Connections – Microsoft.Network/vpnGateways/vpnConnections

A VPN Connections resource associates a VPN Gateway with the on-premises location that is described by an associated VPN Site. The vpnConnections resource is a child resource of vpnGateways, so there is no standalone resource; the vpnConnections resource takes its name from the parent VPN Gateway, and its resource ID is an extension of the parent VPN Gateway’s resource ID.

By necessity, there is some complexity with this resource type. The remoteVpnSite property links the vpnConnections resource with the resource ID of a VPN Site resource. An array property, called vpnSiteLinkConnections, is used to connect the gateway to the on-premises location using 1 or 2 connections, each linking from vpnSiteLinkConnections to the resource/property ID of 1 or 2 vpnSiteLinks properties in the VPN Site. With one site link connection, you have a single VPN tunnel to the on-premises location. With 2 link connections, the VPN Gateway will take advantage of its active/active configuration to set up resilient tunnels to the on-premises location.

Virtual Network Connections – Microsoft.Network/virtualHubs/hubVirtualNetworkConnections

The purpose of a hub is to share resources with spoke virtual networks. In the case of the Virtual Hub, those resources are gateways, and maybe a firewall in the case of Secured Virtual Hub. As with a normal VNet-based hub & spoke, VNet peering is used. However, the way that VNet peering is used changes with the Virtual Hub; the deployment is done using the hubVirtualNetworkConnections child resource, whose parent is the Virtual Hub. Therefore, the name and resource ID are based on the name and resource ID of the Virtual Hub resource.

The deployment is rather simple; you create a Virtual Network Connection in the hub specifying the resource ID of the spoke virtual network, using a property called remoteVirtualNetwork. The underlying resource provider will initiate both sides of the peering connection on your behalf – there is no deployment required in the spoke virtual network resource. The Virtual Network Connection will reference the Hub Route Tables in the hub to configure route association and propagation.
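
Below is a hedged Bicep sketch of a Virtual Network Connection; the hub name, the spoke VNet ID, and the use of the built-in defaultRouteTable are assumptions for illustration.

param spokeVnetId string   // resource ID of the spoke virtual network

resource hub 'Microsoft.Network/virtualHubs@2021-08-01' existing = {
  name: 'hub-westeurope'
}

// The hub's built-in route table, used here for association and propagation
resource defaultRouteTable 'Microsoft.Network/virtualHubs/hubRouteTables@2021-08-01' existing = {
  parent: hub
  name: 'defaultRouteTable'
}

resource spokeConnection 'Microsoft.Network/virtualHubs/hubVirtualNetworkConnections@2021-08-01' = {
  parent: hub
  name: 'connection-spoke1'
  properties: {
    remoteVirtualNetwork: {
      id: spokeVnetId   // the platform creates both sides of the peering for you
    }
    routingConfiguration: {
      associatedRouteTable: {
        id: defaultRouteTable.id
      }
      propagatedRouteTables: {
        ids: [
          {
            id: defaultRouteTable.id
          }
        ]
      }
    }
  }
}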

More Resources

There are more resources that I’ve yet to document.

Azure Virtual WAN ARM – Secured Virtual Hub Azure Firewall

I have spent quite a few hours figuring out how to deploy Azure’s new Secured Virtual Hub, an extension of Azure Virtual WAN, deployed using ARM templates (JSON). A lot of the bits are either not documented or incorrectly documented. One of the frustrating bits to deploy was the Azure Firewall resource – and the online examples did not help.

The issue was that the 2 sources I could find did not include public IP addresses on the firewall:

  • The quick start for Secured Virtual Hub on docs.microsoft.com
  • The new Enterprise-Scale “well-architected” Framework, found in Cloud Adoption Framework

Digging to solve that uncovered:

  • The examples used quite an old API version, 2019-08-01, to deploy the Microsoft.Network/azureFirewalls resource.
  • There was no example of how to add a public IP address to the firewall in Secured Virtual Hub because it was not possible with that API – SVH is quite different from a VNet deployment because you do not have direct access to the underlying hub virtual network.
  • Being an old API, we lose features such as SNAT for non-RFC1918 addresses (important in universities and public sector) and the newer custom & proxy DNS features.

In my digging, I found that the ARM reference for the Azure Firewall was incorrect, but I did uncover a new, barely-documented property called hubIPAddresses; I knew this property was the key to solving the public IP address issue. So I thought about what was going on and how I was going to solve it.

I ended up doing what I would normally do if I did not have a quick start template to start with:

  1. Deploy the resource(s) by hand in the Azure Portal
  2. Observe the options – there was a slider control for the quantity of firewall public IP addresses
  3. Export the resulting template

And … there was the solution:

  1. There is a new, undocumented API version for the Azure Firewall resource: 2020-05-01
  2. There is a new object property called hubIPAddresses that contains an object sub-property called publicIPs. You can set a value called count to control how many public IP addresses Azure will assign (on your behalf) to the firewall – you do not need to create the public IP address resources.
        "hubIPAddresses": {
          "publicIPs": {
            "count": "[parameters('firewallPublicIpQuantity')]",
          }
        }

Sorted!
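
For completeness, the same configuration expressed in Bicep would look something like the sketch below. The firewall name, location, SKU tier, and referenced resource IDs are assumptions on my part, so verify the property names and API version against current documentation.

param firewallPolicyId string
param virtualHubId string
param firewallPublicIpQuantity int = 1

resource hubFirewall 'Microsoft.Network/azureFirewalls@2021-08-01' = {
  name: 'afw-hub'
  location: 'westeurope'
  properties: {
    sku: {
      name: 'AZFW_Hub'
      tier: 'Standard'
    }
    virtualHub: {
      id: virtualHubId
    }
    firewallPolicy: {
      id: firewallPolicyId
    }
    hubIPAddresses: {
      publicIPs: {
        count: firewallPublicIpQuantity   // Azure creates the public IPs on your behalf
      }
    }
  }
}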

Rethinking Firewall Management With Azure Firewall Manager

Microsoft has just announced the general availability of a feature that I’ve been waiting for since I first learned about it last Autumn, called Azure Firewall Manager. Azure Firewall Manager allows you to centrally manage one or more Azure Firewall instances through a central, policy-driven, user interface. And it’s those policies, Azure Firewall Policies, that made me re-think Azure Firewall management a few months ago when I was writing my Cloud Mechanix course (running next ONLINE on July 30th) “Securing Azure Services & Data Through Azure Networking”.

Azure Firewall Policy

This is a new resource type that is generally available today. Azure Firewall Policy outsources the configuration and management of the firewall to a policy resource; that means that the usual settings in the Azure Firewall for things like rules and Threat Intelligence move from the firewall resource to a policy when a policy is associated with the firewall.

Policies can be created in a hierarchy. You can create a parent/global policy that will contain configurations and rules that will apply to all/a number of firewall instances. Then you create a child policy that inherits from the parent; note that rules changes in the parent instantly appear in the child. The child is associated with a firewall and applies configurations/rules from the parent policy and the child policy instantly to the firewall.
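
A hedged Bicep sketch of that hierarchy is below. The names and the Threat Intelligence setting are just examples of “global” configuration; the basePolicy property is what creates the parent/child inheritance.

// Global/parent policy – organisation-wide configuration and rules live here
resource parentPolicy 'Microsoft.Network/firewallPolicies@2021-08-01' = {
  name: 'afwp-global'
  location: 'westeurope'
  properties: {
    threatIntelMode: 'Deny'
  }
}

// Regional/child policy – inherits from the parent and is associated with one firewall
resource childPolicy 'Microsoft.Network/firewallPolicies@2021-08-01' = {
  name: 'afwp-westeurope'
  location: 'westeurope'
  properties: {
    basePolicy: {
      id: parentPolicy.id
    }
  }
}

The child policy is then referenced by the regional Azure Firewall resource, while local rules are added to the child and global rules to the parent.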

Problem

I’ve deployed and configured Azure for multiple customers where we have virtual data centers (VDCs, which are governed & secured hub and spoke architectures) across multiple regions. Creating rules configurations to allow flows from a spoke/service in one region to another spoke/service in another region is a royal pain in the tushie. Here’s the network flow (as I documented with routing here):

  1. Source device
  2. Outbound NSG rules in source spoke
  3. Firewall in source hub
  4. Firewall in destination hub
  5. Inbound NSG rules at destination spoke
  6. Destination device

There are potentially 4 sets of rules to configure for a simple service running on a single protocol/port. Today I configured Microsoft Identity Management for this scenario and there were dozens of protocol/port combinations across three spokes. The work took hours to complete – I did it in code, and it provided a working result for the identity consulting team.

I minimise the work by controlling outbound flows in the local hub firewall, not in the NSG. So the NSGs do not control outbound flows at all. I could allow all via the firewall, even to other private networks, but that goes against the idea of compartmentalisation or micro-segmentation to combat modern network threats – so I need to configure both firewalls for a flow.

Solution

Re-think the firewall for a moment. Imagine you had one virtual firewall that spanned all of your Azure regional deployments. You can control all global flows with one configuration in that global virtual firewall. The global virtual firewall has instances in each Azure region. Any local flows can be configured just in that instance. That’s what Firewall Policy allows.

  • Parent Policy: Place all your global configurations in here. Some configurations will be company-wide, such as Threat Intelligence. Some rules, like allowing access to Microsoft URIs or Azure services (service tags) will be global too. And this is where you put the rules to allow flows between one regional deployment and another. This global management takes all your local Azure Firewall resources and treats them as a single security service.
  • Child Policies: A child policy will be created for each Azure Firewall instance. This policy will inherit the above from the parent applying the global configuration. Local rules, to allow north-south access to/from local services (Internet or on-prem) or east-west (spoke-to-spoke in the same regional deployment) will be configured here. RBAC can be enabled to allow local network admins to do their own thing, but unable to undo what the parent has done.

I haven’t had a chance to test Azure Firewall Policy out yet since the GA announcement, but I’m hoping that the third tier in rules (Rules Collection Groups) made it from preview to GA. I do have groupings of rules collections based on buckets of priorities. This organisation would be awesome in my vision of Azure Firewall management.

Connecting Azure Hub-And-Spoke Architectures Together

In this post, I will explain how you can connect multiple Azure hub-and-spoke (virtual data centre) deployments together using Azure networking, even across different Azure regions.

There is a lot to know here, so here is some recommended reading that I previously published.

If you are using Azure Virtual WAN Hub then some stuff will be different and that scenario is not covered fully here – Azure Virtual WAN Hub has a preview (today) feature for Any-to-Any routing.

The Scenario

In this case, there are two hub-and-spoke deployments:

  • Blue: Multiple virtual networks covered by the CIDR of 10.1.0.0/16
  • Green: Another set of multiple virtual networks covered by the CIDR of 10.2.0.0/16

I’m being strategic with the addressing of each hub-and-spoke deployment, ensuring that a single CIDR will include the hub and all spokes of a single deployment – this will come in handy when we look at User-Defined Routes.

Either of these hub-and-spoke deployments could be in the same region or in different Azure regions. The desired behaviour is that:

  • If any spoke wishes to talk to another spoke, it will route through the local firewall in the local hub.
  • All traffic coming into a spoke from an outside source, such as the other hub-and-spoke, must route through the local firewall in the local hub.

That would mean that Spoke 1 must route through Hub 1 and then Hub 2 to talk to Spoke 4. The firewall can be a third-party appliance or the Azure Firewall.

Core Routing

Each subnet in each spoke needs a route to the outside world (0.0.0.0/0) via the local firewall; a Bicep sketch follows the list. For example:

  • The Blue firewall backend/private IP address is 10.1.0.132
  • A Route Table for each subnet is created in the Blue deployment and has a route to 0.0.0.0/0 via a virtual appliance with an IP address of 10.1.0.132
  • The Green firewall backend/private IP address is 10.2.0.132
  • A Route Table for each subnet is created in the Green deployment and has a route to 0.0.0.0/0 via a virtual appliance with an IP address of 10.2.0.132
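
Here is the Bicep sketch promised above, using the Blue example values; the route table and route names are placeholders.

resource spokeRouteTable 'Microsoft.Network/routeTables@2021-08-01' = {
  name: 'rt-blue-spoke1'
  location: resourceGroup().location
  properties: {
    // BGP propagation is disabled in spoke route tables (see the ExpressRoute section)
    disableBgpRoutePropagation: true
    routes: [
      {
        name: 'udr-default-via-firewall'
        properties: {
          addressPrefix: '0.0.0.0/0'
          nextHopType: 'VirtualAppliance'
          nextHopIpAddress: '10.1.0.132'   // the Blue firewall private IP
        }
      }
    ]
  }
}

The route table is then associated with each spoke subnet.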

Note: Some network-connected PaaS services, e.g. API Management or SQL Managed Instance, require additional routes to the “control plane” that will bypass the local firewall.

Site-to-Site VPN

In this scenario, the organisation is connecting on-premises networks to 1 or more of the hub-and-spoke deployments with site-to-site VPN connections. Those connections terminate in the Blue and Green hubs.

To connect Blue and Green you will need to configure VNet Peering, which can work inside a region or across regions (using Microsoft’s low latency WAN, the second-largest private WAN on the planet). Each end of peering needs the following settings (the names of the settings change so I’m not checking their exact naming):

  • Enabled: Yes
  • Allow Transit: Yes
  • Use Remote Gateway: No
  • Allow Gateway Sharing: No

Let’s go back and do some routing theory!

That peering connection will add a hidden Default (“system”) route to each subnet in the hub subnets:

  • Blue hub subnets: A route to 10.2.0.0/24
  • Green hub subnets: A route to 10.1.0.0/24

Now imagine you are a packet in Spoke 1 trying to get to Spoke 4. You’re sent to the firewall in Blue Hub 1. The firewall lets the traffic out (if a rule allows it) and now the packet sits in the egress/frontend/firewall subnet and is trying to find a route to 10.2.2.0/24. The peering-created Default route covers 10.2.0.0/24 but not the subnet for Spoke 4. So that means the default route to 0.0.0.0/0 (Internet) will be used and the packet is lost.

To fix this you will need to add a Route Table to the egress/frontend/firewall subnet in each hub:

  • Blue firewall subnet Route Table: 10.2.0.0/16 via virtual appliance 10.2.0.132
  • Green firewall subnet Route Table: 10.1.0.0/16 via virtual appliance 10.1.0.132

Thanks to my clever addressing of each hub-and-spoke, a single route will cover all packets leaving Blue and trying to get to any spoke in Green, and vice versa.

ExpressRoute

Now the customer has decided to use ExpressRoute to connect to Azure – Sweet! But guess what – you don’t need 1 expensive circuit to each hub-and-spoke.

You can share a single circuit across multiple ExpressRoute gateways:

  • ExpressRoute Standard: Up to 10 simultaneous connections to Virtual Network Gateways in 1+ regions in the same geopolitical region.
  • ExpressRoute Premium: Up to 100 simultaneous connections to Virtual Network Gateways in 1+ regions in any geopolitical region.

FYI, ExpressRoute connections to the Azure Virtual WAN Hub must be of the Premium SKU.

ExpressRoute is powered by BGP. All the on-premises routes that are advertised propagate through the ISP to the Microsoft edge router (“meet-me”) in the edge data centre. For example, if I want an ExpressRoute circuit to Azure West Europe (Middenmeer, Netherlands – not Amsterdam) I will probably (not always) get a circuit to the POP or edge data centre in Amsterdam. That gets me a physical low-latency connection onto the Microsoft WAN – and my BGP routes get to the meet-me router in Amsterdam. Now I can route to locations on that WAN. If I connect a VNet Gateway to that circuit to Blue in Azure West Europe, then my BGP routes will propagate from the meet-me router to the GatewaySubnet in the Blue hub, and then on to my firewall subnet.

BGP propagation is disabled in the spoke Route Tables to ensure all outbound flows go through the local firewall.

But that is not the extent of things! The hub-and-spoke peering connections allow Gateway Sharing from the hub and Use Remote Gateway from the spoke. With that configuration, BGP routes to the spoke get propagated to the GatewaySubnet in the hub, then to the meet-me router, through the ISP and then to the on-premises network. This is what our solution is based on.

Let’s imagine that the Green deployment is in North Europe (Dublin, Ireland). I could get a second ExpressRoute connection but:

  • That will add cost
  • Not give me the clever solution that I want – but I could work around that with ExpressRoute Global Reach

I’m going to keep this simple – by the way, if I wanted Green to be in a different geopolitical region such as East US 2 then I could use ExpressRoute Premium to make this work.

In the Green hub, the Virtual Network Gateway will connect to the existing ExpressRoute circuit – no more money to the ISP! That means Green will connect to the same meet-me router as Blue. The on-premises routes will get into Green the exact same way as with Blue. And the routes to the Green spokes will also propagate down to on-premises via the meet-me router. That meet-me router knows all about the subnets in Blue and Green. And guess what BGP routers do? They propagate – so, the routes to all of the Blue subnets propagate to Green and vice-versa with the next hop (after the Virtual Network Gateway) being the meet-me router. There are no Route Tables or peering required in the hubs – it just works!

Now the path from Blue Spoke 1 to Green Spoke 4 is Blue Hub Firewall, Blue Virtual Network Gateway, <the Microsoft WAN>, Microsoft (meet-me) Router, <the Microsoft WAN>, Green Virtual Network Gateway, Green Hub Firewall, Green Spoke 4.

There are ways to make this scenario more interesting. Let’s say I have an office in London and I want to use Microsoft Azure. Some stuff will reside in UK South for compliance or performance reasons. But UK South is not a “hero region” as Microsoft calls them. There might be more advanced features that I want to use that are only in West Europe. I could use two ExpressRoute circuits, one to UK South and one to West Europe. Or I could set up a single circuit to London to get me onto the Microsoft WAN and connected this circuit to both of my deployments in UK South and West Europe. I have a quicker route going Office > ISP > London edge data center > Azure West Europe than from Office > ISP > Amsterdam edge data center > Azure West Europe because I have reduced the latency between me and West Europe by reducing the length of the ISP circuit and using the more-direct Microsoft WAN. Just like with Azure Front Door, you want to get onto the Microsoft WAN as quickly as possible and let it get you to your destination as quickly as possible.