Azure Virtual WAN Introducing A New Kind Of Route Table

In this post, I will quickly introduce you to a new kind of Route Table in Microsoft Azure, recently added by Azure Virtual WAN and therefore included in the newly generally available Secured Virtual Hub.

The Old “Subnet” Route Table

This Route Table, which I will call the “Subnet Route Table” (derived from the ARM name), is a simple resource that we associate with a subnet. It contains User-Defined Routes that force traffic to flow in desirable directions, typically when we use some kind of firewall appliance (Azure Firewall or third-party) or a third-party routing appliance. The design is simple enough (see the sketch after the list below):

  • Name: A user-friendly name
  • Prefix: The CIDR you want to get to
  • Next Hop Type: What kind of “router” is the next hop, e.g. Virtual Network, Internet, or Virtual Appliance
  • Next Hop IP Address: Used when Next Hop Type is Virtual Appliance (any firewall or third-party router)
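For reference, this is roughly what that looks like in an ARM template – a minimal sketch with made-up names and addresses, just to show how the fields above map to the resource:

```json
{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2020-05-01",
  "name": "spoke1-routes",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "ToOtherSpoke",                    // Name: a user-friendly name
        "properties": {
          "addressPrefix": "10.10.2.0/24",         // Prefix: the CIDR you want to get to
          "nextHopType": "VirtualAppliance",       // Next Hop Type
          "nextHopIpAddress": "10.10.0.4"          // Next Hop IP Address (firewall/NVA)
        }
      }
    ]
  }
}
```

The Route Table is then associated with one or more subnets, and the routes apply to all traffic leaving those subnets.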

Azure Virtual WAN Hub

Microsoft introduced Azure Virtual WAN quite a while ago (by Cloud standards), but still relatively few have heard of it, possibly because of how it was originally marketed as an SD-WAN solution compatible with just a few on-prem SD-WAN vendors (now a much bigger list). Today it supports IKEv1 and IKEv2 site-to-site VPN, point-to-site VPN, and ExpressRoute Standard (and higher). You might already be familiar with setting up a hub in a hub-and-spoke: you have to create the virtual network, the Route Table for inbound traffic, the firewall, etc. Azure Virtual WAN converts the hub into an appliance-like experience surfacing just two resources: the Virtual WAN (typically 1 global resource per organisation) and the hub (one per Azure region). Peering, routing, and connectivity are all simplified.

A more recent change has been the Secured Virtual Hub, where Azure Firewall is a part of the Virtual WAN Hub; this was announced at Ignite and has just gone GA. Choosing the Secured Virtual Hub option adds security to the Virtual WAN Hub. Don’t worry, though, if you prefer a third-party firewall; the new routing model in Azure Virtual WAN Hub allows you to deploy your firewall into a dedicated spoke virtual network and route your isolated traffic through there.

The New Route Tables

There are two new kinds of route table added by the Virtual WAN Hub, or Virtual Hub, both of which are created in the Virtual Hub as sub-resources.

  • Virtual WAN Hub Route Table
  • Virtual WAN Route Table

Virtual WAN Hub Route Table

The Virtual WAN Hub Route Table affects traffic from the Virtual Hub to other locations. A possible scenario is when you want to route traffic to a CIDR block of virtual network(s) through a third-party firewall (network virtual appliance/NVA).


The routing rule setup here is similar to the Subnet Route Table, specifying where you want to get to (CIDR, resource ID, or service), the next hop, and a next hop IP address.
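If you want to peek at the ARM, this appears to map to a hubRouteTables child resource of the Virtual Hub under the newer custom-routing model. The snippet below is only a rough sketch – the hub name, label, and next hop are placeholders, and I have not verified it in a lab yet:

```json
{
  "type": "Microsoft.Network/virtualHubs/hubRouteTables",
  "apiVersion": "2020-05-01",
  "name": "p-we1hub/RT_ToNva",
  "properties": {
    "labels": [ "nva" ],
    "routes": [
      {
        "name": "SpokesViaNva",
        "destinationType": "CIDR",
        "destinations": [ "10.10.0.0/16" ],
        "nextHopType": "ResourceId",
        "nextHop": "<resource ID of the hub connection to the firewall/NVA spoke>"
      }
    ]
  }
}
```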

Virtual WAN Route Table

The Virtual WAN Route Table is created as a sub-resource of the Virtual Hub but it has a different purpose. It is associated with connections and affects routing from the associated branch offices or virtual networks. Whoa, Finn! There is a lot of terminology in that sentence!

A connection is just that; it is a connection between the hub and another network. Each spoke connected directly to the hub has a connection to the hub – a Virtual WAN Route Table can be associated with each connection. A Virtual WAN Route Table can be associated with 1 virtual network connection, a subset of them, or all of them.

The term “branch offices” refers to sites connected by ExpressRoute, site-to-site VPN, or point-to-site VPN. Those sites also have connections that a Virtual WAN Route Table can be associated with.

This is a much more interesting form of route table. I haven’t had time to fully get under the covers here, but comparing ARM to the UI reveals two methodologies. The Azure Portal reveals one way of visualising routing that, I must admit, I find difficult to scale in my mind. The ARM resource looks much more familiar to me, but until I get into a lab and fully test (which I hope I will find some hours to do soon), I cannot document it completely.

Here are the basics of what I have gleaned from the documentation, which covers the Azure Portal method.

The linked documentation is heavy reading. I’m one of those people that needs to play with this stuff before writing too much in detail – I never trust the docs and, to be honest, this content is complicated, as you can see above.


Rethinking Firewall Management With Azure Firewall Manager

Microsoft has just announced the general availability of a feature that I’ve been waiting for since I first learned about it last Autumn, called Azure Firewall Manager. Azure Firewall Manager allows you to centrally manage one or more Azure Firewall instances through a central, policy-driven user interface. And it’s those policies, Azure Firewall Policies, that made me re-think Azure Firewall management a few months ago when I was writing my Cloud Mechanix course (running next ONLINE on July 30th) “Securing Azure Services & Data Through Azure Networking”.

Azure Firewall Policy

This is a new resource type that is generally available today. Azure Firewall Policy outsources the configuration and management of the firewall to a policy resource; that means that the usual settings in the Azure Firewall for things like rules and Threat Intelligence move from the firewall resource to a policy when a policy is associated with the firewall.

Policies can be created in a hierarchy. You can create a parent/global policy that contains configurations and rules that apply to all, or a number of, firewall instances. Then you create a child policy that inherits from the parent; note that rule changes in the parent instantly appear in the child. The child is associated with a firewall and applies configurations/rules from the parent policy and the child policy instantly to the firewall.
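In an ARM template, the hierarchy is just the basePolicy property on the child policy. A minimal sketch, with names and API version of my own choosing:

```json
{
  "type": "Microsoft.Network/firewallPolicies",
  "apiVersion": "2020-05-01",
  "name": "p-global-parent-policy",
  "location": "westeurope",
  "properties": {
    "threatIntelMode": "Deny"          // company-wide settings live in the parent
  }
},
{
  "type": "Microsoft.Network/firewallPolicies",
  "apiVersion": "2020-05-01",
  "name": "p-we1-child-policy",
  "location": "westeurope",
  "dependsOn": [
    "[resourceId('Microsoft.Network/firewallPolicies', 'p-global-parent-policy')]"
  ],
  "properties": {
    "basePolicy": {
      "id": "[resourceId('Microsoft.Network/firewallPolicies', 'p-global-parent-policy')]"
    }
  }
}
```

The child policy is then linked to the firewall (the azureFirewalls resource references the policy), so rule changes never touch the firewall resource directly.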

Problem

I’ve deployed and configured Azure for multiple customers where we have virtual data centers (VDCs, which are governed & secured hub and spoke architectures) across multiple regions. Creating rule configurations to allow flows from a spoke/service in one region to another spoke/service in another region is a royal pain in the tushie. Here’s the network flow (as I documented with routing here):

  1. Source device
  2. Outbound NSG rules in source spoke
  3. Firewall in source hub
  4. Firewall in destination hub
  5. Inbound NSG rules at destination spoke
  6. Destination device

There are potentially 4 sets of rules to configure for a simple service running on a single protocol/port. Today I configured Microsoft Identity Management for this scenario and there were dozens of protocol/port combinations across three spokes. The work, which I did in code, took hours to complete, and it provided a working result for the identity consulting team.

I minimise the work by controlling outbound flows in the local hub firewall, not in the NSG. So the NSGs do not control outbound flows at all. I could allow all via the firewall, even to other private networks, but that goes against the idea of compartmentalisation or micro-segmentation to combat modern network threats – so I need to configure both firewalls for a flow.

Solution

Re-think the firewall for a moment. Imagine you had one virtual firewall that spanned all of your Azure regional deployments. You can control all global flows with one configuration in that global virtual firewall. The global virtual firewall has instances in each Azure region. Any local flows can be configured just in that instance. That’s what Firewall Policy allows.

  • Parent Policy: Place all your global configurations in here. Some configurations will be company-wide, such as Threat Intelligence. Some rules, like allowing access to Microsoft URIs or Azure services (service tags) will be global too. And this is where you put the rules to allow flows between one regional deployment and another. This global management takes all your local Azure Firewall resources and treats them as a single security service.
  • Child Policies: A child policy will be created for each Azure Firewall instance. This policy will inherit the above from the parent, applying the global configuration. Local rules, to allow north-south access to/from local services (Internet or on-prem) or east-west (spoke-to-spoke in the same regional deployment), will be configured here. RBAC can be enabled to allow local network admins to do their own thing, while remaining unable to undo what the parent has done.

I haven’t had a chance to test Azure Firewall Policy out yet since the GA announcement, but I’m hoping that the third tier in rules (Rule Groups) made it from preview to GA. I do have groupings of rule collections based on buckets of priorities. This organisation would be awesome in my vision of Azure Firewall management.

Year 13 As A Microsoft Most Valuable Professional Begins

I found out today that I have been renewed as a Microsoft Azure MVP! This is my 13th consecutive year as an MVP, with previous expertise areas covering Hyper-V and System Center Configuration Manager. It’s an honour to continue to be part of this program.

The profile of how I am contributing has changed a lot. In my early days, I wrote about everything I saw in my RSS feeds – and I mean everything. I was single with no responsibilities after work, and often living in a hotel with nothing better to do. I got involved in writing books (remember those?). I wrote my own whitepapers – a full week or two of work. Today, Microsoft does a much better (not perfect) job at documenting things. There are lots of people in the community sharing content today, so things I might have written about are being covered elsewhere – why create redundant content? Today, I have a wife and young family and they are my priority in my off-hours. So I have changed how I contribute to the community.

I prefer events where I can fly in and out the same day, or at least limit my away time to a quick overnight visit. But that’s all ended for us lately, so the online medium has taken over, which makes me much more available. I haven’t written a tech book in years – my last book (on WS2012 Hyper-V) was when the content was relatively static; today, a book on any tech is out of date before you save your markdown file. I pick and choose my blog topics – I spend a LOT of time doing Azure networking/security and I’ve always skewed my blogging to what I work on. I’ve found that even senior consultants/architects don’t know this stuff well, so my recent posts have gone into the level-400 stuff on routing and security design/implementation that is usually misunderstood. A lot of my community contribution isn’t obvious.

The MVP program provides MVPs with direct contact with the program managers of the products in their expertise area, and other areas. I’ve taken advantage of that since my 2nd year in the program. There are things in the Hyper-V/clustering/storage world that I can point & say “I provided feedback to make that happen”. And that continues in Azure; like a lot of MVPs, I give feedback and end up in 1:1 email/Teams conversations with the relevant PMs. There are recent things that have launched (some still coming) that I have helped shape. That’s one of the cool parts of being a member of the program, being able to take my observations from work or feedback from customers/community members, and bring it to the product group to improve things for all of us. I think that’s where I’ve been most busy this last year.

There are cool things coming … some are there already but you might not know it! … and that’ll give us lots of things to talk about 🙂

July 30th – Online Training: Securing Azure Services & Data Through Azure Networking

My company, Cloud Mechanix, will be delivering a one-day online training course on July 30th called Securing Azure Services & Data Through Azure Networking.

I set up Cloud Mechanix a few years ago to deliver custom-written training in Azure. Year 1, 2018, was good; I ran two classes in London, and classes in Amsterdam and Frankfurt. I started a new job in 2019 and didn’t feel like I could put the time into Cloud Mechanix that paying customers deserve. With 2020, I felt like I could put together a new course on Azure networking/security based on my experience. I was approached by a company in Belgium to run a course with them; they would use their customer database to promote the class and I would teach. Unfortunately, their promotion failed and the course only sold a few seats. Timing-wise, the class was meant to happen at the start of the COVID-19 pandemic in Europe so cancellation wasn’t a completely bad thing. Then a few months ago, the European Cloud Conference asked if I would do a master class on Azure. I had most of the content I wanted for a class and this seemed like a good place to kick the class off before I run it again with Cloud Mechanix. That class was advertised by the European Cloud Conference as a free class for up to 15 attendees – over 400 applied! I was convinced – bring the class to the public.

So that’s what I am doing. Later today, I’m presenting a teeny-tiny snippet of the PowerPoint deck at E2EVC “Madrid” Online. On June 19th, I will be doing the Azure Masterclass that is sponsored by the European Cloud Conference. And as of now, my class for July 30th is on sale.

Security is always number 1 or 2 in any survey on the fears of cloud computing. Networking in The Cloud is very different from traditional physical networking … but in some ways, it is quite similar. The goal of this workshop is to teach you how to secure your services and data in Microsoft Azure using techniques and designs that are advocated by Microsoft Azure. Don’t fall into the trap of thinking that networking means just virtual machines; Azure networking plays a big (and getting bigger) role in offering security and compliance with platform and data services in The Cloud.

This online class takes you all the way back to the basics of Azure networking so you really understand the “wiring” of a secure network in the cloud. Only with that understanding do you understand that small is big. The topics covered in this class will secure small/mid businesses, platform deployments that require regulatory compliance, and large enterprises:

  • The Microsoft global network
  • Availability & SLA
  • Virtual network basics
  • Virtual network adapters
  • Peering
  • Service endpoints
  • Public IP Addresses
  • VNet gateways: VPN & ExpressRoute
  • Network Security Groups
  • Application Firewall
  • Route Tables
  • Platform services & data
  • Private Link & Private Endpoint
  • Third-Party Firewalls
  • Azure Firewall
  • Monitoring
  • Troubleshooting
  • Security management
  • Micro-Segmentation
  • Architectures

The class is online only – so no matter where you are you will be able to participate. There are no travel restrictions, no travel or accommodation costs, and all you will need is the ability to get online. I will be presenting using Microsoft Teams – you won’t need a license because of the licensing that I will be using. The ticket cost is low – €229 per person, or €209 for 2+ tickets.

I hope to see some of you there on July 30th, starting at 09:00 UK/Irish time (+5 from EST and -1 from CET)!

Connecting Azure Hub-And-Spoke Architectures Together

In this post, I will explain how you can connect multiple Azure hub-and-spoke (virtual data centre) deployments together using Azure networking, even across different Azure regions.

There is a lot to know here, so I recommend some related reading that I previously published.

If you are using Azure Virtual WAN Hub then some stuff will be different and that scenario is not covered fully here – Azure Virtual WAN Hub has a preview (today) feature for Any-to-Any routing.

The Scenario

In this case, there are two hub-and-spoke deployments:

  • Blue: Multiple virtual networks covered by the CIDR of 10.1.0.0/16
  • Green: Another set of multiple virtual networks covered by the CIDR of 10.2.0.0/16

I’m being strategic with the addressing of each hub-and-spoke deployment, ensuring that a single CIDR will include the hub and all spokes of a single deployment – this will come in handy when we look at User-Defined Routes.

These two hub-and-spoke deployments could be in the same Azure region or in different regions. It is desired that:

  • Any spoke wishing to talk to another spoke will route through the local firewall in the local hub.
  • All traffic coming into a spoke from an outside source, such as the other hub-and-spoke, must route through the local firewall in the local hub.

That would mean that Spoke 1 must route through Hub 1 and then Hub 2 to talk to Spoke 4. The firewall can be a third-party appliance or the Azure Firewall.

Core Routing

Each subnet in each spoke needs a route to the outside world (0.0.0.0/0) via the local firewall. For example (see the sketch after this list):

  • The Blue firewall backend/private IP address is 10.1.0.132
  • A Route Table for each subnet is created in the Blue deployment and has a route to 0.0.0.0/0 via a virtual appliance with an IP address of 10.1.0.132
  • The Green firewall backend/private IP address is 10.2.0.132
  • A Route Table for each subnet is created in the Green deployment and has a route to 0.0.0.0/0 via a virtual appliance with an IP address of 10.2.0.132
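Expressed as ARM, the Route Table for a Blue spoke subnet could look something like this sketch (names are placeholders; the addresses are the ones above):

```json
{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2020-05-01",
  "name": "blue-spoke1-rt",
  "location": "westeurope",
  "properties": {
    "disableBgpRoutePropagation": true,          // becomes important in the ExpressRoute section below
    "routes": [
      {
        "name": "EverythingViaBlueFirewall",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "10.1.0.132"       // the Blue firewall backend/private IP
        }
      }
    ]
  }
}
```

The same shape, with 10.2.0.132 as the next hop, applies to the Green spokes.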

Note: Some network-connected PaaS services, e.g. API Management or SQL Managed Instance, require additional routes to the “control plane” that will bypass the local firewall.

Site-to-Site VPN

In this scenario, the organisation is connecting on-premises networks to 1 or more of the hub-and-spoke deployments with a site-to-site VPN connection. That connection goes to the Blue hub and/or the Green hub.

To connect Blue and Green you will need to configure VNet Peering, which can work inside a region or across regions (using Microsoft’s low latency WAN, the second-largest private WAN on the planet). Each end of the peering needs the following settings (the portal names of the settings have changed over time so I’m not checking their exact naming – an ARM sketch follows the list):

  • Enabled: Yes
  • Allow Transit: Yes
  • Use Remote Gateway: No
  • Allow Gateway Sharing: No
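Expressed as ARM, the Blue-to-Green side of that peering looks something like the sketch below – the VNet names and remote network ID are placeholders, and the inline comments map the ARM property names to the portal labels above:

```json
{
  "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
  "apiVersion": "2020-05-01",
  "name": "blue-hub-vnet/peer-to-green-hub",
  "properties": {
    "remoteVirtualNetwork": {
      "id": "/subscriptions/<sub-id>/resourceGroups/<green-rg>/providers/Microsoft.Network/virtualNetworks/green-hub-vnet"
    },
    "allowVirtualNetworkAccess": true,   // Enabled
    "allowForwardedTraffic": true,       // Allow Transit
    "allowGatewayTransit": false,        // Allow Gateway Sharing
    "useRemoteGateways": false           // Use Remote Gateway
  }
}
```

The Green-to-Blue side mirrors this configuration.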

Let’s go back and do some routing theory!

That peering connection will add a hidden Default (“system”) route to each subnet in the hub subnets:

  • Blue hub subnets: A route to 10.2.0.0/24
  • Green hub subnets: A route to 10.1.0.0/24

Now imagine you are a packet in Spoke 1 trying to get to Spoke 4. You’re sent to the firewall in Blue Hub 1. The firewall lets the traffic out (if a rule allows it) and now the packet sits in the egress/frontend/firewall subnet and is trying to find a route to 10.2.2.0/24. The peering-created Default route covers 10.2.0.0/24 but not the subnet for Spoke 4. So that means the default route to 0.0.0.0/0 (Internet) will be used and the packet is lost.

To fix this you will need to add a Route Table to the egress/frontend/firewall subnet in each hub:

  • Blue firewall subnet Route Table: 10.2.0.0/16 via virtual appliance 10.2.0.132
  • Green firewall subnet Route Table: 10.1.0.0/16 via virtual appliance 10.1.0.132

Thanks to my clever addressing of each hub-and-spoke, a single route will cover all packets leaving Blue and trying to get to any spoke in Green, and vice-versa.

ExpressRoute

Now the customer has decided to use ExpressRoute to connect to Azure – Sweet! But guess what – you don’t need 1 expensive circuit to each hub-and-spoke.

You can share a single circuit across multiple ExpressRoute gateways:

  • ExpressRoute Standard: Up to 10 simultaneous connections to Virtual Network Gateways in 1+ regions in the same geopolitical region.
  • ExpressRoute Premium: Up to 100 simultaneous connections to Virtual Network Gateways in 1+ regions in any geopolitical region.

FYI, ExpressRoute connections to the Azure Virtual WAN Hub must be of the Premium SKU.

ExpressRoute is powered by BGP. All the on-premises routes that are advertised propagate through the ISP to the Microsoft edge router (“meet-me”) in the edge data centre. For example, if I want an ExpressRoute circuit to Azure West Europe (Middenmeer, Netherlands – not Amsterdam) I will probably (not always) get a circuit to the POP or edge data centre in Amsterdam. That gets me a physical low-latency connection onto the Microsoft WAN – and my BGP routes get to the meet-me router in Amsterdam. Now I can route to locations on that WAN. If I connect a VNet Gateway to that circuit to Blue in Azure West Europe, then my BGP routes will propagate from the meet-me router to the GatewaySubnet in the Blue hub, and then on to my firewall subnet.

BGP propagation is disabled in the spoke Route Tables to ensure all outbound flows go through the local firewall.

But that is not the extent of things! The hub-and-spoke peering connections allow Gateway Sharing from the hub and Use Remote Gateway from the spoke. With that configuration, BGP routes to the spoke get propagated to the GatewaySubnet in the hub, then to the meet-me router, through the ISP and then to the on-premises network. This is what our solution is based on.

Let’s imagine that the Green deployment is in North Europe (Dublin, Ireland). I could get a second ExpressRoute connection but:

  • It would add cost
  • It would not give me the clever solution that I want – but I could work around that with ExpressRoute Global Reach

I’m going to keep this simple – by the way, if I wanted Green to be in a different geopolitical region such as East US 2 then I could use ExpressRoute Premium to make this work.

In the Green hub, the Virtual Network Gateway will connect to the existing ExpressRoute circuit – no more money to the ISP! That means Green will connect to the same meet-me router as Blue. The on-premises routes will get into Green the exact same way as with Blue. And the routes to the Green spokes will also propagate down to on-premises via the meet-me router. That meet-me router knows all about the subnets in Blue and Green. And guess what BGP routers do? They propagate – so, the routes to all of the Blue subnets propagate to Green and vice-versa with the next hop (after the Virtual Network Gateway) being the meet-me router. There are no Route Tables or peering required in the hubs – it just works!

Now the path from Blue Spoke 1 to Green Spoke 4 is Blue Hub Firewall, Blue Virtual Network Gateway, <the Microsoft WAN>, Microsoft (meet-me) Router, <the Microsoft WAN>, Green Virtual Network Gateway, Green Hub Firewall, Green Spoke 4.

There are ways to make this scenario more interesting. Let’s say I have an office in London and I want to use Microsoft Azure. Some stuff will reside in UK South for compliance or performance reasons. But UK South is not a “hero region”, as Microsoft calls them. There might be more advanced features that I want to use that are only in West Europe. I could use two ExpressRoute circuits, one to UK South and one to West Europe. Or I could set up a single circuit to London to get me onto the Microsoft WAN and connect this circuit to both of my deployments in UK South and West Europe. I have a quicker route going Office > ISP > London edge data center > Azure West Europe than Office > ISP > Amsterdam edge data center > Azure West Europe because I have reduced the latency between me and West Europe by reducing the length of the ISP circuit and using the more-direct Microsoft WAN. Just like with Azure Front Door, you want to get onto the Microsoft WAN as quickly as possible and let it get you to your destination as quickly as possible.

Started Building A New Home Office

As I have posted before, I work from home. For a number of reasons, my wife and I have decided that I should move the office out of the house and into our garden. So that means that I need a new building in the garden. I spent months reading over the different options and providers. Eventually, I settled on one provider and option.

I’ve chosen to build a “log cabin”, actually a modern wooden building, in our back garden. I’ve chosen all the upgrades for thicker walls, wall/floor/roof insulation, upgraded roof, guttering & drainage, and so on, making it a warm place to work in the winter. If I had to describe it, I’d say it’s Nordic in design. The building is 6 x 4 meters. I went big because it gives me lots of space (better to have more than needed than regret a smaller choice later) and gives my wife the option of working from home too. We paid the deposit last week and are expecting delivery and installation in 5-7 weeks.

The first challenge was where to put the new office. Many of my American friends won’t think that a 6×4 meter building is big or that space should be a problem. In Europe, that is big and space is very much a problem. However, I’m lucky, because our back garden was an accident of poor planning by the builder and ended up being ~65 meters long. We were lucky when we bought the house 4 years ago because the previous owner possibly undervalued the site – he didn’t sell through an estate agent. We have lots of space, most of it hilly, but we have space. After some discussions, it was decided to put the office near the house at the bottom of the garden for 3 reasons:

  • Networking convenience
  • Power convenience
  • The view

The view question was interesting. From the rear of the house, the office will be mostly out of view, with the living parts of the house still looking out onto the back garden. The office will have windows and glass doors on the front, overlooking the garden. So depending on my seating angle, I will be looking out over the green view of the back garden. But there are a few issues.

The first was that there were three nearly-20-year-old birch trees in the exact corner that the office will be going into. I hated the thought of tearing down those trees. But the corner is not used – the trees create a haven for flies and my kids won’t go in there to play, which was my vision for the area. On the other hand, the left side of our garden is lined with a variety of mature trees, we planted 4 more 18 months ago and they are doing well, and both sides are surrounded by trees outside our border wall. So we called up the local handyman who cut down the trees and removed the cuttings – he uses the wood after drying it out.

That was the easy bit! The tree stumps had to be removed next. You can’t just cut/grind a stump down and hope that’s that. Nature is tough. The roots would live on and a new tree could grow through the floor of the office. You have to cut all the roots from the trunk, get under it and lift it out. So out came shovels, an axe and a pickaxe. After 5 hours, 2 of the 3 stumps were removed on Saturday. That was back-breaking work. It turns out that the birch tree grows roots down deep. Those roots are thick fibrous branches that quickly break out into a mesh of fibres that spread out 3cm to 30cm deep, protecting the soil around the tree from tools such as shovels and pickaxes. You have to dig, tear and cut just to get through those first few centimetres of soil, and then you face the roots that an axe will bend, not cut. And when you think you’ve cut the last root that is securing the trunk, you find that there are more. It’s Monday now and I’m facing 1 last stump to remove. It’s only in the last few hours that stiffness from Saturday has set in – so today will be fun!

Cutting down the trees revealed something that we had not noticed. The site where the office will reside – about 7m wide – is not level. There is a ~30cm slope going from left to right, and a lesser uneven slope from front to back. The office must be installed onto a level site. I evaluated the options – I hoped that I could dig out one side and use the soil to level out the other side. But that would mean digging under the boundary wall and weakening the foundations. There is no option other than to build a concrete base or pad. At my wife’s suggestion, I went onto the local community page on Facebook and asked for builder recommendations. I reached out to 4 builders – 2 are coming today to give me a quote and one is to call me about making an appointment. I’ll need the pad installed ASAP – concrete sets quickly but takes weeks to harden.

And finally … there’s the electrical installation – something that I know that I cannot do. The cabin manufacturers recommended an electrician. 10 double sockets, lights, a 1.5KW storage heater and connection to the fuse box will clock in at a sizeable sum of change, plus VAT (sales tax). We’ll try to get some alternative quotes for that next.

Deploying Azure ARM Templates From Azure DevOps – With A Complete Example

In this post, I will show you how to get those ARM templates sitting in an Azure DevOps repo deploying into Azure using a pipeline. With every merge, the pipeline will automatically trigger (you can disable this) to update the deployment. In other words, a complete CI/CD deployment where you manage your infrastructure/services as code.

Annoyance

I’m not a DevOps guru. I use DevOps every day. Every deployment I do for a customer runs from JSON that I’ve helped write into the customers’ Azure tenants. But we have people who are DevOps gurus, and we have one seriously fancy deployment system that literally just uses a DevOps pipeline as a trigger mechanism and nothing more. I use that system; I don’t develop it. I wanted to create & run a pipeline for my own needs (Cloud Mechanix Azure training). Admittedly, I’ve tried this before, lost patience, and abandoned it. This time, I persisted and succeeded.

What didn’t help? The dreadful Microsoft documentation. One doc, from the DevOps team, was rubbish. Another had deprecated YAML code (pipelines are written in YAML). A third had an example that was full of errors. OK, let’s look at blogs. But as with many blogs on this topic, the few that were originals only showed how to push code into an existing App Service, and the rest were copies and pastes of App Services posts or bad Microsoft examples.

When it comes to tech like this, I have the feeling that many who have the knowledge don’t like to share it.

Concept

What I’m dealing with here is infrastructure-as-code (IaC). The code (Azure JSON in ARM templates) will describe the resources and configurations of those resources that I want to deploy. In my example, it’s an Azure Firewall and its configuration, including the rules. I have created a repository (repo) in Azure DevOps and I edit the JSON using Visual Studio Code (VS Code), Microsoft’s free code editor. When I make a change in VS Code, it will be done in a branch of the master copy of the code. I will sync that branch to the Cloud. To merge the changes, I will create a pull request. This pull request starts a change control process, where the owners of the repo can review the code and decide to accept or reject the changes. If the changes are accepted they are merged into the master copy of the code. And now the magic happens.

A pipeline is a description of a process that will take the master code from the repo and do stuff with it. In my case, deploy the code to a resource group in an Azure subscription. If the resources are already there, then the pipeline will do an update.

I will end up with an Azure Firewall that is managed as code. The rules and configuration are described in a parameter file so that’s all that I should normally need to touch. To make a rules change, I edit the parameter file and do a pull request. A security officer will review the change and approve/reject it. If the change is approved, the new firewall configuration will be deployed. And yes, this approach could probably be used with Azure Firewall Policy resources – I haven’t tested that yet. Now I can give people Read access only to my subscription and force all configuration changes through the pull request review process of Azure DevOps.

Your deployment can be any Azure resources that you can deploy using a template.

Azure Subscription

In Azure I have two resource groups:

  • [Resource Group] p-devops: Where I can do “DevOps stuff”
    • [Storage Account] pdevopsstorsjdhf983: I will use this to store the code that I want to deploy using the pipeline
  • [Resource Group] p-we1fw: Where my hub virtual network is and the Azure Firewall will be
    • [Virtual Network]: p-we1fw-vnet: The virtual network that contains a subnet called AzureFirewallSubnet

Remember that storage account!

DevOps Repo

I created and configured a DevOps repo called AzureFirewall in a DevOps project. There are two files in there:

  • [Template] azurefirewall.json: The file that will deploy the Azure Firewall
  • [Parameter] azurefirewall-parameters.json: The configuration of the firewall, including the rules!

New DevOps Service Connection

DevOps will need a way to authenticate with your Azure tenant and get authorization to use your tenant, subscription, or resource group. You can get real fancy here. I’m going simple and using a feature of DevOps called a Service Connection, found in DevOps > [Project] > Project Settings > Service Connections (under Pipelines):

  1. Click New Service Connection
  2. Select Azure Resource Manager and hit Next
  3. Select Service Principal (Automatic) which is recommended by DevOps.
  4. Here I selected the subscription option and the Azure subscription that my resource groups are in.
  5. I granted access permission to all pipelines.
  6. I named the service connection after my subscription: p-we1net.

As I said, you can get real fancy here because there are lots of options.

New DevOps Pipeline

Now for the fun!

Back in the project, I went to Pipelines and created a new Pipeline:

  1. I selected Azure Repos Git because I’m storing my code in an Azure DevOps (Git) repo. The contents of this repo will be deployed by the pipeline.
  2. I selected my AzureFirewall repo.
  3. Then I selected “Starter Pipeline”.
  4. An editor appeared – now you’re editing a file called azure-pipelines.yml that resides in the root of your repo.

There is an option (instead of Starter Pipeline) where you choose an existing YAML file, maybe one from a folder called .pipelines in your repo.

Edit the Pipeline

Here is the code:
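(A minimal sketch of the file is below – the container name, source path, and location values are placeholders/assumptions; the other names are the ones used later in this post.)

```yaml
trigger:
- master                                   # set to 'none' to stop automatic runs while testing

pool:
  vmImage: 'windows-2019'                  # stock Microsoft WS2019 image

steps:
- task: AzureFileCopy@4
  displayName: 'Stage Files'
  inputs:
    SourcePath: '$(Build.SourcesDirectory)'
    azureSubscription: 'p-we1net'          # the service connection name, not the subscription
    Destination: 'AzureBlob'
    storage: 'pdevopsstorsjdhf983'
    ContainerName: 'staging'               # assumption - any container name will do
    outputStorageUri: 'artifactsLocation'
    outputStorageContainerSasToken: 'artifactsLocationSasToken'

- task: AzureResourceGroupDeployment@2
  displayName: 'Deploy Azure Firewall'
  inputs:
    azureSubscription: 'p-we1net'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'p-we1fw'
    location: 'West Europe'                # assumption based on the resource group naming
    templateLocation: 'URL of the file'
    csmFileLink: '$(artifactsLocation)azurefirewall.json$(artifactsLocationSasToken)'
    csmParametersFileLink: '$(artifactsLocation)azurefirewall-parameters.json$(artifactsLocationSasToken)'
    deploymentMode: 'Incremental'
    deploymentName: 'azurefirewall'
```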

The pipeline is made up of several pieces:

Trigger

This controls how the pipeline is started. You can set it to none to stop automatic executions – in the early days when you’re trying to get this right, automatic runs can be annoying.

Pool

Your pipeline is going to run in a container. I’m using a stock Microsoft container based on WS2019. You can supply your own container from Azure Container Registry, but that’s getting fancy!

Task: AzureFileCopy

Now we move into the Steps. The first task is to download the contents of the repo into a storage account. We need to do this because the following deployment task cannot directly access the raw files in Azure DevOps. A task is created with the human friendly name of Stage Files. There are a few settings to configure here:

  • azureSubscription: This is not the name of your subscription! Ain’t that tricky?! This is the name of the service connection that authenticates the pipeline against the subscription. So that’s my service connection called p-we1net, which I happened to name after my subscription.
  • storage: This is the storage account in my target Azure subscription in the p-devops resource group. My service connection has access to the subscription so it has access to the storage account – be careful with restricting access of the service connection to just a resource group and placing the staging storage account elsewhere.
  • ContainerName: This is the name of the container that will be created in your storage account. The contents of the repo will be downloaded into this container.
  • outputStorageUri: The URI/URL of the storage account/container will be stored in a variable which is called artifactsLocation in this example.
  • outputStorageContainerSasToken: A SAS token will be created to allow temporary secure access to the contents of the container. The token will be stored in a variable called artifactsLocationSasToken in this example.

Task: AzureResourceGroupDeployment

This task will take the contents of the repo from the storage account, and deploy them to a resource group in the target subscription. There are a few things to change:

  • azureSubscription: Once again, specify the name of the service connection, not the Azure subscription.
  • resourceGroupName: Enter the name of the target resource group.
  • location: Specify the Azure region that you are targeting.
  • csmFileLink: This is the URI of the template file that you want to deploy. More in a moment.
  • csmParametersFileLink: This is the URI of the parameters file that you want to deploy. More in a moment.
  • deploymentName: I have hard-set the deployment name so I don’t have to clean up versioned deployments from the resource group later. Every resource group has a hard set limit on deployment objects, and with a resource such as a firewall, that could be hit quite quickly.

csmFileLink

There are three parts to the string: $(artifactsLocation)azurefirewall.json$(artifactsLocationSasToken). Together, the three parts give the task secure access to the template file in the staging storage account.

  • $(artifactsLocation): This is the storage account/container URI/URL variable from the AzureFileCopy task.
  • azurefirewall.json: This is the name of the template file that I want to deploy.
  • $(artifactsLocationSasToken): This is the SAS token variable from the AzureFileCopy task.

csmParametersFileLink

There are three parts to the string: $(artifactsLocation)azurefirewall-parameters.json$(artifactsLocationSasToken). Together, the three parts give the task secure access to the parameter file in the staging storage account.

  • $(artifactsLocation): This is the storage account/container URI/URL variable from the AzureFileCopy task.
  • azurefirewall-parameters.json: This is the name of the parameter file that I want to use to customise the template deployment.
  • $(artifactsLocationSasToken): This is the SAS token variable from the AzureFileCopy task.

Pipeline Execution

There are three ways to run the pipeline now:

  1. Do an update (or a merge) to the master branch of the repo thanks to my trigger.
  2. Manually run the pipeline from Pipelines.
  3. Save a change to the pipeline in the DevOps editor if the master is not locked – which will trigger option 1, to be honest.

You can open the pipeline, or historic runs of it, to view/track the execution.

You’ll also get an email to let you know the status of an ended pipeline run.

Happy pipelining!

Speaking Today At Global Azure Virtual (ONLINE)

I am presenting at 14:00 UK/Ireland, 3PM central Europe, 9am Eastern US in the Global Azure virtual/online Bootcamp. You can find the link to the session here on Day 3. Here is the session information that is missing from the event site:

Trust No-One Architecture For Services And Data

Security is always one of the top 3 fears of Cloud customers. In The Cloud, the customer is responsible for their network security design and operation. This session will walk you through the components of Azure network security, and how to architect a secure network for Azure virtual machines or platform services, including VNets, network security groups, routing tables, Private Link, VNet peering, web application gateway, DDoS protection, and firewall appliances.

Free Online Training – Azure Network Security

On June 19th, I will be teaching a FREE online class called Securing Azure Services & Data Through Azure Networking.

I’ve run a number of Cloud Mechanix training classes and I’ve had several requests asking if I would ever consider doing something online because I wasn’t doing the classes outside of Europe. Well … here’s your opportunity. Thanks to the kind folks at European Cloud Conference, I will be doing a 1-day training course online and for free for 20 lucky attendees.

The class, relevant to PaaS and IaaS, takes the best practices from Microsoft for securing services and data in Microsoft Azure, and teaches them based on real-world experience. I’ve been designing and implementing this stuff for enterprises and have learned a lot. The class contains stuff that people who live only in labs will not know … and sadly, based on my googling/reading, a lot of bloggers & copy/pasters fall into that bucket. I’ve learned that the basics of Azure virtual networking must be thoroughly understood before you can even attempt security. So I teach that stuff – don’t assume that you know this stuff already because I know that few really do. Then I move into the fun stuff, like firewalls, WAFs, Private Link/Private Endpoint, and more. The delivery platform will allow an interactive class – this will not be a webinar – I’ve been talking to different people to get advice on choosing the best platform for delivering this class.  I’ve some testing to do, but I think I’m set.

Here’s the class description:

Security is always number 1 or 2 in any survey on the fears of cloud computing. Networking in The Cloud is very different from traditional physical networking … but in some ways, it is quite similar. The goal of this workshop is to teach you how to secure your services and data in Microsoft Azure using techniques and designs that are advocated by Microsoft Azure. Don’t fall into the trap of thinking that networking means just virtual machines; Azure networking plays a big (and getting bigger) role in offering security and compliance with platform and data services in The Cloud.

This online class takes you all the way back to the basics of Azure networking so you really understand the “wiring” of a secure network in the cloud. Only with that understanding do you understand that small is big. The topics covered in this class will secure small/mid businesses, platform deployments that require regulatory compliance, and large enterprises:

  • The Microsoft global network
  • Availability & SLA
  • Virtual network basics
  • Virtual network adapters
  • Peering
  • Service endpoints
  • Public IP Addresses
  • VNet gateways: VPN & ExpressRoute
  • Network Security Groups
  • Application Firewall
  • Route Tables
  • Platform services & data
  • Private Link & Private Endpoint
  • Third-Party Firewalls
  • Azure Firewall
  • Monitoring
  • Troubleshooting
  • Security management
  • Micro-Segmentation
  • Architectures

Level: 400

Topic: Security

Category: IT Professionals

Those of you who have seen the 1-hour (and I rarely stuck to that time limit) conference version of this class will know what to expect. An older version of the session scored 99% at NIC 2020 in Oslo in February with a room packed to capacity. Now imagine that class where I had enough time to barely mention things and give me a full day to share my experience … that’s what we’re talking about here!

This class is one of 4 classes being promoted by the European Cloud Conference.

If you’re serious about participating, register your interest and a lucky few will be selected to join the classes.