Started Building A New Home Office

As I have posted before, I work from home. For a number of reasons, my wife and I have decided that I should move the office out of the house and into our garden. So that means that I need a new building in the garden. I spent months reading over the different options and providers. Eventually, I settled on one provider and option.

I’ve chosen to build a “log cabin”, actually a modern wooden building, in our back garden. I’ve chosen all the upgrades: thicker walls, wall/floor/roof insulation, an upgraded roof, guttering & drainage, and so on, making it a warm place to work in the winter. If I had to describe it, I’d say it’s Nordic in design. The building is 6 x 4 meters. I went big because it gives me lots of space (better to have more than I need than to regret a smaller choice later) and gives my wife the option of working from home too. We paid the deposit last week and are expecting delivery and installation in 5-7 weeks.

The first challenge was where to put the new office. Many of my American friends won’t think that a 6×4 meter building is big, or that finding space for it would be a problem. In Europe, that is big, and space is very much a problem. However, I’m lucky: our back garden was an accident of poor planning by the builder and ended up being ~65 meters long. We were lucky when we bought the house 4 years ago because the previous owner possibly undervalued the site – he didn’t sell through an estate agent. We have lots of space, most of it hilly, but we have space. After some discussions, it was decided to put the office near the house rather than at the bottom of the garden, for 3 reasons:

  • Networking convenience
  • Power convenience
  • The view

The view question was interesting. From the rear of the house, the office will be mostly out of view, with the living parts of the house still looking out onto the back garden. The office will have windows and glass doors on the front, overlooking the garden. So depending on my seating angle, I will be looking out over the green view of the back garden. But there are a few issues.

The first was that there were three nearly 20-year-old birch trees in the exact corner that the office will be going into. I hated the thought of tearing down those trees. But the corner is not used – the trees create a haven for flies, so my kids won’t go in there to play, which was my vision for the area. On the other hand, the left side of our garden is lined with a variety of mature trees, we planted 4 more 18 months ago and they are doing well, and both sides are surrounded by trees outside our border wall. So we called up the local handyman, who cut down the trees and removed the cuttings – he uses the wood after drying it out.

That was the easy bit! The tree stumps had to be removed next. You can’t just cut/grind a stump down and hope that’s that. Nature is tough. The roots would live on, and a new tree could grow through the floor of the office. You have to cut all the roots from the trunk, get under it, and lift it out. So out came shovels, an axe, and a pickaxe. After 5 hours on Saturday, 2 of the 3 stumps were removed. That was back-breaking work. It turns out that birch trees grow deep roots. Those roots are thick fibrous branches that quickly break out into a mesh of fibres, spreading out 3 cm to 30 cm deep and protecting the soil around the tree from tools such as shovels and pickaxes. You have to dig, tear, and cut just to get through those first few centimetres of soil, and then you face roots that an axe will bend, not cut. And when you think you’ve cut the last root that is securing the trunk, you find that there are more. It’s Monday now and I’m facing one last stump. It’s only in the last few hours that the stiffness from Saturday has set in – so today will be fun!

Cutting down the trees revealed something that we had not noticed. The site where the office will reside – about 7 m wide – is not level. There is a ~30 cm slope going from left to right, and a lesser, uneven slope from front to back. The office must be installed on a level site. I evaluated the options – I had hoped that I could dig out one side and use the soil to level out the other. But that would mean digging under the boundary wall and weakening its foundations. There is no option other than to build a concrete base or pad. At my wife’s suggestion, I went onto the local community page on Facebook and asked for builder recommendations. I reached out to 4 builders – 2 are coming today to give me a quote and one is going to call me to make an appointment. I’ll need the pad installed ASAP – concrete sets quickly but takes weeks to fully harden.

And finally … there’s the electrical installation – something that I know I cannot do myself. The cabin manufacturers recommended an electrician. 10 double sockets, lights, a 1.5 kW storage heater, and the connection to the fuse box will clock in at a sizeable sum, plus VAT (sales tax). We’ll try to get some alternative quotes for that next.

Deploying Azure ARM Templates From Azure DevOps – With A Complete Example

In this post, I will show you how to get those ARM templates sitting in an Azure DevOps repo deploying into Azure using a pipeline. With every merge, the pipeline will automatically trigger (you can disable this) to update the deployment. In other words, a complete CI/CD deployment where you manage your infrastructure/services as code.

Annoyance

I’m not a DevOps guru. I use DevOps every day. Every deployment I do for a customer runs from JSON that I’ve helped write, deployed into the customer’s Azure tenant. But we do have people who are DevOps gurus, and we have one seriously fancy deployment system that uses a DevOps pipeline as nothing more than a trigger mechanism. I use that system; I don’t develop it. I wanted to create and run a pipeline for my own needs (Cloud Mechanix Azure training). Admittedly, I’ve tried this before, lost patience, and abandoned it. This time, I persisted and succeeded.

What didn’t help? The dreadful Microsoft documentation. One doc, from the DevOps team, was rubbish. Another had deprecated YAML code (pipelines are written in YAML). A third had an example that was full of errors. OK, let’s look at blogs. But as with many blogs on this topic, the few original posts only showed how to push code into an existing App Service, and the rest were copies and pastes of those App Service posts or of the bad Microsoft examples.

When it comes to tech like this, I have the feeling that many who have the knowledge don’t like to share it.

Concept

What I’m dealing with here is infrastructure-as-code (IaC). The code (Azure JSON in ARM templates) will describe the resources, and the configurations of those resources, that I want to deploy. In my example, it’s an Azure Firewall and its configuration, including the rules. I have created a repository (repo) in Azure DevOps and I edit the JSON using Visual Studio Code (VS Code), Microsoft’s free code editor. When I make a change in VS Code, it is done in a branch of the master copy of the code. I sync that branch to the cloud. To merge the changes, I create a pull request. The pull request starts a change control process, where the owners of the repo can review the code and decide to accept or reject the changes. If the changes are accepted, they are merged into the master copy of the code. And now the magic happens.
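
As a sketch, that day-to-day loop from the VS Code terminal looks something like this (the branch name and commit message are examples):

    git checkout -b fw-rule-change        # work in a branch, not in master
    git add azurefirewall-parameters.json
    git commit -m "Changed a firewall rule"
    git push -u origin fw-rule-change     # sync the branch to Azure DevOps
    # ... then create a pull request in Azure DevOps to merge into master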

A pipeline is a description of a process that will take the master code from the repo and do stuff with it. In my case, deploy the code to a resource group in an Azure subscription. If the resources are already there, then the pipeline will do an update.

I will end up with an Azure Firewall that is managed as code. The rules and configuration are described in a parameter file so that’s all that I should normally need to touch. To make a rules change, I edit the parameter file and do a pull request. A security officer will review the change and approve/reject it. If the change is approved, the new firewall configuration will be deployed. And yes, this approach could probably be used with Azure Firewall Policy resources – I haven’t tested that yet. Now I can give people Read access only to my subscription and force all configuration changes through the pull request review process of Azure DevOps.

Your deployment can be any Azure resources that you can deploy using a template.

Azure Subscription

In Azure I have two resource groups:

  • [Resource Group] p-devops: Where I can do “DevOps stuff”
    • [Storage Account] pdevopsstorsjdhf983: I will use this to store the code that I want to deploy using the pipeline
  • [Resource Group] p-we1fw: Where my hub virtual network is and the Azure Firewall will be
    • [Virtual Network]: p-we1fw-vnet: The virtual network that contains a subnet called AzureFirewallSubnet

Remember that storage account!

DevOps Repo

I created and configured a DevOps repo called AzureFirewall in a DevOps project. There are two files in there:

  • [Template] azurefirewall.json: The file that will deploy the Azure Firewall
  • [Parameter] azurefirewall-parameters.json: The configuration of the firewall, including the rules!

New DevOps Service Connection

DevOps will need a way to authenticate with your Azure tenant and get authorization to use your tenant, subscription, or resource group. You can get real fancy here. I’m going simple and using a feature of DevOps called a Service Connection, found in DevOps > [Project] > Project Settings > Service Connections (under Pipelines):

  1. Click New Service Connection
  2. Select Azure Resource Manager and hit Next
  3. Select Service Principal (Automatic), which is recommended by DevOps.
  4. Here I selected the subscription option and the Azure subscription that my resource groups are in.
  5. I granted access permission to all pipelines.
  6. I named the service connection after my subscription: p-we1net.

As I said, you can get real fancy here because there are lots of options.

New DevOps Pipeline

Now for the fun!

Back in the project, I went to Pipelines and created a new Pipeline:

  1. I selected Azure Repos Git because I’m storing my code in an Azure DevOps (Git) repo. The contents of this repo will be deployed by the pipeline.
  2. I selected my AzureFirewall repo.
  3. Then I selected “Starter Pipeline”.
  4. An editor appeared – now you’re editing a file called azure-pipelines.yml that resides in the root of your repo.

There is an option (instead of Starter Pipeline) where you choose an existing YAML file, maybe one from a folder called .pipelines in your repo.

Edit the Pipeline

Here is the code:

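A minimal sketch of such a pipeline, reconstructed from the settings described in the sections below – the task versions, source path, container name, and region are placeholder assumptions, while the service connection, storage account, and resource group names are this post’s examples:

    trigger:
    - master

    pool:
      vmImage: 'windows-2019'

    steps:
    - task: AzureFileCopy@4
      displayName: 'Stage Files'
      inputs:
        SourcePath: '$(Build.SourcesDirectory)'
        azureSubscription: 'p-we1net'
        Destination: 'AzureBlob'
        storage: 'pdevopsstorsjdhf983'
        ContainerName: 'staging'
        outputStorageUri: 'artifactsLocation'
        outputStorageContainerSasToken: 'artifactsLocationSasToken'

    - task: AzureResourceGroupDeployment@2
      displayName: 'Deploy Azure Firewall'
      inputs:
        azureSubscription: 'p-we1net'
        action: 'Create Or Update Resource Group'
        resourceGroupName: 'p-we1fw'
        location: 'West Europe'
        templateLocation: 'URL of the file'
        csmFileLink: '$(artifactsLocation)azurefirewall.json$(artifactsLocationSasToken)'
        csmParametersFileLink: '$(artifactsLocation)azurefirewall-parameters.json$(artifactsLocationSasToken)'
        deploymentMode: 'Incremental'
        deploymentName: 'azurefirewall'
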
That is a working pipeline. It is made up of several pieces:

Trigger

This controls how the pipeline is started. You can set it to none to stop automatic executions – in the early days when you’re trying to get this right, automatic runs can be annoying.

Pool

Your pipeline is going to run on a Microsoft-hosted agent. I’m using a stock Microsoft agent image based on WS2019. You can supply your own container from Azure Container Registry, but that’s getting fancy!

Task: AzureFileCopy

Now we move into the Steps. The first task downloads the contents of the repo into a storage account. We need to do this because the deployment task that follows cannot directly access the raw files in Azure DevOps. A task is created with the human-friendly name of Stage Files. There are a few settings to configure here:

  • azureSubscription: This is not the name of your subscription! Ain’t that tricky?! This is the name of the service connection that authenticates the pipeline against the subscription. So that’s my service connection called p-we1net, which I happened to name after my subscription.
  • storage: This is the storage account in my target Azure subscription in the p-devops resource group. My service connection has access to the subscription so it has access to the storage account – be careful with restricting access of the service connection to just a resource group and placing the staging storage account elsewhere.
  • ContainerName: This is the name of the container that will be created in your storage account. The contents of the repo will be downloaded into this container.
  • outputStorageUri: The URI/URL of the storage account/container will be stored in a variable which is called artifactsLocation in this example.
  • outputStorageContainerSasToken: A SAS token will be created to allow temporary secure access to the contents of the container. The token will be stored in a variable called artifactsLocationSasToken in this example.

Task: AzureResourceGroupDeployment

This task will take the contents of the repo from the storage account, and deploy them to a resource group in the target subscription. There are a few things to change:

  • azureSubscription: Once again, specify the name of the service connection, not the Azure subscription.
  • resourceGroupName: Enter the name of the target resource group.
  • location: Specify the Azure region that you are targeting.
  • csmFileLink: This is the URI of the template file that you want to deploy. More in a moment.
  • csmParametersFileLink: This is the URI of the parameters file that you want to deploy. More in a moment.
  • deploymentName: I have hard-set the deployment name so I don’t have to clean up versioned deployments from the resource group later. Every resource group has a hard limit on deployment objects (currently 800), and with a resource such as a firewall, that could be hit quite quickly.

csmFileLink

There are three parts to the string: $(artifactsLocation)azurefirewall.json$(artifactsLocationSasToken). Together, the three parts give the task secure access to the template file in the staging storage account.

  • $(artifactsLocation): This is the storage account/container URI/URL variable from the AzureFileCopy task.
  • azurefirewall.json: This is the name of the template file that I want to deploy.
  • $(artifactsLocationSasToken): This is the SAS token variable from the AzureFileCopy task.

csmParametersFileLink

There are three parts to the string: $(artifactsLocation)azurefirewall-parameters.json$(artifactsLocationSasToken). Together, the three parts give the task secure access to the parameter file in the staging storage account.

  • $(artifactsLocation): This is the storage account/container URI/URL variable from the AzureFileCopy task.
  • azurefirewall-parameters.json: This is the name of the parameter file that I want to use to customise the template deployment.
  • $(artifactsLocationSasToken): This is the SAS token variable from the AzureFileCopy task.

Pipeline Execution

There are three ways to run the pipeline now:

  1. Do an update (or a merge) to the master branch of the repo thanks to my trigger.
  2. Manually run the pipeline from Pipelines.
  3. Save a change to the pipeline in the DevOps editor if the master is not locked – which will trigger option 1, to be honest.

You can open the pipeline, or historic runs of it, to view/track the execution.

You’ll also get an email to let you know the status of an ended pipeline run.

Happy pipelining!

Speaking Today At Global Azure Virtual (ONLINE)

I am presenting at 14:00 UK/Ireland (15:00 Central Europe, 09:00 US Eastern) in the Global Azure virtual/online Bootcamp. You can find the link to the session here on Day 3. Here is the session information that is missing from the event site:

Trust No-One Architecture For Services And Data

Security is always one of the top 3 fears of Cloud customers. In The Cloud, the customer is responsible for their network security design and operation. This session will walk you through the components of Azure network security, and how to architect a secure network for Azure virtual machines or platform services, including VNets, network security groups, routing tables, Private Link, VNet peering, web application gateway, DDoS protection, and firewall appliances.

Free Online Training – Azure Network Security

On June 19th, I will be teaching a FREE online class called Securing Azure Services & Data Through Azure Networking.

I’ve run a number of Cloud Mechanix training classes and I’ve had several requests asking if I would ever consider doing something online because I wasn’t doing the classes outside of Europe. Well … here’s your opportunity. Thanks to the kind folks at European Cloud Conference, I will be doing a 1-day training course online and for free for 20 lucky attendees.

The class, relevant to PaaS and IaaS, takes the best practices from Microsoft for securing services and data in Microsoft Azure and teaches them based on real-world experience. I’ve been designing and implementing this stuff for enterprises and have learned a lot. The class contains stuff that people who live only in labs will not know … and sadly, based on my googling/reading, a lot of bloggers & copy/pasters fall into that bucket. I’ve learned that the basics of Azure virtual networking must be thoroughly understood before you can even attempt security. So I teach that stuff – don’t assume that you know it already, because I know that few really do. Then I move into the fun stuff, like firewalls, WAFs, Private Link/Private Endpoint, and more. The delivery platform will allow an interactive class – this will not be a webinar. I’ve been talking to different people to get advice on choosing the best platform for delivering this class. I’ve some testing to do, but I think I’m set.

Here’s the class description:

Security is always number 1 or 2 in any survey on the fears of cloud computing. Networking in The Cloud is very different from traditional physical networking … but in some ways, it is quite similar. The goal of this workshop is to teach you how to secure your services and data in Microsoft Azure using techniques and designs that are advocated by Microsoft Azure. Don’t fall into the trap of thinking that networking means just virtual machines; Azure networking plays a big (and getting bigger) role in offering security and compliance with platform and data services in The Cloud.

This online class takes you all the way back to the basics of Azure networking so you really understand the “wiring” of a secure network in the cloud. Only with that understanding do you understand that small is big. The topics covered in this class will secure small/mid businesses, platform deployments that require regulatory compliance, and large enterprises:

  • The Microsoft global network
  • Availability & SLA
  • Virtual network basics
  • Virtual network adapters
  • Peering
  • Service endpoints
  • Public IP Addresses
  • VNet gateways: VPN & ExpressRoute
  • Network Security Groups
  • Application Firewall
  • Route Tables
  • Platform services & data
  • Private Link & Private Endpoint
  • Third-Party Firewalls
  • Azure Firewall
  • Monitoring
  • Troubleshooting
  • Security management
  • Micro-Segmentation
  • Architectures

Level: 400

Topic: Security

Category: IT Professionals

Those of you who have seen the 1-hour conference version of this class (and I rarely stuck to that time limit) will know what to expect. An older version of the session scored 99% at NIC 2020 in Oslo in February, with a room packed to capacity. Now imagine that class, where I barely had enough time to mention things, given a full day to share my experience … that’s what we’re talking about here!

This class is one of 4 classes being promoted by the European Cloud Conference.

If you’re serious about participating, register your interest and a lucky few will be selected to join the classes.

Cloud Silver Lining

Is There A Silver Lining To This COVID-19 Pandemic Cloud?

The world is pretty frakked up right now. Most of us are in lock-down, trying to keep ourselves, vulnerable friends/family, and our communities safe from the virus. That means that many people are, for the first time, working from home (WFH). Don’t close the browser tab – this is not another “advice on how to work from home” post. Personally, I find the WFH part easy – what’s hard is not leaving the house and trying to work while the kids run amok.

Instead, I want to discuss the possible outcomes of this crisis for our working habits. Think back a month or so. Most of you were in the rat race every morning and evening, commuting on packed roads or stuffed public transport, to offices in locations that were convenient for no one. You had the technology to work from home – heck, the boss probably did that every day – but a lack of trust by the boss or the HR Gestapo meant that you had to march like a nice little drone into the office every day. If I were working for one of the companies in the city centre, I would have to spend 3-4 hours a day commuting by public transport from my home, approximately 38 kilometres away. Stupid, right? Instead, I work as a consultant for a Norwegian company, from the comfort of my home in Ireland, working hand-in-hand with customers, and I have spent all of 4 days in Norway for in-person meetings in the last 16 months. That’s because “we have the technology” … but so do most companies that choose not to use it.

The Cloud is cool because it’s everywhere. If I have Internet access then I can use it. I know there have been capacity issues but:

  • The spike in demand was unprecedented and unexpected
  • No cloud or hosting company keeps 100% or more of free capacity sitting idle for “just in case” … otherwise, costs would be double.

Using Office 365, Teams, other SaaS, and RDS/WVD/Citrix, you can do your entire job from home – assuming that you are an office worker. The current crisis has forced us to do just that! So why do we commute like good little worker bees into that office every day? It makes no sense! HR carries a big part of the blame. I don’t like HR people – I can’t stand them, actually. They cannot trust employees to work from home. But I wonder how many businesses are operating OK right now? Things might not be perfect, but I bet that people are adapting and finding ways to get things done. Owners of small/medium businesses might have the same doubts, but aren’t their employees standing up and getting the job done now that the business is at risk? These are the times when you can find the keepers – the staff that are self-motivated and innovative, and who should be rewarded when things normalise.

Yes, I know, this won’t suit all types of staff/services. But it will suit the typical office worker.

The M50/M4, which is normally clogged like an obese person’s artery, taken at 11:25 am on Wednesday, April 8th, 2020.

Things will get back to some semblance of normality eventually. I’m not going to say when – it might be a month, it might be next year. But do we really have to go back to the old routine again? Can’t employees be empowered to work from home? Can office space demands be reduced and the rent savings converted into WFH allowances for equipment, etc.? Can governments take environmental and public transport/road savings and convert them into tax breaks/grants for building out home offices? Won’t the reduced commuting of office workers reduce the load on already-stressed transport systems? Won’t the environment continue to improve, as it has slightly done in the last month? We all know that something must be done to change the direction of climate change, and this might be the kick in the tail that we needed. And won’t businesses continue to run, with the already-identified staff that boost those businesses?

If you’re a business leader or owner with employees that can work from home, I think you should take the current provisional systems and convert them, using what you’ve learned, into improved permanent systems. You’ve been forced to evolve for the lock-down, so learn from it: make your staff happier, improve the environment, make your company more attractive to future employees, and maybe save a fortune in office rent.

Errors When You Add A Cert To Application Gateway Listener From Key Vault

This post is dealing with a situation where you attempt to add a certificate to a v2 Azure Application Gateway/Firewall (WAG_v2/WAF_v2) from an Azure Key Vault. The attempt fails and any further attempt to delete/modify the certificate fails with this error:

Invalid value for the identities ‘/subscriptions/xxxxxxx/resourcegroups/myapp/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myapp-waf-id’. The ‘UserAssignedIdentities’ property keys should only be empty json objects, null or the resource exisiting property.

Application Gateway v2 and Key Vault

Azure Key Vault is the best place to store secrets in Microsoft Azure – particularly SSL certificates. Key Vault has a nice system for abstracting versions of a certificate, so you can put in newer versions without changing references to the older one. There is also a feature for automatic renewal of expiring certs from certain issuers. I also like the separation of the exposed resource from the organisation’s secrets that you get with this approach; the legacy method was that you had to upload the cert into the WAG/WAF, but WAG_v2/WAF_v2 allow you to store the certs in a Key Vault, and that limited access is granted using a user-assigned managed identity (an Azure resource, not an Azure AD resource, which makes it more agile).

The Problem

I was actually going to write a blog post about how to obtain the secret ID of a certificate from the Key Vault so you could add it to the WAGv2/WAFv2. But as I was setting up the lab, I realised that during the day, Microsoft had updated the Azure Portal blade so certs were instead presented as a drop-down list box; now my post was pointless. But I continued setting things up and hit the above issue.

The Cause/Fix

When you use this architecture, WAF_v2/WAG_v2 requires that you have enabled soft delete on the Key Vault – and that is the only check it does. The default retention for Key Vault soft delete is 90 days. I was in a lab, mucking around, so I set soft delete in my Key Vault to 7 days – a perfectly legit value for Key Vault. However, the Application Gateway (AppGW) requires it to be set to a minimum of 90 days … even though it does not check that!

To undo the damage, you can run the following PowerShell cmdlets:

  • Set-AzApplicationGatewayIdentity
  • Remove-AzApplicationGatewaySslCertificate
  • Remove-AzApplicationGatewayHttpListener
  • Set-AzApplicationGateway to update the WAF
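
Strung together, that might look like this sketch – the gateway, certificate, and listener names are placeholders for your own:

    # Placeholders: a WAF called myapp-waf in resource group myapp
    $appgw = Get-AzApplicationGateway -Name "myapp-waf" -ResourceGroupName "myapp"

    # Re-apply the user-assigned managed identity on the gateway
    Set-AzApplicationGatewayIdentity -ApplicationGateway $appgw `
        -UserAssignedIdentityId "/subscriptions/xxxxxxx/resourcegroups/myapp/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myapp-waf-id"

    # Remove the broken certificate and the listener that references it
    Remove-AzApplicationGatewaySslCertificate -ApplicationGateway $appgw -Name "myapp-cert"
    Remove-AzApplicationGatewayHttpListener -ApplicationGateway $appgw -Name "myapp-listener"

    # Push the updated configuration back to Azure
    Set-AzApplicationGateway -ApplicationGateway $appgw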

Thanks to Cat in the Azure network team for the help!

Enabling NSG Traffic Analytics Fails

This post will deal with a scenario where you get this error when attempting to enable NSG Traffic Analytics with a Log Analytics Workspace:

Failed to save flow log settings
Failed to update flow logs settings for ‘NSG-NAME’. Error: An error occurred..

NSG Traffic Analytics

I work mostly in Azure networking these days. My customers are typically larger enterprises that are focused on governance and security. When you build Azure network architecture for these kinds of organisations, the networks have many pieces to make micro-segmented security a reality. And that means you need to be able to troubleshoot NSG rules and routing. I find the troubleshooting tools in Network Watcher to be useless. Instead, I use:

  • My own understanding, to build a mental map of the effective routes for the subnet – because Azure cannot show effective routes for a subnet unless a VM NIC is allocated in it (often there isn’t one)
  • Azure Firewall’s logs
  • NSG Traffic Analytics logs in a Log Analytics Workspace

In my architecture, there is a single, central Log Analytics Workspace that is in a different subscription to the virtual networks/NSGs. And this is where the problem is rooted.

Symptoms

When you attempt to enable Traffic Analytics, you get the above error. Interestingly, if you only attempt to enable NSG Flow Logs (data logged to a storage account), there is no problem. So the issue is related to getting the Workspace configured as a part of the solution (NSG Traffic Analytics).

The Problem & Fix

The problem is that the Microsoft.Network resource provider must be enabled in the subscription that the Workspace is located in. In my case, as I said, I have a dedicated management subscription so there are no network resources to require/enable that resource provider automatically.

If you go to Subscriptions > Resource Providers in the Azure Portal, you can enable the provider there. Wait (no more than 15 minutes) and things should be OK then.
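
If you prefer PowerShell, here is a quick sketch of the same fix (the subscription name is a placeholder):

    # Switch to the subscription that hosts the Log Analytics Workspace
    Set-AzContext -Subscription "p-management"

    # Register the Microsoft.Network resource provider in that subscription
    Register-AzResourceProvider -ProviderNamespace "Microsoft.Network"

    # Re-run this until it reports "Registered"
    (Get-AzResourceProvider -ProviderNamespace "Microsoft.Network").RegistrationState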

Thanks to Dalan in Azure Networking for helping fix this one!


Azure Firewall Improvements – February 2020

Microsoft published a couple of blog posts in the last month to announce some interesting news about Azure Firewall, a resource that I have used with every customer I’ve dealt with in the last year.

Azure Firewall Manager (Preview)

I first played with Azure Firewall Manager in the Secure Virtual Hub preview. Now the feature is in preview with the “network SKU” of Azure Firewall. The concept starts with Azure Firewall Manager, an Azure Portal GUI that isn’t a resource; it’s a way to centrally manage one or more Azure Firewall resources in one region or in many regions.

Azure Firewall Manager does, however, control a new top-level resource: the firewall policy. Policies move the management of Azure Firewall configuration and rules from the firewall resource to the policy resource. You can create a simple hierarchy of policies.

For example, I find myself creating the same collections/rules in every Azure Firewall. If a customer has 3 network deployments around the world with identical base requirements, you can create a “parent” policy, and then a child policy for each firewall instance; each child inherits the current and future configurations of the parent policy. You then associate each child policy with the correct firewall and make the network-specific changes in the child. Any future global changes go into the parent, and they inherit down to each firewall.
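
A sketch of that hierarchy in PowerShell (the resource names are placeholders):

    # Parent policy holding the global configuration and rules
    $parent = New-AzFirewallPolicy -Name "global-policy" -ResourceGroupName "p-policies" -Location "westeurope"

    # Child policy for one deployment; it inherits current and future parent settings
    New-AzFirewallPolicy -Name "we1-policy" -ResourceGroupName "p-policies" -Location "westeurope" `
        -BasePolicy $parent.Id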

Cool, right?

IP Groups (Preview)

This is another cool top-level resource. Let’s say I’m managing an Azure Firewall with a site-to-site network connection. There’s a pretty good chance that I am constantly creating rules for specific groups of addresses, sets of networks, or even all the “super-nets” of the WAN. Do I really want to remember/type each of those addresses? Surely a mistake will be made?

IP Groups allow you to create an abstraction. For example, I can put each of my WAN super-nets into an IP Group resource called wan-ipg. Then I can use wan-ipg instead of listing each address. Nice!
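
As a sketch in PowerShell (the names and address ranges are placeholders):

    # An IP Group holding the WAN super-nets; rules can then reference wan-ipg
    New-AzIpGroup -Name "wan-ipg" -ResourceGroupName "p-we1fw" -Location "westeurope" `
        -IpAddress "10.10.0.0/16", "10.20.0.0/16"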

Support for TCP/UDP 65535

One of those base configurations that I’m constantly deploying enables Active Directory Domain Services (ADDS) domain controllers to replicate through the Azure Firewall. If you look at the TCP/UDP requirements, you’ll find that one of the rules requires a huge range, with the high port being 65535. However, Azure Firewall only supported ports up to TCP/UDP 64000. It did not affect me, but there were reports of issues with ADDS replication. Now you can create rules up to the normal maximum port number, as in the sketch below.
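
For example, the ADDS dynamic RPC rule can now be written with its real high port – a sketch with placeholder names and addresses:

    # Network rule using the full dynamic range, previously capped at 64000
    $rule = New-AzFirewallNetworkRule -Name "ADDS-RPC-Dynamic" -Protocol TCP `
        -SourceAddress "10.0.1.0/24" -DestinationAddress "10.10.1.10" `
        -DestinationPort "49152-65535"
    $coll = New-AzFirewallNetworkRuleCollection -Name "ADDS" -Priority 200 `
        -ActionType "Allow" -Rule $rule

    # Add the collection to the firewall and apply the change
    $azfw = Get-AzFirewall -Name "p-we1fw-fw" -ResourceGroupName "p-we1fw"
    $azfw.AddNetworkRuleCollection($coll)
    Set-AzFirewall -AzureFirewall $azfw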

Forced Tunnelling Support

This is for those of you who live in 1990 or have tinfoil on your heads. Now you can force all outbound traffic to go back to on-premises instead of to the Internet. I guess that this one is for the US government or someone with equally large purchasing power (influence).

Enable Public IP Addresses in Private Networks

I’m working with a customer that has used public IP addressing behind their on-premises firewall. One of my colleagues at work has a similar customer. I know of others with the same sort of customer.

Azure Firewall has not been compatible with that configuration. Imagine this:

  • The customer has a public IP range for their on-premises LAN – no NAT rules on the firewall.
  • They have a site-to-site network connection to Azure.
  • An Azure Firewall sits in the hub of a hub and spoke network – all ingress and all egress traffic must pass through the firewall.
  • A service in an Azure spoke tries to communicate with something on-premises on one of those public IP addresses.

And that’s where it all goes wrong. Azure Firewall sees that the destination is a non-RFC 1918 IP address (not in 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16), assumes the traffic is Internet-bound, and forcefully SNATs the packets towards the Internet – so the packets never reach the on-premises destination.

With this update, you can use PowerShell/JSON to configure public IP ranges that are to route via the AzureFirewallSubnet (propagated routes from GatewaySubnet) and not to the Internet.
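
At the time of writing, this is done via the firewall’s PrivateRange property – a sketch with placeholder names and ranges:

    # Treat this public range as private: no SNAT, routed like internal traffic
    $azfw = Get-AzFirewall -Name "p-we1fw-fw" -ResourceGroupName "p-we1fw"
    $azfw.PrivateRange = @("IANAPrivateRanges", "51.51.0.0/16")
    Set-AzFirewall -AzureFirewall $azfw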

ICSA Labs Corporate Firewall Certification

Certifications are good, and some customers probably use these sorts of things when comparing products.

Verifying Propagated BGP Routes on Azure ExpressRoute

An important step of verifying or troubleshooting communications over ExpressRoute is checking that all the required routes to get to on-premises or WAN subnets have been propagated by BGP to your ExpressRoute Virtual Network Gateway (and the connected virtual networks) by the on-premises edge router.

The Problem

Routing to Azure is often easy; your network admins allocate you a block of private address space on the “WAN” and you use it for your virtual network(s). They add a route entry for that CIDR block on their VPN/ExpressRoute edge device, and packets can now get to Azure. The other part of that story is that Azure needs to know how to send packets back to on-premises – this affects both responses and requests. I have found that this is often overlooked, and people start saying things like “Azure networking is broken” when they haven’t propagated a route to Azure so that the Azure resources connected to the virtual network(s) can respond.

The other big cause is that the on-premises edge firewall doesn’t allow the traffic – this is the #1 cause of RDP/SSH to Azure virtual machines not working, in my experience.

I had one such scenario where a system in Azure was “not accessible”. We verified that everything in Azure was correct. When we looked at the BGP routes propagated via ExpressRoute, we saw that the client subnets were not included in the Route Table. The on-prem network admins had not propagated those routes, so the Azure ExpressRoute Gateway did not have a route to send responses to the clients. Once the route was propagated, things worked as expected.

Finding the Routes

There are two ways you can do this. The first is to use PowerShell:

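The cmdlet is Get-AzExpressRouteCircuitRouteTable; the circuit and resource group names below are placeholders:

    Get-AzExpressRouteCircuitRouteTable -ResourceGroupName "p-er" `
        -ExpressRouteCircuitName "my-circuit" `
        -PeeringType AzurePrivatePeering -DevicePath Primary
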
The command takes quite a while to run. Eventually, it will spit out the full route table. If there are lots of routes (there could be hundreds, if not thousands), they will scroll beyond the buffer of your console. So modify the command to send the output to a text file:

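Using the same placeholder names:

    Get-AzExpressRouteCircuitRouteTable -ResourceGroupName "p-er" `
        -ExpressRouteCircuitName "my-circuit" `
        -PeeringType AzurePrivatePeering -DevicePath Primary |
        Out-File -FilePath .\routes.txt
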
Unfortunately, it does not produce a CSV format by default, but one could format the output into something that’s easier to filter and manipulate.

You can also use the Azure Portal, where you can view the Route Table and export its contents as a CSV file. Open the ExpressRoute Circuit resource and browse to Peerings.

Click Azure Private, which is the site-to-site ExpressRoute connection.

Now a pop-up blade appears in the Azure Portal called Private Peering. There are three interesting options here:

  • Get ARP records to see information on ARP.
  • Get Route Table – more on this in a second.
  • Get Route Table Summary to get a breakdown/summary of the records, including neighbor, version, status, ASN, and a count of routes.

We want to see the Route Table, so click that option. Another pop-up blade appears, and now you wait for several minutes. Eventually, the screen will load up to 200 of the entries from the Route Table. If you want to see the entire list of entries, or you want an export, click Download. A CSV file will download via your browser, with one line per route from the Route Table – every route, not just the first 200.

Search the Route Table for a listing that either matches the on-premises/WAN subnet or includes its address space; for example, a route to 10.10.0.0/16 covers a subnet such as 10.10.10.0/24.

I’m Presenting Two Sessions At NIC 20/20 Vision in Oslo

I will be presenting two Azure sessions at the (NICCONF) NIC 20/20 Vision conference in Oslo on February 6th.

The content I’m presenting on is inspired by the work I have been doing with Innofactor Norway for customers in Norway. So it will be kind of cool to stand (once again) on a stage in Oslo and share what I’ve learned. I have two sessions on the afternoon of the 6th.

Secure Azure Network Architecture

Azure networking & security has become my focus area. I enjoy the organic nature of how Azure’s software-defined networking functions. I enjoy the scale, the possibilities, and the variety of options. And most of all, I appreciate how the near-universally overlooked fundamentals play a bigger role in network security than people realise. It’s a huge area to cover, but I will do my best in the hour that I have:

This session will walk you through the components of Azure network security, and how to architect a secure network for Azure virtual machines or platform services, including VNets, network security groups, routing tables, VNet peering, web application gateway, DDoS protection, and firewall appliances.

Auditing Azure – Compliance, Oversight, Governance, and Protection

An important part of governance is recording what is going on in Azure and being able to retain, query, and report on that data. This is an area I had a cool solution for this time last year, but Microsoft blew that up. Recently I revisited this space and found cool new things that I could do. And in preparing for this session, I found more stuff that I could talk about. I’ve enjoyed preparing this session and it has contributed back to my work. This session is late in the day for most Norwegians, but I hope that attendees stick around.

Auditing isn’t the most glamorous subject, but in a self-service environment, it becomes important to protect assets, the company, and even your job. In this session, you’ll learn how Azure provides auditing functionality that you can query, report on, and store securely for as long as you need it in cost-efficient ways.

Hopefully, I will see some of you there at the event!