Deploy Shared Azure Bastion To Virtual WAN Spoke

In this post, I will explain how you can deploy Azure Bastion into a spoke in a hub & spoke architecture with a Virtual WAN hub – and use that Bastion to securely log into virtual machines in other spokes using RDP or SSH. I will also explain why this has limitations in a hub & spoke architecture with a VNet-based hub.

The Need

Even organisations that opt for a PaaS-only Azure implementation need virtual machines. Once you do network security, you need virtual machines for those DevOps build agents or those GitHub runners. And realistically, migrated legacy workloads need VMs. And things like AKS and HPC are based on VMs that you build and troubleshoot. So you need VMs. And therefore, you need a secure way to log into those VMs.

Those of us who want an air gap between the PC and the servers have tried things like RD Gateway and Guacamole. Neither is perfect. Ideally, we want Azure AD integration (for Premium security features) and a platform resource (to minimise maintenance).

And along came Azure Bastion. At first reading, it seemed ideal. And then we started to discover warts. Many of those warts were cleaned up. Bastion got support for a desktop client through a CLI login. A hub deployment was possible – if you use a VNet-based hub – but it gave Bastion users (including external support staff) a map of your entire Azure network because they get read access to the hub VNet – and all its peering connections. For many of us, that left us with deploying Bastion in every spoke virtual network – both costly and a waste of IP space.

We needed an Azure Bastion that we could deploy once in a spoke. We could log into it, route through the firewall in the hub, and log into VMs running in other spokes.

IP-Based Connection

Microsoft announced a new feature in the Standard tier of Azure Bastion called IP-Based Connection. With this feature, you can log into a virtual machine across an IP network from your Bastion. That means you can log into:

  • Virtual machines in the same subnet or virtual network
  • Virtual machines in other Azure virtual networks
  • On-premises computers across site-to-site network connections such as VPN or ExpressRoute

The assumptions are that:

  • The NSG protecting the Azure virtual machine allows SSH/RDP from AzureBastionSubnet to the virtual machine.
  • The hub firewall (if you have one) allows SSH/RDP from AzureBastionSubnet to the virtual machine.
  • The on-premises firewall, if the virtual machine is on-premises, allows RDP/SSH from AzureBastionSubnet to the computer.
  • The OS of the virtual machine or computer allows RDP/SSH from AzureBastionSubnet.

VNet-Based Hub

In my first experiment, I tried deploying a shared Azure Bastion in a spoke with a VNet-based hub. I could deploy it no problem but Azure Bastion could not route to other spokes sharing the hub. Why?

There were no routes from the spoke subnets to other spoke subnets. But that’s OK – I know how to fix that.

Let’s say my entire Azure network is in the 10.1.0.0/16 address space. Well, if I want to route to any spoke in that address space, I can:

  1. Create a route table and *cough* associate it with the AzureBastionSubnet
  2. Add a user-defined route (UDR) for 10.1.0.0/16 with a next hop of the hub firewall (or a routing appliance).

Azure Bastion needs a 0.0.0.0/0 route to the Internet, so you can't do the usual spoke thing of overriding the default route with a UDR; I would leave the default Internet route in place.

Did it work? No – that’s because AzureBastionSubnet has a hard-coded rule to prevent the association of a route table. I guess that Microsoft had too many support calls with people doing bad things with a route table associated with AzureBastionSubnet.

It turns out that the only way to get a route from one spoke to another with a VNet-based hub is:

  1. Use Azure Virtual Network Manager (AVNM – currently in preview) to peer your spokes with transitive peering.
  2. Do not expect spoke-to-spoke traffic to flow through a hub firewall – AVNM does not support configuring a next-hop.

That means the only shared Azure Bastion option for VNet-based hubs is to deploy Bastion in the hub and leave all your peering connections visible. Ick!

vWAN-Based Hub

I really like using Azure Virtual WAN (vWAN) for my hub and spoke – and almost none of my customers use it for SD-WAN (the primary use case). The reasons I like it are:

  • It pushes the hub into the platform, reducing administrative efforts
  • Routing becomes something that you push from the hub using eBGP

“Ah – what’s that you say about eBGP, Aidan?”

You can create route tables in Virtual WAN Hubs – let’s call them hub route tables. Then in the properties of a spoke virtual network connection you can configure:

  • Propagation: Have the route table learn routes from the spoke virtual network using eBGP.
  • Association: Share routes from the route table to the spoke virtual network.

And you can put static routes into a hub route table.

My Scenario

A shared Azure Bastion in the spoke of an Azure Virtual WAN hub

When I deploy a Virtual WAN Hub, I choose the Secured Virtual WAN Hub option. This places an Azure Firewall in the hub. I then add static routes for 0.0.0.0/0 and the private IP address spaces, with a next hop of the Azure Firewall, to the built-in Default hub route table.

All spokes:

  • Propagate to the built-in None hub route table, so the routes of the spoke are forgotten.
  • Associate with the built-in Default hub route table, so they learn that the next hop to 0.0.0.0/0 and the private IP address spaces is the firewall.

I can deploy Azure Bastion into a spoke but this spoke will require a different route configuration. That is because I use a firewall to isolate the spokes. If I had open spoke-to-spoke traffic then Bastion would probably just work. My scenario is actually simple to fix:

  1. Create a new hub route table, maybe called Bastion.
  2. Add a static route to the rest of the hub and spoke (10.1.0.0/16 in my example) with a next-hop of the hub firewall.
  3. Configure the Bastion spoke connection to associate with the Bastion hub route table and propagate to none.
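
Here is a minimal Az PowerShell sketch of those three steps. The resource group, hub, firewall, and connection names are placeholders, and parameter names can vary slightly between Az.Network versions, so treat this as illustrative rather than definitive:

# Placeholder names/IDs for illustration only.
$rg      = "hub-rg"
$hubName = "vwan-hub"
$fwId    = (Get-AzFirewall -ResourceGroupName $rg -Name "hub-firewall").Id

# 1. A static route sending the rest of the hub & spoke (10.1.0.0/16) to the hub firewall
$route = New-AzVHubRoute -Name "to-firewall" -Destination @("10.1.0.0/16") -DestinationType "CIDR" -NextHop $fwId -NextHopType "ResourceId"

# 2. A new hub route table called Bastion that contains the static route
$bastionRt = New-AzVHubRouteTable -ResourceGroupName $rg -VirtualHubName $hubName -Name "Bastion" -Route @($route) -Label @("bastion")

# 3. Associate the Bastion spoke connection with the Bastion route table and propagate to None
$noneRt = Get-AzVHubRouteTable -ResourceGroupName $rg -VirtualHubName $hubName -Name "noneRouteTable"
$routing = New-AzRoutingConfiguration -AssociatedRouteTable $bastionRt.Id -Label @("none") -Id @($noneRt.Id)
Update-AzVirtualHubVnetConnection -ResourceGroupName $rg -VirtualHubName $hubName -Name "bastion-spoke-cnx" -RoutingConfiguration $routing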

Now:

  • The Bastion will use the firewall as the next hop to all other spokes.
  • The Bastion will go directly to the Internet for control plane traffic, using the default route in the subnet.
  • Other spokes will have the same route back to the Bastion using the firewall as the next-hop.

Finally, you need to ensure that Firewall and NSG rules allow RDP/SSH from AzureBastionSubnet in the Bastion spoke to the VMs in other spokes. And it works! All an operator/developer/support staff member needs now is:

  • An Azure AD account with read access to the Azure Bastion resource – no need for read access to the hub or even the spoke with the VM!
  • The IP address for the machine they want to sign into – my design is limited to Azure VMs but on-premises static routes could be added to the Bastion hub route table.
  • A username and password with login rights to the virtual machine or computer.
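
For reference, here is a hedged sketch of the kind of NSG rule mentioned above for a target spoke, assuming the AzureBastionSubnet in the Bastion spoke uses 10.1.10.0/26 (a made-up prefix) and placeholder resource names:

# Allow RDP/SSH from the shared Bastion's AzureBastionSubnet into the spoke
Get-AzNetworkSecurityGroup -ResourceGroupName "spoke1-rg" -Name "spoke1-nsg" |
  Add-AzNetworkSecurityRuleConfig -Name "Allow-Shared-Bastion" -Description "RDP/SSH from the shared Azure Bastion" -Access Allow -Protocol Tcp -Direction Inbound -Priority 1000 -SourceAddressPrefix "10.1.10.0/26" -SourcePortRange "*" -DestinationAddressPrefix "VirtualNetwork" -DestinationPortRange @("3389", "22") |
  Set-AzNetworkSecurityGroup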

Digital Transformation Is Not Just A Tech Thing

In this post, I want to discuss why many businesses fail to get what they expect from The Cloud – why their “digital transformation” (or “cloud transformation”) fails.

Cloud Concerns

We’ve all seen the surveys. CIOs are scared about lots of things when it comes to cloud migration/adoption. Costs might overrun. Security might be insufficient. Skills are in short supply. I’m afraid to say those are all concerns – manageable ones. The big question is rarely discussed until it is too late: what are we really getting into?

Failure Starts In The Middle

Many cloud adoption projects start at the wrong place in the organisation. Instead of an instruction coming with direction and authority from the C-suite (CEO, CTO, CSO, CIO, etc), IT management, typically in Operations, make the decision to go to The Cloud for mundane reasons such as “try to reduce costs” or “avoid doing another hardware upgrade”.

Operations go ahead and build (or request a consultant to deliver) what they know: a centrally managed, locked-down environment. You know what I mean; a rigid environment that complies with old ITIL-style processes from the early 2000s. If a developer needs something, log a ticket, and we’ll get around to it.

Meanwhile, developers hear that The Cloud is coming and they imagine a world where there is self-service and bottomless pits of compute and storage. Oh! And less waiting on tickets logged with Operations. And the C-suite gets a visit from AWS or Microsoft and is told about how agile and disruptive their businesses will become. Super!

The Age-Old Battle Continues

The Cloud landing zone is built and developers are given “access” (if we can call it that) to their new virtual workspace, only to find that they can create limited quantities and sizes of (breath) virtual machines. Modern platforms are nowhere to be found. The ability to create is locked away behind custom permissions roles. If they want something, to do their job for the business, they need to log a ticket and wait – just how is this any different from the old VMware platform they probably ran on before The Cloud?

And the business is digitally transformed. Well, no, not really. Some SAP stuff might be running on Azure hardware and VMs now. And some databases might have been moved to The Cloud. But that’s about it. None of the agility or disruption happens.

What Went Wrong?

The issue began right at the start. I’ll give you an example. A Microsoft account manager (or whatever they are called this financial year), a Microsoft partner, and an IT operations manager have a meeting. This sounds like a bad joke so far, and I promise that no one will laugh. The Microsoft account manager offers X dollars to perform an assessment. Someone will come in, scan the VMware machines, write a report that says “here is the TCO comparison” and “you’d be silly not to get started now”. So the Operations manager gives the go-ahead and the direction of what to do.

That contrasts greatly with the Microsoft Cloud Adoption Framework (CAF). Phase 1 (Strategy) of the CAF is all about getting business motivations from the C-suite that can be shaped into the direction in phase 2 (Plan) where the tech stuff begins – including the (digital estate) assessment. And the Plan phase is all about people and process. Ahh – people (skills) and process (change). This is where digital transformation begins. We haven’t even gotten to the part where things are deployed (phase 3, Ready) in The Cloud yet because we don’t know what “shape” the organisation will be until business motivations tell the IT departments what is expected from The Cloud.

The Microsoft Azure Cloud Adoption Framework

So What Is Digital Transformation?

In short, digital transformation should change:

  1. Process: Legacy methods of creating & running IT systems may not suit the business, especially if self-service, agility, and disruption are required. Look at the organisations that have shaken up different verticals in different industries and service sectors. Their IT departments are very different to the vertical pillars that you may recognise in your organisation. The order to change came from the top where the authority resides.
  2. People: How people are organised may need to change. The skills they possess must change. Roles must be identified and skills must be developed before the project, not during or even after some handover phase from a consulting company. A budget and time for training, and possibly even a budget to recruit additional bodies, require authority that can only come from the top.
  3. Technology: It’s easier to change technology than to change process and people. The problem is what to change it to. The shape of the technology must reflect the people and process patterns, otherwise it is not fit for purpose.

Az Module Scripts in GitHub Actions

In this post, I will show how to run Azure Az module scripts as tasks in a GitHub Actions workflow. Working examples can be found in my GitHub AzireFirewall/DevSecOps repo which is the content for my DevSecOps articles.

Credit to my old colleague Alan Kinane for getting me started with his post, Azure CI/CD with ARM templates and GitHub Actions.

Why Use A Script?

There are simple ways to run a deployment from a workflow:

  • A task that runs a deployment
  • A simple PowerShell/Azure CLI task that runs an inline script

But you might want something that does more. For example, you might want to do some error checking. Or maybe you are going to use a custom container and execute complex tasks from it. In my case, I wanted to do lots of error checking and give myself the ability to wrap scripts around my deployments.

My Example

For this post, I will use the hub deployment from my GitHub AzireFirewall/DevSecOps repo – this deploys a VNet-based (legacy) hub in an Azure hub & spoke architecture. There are a number of things you are going to need. I’ve just reorganised and updated the main branch to support both Azure DevOps pipelines (/.pipelines) and GitHub actions (/.github).

Afterwards, I will explain how the action workflow calls the PowerShell script.

GitHub Repo

Set up a repository in GitHub. Copy the required files into the repo. In my example, there are four folders:

  • platform: This contains the files to deploy the hub in an Azure subscription. In my example, you will find bicep files with JSON parameter files.
  • scripts: This folder contains scripts used in the deployment. In my example, deploy.ps1 is a generic script that will deploy an ARM/Bicep template to a selected subscription/resource group.
  • .github/workflows: This is where you will find YAML files that create workflows in GitHub actions. Any valid file will automatically create a workflow when there is a successful merge. My example contains hub.yaml to execute the script, deploy.ps1.
  • .pipelines: This contains the files to deploy the code. In my example, you will find a YAML file for a DevOps pipeline called hub.yaml that will execute the script, deploy.ps1.

You can upload the files into the repo or sync using Git/VS Code.

Azure AD App Registration (Service Principal or SPN)

You will require an App Registration; this will be used by the GitHub workflow to gain authorised access to the Azure subscription.

Create an App Registration in Azure AD. Create a secret and store that secret (Azure Key Vault is a good location) because you will not be able to see the secret after creation. Grant the App Registration Owner rights to the Azure subscription (as in my example) or to the resource groups if you prefer that sort of deployment.
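
Here is a hedged Az PowerShell sketch of that setup; the display name and subscription ID are placeholders, and the property that exposes the generated secret varies between Az module versions, so check your version's documentation:

# Placeholder values for illustration only.
$subscriptionId = "00000000-0000-0000-0000-000000000000"

# Create the App Registration plus Service Principal; recent Az versions generate a client secret for you.
# Store that secret somewhere safe (Azure Key Vault) because you cannot view it again later.
$sp = New-AzADServicePrincipal -DisplayName "github-hub-deployer"

# Grant the Service Principal rights over the subscription (Owner in my example)
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Owner" -Scope "/subscriptions/$subscriptionId"

# In recent Az versions, the generated secret is exposed as $sp.PasswordCredentials.SecretText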

Repo Secret

One of the features that I like a lot in GitHub is that you can store secrets at different levels (organisation, repo, environment). In my example, I am storing the secrets for using the Service Principal in the repo, making it available to the workflow(s) that are in this repo only.

Open the repo, browse to Settings, and then go to Secrets. Create a new Repository Secret called AZURE_CREDENTIALS with the following structure:

{
  "tenantId": "<GUID>",
  "subscriptionId": "<GUID>",
  "clientId": "<GUID>",
  "clientSecret": "<GUID>"
}

Create the Workflow

GitHub makes this task easy. You can go into your repo’s Actions tab and create a workflow from a template. Or if you have a YAML file that defines your workflow, you can place it into your repo in /.github/workflows. When that change is merged, GitHub will automatically create a workflow for you.

Tip: To delete a workflow, rename or delete the associated file and merge the change.

Logging In

The workflow must be able to sign into Azure. There are plenty of examples out there but you need to accomplish two things:

  1. Log into Azure
  2. Enable the Az modules

The following code in the workflow accomplishes both tasks:

      - name: Login via Az module
        uses: azure/login@v1
        with:
          creds: ${{secrets.AZURE_CREDENTIALS}}
          enable-AzPSSession: true 

The line uses: azure/login@v1 enables an Azure sign-in. Note that the with statement selects the credentials that we previously added to the repo.

The line enable-AzPSSession: true enables the Az PowerShell modules. With that, we are signed in and have all we need to execute Azure PowerShell cmdlets.

Executing A PowerShell Script

A workflow has a section called jobs; in here, you can create multiple jobs that share something in common, such as a login. Each job is made up of steps. Steps perform actions such as checking out code from the repo, logging into Azure, and executing a deployment task (running a PowerShell script, for example).

I can create a PowerShell script that does lots of cool things and store it in my repo. That script can be edited and managed by change control (pull requests) just like my code that I’m deploying. There is an example of this below:

The PowerShell script step running in a GitHub action
      - name: Deploy Hub
        uses: azure/powershell@v1
        with:
          inlineScript: |
            .\scripts\deploy.ps1 -subscriptionId "${{env.hubSub}}" -resourceGroupName "${{env.hubNetworkResourceGroupName}}" -location "${{env.location}}" -deployment "hub" -templateFile './platform/hub.bicep' -templateParameterFile './platform/hub-parameters.json'
          azPSVersion: "latest"

The task uses azure/powershell@v1, which allows us to run an inline script using the sign-in that was previously created in the job (the login step). The configuration of this step is really simple – you specify the PowerShell cmdlets to run. In my example, it executes the PowerShell script and passes in the parameter values, some of which are stored as environment variables at the workflow level.

My script is pretty generic. I can have:

  • Multiple Bicep files/JSON parameter files
  • Multiple target scopes

I can create a PowerShell step for each deployment and use the parameters to specialise the execution of the script.

Running multiple PowerShell script tasks in a GitHub workflow

The PowerShell Script

The full code for the script can be found here. I’m going to focus on a few little things:

The Parameters

You can see in the above example that I passed in several parameters:

  • subscriptionId: The ID of the subscription to deploy the code to. This does not have to be the same as the default subscription specified in the Service Connection. The Service Principal used by the pipeline must have the required permissions in this subscription.
  • resourceGroupName: The name of the resource group that the deployment will go into. My script will create the resource group if required.
  • location: The Azure region of the resource group.
  • deploymentName: The name of the ARM deployment that will be created in the resource group for the deployment (remember that Bicep deployments become ARM deployments).
  • templateFile: The path to the template file in the pipeline container.
  • templateParameterFile: The path to the parameter file for the template in the pipeline container.

Each of those parameters is identically named in param () at the start of the PowerShell script and those values specialise the execution of the generic script.
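
For context, here is a simplified sketch of how such a generic script can be shaped; the real deploy.ps1 in the repo does a lot more checking, so treat this as a skeleton only:

param (
  [string]$subscriptionId,
  [string]$resourceGroupName,
  [string]$location,
  [string]$deploymentName,
  [string]$templateFile,
  [string]$templateParameterFile
)

# Target the requested subscription (the Service Principal must have rights there)
Set-AzContext -Subscription $subscriptionId | Out-Null

# The resource group creation (using $location) is handled with the checks shown later in this post

# Run the ARM/Bicep deployment into the resource group
New-AzResourceGroupDeployment -Name $deploymentName -ResourceGroupName $resourceGroupName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile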

Outputs

You can use Write-Host to output a value from the script to appear in the console of the running job. If you add -ForegroundColor then you can make certain messages, such as errors or warnings, stand out.

Beware of Manual Inputs

Some PowerShell commands might want a manual input. This is not supported in a pipeline and will terminate the pipeline with an error. Test for this happening and use code logic wrapped around your cmdlets to prevent it from happening – this is why a file-based script is better than a simple/short inline script, even to handle a situation like creating a resource group.
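
As a hedged illustration (not from my repo): a cmdlet like Remove-AzResourceGroup prompts for confirmation by default, which would hang a workflow, so either test before acting or suppress the prompt explicitly:

# Hypothetical cleanup step, for illustration only
if (Get-AzResourceGroup -Name $resourceGroupName -ErrorAction SilentlyContinue)
{
  # -Force suppresses the "Are you sure?" prompt that would otherwise stall the pipeline
  Remove-AzResourceGroup -Name $resourceGroupName -Force
}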

Try/Catch

Error handling is a big deal in a hands-off script. You will find that 90% of my script is checking for things and dealing with unwanted scenarios that can happen. A simple example is a resource group.

An ARM deployment (remember this includes Bicep) must go into a resource group. You can just go ahead and write the one-liner to create a resource group. But what happens when you update the code, the script re-runs, and sees that the resource group is already there? In that scenario, a manual input will appear (and fail the pipeline) to confirm that you want to continue. So I have an elaborate test-and-do process:

if (!(Get-AzResourceGroup $resourceGroupName -ErrorAction SilentlyContinue))
{ 
  try 
  {
    # The resource group does not exist so create it
    Write-Host "Creating the $resourceGroupName resource group"
    New-AzResourceGroup -Name $resourceGroupName -Location $location -ErrorAction Stop
  }
  catch 
  {
    # There was an error creating the resource group
    Write-Host "There was an error creating the $resourceGroupName resource group" -ForegroundColor Red
    Break
  }
}
else
{
  # The resource group already exists so there is nothing to do
  Write-Host "The $resourceGroupName resource group already exists"
}

Conclusion

Once you know how to do it, executing a script in your pipeline is easy. Then your PowerShell knowledge can take over and your deployments can become more flexible and more powerful. My example executes ARM/Bicep deployments. Yours could do a PowerShell deployment, add scripted configurations to a template deployment, or even run another language like Terraform. The real thing to understand is that now you have a larger scripting toolset available to your automated deployments.

An IT Person On A Weight Loss Journey

In this post, I’m going to share a little about my weight loss journey – I’m an IT pro and the sedentary lifestyle isn’t exactly great for staying trim!

Let’s go back a few years. Back in 2016-2017 I was weighing in at around 16 stone 10 pounds (106 KG or 234 lbs); that’s a great weight for a 6’3″ NFL linebacker, not a 5’7″ IT consultant.

In the above picture with Paula Januszkiewicz, I might have been talking about new security features in Hyper-V, but I was probably wondering what pizza I might order at the hotel when I wrapped up.

Living the Good Life

When I grew up, there wasn’t a lot of money for unhealthy food. Food was homemade, sometimes even homegrown. Treats like chocolate might happen once every few weeks and during the holidays. There were no regular trips to “the chipper” for a burger, pizza didn’t exist … it was eat healthily or don’t eat.

Back when I was in school/college, I cycled to and from school and I was stick-thin at 11 stone 7 pounds – I know that’s heavy for someone my height but I was actually stick thin.

Then along came graduation, my first job in IT, and money. Every mid-morning, there was a trip to the company canteen (for a fried breakfast), followed by lunch, followed by a convenient dinner. I swear I could eat takeaway 7 days a week. I worked in international consulting, living in hotels for 6 months at a go, so I ate a lot either in the hotel or in a bar/restaurant. I remember sitting down and thinking “what’s up with my waist?” when it first rolled over my belt.  And so it went for 20+ years.

I got a real shock 4-5 years ago when I had to do a health check for some government stuff, and I was classified as … obese. The shock! That only happens to Americans that dine at KFC! It was time for a change. By now, I had a 38″ waist.

Exercise

I have done some time in the gym over the years. During one of my 6-month hotel stints (in a small wealthy town called Knutsford near Manchester, UK) I spent an hour in the gym on most days. And then I ate a 2-3 course dinner with a bottle of wine. When I exercise, I get HUNGRY. Back in the late 2000’s I spent 5-7 days a week in the gym for over half a year. I’d do up to 90 minutes of training, followed by a 14″ pizza, chicken, and fries. I’m not kidding about the hungry thing!

I live near a canal and a lot of work was being done to open up a “greenway” – a path dedicated to walkers, runners, and cyclists. Not long after the obese determination, I purchased a bike and started cycling, getting out several times a week to do a good stretch. That first year I did quite a bit on the bike. I felt fitter, but not smaller. I came home and ate the same way as always. I drank lots of beer and wine.

2 years ago I was in Florida with my family for a 3-week vacation. I didn’t have enough shorts with me so I went to a nearby outlet and tried on my usual 38″ waistline. Huh! They were too … big! I ended up buying 36″ shorts and jeans that vacation and they fit just nice.

Last year, I decided that I needed to do something more. My diet needed work. I wasn’t getting time to go out on the bike anymore – a wife and kids that need me around and the pending arrival of a new baby (twins as it turned out) would kill off spending hours away on the bike at a time. I joined a group that runs a simplified calorie counting scheme. The goal is that you make adjustments to reduce your calorie intake but still get to enjoy the nice things – but within limits. I seriously started that program in August of last year. At that time I weighed 15 stone 12 pounds (100 KG or 222 lbs), not bad for an NFL running back.

Food Optimisation

The program I’m using doesn’t use the term “diet”. Diet means starving yourself. In this program, I can still eat, but the focus is on including more salad/veg (1/3 of my plate), swapping out wasteful calories, and replacing oils and butter. It has resulted in myself and my wife experimenting so much more with our cooking – even I cook, which is probably quite a shock to the takeaway industry. The results were fast. Soon all my 38″ waistline clothes went into recycling. In June of this year, I was a 34″ waist and weighing 12 stone 12 pounds (81 kg or 180 lbs), which is pretty OK for a small NFL wide receiver. This is the point where things went wrong when I was last in the gym, 10 years ago: my weight would never go lower – the 14″ pizzas (etc.) didn’t help.

I miss beer 🙂 I used to love getting craft beer and trying it out. I switched over to lower-calorie whiskey & dark rum. But then my wife spotted that 0 alcohol beer has very few calories. It might be missing the buzz, but the taste is still there – perfect while cooking on the BBQ.

Running != Weight Loss

While on maternity leave this summer, my wife started walking. I tried to get out a bit on the bike but was really restricted to once or twice at the weekend. I started to join my wife on her walks on Saturdays and Sundays. We pushed the pace and really started clocking up the KMs, varying between 7-14km per walk. I could feel the difference in my fitness level as we kept walking faster.

I struggled a bit again. The summer months brought lots of good weather and I was making the most amazing homemade burgers on the BBQ, along with steak, and chicken … you get the idea. My weight was bouncing down and up.

And then my wife spotted that the gym at a local hotel was running a family deal. We could sign up the entire family, the kids could use the pool, and my wife and I could use the gym. I went the next day and I used the bike and I ran. I ran 5 KMs with a few stretches of walking to get a drink. I was shocked. Now when I say that I ran, I was crawling along at 5 kmph – not exactly Raheem Mostert burning up the injury-causing turf at the New York Jets’ stadium last year.

I ran for a week. I was a little sore. I weighed in the following week and I was up 2 lbs! What the f***? There were a few things that I realised/learned with some googling:

  • When you start/change your exercise routines, you “damage” muscle (it’s part of the building muscle process) and that causes your body to retain fluid which temporarily increases your weight.
  • Running does not cause weight loss alone. You need to still have a calorie deficit. Following up a run by pigging out does not help.
  • Muscle (which I can feel) adds weight but can cause calorie “afterburn” so you burn off more calories even when resting.

Where I Am Now

I set myself a goal of hitting 12 stone (168 lbs or 76 kg). I’m also aiming to get my waistline below 30″ – an important milestone in long-term health. I now weigh around 12 stone 7 lbs (175 lbs or 79 Kgs) and my waist is 32″. My weight is trending down over a 2-3 week period and I’m running 5 KMs, now at 9.5 kmph with sprints up to 13 kmph. If I have time, I’ll combine weights and bike with running. If I hit the 12 stone mark and I’m still carrying a few too many pounds, then I’ll aim for 11 stone 7 pounds and see how things go.

Me about a month ago.

I’m Not Missing Out – Much

I used to live on pizza. I miss it 🙂 We had a week away back in June and I gave myself a week off from the food optimisation. I ate fried food and pizza – a whole pizza – to myself. I drank nice beer every night. And my weight did go up by 4 lbs, but I lost it all within 10 days.

As I said, we’re cooking a lot. There are lots of websites and cookbooks that specialise in making nice food but with better ingredients. Last week I made double cheeseburgers using low-fat cheese and beef and they were DE-F’N-LICIOUS. We’ve been eating curries, air-fried everything, and slow cooking stuff like mad. Our fridge is full of sauces and our press is stuffed with spices. We’re at the point where restaurant food has to be pretty amazing to impress us now.

We like this series of books by an Irish couple that became well known on Instagram – they actually live near us and we bumped into them eating dinner around the corner from our home.


We’re also fans of the Pinch of Nom series.


Feeling Like You Could Lose Weight?

I’m not an expert. The best advice I can give you is:

  • Start now. Make the decision. Go Bo Jackson and just do it.
  • Find the right food optimisation technique for you. I like the one I’m using because it’s flexible and isn’t about “punishing” you. My wife’s uncle did something different and shed weight in no time.
  • Exercise. This will help burn the calories. When combined with the calorie deficit of your food optimisation, you’ll see a difference.
  • Be patient. Weight loss is different for everyone. For some, there’s an instant buzz when the pounds go off. Others are shocked when lots of exercise leads to weight gain! Be consistent, do the right things, and be patient. It’s not about what happens today, it’s what happens over a 1, 2, 6, or 12-month period.

But most of all – enjoy it. I gained weight out of laziness and because I enjoyed certain foods. Now, I’m eating healthier than I ever did but I am still enjoying food – the chicken fillet burgers I had a couple of weeks ago might appear on the menu tomorrow (better than any I ever had out) and I’m looking forward to the beef curry that we’re making tonight!

Understanding the Azure Image Builder Resources

In this post, I will explain the roles of and links/connections between the various resources used by Azure Image Builder.

Background

I enjoy the month of July. My customers, all in the Nordics, are off for the entire month and I am working. This year has been a crazy busy one so far, so there has been almost no time in the lab – noticeable I’m sure by my lack of writing. But this month, if all goes to plan, I will have plenty of time in the lab. As I type, a pipeline is deploying a very large lab for me. While that runs, I’ve been doing some hands on lab work.

Recently I helped develop and use an image building process, based on Packer, to regularly create images for a Citrix farm hosted in Microsoft Azure. It’s a pretty sweet solution that is driven from Azure DevOps and results in a very automated deployment that requires little work to update app versions or add/remove apps. At the time, I quickly evaluated Azure Image Builder (also based on Packer but still in Preview back then) but I thought it was too complicated and would still require the same pieces as our Packer solution. But I did decide to come back to Azure Image Builder when there was time (today) and have another look.

The first mission – figure out the resource complexity (compared to Packer by itself).

The Resources

I believe that one of Microsoft’s failings when documenting these services is their inability to explain the functions of the resources and how they work together. Working primarily in ARM templates, I get to see that stuff (a little). I’ve always felt that understanding the underlying system helps with understanding the solution – it was that way with Hyper-V and that continues with Azure.

Managed Identity – Microsoft.ManagedIdentity/userAssignedIdentities

A managed identity will be used by an Image Template to authorise Packer to use the imaging process that you are building. A custom role is associated with this Managed Identity, granting Packer rights to the resource group that the Shared Image Gallery, Image Definition, and Image Template are stored in.
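
A hedged sketch of creating that identity and granting it rights follows; the names are placeholders and, to keep the sketch short, it assigns a built-in role to the resource group rather than the custom role described above:

# Placeholder names for illustration only.
$rg = Get-AzResourceGroup -Name "images-rg"

# The user-assigned managed identity that the Image Template will use
$identity = New-AzUserAssignedIdentity -ResourceGroupName $rg.ResourceGroupName -Name "aib-identity" -Location $rg.Location

# Grant the identity rights over the resource group that holds the gallery, definition, and template
New-AzRoleAssignment -ObjectId $identity.PrincipalId -RoleDefinitionName "Contributor" -Scope $rg.ResourceId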

Shared Image Gallery – Microsoft.Compute/galleries

The Shared Image Gallery is the management resource for images. The only notable attribute in the deployment is the name of the resource which, sadly, is similar to things like Storage Accounts in lacking standardisation with the rest of Microsoft Azure resource naming.

Image Definition – Microsoft.Compute/galleries/images

The Image Definition documents your image as you would like to present it to your “customers”.

The Image Definition is associated with the Shared Image Gallery by naming. If your Shared Image Gallery was named “myGallery” then an image definition called “myImage” would actually be named as “myGallery/myImage”.

The properties document things including:

  • VM generation
  • OS type
  • Generalised or not
  • How you will brand the images built from the Image Definition
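
A hedged sketch of that gallery/definition relationship, using placeholder names and branding values:

# Placeholder names, region, and branding values for illustration only.
New-AzGallery -ResourceGroupName "images-rg" -Name "myGallery" -Location "westeurope"

# The definition is a child of the gallery, so its full resource name becomes "myGallery/myImage"
New-AzGalleryImageDefinition -ResourceGroupName "images-rg" -GalleryName "myGallery" -Name "myImage" -Location "westeurope" -OsType Windows -OsState Generalized -HyperVGeneration V2 -Publisher "myOrg" -Offer "citrixWorker" -Sku "win2019"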

Image Template – Microsoft.VirtualMachineImages/imageTemplates

This is where you will end up spending most of your time while operating the imaging process over time.

The Image Template describes to Packer (hidden by Azure) how it will build your image:

  • Identity points to the resource ID of the Managed Identity, permitting Packer to sign in as that identity/receiving its rights when using this Image Template to build an Image Version.
  • Properties:
    • Source: The base image from the Azure Marketplace to start the build with.
    • Customize: The tasks that can be run, including PowerShell scripts that can be downloaded, to customise the image, including installing software, configuring the OS, patching and rebooting.
    • Distribute: Here you associate the Image Template with an Image Definition, referencing the resource ID of the desired Image Definition. Every time you run this Image Template, a new Image Version of the Image Definition will be created.

Image Version – Microsoft.Compute/galleries/images/versions

An Image Version, a resource with a messy resource name that will break your naming standards, is created when you build from an Image Template. The name of the Image Version is based on the name of the Image Definition plus an incremental number. If my Image Definition is named “myGallery/myImage” then the Image Version will be named “myGallery/myImage/<unique number>”.

The properties of this resource include a publishing profile, documenting to what regions an image is replicated and how it is stored.

What Is Not Covered

Packer will create a resource group and virtual machine (and associated resources) to build the new image. The way that the virtual machine is networked (public IP address by default) can normally be manipulated by the Image Template when using Packer.

Summary

There is a lot more here than with a simple run of Packer. But, Azure Image Builder provides a lot more functionality for making images available to “customers” across an enterprise-scale deployment; that’s really where all the complexity comes from and I guess “releasing” is something that Microsoft knows a lot about.

 

Building Azure VM Images Using Packer & Azure Files

In this post, I will explain how I am using a freeware package called Packer to create SYSPREPed/generalised templates for Citrix Cloud / Windows Virtual Desktop (WVD) – including installing application/software packages from Azure Files.

My Requirement

Sometimes you need an image that you can quickly deploy. Maybe it’s for a scaled-out or highly-available VM-based application. Maybe it’s for a Citrix/Windows Virtual Desktop worker pool. You just need a golden image that you will update frequently (such as for Windows Updates) and be able to bring online quickly.

One approach is to deploy a Marketplace image into your application and then use some deployment engine to install the software. That might work in some scenarios, but not well (or at all) in WVD or Citrix Cloud scenarios.

A different, and more classic, approach is to build a golden image that has everything installed; the VM is then generalised to create an image file. That image file can be used to create new VMs – this is what Citrix Cloud requires.

Options

You can use classic OS deployment tools as a part of the solution. Some of us will find familiarity in these tools but:

  • Don’t waste your time with staff under the age of 40
  • These tools aren’t meant for the cloud – you’ll have to daisy chain lots of moving parts, and that means complex failure/troubleshooting.

Maybe you read about Azure Image Builder? Surely, using a native image building service is the way to go? Unfortunately: no. AIB is a preview, driven by scripting, and it fails by being too complex. But if you dig into AIB, you’ll learn that it is based on a tool called Packer.

Packer

Packer, a free tool from Hashicorp, the people behind Terraform, is a simple command line tool that will allow you to build VM images on a number of platforms, including Azure ARM. The process is simple:

  • You build a JSON file that describes the image building process.
  • You run packer.exe to ingest that JSON file and it builds the image for you on your platform of choice.

And that’s it! You can keep it simple and run Packer on a PC or a VM. You can go crazy and build a DevOps routine around Packer.
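
For example, a minimal run from PowerShell might look like this, assuming your template is saved as windows-image.json beside packer.exe (a made-up file name):

# Validate the JSON template first, then build the image
& .\packer.exe validate .\windows-image.json
& .\packer.exe build .\windows-image.json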

Terminology

There are some terms you will want to know:

  • Builders: These are the types of builds that Packer can do – the platforms that it can build on. Azure ARM is the one I have used, but there’s a more complex/faster Builder for Azure called chroot that uses an existing build VM to build directly into a managed disk. Azure ARM builds a temporary VM, configures the OS, generalises it, and converts it into an image.
  • Provisioners: These are steps in the build process that are used to customise your operating system. In the Windows world, you are going to use the PowerShell provisioner a lot. You’ll find other built-in provisioners for Ansible, Puppet, Chef, Windows Restart and more.
  • Custom/Community Provisioners: You can build additional provisioners. There is even a community of provisioners.

Accursed Examples

If you search for Windows Packer JSON Files, you are going to find the same file over and over. I did. Blog posts, powerpoints, training materials, community events – they all used the same example: Deploy Windows, install IIS, capture an image. Seriously, who is ever going to want an image that is that simple?

My Requirement

I wanted to build a golden image, a template, for a Citrix worker pool, running in Azure and managed by Citrix Cloud. The build needs to be monthly, receiving the latest Windows Updates and application upgrades. The solution should be independent of the network and not require any file servers.

Azure Files

The last point is easy to deal with: I put the application packages into Azure Files. Each installation is wrapped in a simple PowerShell script. That means I can enable a PowerShell provisioner to run multiple scripts:

    {
      "type": "powershell",
      "scripts": [
        "install-adobeReader.ps1",
        "install-office365ProPlus.ps1"
      ]
    },

This example requires that the two scripts listed in the array are in the same folder as packer.exe. Each script is run in turn, sequentially.
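
Each of those wrapper scripts is small. A hedged sketch of what one might look like, assuming a made-up storage account (myshare), share name (packages), and installer switches:

# install-adobeReader.ps1 - illustrative sketch only; the account, share, and switches are assumptions
$storageKey = "<storage account key>"
$secureKey  = ConvertTo-SecureString $storageKey -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ("Azure\myshare", $secureKey)

# Map the Azure Files share that holds the application packages
New-PSDrive -Name "Z" -PSProvider FileSystem -Root "\\myshare.file.core.windows.net\packages" -Credential $credential | Out-Null

# Run the installer silently and wait for it to complete
Start-Process -FilePath "Z:\AdobeReader\setup.exe" -ArgumentList "/sAll /rs" -Wait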

Unverified Executables

But what if one of those scripts, like Office, wants to run a .exe file from Azure Files? You will find that the script will stall while a dialog “appears” (to no one) on the build VM stating that “we can’t verify this file” and waits for a human (that will never see the dialog) to confirm execution. One might think “run Unblock-File” but that will not work with Azure Files. We need to update HKEY_CURRENT_USER (which will be erased by SYSPREP) to trust EXE files from the FQDN of the Azure Files share. There are two steps to this, which we solve by running another PowerShell provisioner:
    {
      "type": "powershell",
      "scripts": [
        "permit-drive.ps1"
      ]
    },
That script will run two pieces of code. The first will add the FQDN of the Azure Files share to Trusted Sites in Internet Options:

set-location "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"
new-item "windows.net"
set-location "Windows.net"
new-item "myshare.file.core"
set-location "myshare.file.core"
new-itemproperty . -Name https -Value 2 -Type DWORD

The second piece of code will trust .EXE files:

set-location "HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies"
new-item "Associations"
set-location "Associations"
new-itemproperty . -Name LowRiskFileTypes -Value '.exe' -Type STRING

SYSPREP Stalls

This one wrecked my head. I used an inline PowerShell provisioner to add Windows roles & features:

    {
      "type": "powershell",
      "inline": [
        "while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }",
        "while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }",
        "Install-WindowsFeature -Name Server-Media-Foundation,Remote-Assistance,RDS-RD-Server -IncludeAllSubFeature"
      ]
    },
But then the Sysprep task at the end of the JSON file stalled. Later I realised that I should have done a reboot after my roles/features add. And for good measure, I also put one in before the Sysprep:
    {
      "type": "windows-restart"
    },
You might want to run Windows Update – I’d recommend it at the start (to patch the OS) and at the end (to patch Microsoft software and catch any missing OS updates). Grab a copy of the community Windows-Update provisioner and place it in the same folder as Packer.exe. Then add this provisioner to your JSON – I like how you can prevent certain updates with the query:
    {
      "type": "windows-update",
      "search_criteria": "IsInstalled=0",
      "filters": [
        "exclude:$_.Title -like '*Preview*'",
        "include:$true"
      ]
    },

Summary

Why I like Packer is that it is simple. You don’t need to be a genius to make it work. What I don’t like is the lack of original documentation. That means there can be a curve to getting started. But once you are working, the tool is simple and extensible.

Monitoring & Alerting for Windows Defender in Azure VMs

In this post, I will explain how one can monitor Windows Defender and create incidents for it with Azure VMs.

Background

Windows Defender is built into Windows Server 2016 and Windows Server 2019. It’s free and pretty decent. But it surprises me how many of my customers (all) choose Defender over third-parties for their Azure VMs … with no coaching/encouragement from me or my colleagues. There is an integration with the control plane using the antimalware agent extension. But the level of management is poor to non-existent. There is a Log Analytics solution, but solutions are deprecated and, last time I checked, it required the workspace to be in per-node pricing mode. So I needed something different to operationalise Windows Defender with Azure VMs.

Data

At work, we always deploy the Log Analytics extension with all VMs – along with the antimalware extension and a bunch of others. We also enable data collection in Azure Security Center. We use a single Log Analytics workspace to enable the correlation of data and easy reporting/management.

I recently found out that a table in Log Analytics called ProtectionStatus contains a “heartbeat” record for Windows Defender. Approximately every hour, a record is stored in this table for every VM running Windows Defender. In there, you’ll find some columns such as:

  • DeviceName: The computer name
  • ThreatStatusRank: A code indicating the health of the device according to defender:
    • 150: Healthy
    • 470: Unknown (no extension/Defender)
    • 350: Quarantined malware
    • 550: Active malware
  • ThreatStatus: A description for the above code
  • ThreatStatusDetails: A longer description
  • And more …

So you can see that you can search this table for malware infection records. First thing, though, is to filter out the machines/records reporting that there is no Defender (Linux machines, for example):

let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where Resource in (all_windows_vms)
| sort by TimeGenerated desc

The above will find all active Windows VMs that have been reporting to Log Analytics via the extension heartbeat. Then we’ll store that data in a set, and search that set. Now we can extend that search, for example finding all machines with a state other than healthy (150):

let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where Resource in (all_windows_vms)
| where ThreatStatusRank <> 150
| sort by TimeGenerated desc
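
You can also run these queries outside of the portal. A hedged sketch using the Az PowerShell module, assuming you have the workspace's Customer ID (a GUID, not the resource ID):

# Placeholder workspace ID for illustration only.
$workspaceId = "00000000-0000-0000-0000-000000000000"

$query = @"
let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where Resource in (all_windows_vms)
| where ThreatStatusRank <> 150
| sort by TimeGenerated desc
"@

# Run the query against the workspace and list the unhealthy records
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$results.Results | Format-Table DeviceName, ThreatStatus, TimeGenerated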

Testing

All the tech content here will be useless without data. So you’ll need some data! Search for the EICAR test string/file and start “infecting” machines – be sure to let anyone monitoring the environment know first.

Security Center

Security Center will record incidents for you:

You will get email alerts if you have configured notifications in the subscription’s Security Center settings. Make sure the threshold is set to LOW.

If you want an alternative form of alert then you can use a Log Analytics alert (Scheduled Query Alert resource type) based on the below basic query:

SecurityAlert
| where TimeGenerated > now(-5m)
| where VendorName == 'Microsoft Antimalware'

The above query will search for Windows Defender alerts stored in Log Analytics (by Security Center) in the last 5 minutes. If the threshold is greater than 0 then you can trigger an Azure Monitor Action Group to tell whomever or start whatever task you want.

Workbooks

Armed with the ability to query the ProtectionStatus table, you can create your own visualisations for easy reporting on Windows Defender across many machines.

 

The pie chart is made using this query:

let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where TimeGenerated > now(-7d)
| where Resource in (all_windows_vms)
| where ThreatStatusRank <> 150
| summarize count(Threat) by Threat

With some reading and practice, you can make a really fancy workbook.

Azure Sentinel

I have enabled the Entity Behavior preview.

Azure Sentinel is supposed to be the central place to monitor all security events, hunt for issues, and where to start investigations – the latter thanks to the new Entity Behavior feature. Azure Sentinel is powered by Log Analytics – if you have data in there then you can query that data, correlate it, and do some clever things.

We have a query that can search for malware incidents reported by Windows Defender. What we will do is create a new Analytic Rule that will run every 5 minutes using 5 minutes of data. If the results exceed 0 (threshold greater than 0) then we will create an incident.

let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where TimeGenerated > now(-5m)
| where Resource in (all_windows_vms)
| where ThreatStatus <> 'No threats detected' or ThreatStatusRank <> 150 or Threat <> ''
| sort by Resource asc
| extend HostCustomEntity = Computer

The last line is used to identify an entity. Optionally, we can associate a logic app for an automated response. Once that first malware detection is found:

You can do the usual operational stuff with these incidents. Note that this data is recorded and your effectiveness as a security organisation is visible in the Security Efficiency Workbook in Azure Sentinel – even the watchers are watched! If you open an incident you can click investigate which opens a new Investigation screen that leverages the Entity Behavior data. In my case, the computer is the entity.

The break-out dialogs allow me to query Log Analytics to learn more about the machine and its state at the time and the state of Windows Defender. For example, I can see who was logged into the machine at that time and what processes were running. Pretty nice, eh?

The Office – Construction Complete

The construction of Cloud Mechanix global HQ finished yesterday afternoon. The final piece to go in place was the step up to the door. You can really see the slope in the site in the below photo.


If you step in you can see the all-wood finish, with the 1st fitting electrics, ready for the final touches.

And you can see the view from one of the locations where one of the desks will be located.

We had some paving stones from the site clearance, so they are being repurposed to cross the lawn from the house to the front door. A swing set used to sit directly in front of the office – the replacement is going to the far end of the lawn. The grass underneath and the empty anchor spots are bald so I’ll be re-sowing grass there in the coming days.

So what’s next? Paint and second-fitting electrics are next in the project plan. I will also be looking at how we can extend the house security systems to the office – a combination of a supplier-based monitored alarm system and the Ring cameras. That will allow us to insure the contents of the new office, and then … it’ll be time to move in.

The New Office – Day 3

It’s Monday and day 3 of the construction of Cloud Mechanix Global HQ. On Friday, the carpenters finished the steel roof and installed the wall studs for the cavity wall with insulation, as you can see in the photo.

One of the options we selected in the installation was to have a “plinth” and step installed. The price for this is always estimated on site – the site affects the levelling and height of the foundation, and the height determines the amount of wood and labour required. The quote I was given was €350. The team is planning to finish the interior and additional woodwork today. Next up will be the painters and electrical second fitting. We also added the paint and painting options to the purchase. I can paint, but I suck at edges and I’m slow; I’d rather let a professional do it plus that means it’s time I can spend doing other things.

Thoughts are now turning to the interior, which will be painted white. Floorboards are being installed and also being painted white. But I am considering installing additional flooring to protect the structural wood. If I do, it’ll be something dark, or even grey, to contrast somewhat with the white walls and ceiling. I already have a white L-shaped Ikea “Bekant” desk. I think I will stick with the Ikea desks – I do like them. The plan is that the L-shaped desks will be placed behind the windows at the front, giving us a nice view up the garden while working. We’ll see where the budget is in a week or so, but I’m partial towards the version of the Bekant that includes powered legs that can raise/lower the desk at the push of a button.

I have an office chair already with a white frame and black cushions. But it was a bad purchase – the piston lasted only a few months and it makes a cracking noise when I adjust my position – that was why I replaced the previous seat! I can see myself buying some Secret Lab chairs so comments on that are welcome – from people who have owned one for more than 1 year.

I have a white 2×4 Ikea “Kallax” unit in the small office in the house at the moment. On the plus side, it’s great to display certain things. On the negative side, the closed options are too small for the stuff you want to hide – I have 2 large plastic storage boxes hidden in the built-in wardrobe in the office at the moment, filled with things like spare peripherals, cables, battery packs, and so on. I think I want something that is closed storage at the bottom and open shelf storage at the top. I will keep looking.

My desk is quite full at the moment: 3 x USB drives (that I might replace with a NAS), a laptop, an Intel NUC, a KVM switch, a Surface dock, an Epson eco-tank printer, 2 x 27″ monitors, a mic, and so on. Some of it is stuff that you need around but don’t need beside you all the time. Because we have a nearly 6 x 4m space, we have room to add more desk space. So I am thinking of putting in 1, or even 2, additional straight desks at a later time. Things like the printer and networking gear can be relocated to there.

Finally, there are times where you want to work, but want to relax. Maybe you need to read something in a more relaxed position, or just take a break from the screen. I’m thinking that we’ll put in a black leather couch in the middle of the office – I’m not sure where yet because I might also move the 43″ TV from the house office into there – nice for watching conferences, etc. The second-hand market sometimes is great for that kind of thing – my mother-in-law is quite skilled at buying/selling on those kinds of websites so we might have to recruit her assistance. Depending on how that goes and how much space there is, we might add a small coffee table too.

The New Office – Construction Has Begun

It is day 2 of the construction of our new office. The carpenters arrived exactly at 9 am yesterday morning, bang-on time, to begin the assembly and construction of our new garden office – the global headquarters of Cloud Mechanix.

My only concern at the start was whether the site would be level enough for the build. The team warned us early on that the slope would make one side look higher up when it was level but we knew that and were OK with it. They quickly assembled the wooden frame that would eventually sit on concrete blocks.

Once the 6x4m frame was built I was called out to pick the final position – allowing just over half a metre from the concrete garden wall so I can paint the outside to maintain the integrity of the wooden construction. Not long after, the wooden frame was in position. Around lunchtime, the planks for the outside wall were being hammered into place. I’d say it took the 3 of them 45 minutes to get all the planks in place. Then the ceiling was put on.

The electricians were called in and asked to confirm how we wanted to do things after they surveyed the site. The one thing I was worried about was how the armoured electrical cable would run from the office to the fuse box at the front of the house – we need to power at least 2 computers + equipment and a 1.5 KW heater (please save comments on €13,000 heating/aircon systems – the office is costing that much!). The ethernet line and electrical cable are entering on the back-right and will run along the garden wall under the capping (which you cannot see in the photos here). Then they will run underground to the front of the house. The ethernet line will enter the house at the same location as the phone/Internet and be just a meter from the broadband router. The electrical cable will cross the front of the house hidden in white trunking (maybe 2 cm wide) attached under the white soffit of the house – nicely camouflaged. The electrical cable will enter above the front door (still camouflaged) and then be under 1 metre from the fusebox – trunking from the front door, along an interior white doorframe – still camouflaged.

The electricians did the first fitting: 5 x dual power sockets, light switch, and 6 x dropdown lights. When leaving, they let me know that they would be back once the interior was finished by the carpenters to do the second fittings – sockets, lights, and the aforementioned cable installations.

The carpenters carried on, adding insulation and sheeting to the roof before the end of the day.

As you can see, there are two big windows, right where the front desks will be – one for me and one for my wife. Cloud Mechanix will have a great view overlooking about 40m of the garden!

Day two (today) began with drizzle. The team (down to two) were bang-on time again. They told me that they expect to require 3 days for the complete assembly and finishing of the wooden structure. Today, the plan was to get the doors and windows done, the studs for the wall insulation & interior installed, and the roof started. As you can see, the doors and windows were in place at lunchtime.

Exterior trims have been added, the slats for the roof (steel roof) are in place, and I can hear steel being cut as I type – they are installing the steel sheet roof that has a black tile effect. Once that is done, the slow bit (as they referred to it) will begin: the interior finishes, the lower trim and the step.