3 Tips For MS Certification Hands-On-Labs

Here are my three tips for improving your results in a hands-on-lab in a Microsoft certification exam:

  1. Read the task description carefully.
  2. Optimise your time.
  3. Review your work.

About Hands-On-Labs

If you’ve sat a Microsoft certification exam over the past quarter century then you are familiar with the traditional format:

  • Either a simple scenario or a case study.
  • Multiple choice questions where you select one best answer or multiple answers that are correct or part of the best solution, or sometimes ordering the steps.
  • “Correct” answers that are wrong and “wrong” answers that are right depending on feature/update releases and when the question was probably written.
  • Trick questions that are quite unfair.

I sat the AZ-700: Designing and Implementing Microsoft Azure Networking Solutions this week and was surprised to see a hands-on-lab at the very end. Before the lab appeared I had approximately an hour left in the exam. When I was finished, I had 5 minutes left. The exercises were not hard, especially for anyone used to deploying Azure networking resources, but the lab was time consuming and there were a lot of tasks to complete.

The lab appeared at the end of the exam. It provided a username and password. There is no copy/paste feature into the Azure Portal, which is embedded in the test, but you can click the username and password to have them appear in the log-in screen. The absence of copy/paste means that you need to be careful when you are asked to enter a specific name for a resource.

The lab was made up of a number of exercises. Each exercise was discrete in my exam – no exercise depended on another. Each exercise had a description of differing complexity and clarity. Some were precise and some were vague. Some were short tasks and others were long-running tasks.

I found myself answering a comment on LinkedIn this morning and thought “this would be a nice blog post”. So on with the tips!

Read The Task Description Carefully

Just like the multiple-choice questions, the exercises are probably (I’m guessing) written by Microsoft Certified Trainers (MCTs) who may or may not be experts in the exam content – this sort of comment often makes MCTs angry and defensive. It’s clear from the language that some of the authors don’t have a clue – this is where MCTs say “leave feedback” but I would have spent another 2 hours leaving feedback when I had a family to get home to!

A task might have multiple ways to complete it. I can’t share specifics from the exam, but one of the tasks was very vague and offered no context. Without thinking too much, I immediately thought of 3 possible deployments that could solve the issue. Which was right? Were all 3 right? I went with the one that would require the least work – it felt right based on some of the language, but I was reading between the lines and trying to think what the author was thinking.

Read the task instructions carefully. Take notes of things to complete – use the dry marker pens and boards you are given and check steps off as you do them. If a required resource is named, then note the case and duplicate that in your lab – Azure might drop the name to lower case, because that’s what it does for some resource types!

Don’t jump to any assumptions. Look for clue words that hint at requirements or things to avoid. In the unclear question that I had, there was one word that led me to choose the approach that I took. I’ve no idea if the answer was right or wrong, but it felt right.

  1. Read the task carefully.
  2. Take notes on actions to create a checklist.
  3. Re-read the task looking for clue words.
  4. Verify your checklist is complete.
  5. Tick off items on the checklist as you work.

Optimise Your Time

I had 12 tasks (I think) to complete after answering dozens of multiple choice questions. I had an hour or so left in the exam and that hour flew by. As I said earlier, some tasks were quick. Some tasks required a lot of work. And some tasks were long-running.

In my exam, the tasks were independent of each other. That meant I could start on task 2 while a resource for task 1 was still deploying.

The Azure Portal can offer several ways to accomplish a task. You can build out each resource you require in individual wizards. Sometimes, the last “big” resource that you need has a wizard that can deploy everything – that’s the method that you want. Practice is your friend here, especially if you normally work using infrastructure-as-code (like me) and rarely deploy in the Azure Portal. Find different ways to deploy things and compare which is more efficient for basic deployments.

There are certain resources in Azure networking that take 10-45 minutes. If you have a task such as this then do not wait for the deployment to complete. Jump ahead to the next task and start reading.

You might find yourself working on 2-3 tasks at once if you use this approach. This is where tracking your work becomes critical. Earlier, I stated that you should track the requirements of a task using a checklist. You should also track the completion status of each task – you don’t want to forget to complete a task where you are waiting on a resource to deploy. Each task has a “Mark As Complete” button – use it and don’t consider the lab as complete until all tasks have green check marks.

  1. Practice deployments in the Azure Portal.
  2. Choose the deployment method that will complete more of the task requests in less time.
  3. Do not wait on long-running deployments.
  4. Track task completion using the “Mark As Complete” button.

Review Your Work

In my exam, the tasks normally did not instruct you to use a resource name. So I created names using the naming standard that I am used to. When I had completed all the tasks – all had green check marks – I decided to review my work. I read through the task requirements again and verified the results in the Azure Portal. I found that one task asked me to create a resource with a specific name and I had created it using my normal naming standard. I fixed my error and continued to check everything.

When you have finished your work, go through the exercise descriptions again. Confirm that the checklists are complete. And compare the asks with what you have done in the Azure Portal to verify that everything is done as it is required.

  1. Read the task descriptions again.
  2. Compare with your checklist.
  3. Compare with the results in the Azure Portal.

My Experience

It took me a few minutes to get over the shock of doing a hands-on-lab. I dreaded every minute because I have heard the horror tales of labs being slow or crashing mid-exam. I was also glad of my blogging and lab work – in my day job I rarely use the Azure Portal to deploy networking (or any) resources.

But once settled in, I found that the labs were not difficult. The asks were not complicated – they were a mixture of vague and detailed:

  • How to decide what to deploy wasn’t written down.
  • Only one task had a requirement to name 1 resource.

The thing that really shocked me, though, was that I did not learn my result at the end of the exam. The normal Microsoft exam experience is that you confirm that you are finished with the exam, you spend a few painful seconds wondering if the program has crashed, and a result appears. Instead, I was told to log into http://www.microsoft.com/learning/dashboard later in the day to see my result! I had to drive home and then take care of my kids, so it was 90 minutes later when I finally got to sign in and navigate to see my result – a pass, with no score shared. So I guess that I did OK in the labs.

Dealing With Azure Capacity Shortages

In this post, I will discuss the recent news that Azure is having some serious capacity issues affecting customers in 2022 that could last into 2023.

The News

Personally, I didn’t think that Azure having capacity issues was news. I thought that thanks to all the jokes on social media, everyone took it for granted that the effects of the pandemic and cargo crises had crippled electronics supplies the world over. Even the recent news that the annual MVP renewal (July 1st) was being postponed to July 5 was jokingly blamed on the capacity issues.

Last Friday, The Information (subscription required) reported:

… more than two dozen Azure data centers in countries around the world are operating with limited server capacity available to customers, according to two current Microsoft managers contending with the issue and an engineer who works for a major customer.

This is not the news we want to hear when there are repeated reports that cloud adoption is increasing, Azure adoption is growing faster than AWS adoption, and Microsoft’s total (including SaaS) cloud business is (marginally) the largest in the industry.

The Effect

Most people who have used Azure for a few years have seen the dreaded deployment failures – you cannot get capacity because the region you have selected doesn’t have it. And the “helpful” support agent tells you to try a less-impacted region on another continent.

That suggestion indicates how little that person cares, or how little they understand about IT performance or compliance issues.

I work mostly with Norwegian clients. Typically, they need to stay inside the EU (using West Europe, the largest hero region), and often they have to stay inside Norway (Norway East, because Norway West is not publicly available to customers). If one of those clients tries to start something or create something in their chosen region then being told to deploy in Asia or the US isn’t going to work. If they chose Norway East because they need to comply with the local “Arkiv” (archive) law then venturing outside the border is not an option.

Let’s say you’ve built a highly integrated system in West US. You have lots of containers and functions, all integrating with each other, and hitting databases. Your workload is time-sensitive so you need to minimise every millisecond of latency. Heck, you have even started to use VMs to get access to proximity placement groups to control physical distances between tiers of your workloads. And then a support engineer tells you that your additional scale-out capacity should be halfway around the planet? Seriously?

Solutions

Other than some miracle where manufacturing backlogs and shipping mismanagement are solved, you will need to minimise risks of this sort of thing happening to you.

Auto-Scaling

Customers use auto-scaling to reduce costs. You dynamically create or power up (allocate to a running/billing state) compute instances to stay slightly ahead of demand. And when demand (profits) reduces, you dynamically destroy or power down (deallocate to a stopped/non-billing state) unneeded compute instances.

When capacity is not available, auto-scaling can be a gamble. You want to minimise wastage during non-peak periods, but not having enough compute instances could damage revenue/operations more than having unused compute instances. For example, your Citrix/Azure Virtual Desktop instances are required for people to work, but when people try to log in on Monday morning, Azure cannot supply enough machines to scale-out your worker pools. Whoops! You saved some money over the weekend, but now only a small percentage of your employees can work.

In these times of shortage, you need to hold on to what you have got. It is worth considering the disablement of auto-scaling and going back to old practices of trying to figure out what you need, and keeping that capacity running.
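
If you do decide to hold on to capacity, it can be as simple as pinning a scale set to a fixed instance count. A minimal Az PowerShell sketch, where the resource group and scale set names are placeholders:

# Pin a VM Scale Set to a fixed instance count instead of relying on auto-scale
# to find capacity during the Monday morning logon storm.
# "myRg" and "myCitrixPool" are placeholder names.
$vmss = Get-AzVmss -ResourceGroupName "myRg" -VMScaleSetName "myCitrixPool"
$vmss.Sku.Capacity = 20   # the capacity you have decided to keep running
Update-AzVmss -ResourceGroupName "myRg" -VMScaleSetName "myCitrixPool" -VirtualMachineScaleSet $vmss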

Virtual Machine Migration

After the summer, I will have some involvement in three migration projects. Two of those could result in large numbers of virtual machines being deployed to Azure. All three projects have hard deadlines for completion. The last thing that those clients are going to want to hear is: “Sorry, your business plans are not feasible because there aren’t enough servers in the Cloud that IT has selected”.

Microsoft offers a program where you can reserve virtual machine capacity for selected series of virtual machines, called On-demand Capacity Reservation. In short, you pay for the price of the machine you want, before you deploy it, to guarantee the capacity that you want when you need it later.

Or as some cynics might say: you pay a vig to Microsoft for capacity that you should have expected in a hyper-scale cloud in the first place.

If you are in a scenario where you have an upcoming spike in capacity, such as a migration, and you need to be sure the capacity will be there, then this form of reservation must be strongly considered. However, telling a client that they need to pay for compute weeks before they need it will be a hard sell.
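
For illustration, here is a hedged Az PowerShell (Az.Compute) sketch of creating such a reservation – the names, region, SKU, and quantity are all assumptions, not values from a real project:

# Create a group to hold the reservations, then reserve capacity ahead of the migration wave.
# Billing for the reserved instances starts as soon as the reservation is created.
New-AzResourceGroup -Name "migration-capacity-rg" -Location "norwayeast"

New-AzCapacityReservationGroup -ResourceGroupName "migration-capacity-rg" `
  -Name "migration-crg" -Location "norwayeast"

# Reserve 50 instances of a D4s_v3 for the migration project.
New-AzCapacityReservation -ResourceGroupName "migration-capacity-rg" `
  -ReservationGroupName "migration-crg" -Name "d4sv3-reservation" `
  -Location "norwayeast" -Sku "Standard_D4s_v3" -CapacityToReserve 50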

SKU Choice

When we deploy new tech, we always want the latest and greatest. Why would you deploy a years-old D_v3 when there’s a D_v5 with a faster processor? If you really need the performance of the newer SKU, then I get the choice. But how many workloads genuinely need that kind of sizing? I’ve rarely encountered one.

Under the covers of the platform, that v5 is limited to new hardware with matching physical processors. That is the same capacity that Microsoft (and others) cannot get their hands on. However, the v3 is able to run on the v3 hardware, and thanks to Hyper-V processor management features, it is also able to run on newer hardware. So if you want more potential host capacity to run your compute instances, then you will choose older SKUs that run on older processors.
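
If you want to check what a region can actually offer before you commit to a SKU, a quick (and rough) Az PowerShell query like the following can help – "westeurope" is just an example region:

# List the VM sizes offered in a region and flag any that carry restrictions
# (restrictions often mean the SKU is not currently available to your subscription there).
Get-AzComputeResourceSku |
  Where-Object {
    $_.ResourceType -eq "virtualMachines" -and
    $_.Locations -contains "westeurope"
  } |
  Select-Object Name, @{ n = "Restricted"; e = { $_.Restrictions.Count -gt 0 } } |
  Sort-Object Name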

This advice will apply to any resource type – not just virtual machines. In the end, everything (including so-called serverless computing) is a virtual machine under the platform, except for items such as VMware hosts and SAP Hana machines which are physical servers.

Summary

It sounds like we are facing some issues. Azure might be making the headlines now, but Microsoft uses the same physical components as everyone else – there is only AMD and Intel, pretty much everything electronic is assembled in China, and everything needs months to move by cargo ship. We are going to have to box clever to ensure that we have enough capacity for resources such as virtual machines, app services, containers, databases, and so on – everything that relies on compute instances under the covers.

Deploy Shared Azure Bastion To Virtual WAN Spoke

In this post, I will explain how you can deploy Azure Bastion into a spoke in a hub & spoke architecture with a Virtual WAN hub – and use that Bastion to securely log into virtual machines in other spokes using RDP or SSH. I will also explain why this has limitations in a hub & spoke architecture with a VNet-based hub.

The Need

Even organisations that opt for a PaaS-only Azure implementation need virtual machines. Once you add network security, you need virtual machines for those DevOps build agents or GitHub runners. And realistically, migrated legacy workloads need VMs. And things like AKS and HPC are based on VMs that you build and troubleshoot. So you need VMs. And therefore, you need a secure way to log into those VMs.

Those of us who want an air gap between the PC and the servers have tried things like RD Gateway and Guacamole. Neither is perfect. Ideally, we want Azure AD integration (for Premium security features) and a platform resource (to minimise maintenance).

And along came Azure Bastion. At first reading, it seemed ideal. And then we started to discover warts. Many of those warts were cleaned up. Bastion got support for a desktop client through a CLI login. A hub deployment was possible – if you use a VNet-based hub – but it gave Bastion users (including external support staff) a map of your entire Azure network, because they require read access to the hub VNet – and all its peering connections. For many of us, that left us deploying Bastion in every spoke – both costly and a waste of IP space.

We needed an Azure Bastion that we could deploy once in a spoke. We could log into it, route through the firewall in the hub, and log into VMs running in other spokes.

IP-Based Connection

Microsoft announced a new feature in the Standard tier of Azure Bastion called IP-Based Connection. With this feature, you can log into a virtual machine across an IP network from your Bastion. That means you can log into:

  • Virtual machines in the same subnet or virtual network
  • Virtual machines in other Azure virtual networks
  • On-premises computers across site-to-site network connections such as VPN or ExpressRoute

The assumptions are that:

  • The NSG protecting the Azure virtual machine allows SSH/RDP from AzureBastionSubnet to the virtual machine (see the NSG rule sketch after this list).
  • The hub firewall (if you have one) allows SSH/RDP from AzureBastionSubnet to the virtual machine.
  • The on-premises firewall, if the virtual machine is on-premises, allows RDP/SSH from AzureBastionSubnet to the computer.
  • The OS of the virtual machine or computer allows RDP/SSH from AzureBastionSubnet.
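
To illustrate the first of those assumptions, here is a minimal Az PowerShell sketch of an NSG rule that allows RDP/SSH from the AzureBastionSubnet prefix – the names and the 10.1.10.0/26 prefix are placeholders:

# Allow RDP/SSH into a spoke from the AzureBastionSubnet address prefix.
# "spoke1-rg", "spoke1-vm-nsg", and "10.1.10.0/26" are placeholders.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "spoke1-rg" -Name "spoke1-vm-nsg"

$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-Bastion-RDP-SSH" `
  -Description "RDP/SSH from the shared Bastion subnet" `
  -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 `
  -SourceAddressPrefix "10.1.10.0/26" -SourcePortRange * `
  -DestinationAddressPrefix "VirtualNetwork" -DestinationPortRange 3389, 22 |
  Set-AzNetworkSecurityGroup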

VNet-Based Hub

In my first experiment, I tried deploying a shared Azure Bastion in a spoke with a VNet-based hub. I could deploy it no problem but Azure Bastion could not route to other spokes sharing the hub. Why?

There were no routes from the spoke subnets to other spoke subnets. But that’s OK – I know how to fix that.

Let’s say my entire Azure network is in the 10.1.0.0/16 address space. Well, if I want to route to any spoke in that address space, I can:

  1. Create a route table and *cough* associate it with the AzureBastionSubnet
  2. Add a user-defined route (UDR) for 10.1.0.0/16 with a next hop of the hub firewall (or a routing appliance).

Azure Bastion needs a 0.0.0.0/0 route to the Internet, so you can’t do the usual spoke thing of overriding the default route with a UDR; I could leave the default Internet route in place.
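
For reference, this is roughly what that attempt looks like in Az PowerShell (names and IP addresses are placeholders) – keep reading before you copy it:

# Create a route table with a UDR sending 10.1.0.0/16 to the hub firewall...
$rt = New-AzRouteTable -Name "bastion-rt" -ResourceGroupName "bastion-rg" -Location "westeurope"
Add-AzRouteConfig -RouteTable $rt -Name "to-azure" -AddressPrefix "10.1.0.0/16" `
  -NextHopType VirtualAppliance -NextHopIpAddress "10.1.0.4" | Set-AzRouteTable

# ...and then try to associate it with AzureBastionSubnet. This is the step that gets blocked.
$vnet = Get-AzVirtualNetwork -Name "bastion-vnet" -ResourceGroupName "bastion-rg"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "AzureBastionSubnet" `
  -AddressPrefix "10.1.10.0/26" -RouteTable $rt | Set-AzVirtualNetwork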

Did it work? No – that’s because AzureBastionSubnet has a hard-coded rule to prevent the association of a route table. I guess that Microsoft had too many support calls with people doing bad things with a route table associated with AzureBastionSubnet.

It turns out that the only way to get a route from one spoke to another with a VNet-based hub is:

  1. Use Azure Virtual Network Manager (AVNM – currently in preview) to peer your spokes with transitive peering.
  2. Do not expect spoke-to-spoke traffic to flow through a hub firewall – AVNM does not support configuring a next-hop.

That means the only shared Azure Bastion option for VNet-based hubs is to deploy Bastion in the hub and leave all your peering connections visible. Ick!

vWAN-Based Hub

I really like using Azure Virtual WAN (vWAN) for my hub and spoke – and almost none of my customers use it for SD-WAN (the primary use case). The reasons I like it are:

  • It pushes the hub into the platform, reducing administrative efforts
  • Routing becomes something that you push from the hub using eBGP

“Ah – what’s that you say about eBGP, Aidan?”

You can create route tables in Virtual WAN Hubs – let’s call them hub route tables. Then in the properties of a spoke virtual network connection you can configure:

  • Propagation: Have the route table learn routes from the spoke virtual network using eBGP.
  • Association: Share routes from the route table to the spoke virtual network.

And you can put static routes into a hub route table.

My Scenario

A shared Azure Bastion in the spoke of an Azure Virtual WAN hub

When I deploy a Virtual WAN Hub, I choose the Secured Virtual WAN Hub option. This places an Azure Firewall in the hub. I then add static routes for 0.0.0.0/0 and the private IP address spaces, routing via the Azure Firewall, to the built-in Default hub route table.

All spokes:

  • Propagate to the built-in None hub route table, so the routes of the spoke are forgotten.
  • Associate with the built-in Default hub route table, so they learn the next hop to 0.0.0.0/0 and the private IP address spaces is via the firewall.

I can deploy Azure Bastion into a spoke but this spoke will require a different route configuration. That is because I use a firewall to isolate the spokes. If I had open spoke-to-spoke traffic then Bastion would probably just work. My scenario is actually simple to fix:

  1. Create a new hub route table, maybe called Bastion.
  2. Add a static route to the rest of the hub and spoke (10.1.0.0/16 in my example) with a next-hop of the hub firewall.
  3. Configure the Bastion spoke connection to associate with the Bastion hub route table and propagate to None (see the sketch below).
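
A hedged Az PowerShell (Az.Network) sketch of those three steps follows – the cmdlets exist, but treat the parameter details and all of the names/IDs as assumptions to verify against your own environment:

# Grab the hub firewall (placeholder names).
$fw = Get-AzFirewall -ResourceGroupName "vwan-rg" -Name "hub1-azfw"

# 1 & 2. A "Bastion" hub route table with a static route to the rest of 10.1.0.0/16 via the firewall.
$route = New-AzVHubRoute -Name "to-spokes" -Destination @("10.1.0.0/16") `
  -DestinationType "CIDR" -NextHop $fw.Id -NextHopType "ResourceId"
$bastionRt = New-AzVHubRouteTable -ResourceGroupName "vwan-rg" -VirtualHubName "hub1" `
  -Name "Bastion" -Route @($route) -Label @("bastion")

# 3. Associate the Bastion spoke connection with the new route table and propagate to None.
# The built-in None route table is named "noneRouteTable" at the API level (assumption to verify).
$noneRt = Get-AzVHubRouteTable -ResourceGroupName "vwan-rg" -VirtualHubName "hub1" -Name "noneRouteTable"
$routing = New-AzRoutingConfiguration -AssociatedRouteTable $bastionRt.Id `
  -Label @("none") -Id @($noneRt.Id)
Update-AzVirtualHubVnetConnection -ResourceGroupName "vwan-rg" -VirtualHubName "hub1" `
  -Name "bastion-spoke-connection" -RoutingConfiguration $routing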

Now:

  • The Bastion will use the firewall as the next hop to all other spokes.
  • The Bastion will go directly to the Internet for control plane traffic, using the default route in the subnet.
  • Other spokes will have the same route back to the Bastion, using the firewall as the next hop.

Finally, you need to ensure that Firewall and NSG rules allow RDP/SSH from AzureBastionSubnet in the Bastion spoke to the VMs in other spokes. And it works! All an operator/developer/support staff member needs now is:

  • An Azure AD account with read access to the Azure Bastion resource – no need for read access to the hub or even the spoke with the VM!
  • The IP address for the machine they want to sign into – my design is limited to Azure VMs but on-premises static routes could be added to the Bastion hub route table.
  • A username and password with login rights to the virtual machine or computer.

Digital Transformation Is Not Just A Tech Thing

In this post, I want to discuss why many businesses fail to get what they expect from The Cloud – why their “digital transformation” (or “cloud transformation”) fails.

Cloud Concerns

We’ve all seen the surveys. CIOs are scared about lots of things when it comes to cloud migration/adoption. Costs might overrun. Security might be insufficient. Skills are in short supply. I’m afraid to say those are all concerns – manageable ones. The big question is rarely discussed until it is too late: what are we really getting into?

Failure Starts In The Middle

Many cloud adoption projects start at the wrong place in the organisation. Instead of an instruction coming with direction and authority from the C-suite (CEO, CTO, CSO, CIO, etc), IT management, typically in Operations, make the decision to go to The Cloud for mundane reasons such as “try to reduce costs” or “avoid doing another hardware upgrade”.

Operations go ahead and build (or request a consultant to deliver) what they know: a centrally managed, locked-down environment. You know what I mean; a rigid environment that complies with old ITIL-style processes from the early 2000s. If a developer needs something, log a ticket, and we’ll get around to it.

Meanwhile, developers hear that The Cloud is coming and they imagine a world where there is self-service and bottomless pits of compute and storage. Oh! And less waiting on tickets logged with Operations. And the C-suite gets a visit from AWS or Microsoft and is told about how agile and disruptive their businesses will become. Super!

The Age-Old Battle Continues

The Cloud landing zone is built and developers are given “access” (if we can call it that) to their new virtual workspace, only to find that they can create limited quantities and sizes of (breath) virtual machines. Modern platforms are nowhere to be found. The ability to create is locked away behind custom permissions roles. If they want something, to do their job for the business, they need to log a ticket and wait – just how is this any different from the old VMware platform they probably ran on before The Cloud?

And the business is digitally transformed. Well, no, not really. Some SAP stuff might be running on Azure hardware and VMs now. And some databases might have been moved to The Cloud. But that’s about it. None of the agility or disruption happens.

What Went Wrong?

The issue began right at the start. I’ll give you an example. A Microsoft account manager (or whatever they are called this financial year), a Microsoft partner, and an IT operations manager have a meeting. This sounds like a bad joke so far, and I promise that no one will laugh. The Microsoft account manager offers X dollars to perform an assessment. Someone will come in, scan the VMware machines, write a report that says “here is the TCO comparison” and “you’d be silly not to get started now”. So the Operations manager gives the go-ahead and the direction of what to do.

That contrasts greatly with the Microsoft Cloud Adoption Framework (CAF). Phase 1 (Strategy) of the CAF is all about getting business motivations from the C-suite that can be shaped into the direction in phase 2 (Plan) where the tech stuff begins – including the (digital estate) assessment. And the Plan phase is all about people and process. Ahh – people (skills) and process (change). This is where digital transformation begins. We haven’t even gotten to the part where things are deployed (phase 3, Ready) in The Cloud yet because we don’t know what “shape” the organisation will be until business motivations tell the IT departments what is expected from The Cloud.

The Microsoft Azure Cloud Adoption Framework

So What Is Digital Transformation?

In short, digital transformation should change:

  1. Process: Legacy methods of creating & running IT systems may not suit the business, especially if self-service, agility, and disruption are required. Look at the organisations that have shaken up different verticals in different industries and service sectors. Their IT departments are very different from the vertical pillars that you may recognise in your organisation. The order to change came from the top, where the authority resides.
  2. People: How people are organised may need to change. The skills they possess must change. Roles must be identified and skills must be developed before the project, not during or even after some handover phase from a consulting company. A budget and time for training, possibly even a budget to recruit additional bodies requires authority that can only come from the top.
  3. Technology: It’s easier to change technology than to change process and people. The problem is what to change it to. The shape of the technology must reflect the people and process patterns, otherwise it is not fit for purpose.

Az Module Scripts in GitHub Actions

In this post, I will show how to run Azure Az module scripts as tasks in a GitHub action workflow. Working examples can be found in my GitHub AzireFirewall/DevSecOps repo which is the content for my DevSecOps articles.

Credit to my old colleague Alan Kinane for getting me started with his post, Azure CI/CD with ARM templates and GitHub Actions.

Why Use A Script?

There are simpler ways to run a deployment from a workflow:

  • A task that runs a deployment
  • A simple PowerShell/Azure CLI task that runs an inline script

But you might want something that does more. For example, you might want to do some error checking. Or maybe you are going to use a custom container and execute complex tasks from it. In my case, I wanted to do lots of error checking and give myself the ability to wrap scripts around my deployments.

My Example

For this post, I will use the hub deployment from my GitHub AzireFirewall/DevSecOps repo – this deploys a VNet-based (legacy) hub in an Azure hub & spoke architecture. There are a number of things you are going to need. I’ve just reorganised and updated the main branch to support both Azure DevOps pipelines (/.pipelines) and GitHub actions (/.github).

Afterwards, I will explain how the action workflow calls the PowerShell script.

GitHub Repo

Set up a repository in GitHub. Copy the required files into the repo. In my example, there are four folders:

  • platform: This contains the files to deploy the hub in an Azure subscription. In my example, you will find bicep files with JSON parameter files.
  • scripts: This folder contains scripts used in the deployment. In my example, deploy.ps1 is a generic script that will deploy an ARM/Bicep template to a selected subscription/resource group.
  • .github/workflows: This is where you will find YAML files that create workflows in GitHub actions. Any valid file will automatically create a workflow when there is a successful merge. My example contains hub.yaml to execute the script, deploy.ps1.
  • .pipelines: This contains the files to deploy the code. In my example, you will find a YAML file for a DevOps pipeline called hub.yaml that will execute the script, deploy.ps1.

You can upload the files into the repo or sync using Git/VS Code.

Azure AD App Registration (Service Principal or SPN)

You will require an App Registration; this will be used by the GitHub workflow to gain authorised access to the Azure subscription.

Create an App Registration in Azure AD. Create a secret and store that secret (Azure Key Vault is a good location) because you will not be able to see the secret after creation. Grant the App Registration Owner rights to the Azure subscription (as in my example) or to the resource groups if you prefer that sort of deployment.

Repo Secret

One of the features that I like a lot in GitHub is that you can store secrets at different levels (organisation, repo, environment). In my example, I am storing the secrets for the Service Principal in the repo, making them available only to the workflow(s) that are in this repo.

Open the repo, browse to Settings, and then go to Secrets. Create a new Repository Secret called AZURE_CREDENTIALS with the following structure:

{
  "tenantId": "<GUID>",
  "subscriptionId": "<GUID>",
  "clientId": "<GUID>",
  "clientSecret": "<GUID>"
}

Create the Workflow

GitHub makes this task easy. You can go into your Repo > Actions and create a workflow from a template. Or if you have a YAML file that defines your workflow, you can place it into your repo in /.github/workflows. When that change is merged, GitHub will automatically create a workflow for you.

Tip: To delete a workflow, rename or delete the associated file and merge the change.

Logging In

The workflow must be able to sign into Azure. There are plenty of examples out there, but you need to accomplish two things:

  1. Log into Azure
  2. Enable the Az modules

The following code in the workflow accomplishes both tasks:

      - name: Login via Az module
        uses: azure/login@v1
        with:
          creds: ${{secrets.AZURE_CREDENTIALS}}
          enable-AzPSSession: true 

The line uses: azure/login@v1 enables an Azure sign-in. Note that the with statement selects the credentials that we previously added to the repo.

The line enable-AzPSSession: true enables the Az PowerShell modules. With that, we are signed in and have all we need to execute Azure PowerShell cmdlets.

Executing A PowerShell Script

A workflow has a section called jobs; in here, you can create multiple jobs that share something in common, such as a login. Each job is made up of steps. Steps perform actions such as checking out code from the repo, logging into Azure, and executing a deployment task (running a PowerShell script, for example).

I can create a PowerShell script that does lots of cool things and store it in my repo. That script can be edited and managed by change control (pull requests) just like my code that I’m deploying. There is an example of this below:

The PowerShell script step running in a GitHub action
      - name: Deploy Hub
        uses: azure/powershell@v1
        with:
          inlineScript: |
            .\scripts\deploy.ps1 -subscriptionId "${{env.hubSub}}" -resourceGroupName "${{env.hubNetworkResourceGroupName}}" -location "${{env.location}}" -deployment "hub" -templateFile './platform/hub.bicep' -templateParameterFile './platform/hub-parameters.json'
          azPSVersion: "latest"

The step uses the azure/powershell@v1 action, which allows us to run an inline script using the sign-in that was previously created in the job (the login step). The configuration of this step is really simple – you specify the PowerShell cmdlets to run. In my example, it executes the PowerShell script and passes in the parameter values, some of which are stored as environment variables at the workflow level.

My script is pretty generic. I can have:

  • Multiple Bicep files/JSON parameter files
  • Multiple target scopes

I can create a PowerShell step for each deployment and use the parameters to specialise the execution of the script.

Running multiple PowerShell script tasks in a GitHub workflow

The PowerShell Script

The full code for the script can be found here. I’m going to focus on a few little things:

The Parameters

You can see in the above example that I passed in several parameters:

  • subscriptionId: The ID of the subscription to deploy the code to. This does not have to be the same as the default subscription specified in the Service Connection. The Service Principal used by the pipeline must have the required permissions in this subscription.
  • resourceGroupName: The name of the resource group that the deployment will go into. My script will create the resource group if required.
  • location: The Azure region of the resource group.
  • deploymentName: The name of the ARM deployment that will be created in the resource group for the deployment (remember that Bicep deployments become ARM deployments).
  • templateFile: The path to the template file in the pipeline container.
  • templateParameterFile: The path to the parameter file for the template in the pipeline container.

Each of those parameters is identically named in param () at the start of the PowerShell script and those values specialise the execution of the generic script.
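
For illustration, the param () block of such a generic script might look like this – a sketch that mirrors the parameter list above, not the exact contents of my deploy.ps1:

# Parameters that specialise each run of the generic deployment script.
param (
    [Parameter(Mandatory = $true)] [string] $subscriptionId,
    [Parameter(Mandatory = $true)] [string] $resourceGroupName,
    [Parameter(Mandatory = $true)] [string] $location,
    [Parameter(Mandatory = $true)] [string] $deploymentName,
    [Parameter(Mandatory = $true)] [string] $templateFile,
    [Parameter(Mandatory = $true)] [string] $templateParameterFile
)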

Outputs

You can use Write-Host to output a value from the script to appear in the console of the running job. If you add -ForegroundColor then you can make certain messages, such as errors or warnings, stand out.
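
For example, making a warning stand out in yellow in the job console (the message text is just illustrative):

Write-Host "WARNING: The $resourceGroupName resource group already existed" -ForegroundColor Yellow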

Beware of Manual Inputs

Some PowerShell commands might want a manual input. This is not supported in a pipeline and will terminate the pipeline with an error. Test for this happening and use code logic wrapped around your cmdlets to prevent it from happening – this is why a file-based script is better than a simple/short inline script, even to handle a situation like creating a resource group.
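
A simple illustration of the problem and the fix: Remove-AzResourceGroup prompts for confirmation by default, which would hang (and then fail) a hands-off pipeline, so the prompt must be suppressed:

# -Force suppresses the confirmation prompt; -ErrorAction Stop ensures a real failure
# surfaces to the pipeline instead of being swallowed.
Remove-AzResourceGroup -Name $resourceGroupName -Force -ErrorAction Stop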

Try/Catch

Error handling is a big deal in a hands-off script. You will find that 90% of my script is checking for things and dealing with unwanted scenarios that can happen. A simple example is a resource group.

An ARM deployment (remember this includes Bicep) must go into a resource group. You can just go ahead and write the one-liner to create a resource group. But what happens when you update the code, the script re-runs, and sees the resource group is already there? In that scenario, a manual input would appear (and fail the pipeline) to confirm that you want to continue. So I have a more elaborate test-then-create process:

if (!(Get-AzResourceGroup $resourceGroupName -ErrorAction SilentlyContinue))
{ 
  try 
  {
    # The resource group does not exist so create it
    Write-Host "Creating the $resourceGroupName resource group"
    New-AzResourceGroup -Name $resourceGroupName -Location $location -ErrorAction Stop
  }
  catch 
  {
    # There was an error creating the resource group
    Write-Host "There was an error creating the $resourceGroupName resource group" -ForegroundColor Red
    Break
  }
}
else
{
  # The resource group already exists so there is nothing to do
  Write-Host "The $resourceGroupName resource group already exists"
}

Conclusion

Once you know how to do it, executing a script in your pipeline is easy. Then your PowerShell knowledge can take over and your deployments can become more flexible and more powerful. My example executes ARM/Bicep deployments. Yours could do a PowerShell deployment, add scripted configurations to a template deployment, or even run another language like Terraform. The real thing to understand is that now you have a larger scripting toolset available to your automated deployments.

An IT Person On A Weight Loss Journey

In this post, I’m going to share a little about my weight loss journey – I’m an IT pro and the sedentary lifestyle isn’t exactly great for staying trim!

Let’s go back a few years. Back in 2016-2017 I was weighing in at around 16 stone 10 pounds (106 KG or 234 lbs); that’s a great weight for a 6’3″ NFL linebacker, not a 5’7″ IT consultant.

In the above picture with Paula Januszkiewicz, I might have been talking about new security features in Hyper-V, but I was probably wondering what pizza I might order at the hotel when I wrapped up.

Living the Good Life

When I grew up, there wasn’t a lot of money for unhealthy food. Food was homemade, sometimes even homegrown. Treats like chocolate might happen once every few weeks and during the holidays. There were no regular trips to “the chipper” for a burger, pizza didn’t exist … it was eat healthily or don’t eat.

Back when I was in school/college, I cycled to and from school and I was stick-thin at 11 stone 7 pounds – I know that’s heavy for someone my height but I was actually stick thin.

Then along came graduation, my first job in IT, and money. Every mid-morning, there was a trip to the company canteen (for a fried breakfast), followed by lunch, followed by a convenient dinner. I swear I could eat takeaway 7 days a week. I worked in international consulting, living in hotels for 6 months at a go, so I ate a lot either in the hotel or in a bar/restaurant. I remember sitting down and thinking “what’s up with my waist?” when it first rolled over my belt.  And so it went for 20+ years.

I got a real shock 4-5 years ago when I had to do a health check for some government stuff, and I was classified as … obese. The shock! That only happens to Americans that dine at KFC! It was time for a change. By now, I had a 38″ waist.

Exercise

I have done some time in the gym over the years. During one of my 6-month hotel stints (in a small wealthy town called Knutsford near Manchester, UK) I spent an hour in the gym on most days. And then I ate a 2-3 course dinner with a bottle of wine. When I exercise, I get HUNGRY. Back in the late 2000’s I spent 5-7 days a week in the gym for over half a year. I’d do up to 90 minutes of training, followed by a 14″ pizza, chicken, and fries. I’m not kidding about the hungry thing!

I live near a canal and a lot of work was being done to open up a “greenway” – that’s a path dedicated to walkers, runners, and cyclists. Not long after the obese determination, I purchased a bike and started cycling, getting out several times a week to do a good stretch. That first year I did quite a bit on the bike. I felt fitter, but not smaller. I came home and ate the same way as always. I drank lots of beer and wine.

2 years ago I was in Florida with my family for a 3-week vacation. I didn’t have enough shorts with me so I went to a nearby outlet and tried on my usual 38″ waistline. Huh! They were too … big! I ended up buying 36″ shorts and jeans that vacation and they fit just nice.

Last year, I decided that I needed to do something more. My diet needed work. I wasn’t getting time to go out on the bike anymore – a wife and kids that need me around and the pending arrival of a new baby (twins as it turned out) killed off the idea of spending hours at a time away on the bike. I joined a group that runs a simplified calorie counting scheme. The goal is that you make adjustments to reduce your calorie intake but still get to enjoy the nice things – within limits. I seriously started that program in August of last year. At that time I weighed 15 stone 12 pounds (100 KG or 222 lbs), not bad for an NFL running back.

Food Optimisation

The program I’m using doesn’t use the term “diet”. Diet means starving yourself. In this program, I can still eat, but the focus is on including more salad/veg (1/3 of my plate), swapping out wasteful calories, and replacing oils and butter. It has resulted in myself and my wife experimenting so much more with our cooking – even I cook, which is probably quite a shock to the takeaway industry. The results were fast. Soon all my 38″ waistline clothes went into recycling. In June of this year, I was a 34″ waist and weighing 12 stone 12 pounds (81 kg or 180 lbs), which is pretty OK for a small NFL wide receiver. This is where things went a little wrong when I was last in the gym, 10 years ago. My weight never would go lower – the 14″ pizzas (etc) didn’t help.

I miss beer 🙂 I used to love getting craft beer and trying it out. I switched over to lower-calorie whiskey & dark rum. But then my wife spotted that 0 alcohol beer has very few calories. It might be missing the buzz, but the taste is still there – perfect while cooking on the BBQ.

Running != Weight Loss

While on maternity leave this summer, my wife started walking. I tried to get out a bit on the bike but was really restricted to once or twice at the weekend. I started to join my wife on her walks on Saturdays and Sundays. We pushed the pace and really started clocking up the KMs, varying between 7-14km per walk. I could feel the difference in my fitness level as we kept walking faster.

I struggled a bit again. The summer months brought lots of good weather and I was making the most amazing homemade burgers on the BBQ, along with steak, and chicken … you get the idea. My weight was bouncing down and up.

And then my wife spotted that the gym at a local hotel was running a family deal. We could sign up the entire family, the kids could use the pool, and my wife and I could use the gym. I went the next day, used the bike, and I ran. I ran 5 KMs with a few stretches of walking to get a drink. I was shocked. Now when I say that I ran, I was crawling along at 5 kmph – not exactly Raheem Mostert burning up the injury-causing turf of the New York Jets last year.

I ran for a week. I was a little sore. I weighed in the following week and I was up 2 lbs! What the f***? There were a few things that I realised/learned with some googling:

  • When you start/change your exercise routines, you “damage” muscle (it’s part of the building muscle process) and that causes your body to retain fluid which temporarily increases your weight.
  • Running does not cause weight loss alone. You need to still have a calorie deficit. Following up a run by pigging out does not help.
  • Muscle (which I can feel) adds weight but can cause calorie “afterburn” so you burn off more calories even when resting.

Where I Am Now

I set myself a goal of hitting 12 stone (168 lbs or 76 kg). I’m also aiming to get my waistline below 30″ – an important milestone in long-term health. I now weigh around 12 stone 7 lbs (175 lbs or 79 Kgs) and my waist is 32″. My weight is trending down over a 2-3 week period and I’m running 5 KMs, now at 9.5 kmph with sprints up to 13 kmph. If I have time, I’ll combine weights and bike with running. If I hit the 12 stone mark and I’m still carrying a few too many pounds, then I’ll aim for 11 stone 7 pounds and see how things go.

Me about a month ago.

I’m Not Missing Out – Much

I used to live on pizza. I miss it 🙂 We had a week away back in June and I gave myself a week off from the food optimisation. I ate fried food and pizza – a whole pizza – to myself. I drank nice beer every night. And my weight did go up by 4 lbs, but I lost it all within 10 days.

As I said, we’re cooking a lot. There are lots of websites and cookbooks that specialise in making nice food but with better ingredients. Last week I made double cheeseburgers using low-fat cheese and beef and they were DE-F’N-LICIOUS. We’ve been eating curries, air-fried everything, and slow cooking stuff like mad. Our fridge is full of sauces and our press is stuffed with spices. We’re at the point where restaurant food has to be pretty amazing to impress us now.

We like this series of books by an Irish couple that became well known on Instagram – they actually live near us and we bumped into them eating dinner around the corner from our home.

We’re also fans of the Pinch of Nom series.

Feeling Like You Could Lose Weight?

I’m not an expert. The best advice I can give you is:

  • Start now. Make the decision. Go Bo Jackson and just do it.
  • Find the right food optimisation technique for you. I like the one I’m using because it’s flexible and isn’t about “punishing” you. My wife’s uncle did something different and shed weight in no time.
  • Exercise. This will help burn the calories. When combined with the calorie deficit of your food optimisation, you’ll see a difference.
  • Be patient. Weight loss is different for everyone. For some, there’s an instant buzz when the pounds go off. For others, they are shocked when lots of exercise leads to weight gain! Be consistent, do the right things, and be patient. It’s not about what happens today, it’s about what happens over a 1, 2, 6, or 12-month period.

But most of all – enjoy it. I gained weight out of laziness and because I enjoyed certain foods. Now, I’m eating healthier than I ever did but I am still enjoying food – the chicken fillet burgers I had a couple of weeks ago might appear on the menu tomorrow (better than any I ever had out) and I’m looking forward to the beef curry that we’re making tonight!

Understanding the Azure Image Builder Resources

In this post, I will explain the roles of and links/connections between the various resources used by Azure Image Builder.

Background

I enjoy the month of July. My customers, all in the Nordics, are off for the entire month and I am working. This year has been a crazy busy one so far, so there has been almost no time in the lab – noticeable, I’m sure, by my lack of writing. But this month, if all goes to plan, I will have plenty of time in the lab. As I type, a pipeline is deploying a very large lab for me. While that runs, I’ve been doing some hands-on lab work.

Recently I helped develop and use an image building process, based on Packer, to regularly create images for a Citrix farm hosted in Microsoft Azure. It’s a pretty sweet solution that is driven from Azure DevOps and results in a very automated deployment that requires little work to update app versions or add/remove apps. At the time, I quickly evaluated Azure Image Builder (also based on Packer but still in Preview back then) but I thought it was too complicated and would still require the same pieces as our Packer solution. But I did decide to come back to Azure Image Builder when there was time (today) and have another look.

The first mission – figure out the resource complexity (compared to Packer by itself).

The Resources

I believe that one of Microsoft’s failings when documenting these services is their inability to explain the functions of the resources and how they work together. Working primarily in ARM templates, I get to see that stuff (a little). I’ve always felt that understanding the underlying system helps with understanding the solution – it was that way with Hyper-V and that continues with Azure.

Managed Identity – Microsoft.ManagedIdentity/userAssignedIdentities

A managed identity will be used by an Image Template to authorise Packer to use the imaging process that you are building. A custom role is associated with this Managed Identity, granting Packer rights to the resource group that the Shared Image Gallery, Image Definition, and Image Template are stored in.
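
A hedged Az PowerShell sketch of that identity piece – the identity name, resource group, region, and custom role name are all assumptions for illustration:

# Create the user-assigned managed identity that the Image Template will use.
$identity = New-AzUserAssignedIdentity -ResourceGroupName "images-rg" `
  -Name "aib-identity" -Location "westeurope"

# Grant a custom imaging role (created separately) over the imaging resource group.
New-AzRoleAssignment -ObjectId $identity.PrincipalId `
  -RoleDefinitionName "Azure Image Builder Image Creation" `
  -Scope "/subscriptions/<subscription id>/resourceGroups/images-rg"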

Shared Image Gallery – Microsoft.Compute/galleries

The Shared Image Gallery is the management resource for images. The only notable attribute in the deployment is the name of the resource, which sadly, is similar to things like Storage Accounts in lacking standardisation with the rest of Microsoft Azure resource naming.

Image Definition – Microsoft.Compute/galleries/images

The Image Definition documents your image as you would like to present it to your “customers”.

The Image Definition is associated with the Shared Image Gallery by naming. If your Shared Image Gallery was named “myGallery” then an image definition called “myImage” would actually be named as “myGallery/myImage”.

The properties document things including:

  • VM generation
  • OS type
  • Generalised or not
  • How you will brand the images built from the Image Definition (see the sketch after this list)
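
Putting the naming and the properties above together, a minimal Az PowerShell (Az.Compute) sketch of the gallery and an Image Definition might look like this – the publisher/offer/SKU branding values are assumptions:

# The gallery (no hyphens allowed in the name) and a generalised Windows Gen2 image definition.
New-AzGallery -ResourceGroupName "images-rg" -Name "myGallery" -Location "westeurope"

New-AzGalleryImageDefinition -ResourceGroupName "images-rg" -GalleryName "myGallery" `
  -Name "myImage" -Location "westeurope" `
  -OsType Windows -OsState Generalized -HyperVGeneration "V2" `
  -Publisher "MyCompany" -Offer "CitrixWorker" -Sku "win2019"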

Image Template – Microsoft.VirtualMachineImages/imageTemplates

This is where you will end up spending most of your time while operating the imaging process over time.

The Image Template describes to Packer (hidden by Azure) how it will build your image:

  • Identity points to the resource ID of the Managed Identity, permitting Packer to sign in as that identity/receiving its rights when using this Image Template to build an Image Version.
  • Properties:
    • Source: The base image from the Azure Marketplace to start the build with.
    • Customize: The tasks that can be run, including PowerShell scripts that can be downloaded, to customise the image, including installing software, configuring the OS, patching and rebooting.
    • Distribute: Here you associate the Image Template with an Image Definition, referencing the resource ID of the desired Image Definition. Every time you run this Image Template, a new Image Version of the Image Definition will be created.

Image Version – Microsoft.Compute/galleries/images/versions

An Image Version, a resource with a messy resource name that will break your naming standards, is created when you build from an Image Template. The name of the Image Version is based on the name of the Image Definition plus an incremental number. If my Image Definition is named “myGallery/myImage” then the Image Version will be named “myGallery/myImage/<unique number>”.

The properties of this resource include a publishing profile, documenting to what regions an image is replicated and how it is stored.

What Is Not Covered

Packer will create a resource group and virtual machine (and associated resources) to build the new image. The way that the virtual machine is networked (public IP address by default) can normally be manipulated by the Image Template when using Packer.

Summary

There is a lot more here than with a simple run of Packer. But, Azure Image Builder provides a lot more functionality for making images available to “customers” across an enterprise-scale deployment; that’s really where all the complexity comes from and I guess “releasing” is something that Microsoft knows a lot about.

 

Building Azure VM Images Using Packer & Azure Files

In this post, I will explain how I am using a freeware package called Packer to create SYSPREPed/generalised templates for Citrix Cloud / Windows Virtual Desktop (WVD) – including installing application/software packages from Azure Files.

My Requirement

Sometimes you need an image that you can quickly deploy. Maybe it’s for a scaled-out or highly-available VM-based application. Maybe it’s for a Citrix/Windows Virtual Desktop worker pool. You just need a golden image that you will update frequently (such as for Windows Updates) and be able to bring online quickly.

One approach is to deploy a Marketplace image into your application and then use some deployment engine to install the software. That might work in some scenarios, but not well (or at all) in WVD or Citrix Cloud scenarios.

A different, and more classic, approach is to build a golden image that has everything installed; the VM is then generalised to create an image file. That image file can be used to create new VMs – this is what Citrix Cloud requires.

Options

You can use classic OS deployment tools as a part of the solution. Some of us will find familiarity in these tools but:

  • Don’t waste your time with staff under the age of 40
  • These tools aren’t meant for the cloud – you’ll have to daisy chain lots of moving parts, and that means complex failure/troubleshooting.

Maybe you read about Azure Image Builder? Surely, using a native image building service is the way to go? Unfortunately: no. AIB is a preview, driven by scripting, and it fails by being too complex. But if you dig into AIB, you’ll learn that it is based on a tool called Packer.

Packer

Packer, a free tool from Hashicorp, the people behind Terraform, is a simple command line tool that will allow you to build VM images on a number of platforms, including Azure ARM. The process is simple:

  • You build a JSON file that describes the image building process.
  • You run packer.exe to ingest that JSON file and it builds the image for you on your platform of choice.

And that’s it! You can keep it simple and run Packer on a PC or a VM. You can go crazy and build a DevOps routine around Packer.
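
Assuming packer.exe and your JSON file (image.json is a placeholder name here) sit in the same folder, the whole loop from a PowerShell prompt is:

# Sanity-check the JSON first, then build the image on the platform named by the builder.
.\packer.exe validate .\image.json
.\packer.exe build .\image.json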

Terminology

There are some terms you will want to know:

  • Builders: These are the types of builds that Packer can do – the platforms that it can build on. Azure ARM is the one I have used, but there’s a more complex/faster Builder for Azure called chroot that uses an existing build VM to build directly into a managed disk. Azure ARM builds a temporary VM, configures the OS, generalises it, and converts it into an image.
  • Provisioners: These are steps in the build process that are used to customise your operating system. In the Windows world, you are going to use the PowerShell provisioner a lot. You’ll find other built in provisioners for Ansible, Puppet, Chef, Windows Restart and more.
  • Custom/Community Provisioners: You can build additional provisioners. There is even a community of provisioners.

Accursed Examples

If you search for Windows Packer JSON files, you are going to find the same file over and over. I did. Blog posts, PowerPoints, training materials, community events – they all used the same example: deploy Windows, install IIS, capture an image. Seriously, who is ever going to want an image that is that simple?

My Requirement

I wanted to build a golden image, a template, for a Citrix worker pool, running in Azure and managed by Citrix Cloud. The build needs to be monthly, receiving the latest Windows Updates and application upgrades. The solution should be independent of the network and not require any file servers.

Azure Files

The last point is easy to deal with: I put the application packages into Azure Files. Each installation is wrapped in a simple PowerShell script. That means I can enable a PowerShell provisioner to run multiple scripts:

      "type": "powershell",
      "scripts": [
        "install-adobeReader.ps1",
        "install-office365ProPlus.ps1"
      ]
This example requires that the two scripts listed in the array are in the same folder as packer.exe. Each script is run in turn, sequentially.
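
For context, each of those install scripts is just a thin wrapper. A hypothetical install-adobeReader.ps1 might look something like this – the share name, the way the storage key is supplied, and the installer switches are all assumptions:

# Map the Azure Files share that holds the packages and run the installer silently.
$storageAccount = "myshare"
$shareKey = "<storage account key>"   # supplied securely in a real build

$secureKey = ConvertTo-SecureString -String $shareKey -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ("AZURE\$storageAccount", $secureKey)
New-PSDrive -Name P -PSProvider FileSystem -Root "\\$storageAccount.file.core.windows.net\packages" -Credential $credential

# Run the installer silently and wait for it to finish before the next provisioner step.
Start-Process -FilePath "P:\AdobeReader\setup.exe" -ArgumentList "/sAll" -Wait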

Unverified Executables

But what if one of those scripts, like Office, wants to run a .exe file from Azure Files? You will find that the script will stall while a dialog “appears” (to no one) on the build VM stating that “we can’t verify this file”, waiting for a human (who will never see the dialog) to confirm execution. One might think “run Unblock-File” but that will not work with Azure Files. We need to update HKEY_CURRENT_USER (which will be erased by SYSPREP) to trust EXE files from the FQDN of the Azure Files share. There are two steps to this, which we solve by running another PowerShell provisioner:
    {
      "type": "powershell",
      "scripts": [
        "permit-drive.ps1"
      ]
    },
That script will run two pieces of code. The first will add the FQDN of the Azure Files share to Trusted Sites in Internet Options:

set-location "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"
new-item "windows.net"
set-location "windows.net"
new-item "myshare.file.core"
set-location "myshare.file.core"
new-itemproperty . -Name https -Value 2 -Type DWORD

The second piece of code will trust .EXE files:

set-location "HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies"
new-item "Associations"
set-location "Associations"
new-itemproperty . -Name LowRiskFileTypes -Value '.exe' -Type STRING

SYSPREP Stalls

This one wrecked my head. I used an inline PowerShell provisioner to add Windows roles & features:

      "type": "powershell",
      "inline": [
        "while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }",
        "while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }",
        "Install-WindowsFeature -Name Server-Media-Foundation,Remote-Assistance,RDS-RD-Server -IncludeAllSubFeature"
      ]
But then the Sysprep task at the end of the JSON file stalled. Later, I realised that I should have done a reboot after adding my roles/features. And for good measure, I also put one in before the Sysprep:
    {
      "type": "windows-restart"
    },
You might want to run Windows Update – I’d recommend it at the start (to patch the OS) and at the end (to patch Microsoft software and catch any missing OS updates). Grab a copy of the community Windows-Update provisioner and place it in the same folder as Packer.exe. Then add this provisioner to your JSON – I like how you can exclude certain updates with the query:
    {
      "type": "windows-update",
      "search_criteria": "IsInstalled=0",
      "filters": [
        "exclude:$_.Title -like '*Preview*'",
        "include:$true"
      ]
    },
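For reference, once the template and scripts sit beside packer.exe, kicking off a build is just a couple of commands – the template file name here is a placeholder:

# Run from the folder containing packer.exe, the JSON template, and the scripts
.\packer.exe validate .\citrix-worker.json    # check the template for syntax/configuration errors
.\packer.exe build .\citrix-worker.json       # run the builder and then the provisioners in order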

Summary

What I like about Packer is that it is simple. You don’t need to be a genius to make it work. What I don’t like is the lack of original documentation, which means there can be a learning curve to getting started. But once you are up and running, the tool is simple and extensible.

Monitoring & Alerting for Windows Defender in Azure VMs

In this post, I will explain how you can monitor Windows Defender on Azure VMs and create incidents from its detections.

Background

Windows Defender is built into Windows Server 2016 and Windows Server 2019. It’s free and pretty decent. It surprises me how many of my customers (all of them, in fact) choose Defender over third-party products for their Azure VMs … with no coaching or encouragement from me or my colleagues. There is an integration with the control plane using the Antimalware extension, but the level of management it offers ranges from poor to none. There is a Log Analytics solution, but solutions are deprecated and, last time I checked, it required the workspace to use per-node pricing. So I needed something different to operationalise Windows Defender on Azure VMs.

Data

At work, we always deploy the Log Analytics extension with all VMs – along with the antimalware extension and a bunch of others. We also enable data collection in Azure Security Center. We use a single Log Analytics workspace to enable the correlation of data and easy reporting/management.

I recently found out that a table in Log Analytics called ProtectionStatus contains a “heartbeat” record for Windows Defender. Approximately every hour, a record is stored in this table for every VM running Windows Defender. In there, you’ll find some columns such as:

  • DeviceName: The computer name
  • ThreatStatusRank: A code indicating the health of the device according to Defender:
    • 150: Healthy
    • 470: Unknown (no extension/Defender)
    • 350: Quarantined malware
    • 550: Active malware
  • ThreatStatus: A description for the above code
  • ThreatStatusDetails: A longer description
  • And more …

So you can see that you can search this table for malware infection records. First thing, though, is to filter out the machines/records reporting that there is no Defender (Linux machines, for example):

let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where Resource in (all_windows_vms)
| sort by TimeGenerated desc

The above will find all active Windows VMs that have been reporting to Log Analytics via the extension heartbeat, store them in a set, and then search the ProtectionStatus records of just those machines. Now we can extend that search, for example finding all machines with a state other than healthy (150):

let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where Resource in (all_windows_vms)
| where ThreatStatusRank <> 150
| sort by TimeGenerated desc
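If you want to sanity-check that query outside of the portal, one rough option – a sketch assuming the Az.OperationalInsights module is installed, you are already signed in with Connect-AzAccount, and the workspace ID below is a placeholder – is to run it from PowerShell:

# Sketch: run the above query against the workspace from PowerShell
$workspaceId = "00000000-0000-0000-0000-000000000000"   # placeholder workspace ID

$query = @'
let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where Resource in (all_windows_vms)
| where ThreatStatusRank <> 150
| sort by TimeGenerated desc
'@

$results = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$results.Results | Format-Table Computer, ThreatStatus, Threat, TimeGenerated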

Testing

All the tech content here will be useless without data, so you’ll need some data! Search for the EICAR test string/file and start “infecting” machines – but be sure to let anyone monitoring the environment know first.
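If it helps, a quick way to generate a detection on a test VM is to drop the public EICAR test string into a file – it is a harmless, industry-standard test signature, not real malware:

# Write the standard EICAR test string to a file on a *test* VM.
# Defender's real-time protection should detect and quarantine it within moments,
# generating the ProtectionStatus/SecurityAlert data used in the rest of this post.
$eicar = 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'
Set-Content -Path "$env:TEMP\eicar.txt" -Value $eicar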

Security Center

Security Center will record incidents for you:

You will get email alerts if you have configured notifications in the subscription’s Security Center settings. Make sure the threshold is set to LOW.

If you want an alternative form of alert then you can use a Log Analytics alert (Scheduled Query Alert resource type) based on the below basic query:

SecurityAlert
| where TimeGenerated > now(-5m)
| where VendorName == 'Microsoft Antimalware'

The above query will search for Windows Defender alerts stored in Log Analytics (by Security Center) in the last 5 minutes. If the result count is greater than 0 then you can trigger an Azure Monitor Action Group to notify whomever or start whatever task you want.

Workbooks

Armed with the ability to query the ProtectionStatus table, you can create your own visualisations for easy reporting on Windows Defender across many machines.


The pie chart is made using this query:

let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where TimeGenerated > now(-7d)
| where Resource in (all_windows_vms)
| where ThreatStatusRank <> '150'
| summarize count() by Threat

With some reading and practice, you can make a really fancy workbook.

Azure Sentinel

I have enabled the Entity Behavior preview.

Azure Sentinel is supposed to be the central place to monitor all security events, hunt for issues, and start investigations – the latter thanks to the new Entity Behavior feature. Azure Sentinel is powered by Log Analytics – if you have data in there then you can query that data, correlate it, and do some clever things.

We have a query that can search for malware incidents reported by Windows Defender. What we will do is create a new Analytic Rule that will run every 5 minutes using 5 minutes of data. If the results exceed 0 (threshold greater than 0) then we will create an incident.

let all_windows_vms = Heartbeat
| where TimeGenerated > now(-7d)
| where OSType == 'Windows'
| summarize makeset(Resource);
ProtectionStatus
| where TimeGenerated > now(-5m)
| where Resource in (all_windows_vms)
| where ThreatStatus <> 'No threats detected' or ThreatStatusRank <> '150' or Threat <> ''
| sort by Resource asc
| extend HostCustomEntity = Computer

The last line is used to identify an entity. Optionally, we can associate a logic app for an automated response. Once that first malware detection is found:

You can do the usual operational stuff with these incidents. Note that this data is recorded and your effectiveness as a security organisation is visible in the Security Efficiency Workbook in Azure Sentinel – even the watchers are watched! If you open an incident you can click investigate which opens a new Investigation screen that leverages the Entity Behavior data. In my case, the computer is the entity.

The break-out dialogs allow me to query Log Analytics to learn more about the machine and its state at the time and the state of Windows Defender. For example, I can see who was logged into the machine at that time and what processes were running. Pretty nice, eh?

The Office – Construction Complete

The construction of Cloud Mechanix global HQ finished yesterday afternoon. The final piece to go in place was the step up to the door. You can really see the slope in the site in the below photo.


If you step inside you can see the all-wood finish, with the first-fitting electrics done, ready for the final touches.

And you can see the view from the spot where one of the desks will be located.

We had some paving stones from the site clearance, so they are being repurposed to cross the lawn from the house to the front door. A swing set had been placed directly in front of the office – the replacement is going to the far end of the lawn. The grass underneath it and the empty anchor spots are bald, so I’ll be re-sowing grass there in the coming days.

So what’s next? Paint and second-fitting electrics are next in the project plan. I will also be looking at how we can extend the house security systems to the office – a combination of a supplier-based monitored alarm system and the Ring cameras. That will allow us to insure the contents of the new office, and then … it’ll be time to move in.