In this post, I will show how to run Azure Az module scripts as tasks in a GitHub Actions workflow. Working examples can be found in my GitHub AzireFirewall/DevSecOps repo, which contains the content for my DevSecOps articles.
Credit to my old colleague Alan Kinane for getting me started with his post, Azure CI/CD with ARM templates and GitHub Actions.
Why Use A Script?
You can run simple deployment tasks with the built-in options:
- A task that runs a deployment
- A simple PowerShell/Azure CLI task that runs an inline script
But you might want something that does more. For example, you might want to do some error checking. Or maybe you are going to use a custom container and execute complex tasks from it. In my case, I wanted to do lots of error checking and give myself the ability to wrap scripts around my deployments.
My Example
For this post, I will use the hub deployment from my GitHub AzireFirewall/DevSecOps repo – this deploys a VNet-based (legacy) hub in an Azure hub & spoke architecture. There are a number of things you are going to need. I’ve just reorganised and updated the main branch to support both Azure DevOps pipelines (/.pipelines) and GitHub Actions (/.github).
Afterwards, I will explain how the action workflow calls the PowerShell script.
GitHub Repo
Set up a repository in GitHub. Copy the required files into the repo. In my example, there are four folders:
- platform: This contains the files to deploy the hub in an Azure subscription. In my example, you will find bicep files with JSON parameter files.
- scripts: This folder contains scripts used in the deployment. In my example, deploy.ps1 is a generic script that will deploy an ARM/Bicep template to a selected subscription/resource group.
- .github/workflows: This is where you will find YAML files that create workflows in GitHub Actions. Any valid file will automatically create a workflow when there is a successful merge. My example contains hub.yaml to execute the script, deploy.ps1.
- .pipelines: This contains the files to deploy the code. In my example, you will find a YAML file for a DevOps pipeline called hub.yaml that will execute the script, deploy.ps1.
You can upload the files into the repo or sync using Git/VS Code.
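Based on those folders, the repo layout looks something like this (using the file names mentioned in this post):

    /.github/workflows/hub.yaml
    /.pipelines/hub.yaml
    /platform/hub.bicep
    /platform/hub-parameters.json
    /scripts/deploy.ps1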
Azure AD App Registration (Service Principal or SPN)
You will require an App Registration; this will be used by the GitHub workflow to gain authorised access to the Azure subscription.
Create an App Registration in Azure AD. Create a secret and store that secret (Azure Key Vault is a good location) because you will not be able to see the secret after creation. Grant the App Registration Owner rights to the Azure subscription (as in my example) or to the resource groups if you prefer that sort of deployment.
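If you prefer to script that setup, here is a minimal sketch using the Az PowerShell module; the display name is just an example and the subscription ID is a placeholder:

    # Create the App Registration/Service Principal (the display name is an example)
    $sp = New-AzADServicePrincipal -DisplayName "github-devsecops"
    # In recent Az versions, the generated secret is returned once in $sp.PasswordCredentials.SecretText
    # Grant the Service Principal Owner rights at the subscription scope (as in my example)
    New-AzRoleAssignment -ApplicationId $sp.AppId -RoleDefinitionName "Owner" -Scope "/subscriptions/<subscription ID>"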
Repo Secret
One of the features that I like a lot in GitHub is that you can store secrets at different levels (organisation, repo, environment). In my example, I am storing the secrets for using the Service Principal in the repo, making it available to the workflow(s) that are in this repo only.
Open the repo, browse to Settings, and then go to Secrets. Create a new Repository Secret called AZURE_CREDENTIALS with the following structure:
{
    "tenantId": "<GUID>",
    "subscriptionId": "<GUID>",
    "clientId": "<GUID>",
    "clientSecret": "<SECRET>"
}
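If you need to look up the tenant and subscription IDs for that JSON, one quick way (assuming you are signed in with the Az module) is:

    # Lists the name, subscription ID, and tenant ID of each subscription you can see
    Get-AzSubscription | Select-Object Name, Id, TenantId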
Create the Workflow
GitHub makes this task easy. You can go into your repo > Actions and create a workflow from a template. Or, if you have a YAML file that defines your workflow, you can place it into your repo in /.github/workflows. When that change is merged, GitHub will automatically create a workflow for you.
Tip: To delete a workflow, rename or delete the associated file and merge the change.
Logging In
The workflow must be able to sign into Azure. There are plenty of examples out there but you need to accomplish two things:
- Log into Azure
- Enable the Az modules
The following code in the workflow accomplishes those two tasks:
- name: Login via Az module
  uses: azure/login@v1
  with:
    creds: ${{secrets.AZURE_CREDENTIALS}}
    enable-AzPSSession: true
The line uses: azure/login@v1 enables an Azure sign-in. Note that the with statement selects the credentials that we previously added to the repo.
The line enable-AzPSSession: true enables the Az PowerShell modules. With that, we are signed in and have all we need to execute Azure PowerShell cmdlets.
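If you want to sanity-check the sign-in, you can add an optional step that prints the current context, using the azure/powershell task that I cover below; the step name is just an example:

    - name: Verify Azure context
      uses: azure/powershell@v1
      with:
        inlineScript: |
          Get-AzContext
        azPSVersion: "latest"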
Executing A PowerShell Script
A workflow has a section called jobs; in here, you can create multiple jobs that share something in common, such as a login. Each job is made up of steps. Steps perform actions such as checking out code from the repo, logging into Azure, and executing a deployment task (running a PowerShell script, for example).
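To make that structure concrete, here is a minimal sketch of a workflow skeleton; the workflow name and trigger are examples:

    name: hub
    on:
      push:
        branches:
          - main
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - name: Checkout code
            uses: actions/checkout@v2
          # The login and deployment steps described in this post go here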
I can create a PowerShell script that does lots of cool things and store it in my repo. That script can be edited and managed by change control (pull requests) just like my code that I’m deploying. There is an example of this below:
- name: Deploy Hub
  uses: azure/powershell@v1
  with:
    inlineScript: |
      .\scripts\deploy.ps1 -subscriptionId "${{env.hubSub}}" -resourceGroupName "${{env.hubNetworkResourceGroupName}}" -location "${{env.location}}" -deployment "hub" -templateFile './platform/hub.bicep' -templateParameterFile './platform/hub-parameters.json'
    azPSVersion: "latest"
The step uses azure/powershell@v1, which allows us to run an inline script using the sign-in that was previously created in the job (the login step). The configuration of this step is really simple – you specify the PowerShell cmdlets to run. In my example, it executes the PowerShell script and passes in the parameter values, some of which are stored as environment variables at the workflow level.
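For reference, those environment variables are defined at the top level of the workflow YAML; the values below are placeholders:

    env:
      hubSub: "<subscription GUID>"
      hubNetworkResourceGroupName: "<resource group name>"
      location: "<Azure region>"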
My script is pretty generic. I can have:
- Multiple Bicep files/JSON parameter files
- Multiple target scopes
I can create a PowerShell step for each deployment and use the parameters to specialise the execution of the script.
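For example, a second step could deploy a spoke using different parameter values; the spoke file names and variables below are hypothetical:

    - name: Deploy Spoke
      uses: azure/powershell@v1
      with:
        inlineScript: |
          .\scripts\deploy.ps1 -subscriptionId "${{env.spokeSub}}" -resourceGroupName "${{env.spokeNetworkResourceGroupName}}" -location "${{env.location}}" -deployment "spoke" -templateFile './platform/spoke.bicep' -templateParameterFile './platform/spoke-parameters.json'
        azPSVersion: "latest"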
The PowerShell Script
The full code for the script can be found here. I’m going to focus on a few little things:
The Parameters
You can see in the above example that I passed in several parameters:
- subscriptionId: The ID of the subscription to deploy the code to. This does not have to be the same as the default subscription in the stored credentials. The Service Principal used by the workflow must have the required permissions in this subscription.
- resourceGroupName: The name of the resource group that the deployment will go into. My script will create the resource group if required.
- location: The Azure region of the resource group.
- deployment: The name of the ARM deployment that will be created in the resource group for the deployment (remember that Bicep deployments become ARM deployments).
- templateFile: The path to the template file in the pipeline container.
- templateParameterFile: The path to the parameter file for the template in the pipeline container.
Each of those parameters is identically named in param () at the start of the PowerShell script and those values specialise the execution of the generic script.
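As a sketch, the param () block at the top of deploy.ps1 would look something like this (the types are my assumption):

    param (
        [string]$subscriptionId,
        [string]$resourceGroupName,
        [string]$location,
        [string]$deployment,
        [string]$templateFile,
        [string]$templateParameterFile
    )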
Outputs
You can use Write-Host to output a value from the script to appear in the console of the running job. If you add -ForegroundColor then you can make certain messages, such as errors or warnings, stand out.
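For example:

    # Make a warning stand out in the job console
    Write-Host "The $resourceGroupName resource group already exists" -ForegroundColor Yellow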
Beware of Manual Inputs
Some PowerShell commands might want a manual input. This is not supported in a pipeline and will terminate the pipeline with an error. Test for this happening and use code logic wrapped around your cmdlets to prevent it from happening – this is why a file-based script is better than a simple/short inline script, even to handle a situation like creating a resource group.
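For example, a cmdlet that asks for confirmation can be told not to; the clean-up line below is hypothetical and not part of my script:

    # -Force suppresses the confirmation prompt that would otherwise hang the pipeline
    Remove-AzResourceGroup -Name $resourceGroupName -Force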
Try/Catch
Error handling is a big deal in a hands-off script. You will find that 90% of my script is checking for things and dealing with unwanted scenarios that can happen. A simple example is a resource group.
An ARM deployment (remember this includes Bicep) must go into a resource group. You can just go ahead and write the one-liner to create a resource group. But what happens when you update the code, the script re-runs, and sees the resource group is already there? In that scenario, a manual input will appear (and fail the pipeline) to confirm that you want to continue. So I have an elaborate test/create process:
if (!(Get-AzResourceGroup -Name $resourceGroupName -ErrorAction SilentlyContinue))
{
    try
    {
        # The resource group does not exist so create it
        Write-Host "Creating the $resourceGroupName resource group"
        # -ErrorAction Stop turns any failure into a terminating error so the catch block fires
        New-AzResourceGroup -Name $resourceGroupName -Location $location -ErrorAction Stop
    }
    catch
    {
        # There was an error creating the resource group
        Write-Host "There was an error creating the $resourceGroupName resource group" -ForegroundColor Red
        Break
    }
}
else
{
    # The resource group already exists so there is nothing to do
    Write-Host "The $resourceGroupName resource group already exists"
}
Conclusion
Once you know how to do it, executing a script in your pipeline is easy. Then your PowerShell knowledge can take over and your deployments can become more flexible and more powerful. My example executes ARM/Bicep deployments. Yours could do a PowerShell deployment, add scripted configurations to a template deployment, or even run another tool, such as Terraform. The real thing to understand is that now you have a larger scripting toolset available to your automated deployments.