Understanding the Azure Virtual Desktop Resources

In this post, I will document the resources used in Azure Virtual Desktop, what they do, and how they interconnect.

This is a work-in-progress, so any updates I discover along the way will be added. You should also check out a similar post on Azure Image Builder.

Host Pool – Microsoft.DesktopVirtualization/hostpools

The Host Pool documents the configuration of the session hosts that will provide the desktops/applications. Note that a Host Pool resource ID is required to create an Application Group.

Note that the VMs themselves are deployed using a linked template when you use the Azure Portal. My deployment used the “managed disks” template. This template deploys the VMs, runs some DSC, and joins the machines to your domain. There is also a task to update the Host Pool.

Deploying Microsoft.DesktopVirtualization/hostpools does not create the VMs; it just manages any VMs that are added to the Host Pool.

The mandatory properties appear to be:

  • hostPoolType: BYODesktop, Personal, or Pooled.
  • loadBalancerType: BreadthFirst, DepthFirst, or Persistent.
  • preferredAppGroupType: Desktop, None, or RailApplications.

The full set of properties can be found in the REST API documentation.
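
To make that concrete, here is a minimal sketch of a Host Pool deployment using the Az.DesktopVirtualization PowerShell module, supplying just those three mandatory values (the resource group, name, and location are my own hypothetical examples):

# Minimal sketch: create a Pooled Host Pool with only the three
# mandatory property values set. Names and location are hypothetical.
New-AzWvdHostPool -ResourceGroupName "p-avd" `
                  -Name "p-avd-hp01" `
                  -Location "westeurope" `
                  -HostPoolType "Pooled" `
                  -LoadBalancerType "BreadthFirst" `
                  -PreferredAppGroupType "Desktop"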

Application Group – Microsoft.DesktopVirtualization/applicationgroups

The Application Group documents the applications and the user associations (the Desktop Virtualization User role is assigned to users/groups), and it is associated with a Host Pool; therefore, you must deploy the Host Pool resource before you deploy the planned Application Group.

The mandatory values appear to be:

  • hostPoolArmPath: The resource ID of the associated Host Pool
  • applicationGroupType: Desktop or RemoteApp

We know that Windows 365 (AKA “Cloud PC”) is built on Azure Virtual Desktop. Proof of that is in ARM, with a true/false property called cloudPcResource.

The full set of properties is documented in the REST API reference.
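
Here is a similar minimal sketch for an Application Group, showing the two mandatory values (the Host Pool is the hypothetical one from the previous sketch):

# Minimal sketch: create a Desktop Application Group, associating it
# with the Host Pool via hostPoolArmPath (the Host Pool resource ID).
$hostPool = Get-AzWvdHostPool -ResourceGroupName "p-avd" -Name "p-avd-hp01"

New-AzWvdApplicationGroup -ResourceGroupName "p-avd" `
                          -Name "p-avd-dag01" `
                          -Location "westeurope" `
                          -ApplicationGroupType "Desktop" `
                          -HostPoolArmPath $hostPool.Id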

Workspace – Microsoft.DesktopVirtualization/workspaces

The Azure Virtual Desktop Workspace is the glue that holds everything together. The Workspace can be associated with zero, one, or many Application Groups via an optional array property called applicationGroupReferences. You can build a Workspace before your Application Groups and update this value later, or you can build the Host Pool(s) first, then the Application Group(s), followed by the Workspace.

The key value appears to be:

  • applicationGroupReferences: An array with zero or more items, each being the resource ID of an Application Group.
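
And a minimal sketch for the Workspace, referencing the hypothetical Application Group from the previous sketch:

# Minimal sketch: create a Workspace and point applicationGroupReferences
# at one Application Group resource ID. The array may also be empty.
$appGroup = Get-AzWvdApplicationGroup -ResourceGroupName "p-avd" -Name "p-avd-dag01"

New-AzWvdWorkspace -ResourceGroupName "p-avd" `
                   -Name "p-avd-ws01" `
                   -Location "westeurope" `
                   -ApplicationGroupReference $appGroup.Id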

Virtual Machines

The Host Pool will require virtual machines; these are created as a separate deployment. There’s nothing special here; they are virtual machines created from the Marketplace or from your own generalised image (captured or Shared Image Gallery). Two actions must be performed on the VMs:

  • Domain Join: Either (legacy) ADDS (including Azure AD DS or Windows Server ADDS) or an Azure AD Join (a recent addition).
  • Virtual Desktop agent: DSC is used to deploy the agent, which makes an outbound connection to the Host Pool and registers the VM (see the registration token sketch below).
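
As a reference for that registration step, here is a minimal sketch of generating a Host Pool registration token with the Az.DesktopVirtualization module; the agent presents a token like this when it registers the VM (the names are the hypothetical ones from earlier):

# Minimal sketch: generate a registration token for the Host Pool.
# The Virtual Desktop agent on each VM uses this token to register.
New-AzWvdRegistrationInfo -ResourceGroupName "p-avd" `
                          -HostPoolName "p-avd-hp01" `
                          -ExpirationTime (Get-Date).ToUniversalTime().AddHours(4)

# Retrieve the token, e.g. to pass into the DSC/agent deployment.
(Get-AzWvdRegistrationInfo -ResourceGroupName "p-avd" -HostPoolName "p-avd-hp01").Token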

AAD, AADDS, or ADDS? I prefer ADDS. This is because:

  • Most of the controls that you need are in Group Policy and AAD doesn’t do Group Policy.
  • AADDS relies on AAD and is deployed into a single region. If that region has issues, or AAD itself has issues (and this happens pretty frequently), then your Azure Virtual Desktop farm is dead.
  • Third-party applications typically expect ADDS and will not support AADDS/AAD, even if it “works”.

Deploying Azure ARM Templates From Azure DevOps – With A Complete Example

In this post, I will show you how to get those ARM templates sitting in an Azure DevOps repo deploying into Azure using a pipeline. With every merge, the pipeline will automatically trigger (you can disable this) to update the deployment. In other words, a complete CI/CD deployment where you manage your infrastructure/services as code.

Annoyance

I’m not a DevOps guru. I use DevOps every day. Every deployment I do for a customer runs from JSON that I’ve helped write, deploying into the customers’ Azure tenants. We do have people who are DevOps gurus, and we have one seriously fancy deployment system that literally just uses a DevOps pipeline as a trigger mechanism and nothing more. But I use that system; I don’t develop it. I wanted to create and run a pipeline for my own needs (Cloud Mechanix Azure training). Admittedly, I’ve tried this before, lost patience, and abandoned it. This time, I persisted and succeeded.

What didn’t help? The dreadful Microsoft documentation. One doc, from the DevOps team, was rubbish. Another had deprecated YAML code (pipelines are written in YAML). A third had an example that was full of errors. OK, let’s look at blogs. But as with many blogs on this topic, the few originals only showed how to push code into an existing App Service, and the rest were copies and pastes of those App Services posts or of the bad Microsoft examples.

When it comes to tech like this, I have the feeling that many who have the knowledge don’t like to share it.

Concept

What I’m dealing with here is infrastructure-as-code (IaC). The code (Azure JSON in ARM templates) describes the resources that I want to deploy and the configurations of those resources. In my example, it’s an Azure Firewall and its configuration, including the rules. I have created a repository (repo) in Azure DevOps, and I edit the JSON using Visual Studio Code (VS Code), Microsoft’s free code editor. When I make a change in VS Code, it is done in a branch of the master copy of the code. I sync that branch to the cloud. To merge the changes, I create a pull request. This pull request starts a change control process, where the owners of the repo can review the code and decide to accept or reject the changes. If the changes are accepted, they are merged into the master copy of the code. And now the magic happens.

A pipeline is a description of a process that will take the master code from the repo and do stuff with it. In my case, deploy the code to a resource group in an Azure subscription. If the resources are already there, then the pipeline will do an update.

I will end up with an Azure Firewall that is managed as code. The rules and configuration are described in a parameter file so that’s all that I should normally need to touch. To make a rules change, I edit the parameter file and do a pull request. A security officer will review the change and approve/reject it. If the change is approved, the new firewall configuration will be deployed. And yes, this approach could probably be used with Azure Firewall Policy resources – I haven’t tested that yet. Now I can give people Read access only to my subscription and force all configuration changes through the pull request review process of Azure DevOps.

Your deployment can be any Azure resources that you can deploy using a template.

Azure Subscription

In Azure I have two resource groups:

  • [Resource Group] p-devops: Where I can do “DevOps stuff”
    • [Storage Account] pdevopsstorsjdhf983: I will use this to store the code that the pipeline will deploy
  • [Resource Group] p-we1fw: Where my hub virtual network is and the Azure Firewall will be
    • [Virtual Network] p-we1fw-vnet: The virtual network that contains a subnet called AzureFirewallSubnet

Remember that storage account!

DevOps Repo

I created and configured a DevOps repo called AzureFirewall in a DevOps project. There are two files in there:

  • [Template] azurefirewall.json: The file that will deploy the Azure Firewall
  • [Parameter] azurefirewall-parameters.json: The configuration of the firewall, including the rules!
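
Before a change ever reaches the pipeline, you can sanity-check those two files locally; here is a minimal sketch using the Az PowerShell module against the resource group described above:

# Minimal sketch: validate the template and parameter file against the
# target resource group before committing the change to the repo.
Test-AzResourceGroupDeployment -ResourceGroupName "p-we1fw" `
                               -TemplateFile ".\azurefirewall.json" `
                               -TemplateParameterFile ".\azurefirewall-parameters.json"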

New DevOps Service Connection

DevOps will need a way to authenticate with your Azure tenant and get authorization to use your tenant, subscription, or resource group. You can get real fancy here. I’m going simple and using a feature of DevOps called a Service Connection, found in DevOps > [Project] > Project Settings > Service Connections (under Pipelines):

  1. Click New Service Connection
  2. Select Azure Resource Manager and hit Next
  3. Select Service Principal (Automatic), which is recommended by DevOps.
  4. Here I selected the subscription option and the Azure subscription that my resource groups are in.
  5. I granted access permission to all pipelines.
  6. I named the service connection after my subscription: p-we1net.

As I said, you can get real fancy here because there are lots of options.

New DevOps Pipeline

Now for the fun!

Back in the project, I went to Pipelines and created a new Pipeline:

  1. I selected Azure Repos Git because I’m storing my code in an Azure DevOps (Git) repo. The contents of this repo will be deployed by the pipeline.
  2. I selected my AzureFirewall repo.
  3. Then I selected “Starter Pipeline”.
  4. An editor appeared – now you’re editing a file called azure-pipelines.yml that resides in the root of your repo.

There is an option (instead of Starter Pipeline) where you choose an existing YAML file, maybe one from a folder called .pipelines in your repo.

Edit the Pipeline

Here is the code:

name: AzureFirewall.$(Date:yyyy.MM.dd)

trigger:
  batch: true

pool:
  name: Hosted Windows 2019 with VS2019

steps:
- task: AzureFileCopy@3
  displayName: 'Stage files'
  inputs:
    SourcePath: '$(Build.SourcesDirectory)'
    azureSubscription: 'p-we1net'
    Destination: 'AzureBlob'
    storage: 'pdevopsstorsjdhf983'
    ContainerName: 'azurefirewall'
    outputStorageUri: 'artifactsLocation'
    outputStorageContainerSasToken: 'artifactsLocationSasToken'
    sasTokenTimeOutInMinutes: '240'
- task: AzureResourceGroupDeployment@2
  displayName: 'Deploy template'
  inputs:
    azureSubscription: 'p-we1net'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'p-we1fw'
    location: 'westeurope'
    templateLocation: 'URL of the file'
    csmFileLink: '$(artifactsLocation)azurefirewall.json$(artifactsLocationSasToken)'
    csmParametersFileLink: '$(artifactsLocation)azurefirewall-parameters.json$(artifactsLocationSasToken)'
    deploymentMode: 'Incremental'
    deploymentName: 'AzureFirewall-Pipeline'

That is a working pipeline. It is made up of several pieces:

Trigger

This controls how the pipeline is started. The batch: true setting means that commits made while a run is already in progress are batched into a single follow-up run. You can set the trigger to none to stop automatic executions – in the early days, when you’re trying to get this right, automatic runs can be annoying.

Pool

Your pipeline is going to run on a Microsoft-hosted agent. I’m using a stock Microsoft agent pool based on WS2019. You can supply your own container from Azure Container Registry, but that’s getting fancy!

Task: AzureFileCopy

Now we move into the steps. The first task copies the contents of the repo into a storage account. We need to do this because the following deployment task cannot directly access the raw files in Azure DevOps. A task is created with the human-friendly name of Stage Files. There are a few settings to configure here:

  • azureSubscription: This is not the name of your subscription! Ain’t that tricky?! It is the name of the service connection that authenticates the pipeline against the subscription. So that’s my service connection called p-we1net, which I happened to name after my subscription.
  • storage: This is the storage account in my target Azure subscription in the p-devops resource group. My service connection has access to the subscription so it has access to the storage account – be careful with restricting access of the service connection to just a resource group and placing the staging storage account elsewhere.
  • ContainerName: This is the name of the container that will be created in your storage account (blob container names must be lowercase). The contents of the repo will be copied into this container.
  • outputStorageUri: The URI/URL of the storage account/container will be stored in a variable which is called artifactsLocation in this example.
  • outputStorageContainerSasToken: A SAS token will be created to allow temporary secure access to the contents of the container. The token will be stored in a variable called artifactsLocationSasToken in this example.

Task: AzureResourceGroupDeployment

This task will take the contents of the repo from the storage account, and deploy them to a resource group in the target subscription. There are a few things to change:

  • azureSubscription: Once again, specify the name of the service connection, not the Azure subscription.
  • resourceGroupName: Enter the name of the target resource group.
  • location: Specify the Azure region that you are targeting.
  • csmFileLink: This is the URI of the template file that you want to deploy. More in a moment.
  • csmParametersFileLink: This is the URI of the parameters file that you want to deploy. More in a moment.
  • deploymentName: I have hard-set the deployment name so I don’t have to clean up versioned deployments from the resource group later. Every resource group has a hard limit of 800 deployment objects in its history, and with a resource such as a firewall, that could be hit quite quickly.

csmFileLink

There are three parts to the string: $(artifactsLocation)azurefirewall.json$(artifactsLocationSasToken). Together, the three parts give the task secure access to the template file in the staging storage account.

  • $(artifactsLocation): This is the storage account/container URI/URL variable from the AzureFileCopy task.
  • azurefirewall.json: This is the name of the template file that I want to deploy.
  • $(artifactsLocationSasToken): This is the SAS token variable from the AzureFileCopy task.

csmParametersFileLink

There are three parts to the string: $(artifactsLocation)azurefirewall-parameters.json$(artifactsLocationSasToken). Together, the three parts give the task secure access to the parameter file in the staging storage account.

  • $(artifactsLocation): This is the storage account/container URI/URL variable from the AzureFileCopy task.
  • azurefirewall-parameters.json: This is the name of the parameter file that I want to use to customise the template deployment.
  • $(artifactsLocationSasToken): This is the SAS token variable from the AzureFileCopy task.
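
If it helps to see what the task is doing under the hood, here is a rough PowerShell equivalent using the Az module; the two URIs are illustrative stand-ins for the values that the pipeline variables resolve to at run time:

# Rough equivalent of the AzureResourceGroupDeployment task. The URIs
# stand in for $(artifactsLocation) + file name + $(artifactsLocationSasToken).
$templateUri   = "https://pdevopsstorsjdhf983.blob.core.windows.net/azurefirewall/azurefirewall.json?<sas-token>"
$parametersUri = "https://pdevopsstorsjdhf983.blob.core.windows.net/azurefirewall/azurefirewall-parameters.json?<sas-token>"

New-AzResourceGroupDeployment -Name "AzureFirewall-Pipeline" `
                              -ResourceGroupName "p-we1fw" `
                              -TemplateUri $templateUri `
                              -TemplateParameterUri $parametersUri `
                              -Mode Incremental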

Pipeline Execution

There are three ways to run the pipeline now:

  1. Do an update (or a merge) to the master branch of the repo thanks to my trigger.
  2. Manually run the pipeline from Pipelines.
  3. Save a change to the pipeline in the DevOps editor if the master is not locked – which will trigger option 1, to be honest.

You can open the pipeline, or historic runs of it, to view/track the execution.

You’ll also get an email to let you know the status of an ended pipeline run.

Happy pipelining!

Azure: PowerShell versus ARM Templates

In this post, I’m going to make the PowerShell acolytes angry (not hard) by explaining why PowerShell is too slow, and why ARM/JSON is the best way to deploy things in Azure.

The PowerShell Experience

Let’s imagine that you and your significant other go into a restaurant; you order a steak and your significant other wants to order something else. How does the ordering process go? Is it something like this? Let’s start with your order:

  • Customer: Waiter!
  • <Wait 1 minute>
  • Waiter: What would you like sir?
  • Customer: Could you ask the chef to go to the fridge?
  • <Wait while the chef is asked to go to the fridge>
  • Waiter: Yes?
  • Customer: Would you ask the chef to open the fridge?
  • <Wait while the chef opens the fridge>
  • Waiter: Yes?
  • Customer: Would you ask the chef to take a steak out of the fridge?
  • <Wait while the chef takes a steak out of fridge>
  • Waiter: Yes?
  • Customer: Please ask the chef to put a pan on the cooker.
  • <Wait while the chef puts a pan on the cooker>
  • Waiter: Yes?

You see what’s going on here? Meanwhile your significant other is getting no love from the restaurant. Ouch!

With PowerShell you describe the deployment process, one step at a time, connecting each and every dot. The deployment is serialized, with no parallelism unless you use PowerShell features to run parallel jobs. The result isn’t much faster than you doing all the clicking for yourself.

The ARM Experience

I like to describe ARM as a waiter, and the Azure resource providers as the kitchen cooks. How does the order go?

  • Customer: Waiter, I would like a salmon dish for my wife and a steak for myself.
  • Waiter: Yes, sir, in the meantime, would you like a drink?

That’s a bit better, right?

ARM or JSON templates describe the result, not the process. Once you submit the deployment, ARM divides up the job and orders the deployment based on your dependencies. That means that the deployment can be parallelized. If I need 100 web servers, all 100 will be deployed at once, not in some 1..100 loop, one at a time (or 5 at a time if you are clever).

Best of Both Worlds

For some of the training that I do at work, I deploy the training lab in Azure as follows:

  • A PowerShell script that asks me how many attendees there are, and then runs a glorified two-line loop.
  • The loop iterates through different subscriptions, adding a resource group and then doing an ARM deployment.

In other words, PowerShell automates my very fast ARM deployments.
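
For illustration, here is a minimal sketch of that loop; the subscription list, resource group name, and file paths are hypothetical:

# Hypothetical sketch of the training-lab loop: one resource group and
# one (fast, parallelised) ARM deployment per attendee subscription.
$subscriptions = Get-Content ".\attendee-subscriptions.txt"

foreach ($subscription in $subscriptions) {
    Set-AzContext -Subscription $subscription | Out-Null
    New-AzResourceGroup -Name "training-lab" -Location "westeurope" -Force
    New-AzResourceGroupDeployment -ResourceGroupName "training-lab" `
                                  -TemplateFile ".\lab.json" `
                                  -TemplateParameterFile ".\lab-parameters.json"
}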

PowerShell Still Required

PowerShell is still very useful for fiddly deployment things that don’t have ARM options, or for once-offs that have no GUI option. To be honest, I do use the GUI for most of my once-offs because it is convenient and gets the end result faster than researching/tweaking/fixing PowerShell examples. When it comes to learning about settings and troubleshooting, PowerShell can be pretty awesome.

But PowerShell is much slower than ARM for deployments. Now let’s hear the screams of outraged PowerShellers!

Was This Post Interesting?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.