Azure Virtual WAN ARM – The Resources

In this post, I will explain the types of resources used in Azure Virtual WAN and the nature of their relationships.

Note: I have not included any content on the recently announced preview of third-party NVAs. I have not yet seen enough material on it to base a post on and, to be honest, I don’t have any use cases for third-party NVAs.

As you can see, there are quite a few resources involved … and some that you won’t see listed at all because of the “appliance-like” nature of the deployment. I have not included any detail on spokes or “branch offices”, which would require further resources. The diagram below is enough to get a hub operational and connected to on-premises locations and spoke virtual networks.

The Virtual WAN – Microsoft.Network/virtualWans

You need at least one Virtual WAN to be deployed. This is what the hub will connect to, and you can connect many hubs to a common Virtual WAN to get automated any-to-any connectivity across the Microsoft physical WAN.

Surprisingly, the resource is deployed to an Azure region and not as a global resource, like Traffic Manager or Azure DNS.
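
To make the resource concrete, here is a minimal ARM sketch of a virtualWans resource; the name, API version, and property values are assumptions for illustration rather than a definitive configuration:

{
  "type": "Microsoft.Network/virtualWans",
  "apiVersion": "2020-05-01",
  "name": "p-vwan",
  "location": "westeurope",
  "properties": {
    "type": "Standard",
    "allowBranchToBranchTraffic": true
  }
}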

The Virtual Hub – Microsoft.Network/virtualHubs

Also known as the hub, the Virtual Hub is deployed once, and once only, per Azure region where you need a hub. This hub replaces the old hub virtual network (plus gateway(s), plus firewall, plus route tables) deployment you might be used to. The hub is deployed as a hidden resource, managed through the Virtual WAN in the Azure Portal or via scripting/ARM.

The hub is associated with the Virtual WAN through a virtualWan property that references the resource ID of the virtualWans resource.

In a previous post, I referred to a chicken & egg scenario with the virtualHubs resource. The hub has properties that point to the resource IDs of each deployed gateway:

  • vpnGateway: For site-to-site VPN.
  • expressRouteGateway: For ExpressRoute circuit connectivity.
  • p2sVpnGateway: For end-user/device tunnels.

If you choose to deploy a “Secured Virtual Hub” there will also be a property called azureFirewall that will point to the resource ID of an Azure Firewall with the AZFW_Hub SKU.
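
Here is a hedged sketch of how those relationships look in a virtualHubs resource; the names and address prefix are invented, and only the vpnGateway reference is shown, but expressRouteGateway, p2sVpnGateway, and azureFirewall follow the same ID-reference pattern:

{
  "type": "Microsoft.Network/virtualHubs",
  "apiVersion": "2020-05-01",
  "name": "p-we1-hub",
  "location": "westeurope",
  "properties": {
    "addressPrefix": "10.10.0.0/23",
    "virtualWan": {
      "id": "[resourceId('Microsoft.Network/virtualWans', 'p-vwan')]"
    },
    "vpnGateway": {
      "id": "[resourceId('Microsoft.Network/vpnGateways', 'p-we1-vpngw')]"
    }
  }
}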

Note, the restriction of 1 hub per Azure region does introduce a bottleneck. Under the covers of the platform, there is actually a virtual network. The only clue to this network will be in the peering properties of your spoke virtual networks. A single virtual network can have, today, a maximum of 500 peerings, so that means you will have a maximum of 500 spokes per Azure region.

Routing Tables – Microsoft.Network/virtualHubs/hubRouteTables & Microsoft.Network/virtualHubs/routeTables

These resources are used in custom routing, a feature recently announced as GA that won’t be live until August 3rd, according to the Azure Portal. The resources control the flow of traffic in your hub and spoke architecture. They are child resources of the virtualHubs resource, so no references to hub resource IDs are required.
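
As a rough illustration, assuming the 2020-05-01 schema, a hubRouteTables child resource looks something like this; the names, label, prefix, and next hop are invented for the example:

{
  "type": "Microsoft.Network/virtualHubs/hubRouteTables",
  "apiVersion": "2020-05-01",
  "name": "p-we1-hub/PrivateTraffic",
  "properties": {
    "labels": [ "private" ],
    "routes": [
      {
        "name": "ToFirewall",
        "destinationType": "CIDR",
        "destinations": [ "10.0.0.0/8" ],
        "nextHopType": "ResourceId",
        "nextHop": "[resourceId('Microsoft.Network/azureFirewalls', 'p-we1-fw')]"
      }
    ]
  }
}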

Azure Firewall – Microsoft.Network/azureFirewalls

This is an optional resource that is deployed when you want a “Secured Virtual Hub”. Today, this is the only way to put a firewall into the hub, although a new preview program should make it possible for third parties to join the hub. Alternatively, you can use custom routing to force north-south and east-west traffic through an NVA that is running in a spoke, although that will double peering costs.

The Azure Firewall is deployed with the AZFW_Hub SKU. The firewall is not a hidden resource. To manage the firewall, you must use an Azure Firewall Policy (aka Azure Firewall Manager). The firewall has a property called firewallPolicy that points to the resource ID of a firewallPolicies resource.
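
Sketched out, the hub firewall and its policy reference look roughly like this; the resource names are placeholders and the properties are trimmed to the relationships being discussed:

{
  "type": "Microsoft.Network/azureFirewalls",
  "apiVersion": "2020-05-01",
  "name": "p-we1-fw",
  "location": "westeurope",
  "properties": {
    "sku": {
      "name": "AZFW_Hub",
      "tier": "Standard"
    },
    "virtualHub": {
      "id": "[resourceId('Microsoft.Network/virtualHubs', 'p-we1-hub')]"
    },
    "firewallPolicy": {
      "id": "[resourceId('Microsoft.Network/firewallPolicies', 'p-fw-policy')]"
    }
  }
}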

Azure Firewall Policy – Microsoft.Network/firewallPolicies

This is a resource that allows you to manage an Azure Firewall, in this case, an AZFW_Hub SKU of Azure Firewall. Although not shown here, you can deploy a parent/child configuration of policies to manage firewall configurations and rules in a global/local way.
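
For example, a child (local) policy can point at a parent (global) policy using the basePolicy property; this is only a sketch with invented names:

{
  "type": "Microsoft.Network/firewallPolicies",
  "apiVersion": "2020-05-01",
  "name": "p-we1-fw-policy-local",
  "location": "westeurope",
  "properties": {
    "basePolicy": {
      "id": "[resourceId('Microsoft.Network/firewallPolicies', 'p-fw-policy-global')]"
    },
    "threatIntelMode": "Alert"
  }
}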

VPN Gateway – Microsoft.Network/vpnGateways

This is one of 3 ways (one, two or all three at once) that you can connect on-premises (branch) sites to the hub and your Azure deployment(s). This gateway provides you with site-to-site connectivity using VPN. The VPN Gateway uses a property called virtualHub to point at the resource ID of the associated hub or virtualHubs resource. This is a hidden resource.

Note that the virtualHubs resource must also point at the resource ID of the VPN Gateway using a property called vpnGateway.
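
A minimal sketch of the vpnGateways resource and its hub reference; the name and scale unit are assumptions:

{
  "type": "Microsoft.Network/vpnGateways",
  "apiVersion": "2020-05-01",
  "name": "p-we1-vpngw",
  "location": "westeurope",
  "properties": {
    "virtualHub": {
      "id": "[resourceId('Microsoft.Network/virtualHubs', 'p-we1-hub')]"
    },
    "vpnGatewayScaleUnit": 1
  }
}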

ExpressRoute Gateway – Microsoft.Network/expressRouteGateways

This is one of 3 ways (one, two or all three at once) that you can connect on-premises (branch) sites to the hub and your Azure deployment(s). This gateway provides you with site-to-site connectivity using ExpressRoute. The ExpressRoute Gateway uses a property called virtualHub to point at the resource ID of the associated hub or virtualHubs resource. This is a hidden resource.

Note that the virtualHubs resource must also point at the resource ID of the ExpressRoute Gateway using a property called expressRouteGateway.

Point-to-Site Gateway – Microsoft.Network/p2sVpnGateways

This is one of 3 ways (one, two or all three at once) that you can connect on-premises (branch) sites to the hub and your Azure deployment(s). This gateway provides users/devices with connectivity using VPN tunnels. The Point-to-Site Gateway uses a property called virtualHub to point at the resource ID of the associated hub or virtualHubs resource. This is a hidden resource.

The Point-to-Site Gateway inherits a VPN configuration from a VPN configuration resource based on Microsoft.Network/vpnServerConfigurations, referring to the configuration resource by its resource ID using a property called vpnServerConfiguration.

Note that the virtualHubs resource must also point at the resource ID of the Point-to-Site Gateway using a property called p2sVpnGateway.

VPN Server Configuration – Microsoft.Network/vpnServerConfigurations

This configuration for Point-to-Site VPN gateways can be seen in the Azure Virtual WAN and is intended as a shared configuration that is reusable with more than one Point-to-Site VPN Gateway. To be honest, I can see myself using it as a per-region configuration because values such as DNS servers and RADIUS servers will probably be placed per region for performance and resilience reasons. This is a hidden resource.
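
To show both sides of that relationship, here is a rough sketch of a vpnServerConfigurations resource and a p2sVpnGateways resource that consumes it; the names, protocol, and scale unit are placeholders, and the authentication and client address pool settings are omitted for brevity:

{
  "type": "Microsoft.Network/vpnServerConfigurations",
  "apiVersion": "2020-05-01",
  "name": "p-we1-userconfig",
  "location": "westeurope",
  "properties": {
    "vpnProtocols": [ "IkeV2" ]
  }
},
{
  "type": "Microsoft.Network/p2sVpnGateways",
  "apiVersion": "2020-05-01",
  "name": "p-we1-p2sgw",
  "location": "westeurope",
  "properties": {
    "virtualHub": {
      "id": "[resourceId('Microsoft.Network/virtualHubs', 'p-we1-hub')]"
    },
    "vpnServerConfiguration": {
      "id": "[resourceId('Microsoft.Network/vpnServerConfigurations', 'p-we1-userconfig')]"
    },
    "vpnGatewayScaleUnit": 1
  }
}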

The following resources were added on 22nd July 2020:

VPN Sites – Microsoft.Network/vpnSites

This resource has a similar purpose to a Local Network Gateway for site-to-site VPN connections; it describes the on-premises location, AKA the “branch office”. A VPN Site can be associated with one or many hubs, so it is actually connected to the Virtual WAN resource ID using a property called virtualWan. This is a hidden resource.

An array property called vpnSiteLinks describes possible connections to on-premises firewall devices.
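
A sketch of a vpnSites resource with a single site link; the site name, public IP, and link properties are invented:

{
  "type": "Microsoft.Network/vpnSites",
  "apiVersion": "2020-05-01",
  "name": "branch-dublin",
  "location": "westeurope",
  "properties": {
    "virtualWan": {
      "id": "[resourceId('Microsoft.Network/virtualWans', 'p-vwan')]"
    },
    "vpnSiteLinks": [
      {
        "name": "link1",
        "properties": {
          "ipAddress": "203.0.113.10",
          "linkProperties": {
            "linkProviderName": "ISP1",
            "linkSpeedInMbps": 100
          }
        }
      }
    ]
  }
}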

VPN Connections – Microsoft.Network/vpnGateways/vpnConnections

A VPN Connections resource associates a VPN Gateway with the on-premises location described by an associated VPN Site. The vpnConnections resource is a child resource of vpnGateways, so there is no standalone resource; the vpnConnections resource takes its name from the parent VPN Gateway, and its resource ID is an extension of the parent VPN Gateway’s resource ID.

By necessity, there is some complexity with this resource type. The remoteVpnSite property links the vpnConnections resource with the resource ID of a VPN Site resource. An array property, called vpnSiteLinkConnections, is used to connect the gateway to the on-premises location using 1 or 2 connections, each linking from vpnSiteLinkConnections to the resource/property ID of 1 or 2 vpnSiteLinks properties in the VPN Site. With one site link connection, you have a single VPN tunnel to the on-premises location. With 2 link connections, the VPN Gateway will take advantage of its active/active configuration to set up resilient tunnels to the on-premises location.
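
Here is a hedged sketch of a single-link connection. Note that in the ARM schema versions I have worked from, the link array is exposed as vpnLinkConnections (items of the VpnSiteLinkConnection type); all names and values below are placeholders:

{
  "type": "Microsoft.Network/vpnGateways/vpnConnections",
  "apiVersion": "2020-05-01",
  "name": "p-we1-vpngw/to-branch-dublin",
  "properties": {
    "remoteVpnSite": {
      "id": "[resourceId('Microsoft.Network/vpnSites', 'branch-dublin')]"
    },
    "vpnLinkConnections": [
      {
        "name": "link1",
        "properties": {
          "vpnSiteLink": {
            "id": "[resourceId('Microsoft.Network/vpnSites/vpnSiteLinks', 'branch-dublin', 'link1')]"
          },
          "connectionBandwidth": 100,
          "sharedKey": "ReplaceWithASecret"
        }
      }
    ]
  }
}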

Virtual Network Connections – Microsoft.Network/virtualHubs/hubVirtualNetworkConnections

The purpose of a hub is to share resources with spoke virtual networks. In the case of the Virtual Hub, those resources are gateways, and maybe a firewall in the case of a Secured Virtual Hub. As with a normal VNet-based hub & spoke, VNet peering is used. However, the way that VNet peering is used changes with the Virtual Hub; the deployment is done using the hubVirtualNetworkConnections child resource, whose parent is the Virtual Hub. Therefore, the name and resource ID are based on the name and resource ID of the Virtual Hub resource.

The deployment is rather simple; you create a Virtual Network Connection in the hub specifying the resource ID of the spoke virtual network, using a property called remoteVirtualNetwork. The underlying resource provider will initiate both sides of the peering connection on your behalf – there is no deployment required in the spoke virtual network resource. The Virtual Network Connection will reference the Hub Route Tables in the hub to configure route association and propagation.
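
A sketch of a Virtual Network Connection, including a routing configuration that references the hub’s built-in defaultRouteTable; the spoke and hub names are placeholders:

{
  "type": "Microsoft.Network/virtualHubs/hubVirtualNetworkConnections",
  "apiVersion": "2020-05-01",
  "name": "p-we1-hub/spoke1-connection",
  "properties": {
    "remoteVirtualNetwork": {
      "id": "[resourceId('Microsoft.Network/virtualNetworks', 'p-we1-spoke1')]"
    },
    "routingConfiguration": {
      "associatedRouteTable": {
        "id": "[resourceId('Microsoft.Network/virtualHubs/hubRouteTables', 'p-we1-hub', 'defaultRouteTable')]"
      },
      "propagatedRouteTables": {
        "labels": [ "default" ],
        "ids": [
          {
            "id": "[resourceId('Microsoft.Network/virtualHubs/hubRouteTables', 'p-we1-hub', 'defaultRouteTable')]"
          }
        ]
      }
    }
  }
}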

More Resources

There are more resources that I’ve yet to document, including:

Deploying Azure ARM Templates From Azure DevOps – With A Complete Example

In this post, I will show you how to get those ARM templates sitting in an Azure DevOps repo deploying into Azure using a pipeline. With every merge, the pipeline will automatically trigger (you can disable this) to update the deployment. In other words, a complete CI/CD deployment where you manage your infrastructure/services as code.

Annoyance

I’m not a DevOps guru, but I use DevOps every day. Every deployment I do for a customer runs from JSON that I’ve helped write, deployed into the customer’s Azure tenant. We do have people who are DevOps gurus, and we have one seriously fancy deployment system that literally just uses a DevOps pipeline as a trigger mechanism and nothing more, but I use that system rather than develop it. I wanted to create & run a pipeline for my own needs (Cloud Mechanix Azure training). Admittedly, I’ve tried this before, lost patience, and abandoned it. This time, I persisted and succeeded.

What didn’t help? The dreadful Microsoft documentation. One doc, from the DevOps team, was rubbish. Another had deprecated YAML code (pipelines are written in YAML). A third had an example that was full of errors. OK, let’s look at blogs. But as with many blogs on this topic, the few original posts only showed how to push code into an existing App Service, and the rest were copies and pastes of those App Service posts or of the bad Microsoft examples.

When it comes to tech like this, I have the feeling that many who have the knowledge don’t like to share it.

Concept

What I’m dealing with here is infrastructure-as-code (IaC). The code (Azure JSON in ARM templates) describes the resources, and the configurations of those resources, that I want to deploy. In my example, it’s an Azure Firewall and its configuration, including the rules. I have created a repository (repo) in Azure DevOps and I edit the JSON using Visual Studio Code (VS Code), Microsoft’s free code editor. When I make a change in VS Code, it is done in a branch of the master copy of the code. I sync that branch to the cloud. To merge the changes, I create a pull request. This pull request starts a change control process, where the owners of the repo can review the code and decide to accept or reject the changes. If the changes are accepted, they are merged into the master copy of the code. And now the magic happens.

A pipeline is a description of a process that will take the master code from the repo and do stuff with it. In my case, deploy the code to a resource group in an Azure subscription. If the resources are already there, then the pipeline will do an update.

I will end up with an Azure Firewall that is managed as code. The rules and configuration are described in a parameter file so that’s all that I should normally need to touch. To make a rules change, I edit the parameter file and do a pull request. A security officer will review the change and approve/reject it. If the change is approved, the new firewall configuration will be deployed. And yes, this approach could probably be used with Azure Firewall Policy resources – I haven’t tested that yet. Now I can give people Read access only to my subscription and force all configuration changes through the pull request review process of Azure DevOps.

Your deployment can be any Azure resources that you can deploy using a template.

Azure Subscription

In Azure I have two resource groups:

  • [Resource Group] p-devops: Where I can do “DevOps stuff”
    • [Storage Account] pdevopsstorsjdhf983: I will use this to store the code that I want to deploy using the pipeline
  • [Resource Group] p-we1fw: Where my hub virtual network is and the Azure Firewall will be
    • [Virtual Network] p-we1fw-vnet: The virtual network that contains a subnet called AzureFirewallSubnet

Remember that storage account!

DevOps Repo

I created and configured a DevOps repo called AzureFirewall in a DevOps project. There are two files in there:

  • [Template] azurefirewall.json: The file that will deploy the Azure Firewall
  • [Parameter] azurefirewall-parameters.json: The configuration of the firewall, including the rules!

New DevOps Service Connection

DevOps will need a way to authenticate with your Azure tenant and get authorization to use your tenant, subscription, or resource group. You can get real fancy here. I’m going simple and using a feature of DevOps called a Service Connection, found in DevOps > [Project] > Project Settings > Service Connections (under Pipelines):

  1. Click New Service Connection
  2. Select Azure Resource Manager and hit Next
  3. Select Service Principal (Automatic) which is recommended by DevOps.
  4. Here I selected the subscription option and the Azure subscription that my resource groups are in.
  5. I granted access permission to all pipelines.
  6. I named the service connection after my subscription: p-we1net.

As I said, you can get real fancy here because there are lots of options.

New DevOps Pipeline

Now for the fun!

Back in the project, I went to Pipelines and created a new Pipeline:

  1. I selected Azure Repos Git because I’m storing my code in an Azure DevOps (Git) repo. The contents of this repo will be deployed by the pipeline.
  2. I selected my AzureFirewall repo.
  3. Then I selected “Starter Pipeline”.
  4. An editor appeared – now you’re editing a file called azure-pipelines.yml that resides in the root of your repo.

There is an option (instead of Starter Pipeline) where you choose an existing YAML file, maybe one from a folder called .pipelines in your repo.

Edit the Pipeline

Here is the code:

name: AzureFirewall.$(Date:yyyy.MM.dd)

trigger:
  batch: true

pool:
  name: Hosted Windows 2019 with VS2019

steps:
- task: AzureFileCopy@3
  displayName: 'Stage files'
  inputs:
    SourcePath: ''
    azureSubscription: 'p-we1net'
    Destination: 'AzureBlob'
    storage: 'pdevopsstorsjdhf983'
    ContainerName: 'AzureFirewall'
    outputStorageUri: 'artifactsLocation'
    outputStorageContainerSasToken: 'artifactsLocationSasToken'
    sasTokenTimeOutInMinutes: '240'
- task: AzureResourceGroupDeployment@2
  displayName: 'Deploy template'
  inputs:
    ConnectedServiceName: 'p-we1net'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'p-we1fw'
    location: 'westeurope'
    templateLocation: 'URL of the file'
    csmFileLink: '$(artifactsLocation)azurefirewall.json$(artifactsLocationSasToken)'
    csmParametersFileLink: '$(artifactsLocation)azurefirewall-parameters.json$(artifactsLocationSasToken)'
    deploymentMode: 'Incremental'
    deploymentName: 'AzureFirewall-Pipeline'

That is a working pipeline. It is made up of several pieces:

Trigger

This controls how the pipeline is started. You can set it to none to stop automatic executions – in the early days when you’re trying to get this right, automatic runs can be annoying.

Pool

Your pipeline is going to run on a hosted agent. I’m using a stock Microsoft-hosted agent pool based on WS2019. You can supply your own container from Azure Container Registry to run the job in, but that’s getting fancy!

Task: AzureFileCopy

Now we move into the steps. The first task copies the contents of the repo into a storage account. We need to do this because the following deployment task cannot directly access the raw files in Azure DevOps. A task is created with the human-friendly name of Stage Files. There are a few settings to configure here:

  • azureSubscription: This is not the name of your subscription! Ain’t that tricky?! This is the name of the service connection that authenticates the pipeline against the subscription. So that’s my service connection called p-we1net, which I happened to name after my subscription.
  • storage: This is the storage account in my target Azure subscription in the p-devops resource group. My service connection has access to the subscription so it has access to the storage account – be careful with restricting access of the service connection to just a resource group and placing the staging storage account elsewhere.
  • ContainerName: This is the name of the container that will be created in your storage account. The contents of the repo will be downloaded into this container.
  • outputStorageUri: The URI/URL of the storage account/container will be stored in a variable which is called artifactsLocation in this example.
  • outputStorageContainerSasToken: A SAS token will be created to allow temporary secure access to the contents of the container. The token will be stored in a variable called artifactsLocationSasToken in this example.

Task: AzureResourceGroupDeployment

This task will take the contents of the repo from the storage account, and deploy them to a resource group in the target subscription. There are a few things to change:

  • ConnectedServiceName: Once again, specify the name of the service connection, not the Azure subscription (this input plays the same role as azureSubscription in the previous task).
  • resourceGroupName: Enter the name of the target resource group.
  • location: Specify the Azure region that you are targeting.
  • csmFileLink: This is the URI of the template file that you want to deploy. More in a moment.
  • csmParametersFileLink: This is the URI of the parameters file that you want to deploy. More in a moment.
  • deploymentName: I have hard-set the deployment name so I don’t have to clean up versioned deployments from the resource group later. Every resource group has a hard limit (800) on deployment history objects, and with a resource such as a firewall, that could be hit quite quickly.

csmFileLink

There are three parts to the string: $(artifactsLocation)azurefirewall.json$(artifactsLocationSasToken). Together, the three parts give the task secure access to the template file in the staging storage account.

  • $(artifactsLocation): This is the storage account/container URI/URL variable from the AzureFileCopy task.
  • azurefirewall.json: This is the name of the template file that I want to deploy.
  • $(artifactsLocationSasToken): This is the SAS token variable from the AzureFileCopy task.

csmParametersFileLink

There are three parts to the string: $(artifactsLocation)azurefirewall-parameters.json$(artifactsLocationSasToken). Together, the three parts give the task secure access to the parameter file in the staging storage account.

  • $(artifactsLocation): This is the storage account/container URI/URL variable from the AzureFileCopy task.
  • azurefirewall-parameters.json: This is the name of the parameter file that I want to use to customise the template deployment.
  • $(artifactsLocationSasToken): This is the SAS token variable from the AzureFileCopy task.

Pipeline Execution

There are three ways to run the pipeline now:

  1. Do an update (or a merge) to the master branch of the repo thanks to my trigger.
  2. Manually run the pipeline from Pipelines.
  3. Save a change to the pipeline in the DevOps editor if the master is not locked – which will trigger option 1, to be honest.

You can open the pipeline, or historic runs of it, to view/track the execution.

You’ll also get an email to let you know the status of an ended pipeline run.

Happy pipelining!

The Secret Sauce That Devs Don’t Want IT Pros to Know About

Honesty time: that title is a bit click-baitish, but the dev community is using a tool that most IT pros don’t know much/anything about, and it can be a real game changer, especially if you write scripts or work with deployment solutions such as Azure Resource Manager (ARM) JSON templates.

Shot Time

As soon as I say “DevOps” you’re already reaching for that X on the top right or saying “Oh, he’s lost it … again”. But that’s what I’m going to talk about: DevOps. Or to be more precise, Azure DevOps.

Methodology

For years, when I’ve thought about DevOps, I’ve thought “buggy software with more frequent releases”. And that certainly can be the case. But DevOps is born out of the realisation that how we have engineered software (or planned almost anything, to be honest) for the past few decades has not been ideal. Typically, we have some start-middle-end waterfall approach that assumes we know the end state. If this is a big (budget) or long (time) project, then getting half way to that planned end-state and realising that the preconception was wrong is very expensive – it leads to headlines! DevOps is a way of saying:

  • We don’t know the end-state
  • We’re going to work on smaller scenarios
  • We will evolve what we create based on those scenarios
  • The more we create, the more we’ll learn about what the end state will be
  • There will be larger milestones, which will be our releases

This is where the project management gurus will come out and say “this is Scrum” or some other codswallop that I do not care about; that’s the minutia for the project management nerds to worry about.

Devs started leaning in this direction years ago. It’s not something that is always the case – most devs that I encountered in the past didn’t use the platform tools for DevOps such as GitHub or Azure DevOps (previously Visual Studio Team Services). But here’s an interesting thing: some businesses are adopting the concepts of “DevOps” for building a business, even if IT isn’t involved, because they realised that some business problems are very like tech problems: big, complex, potentially expensive, and with an unknown end-state.

Why I Started

I got curious about Azure DevOps last year when my friend, Damian Flynn, was talking about it at events. Like me, Damian is an Azure MVP/IT Pro but, unlike me, he stayed in touch with development after college. I tried googling and reading Microsoft Docs, but the content was written in that nasty circular way that Technet used to be – there was no entry point for a non-dev that I could find.

And then I changed jobs … to work with Damian, as it happens. We’ve been working on a product together for the last 7 months. And on day 1, he introduced me to DevOps. I’ll be honest, I was lost at first, but after a few attempts and then actually working with it, I have gotten to grips with it, and it gives me a structured way to work, plan, and collaborate on a product that will never have an end-state.

What I’m Working On

I’m not going to tell you exactly what I’m working on but it is Azure and it is IT Pro with no dev stuff … from me, at least. Everything I’ve written or adjusted is PowerShell or Azure JSON. I can work with Damian (who is 2+ hours away by car) on Teams on the same files:

  • Changes are planned as features or tasks in Azure DevOps Boards.
  • Code is stored in an Azure DevOps repo (repository).
  • Major versions are built as branches (changes) of a master copy.
  • Changes to the master copy are peer reviewed when you try to merge a branch.
  • Repos are synchronized to our PCs using Git.
  • VS Code is our JSON and PowerShell editor.

It might all sound complex … but it really was pretty simple to set up. Now behind the scenes, there is some crazy-mad release “pipelines” stuff that Damian built, and that is far from simple, but not mandatory – don’t tell Damian that I said that!

Confusing Terminology

Azure DevOps inherits terminology from other sources, such as Git. And that is fine for devs in that space, but some of it made me scratch my head because it sounded “the wrong way around”. Here’s some of the terms:

  • Repo: A repository is where you store code.
  • Project: A project might have 1 or more repos. Each repo might be for a different product in that project.
  • Boards: A board is where you do the planning. You can create epics, tasks and issues. Typically, an Epic is a major part of a solution, a task is what you need to do to make that work, and an issue is a bug to be fixed.
  • Sprint: In managed projects, sprints are a predefined period of time that you assign people to. Tasks are pulled into the sprint and assigned to people (or pulled by people to themselves) who have available time and suitable skills.
  • Branch: You always have one branch called the master or trunk. This is the “master copy” of the code. Branches can be made from the master. For example, if I have a task, I might create a branch from the master in VS Code to work on that task. Once I am complete, I will sync/push that branch back up to Azure DevOps.
  • Pull Request: This is the one that wrecked my head for years. A pull request is when you want to take your changes that are stored in a branch and push them back into the parent branch. From Git’s or DevOps’ point of view, this is a pull, not a push. So you create a pull request to (a) identify the tasks you did, (b) get someone to review/approve your changes, and (c) merge the branch (changes) back into the parent branch.
  • Nested branch: You can create branches from branches. Master is typically pretty locked down. A number of people might want a more flexible space to work in, so they create a branch of master, maybe for a new version – let’s call this the second-level branch. Each person then creates their own third-level branch of that second-level branch. Now each person can work away and do pull requests into the more flexible second-level branch. And when they are done with that major piece of work, they can do a pull request to merge the second-level branch back into the master or trunk.
  • Release: Is what it sounds like – the “code” is ready for production, in the opinion of the creators.

Getting Started

The first two tools that you need are free:

  • Git command line client – you do not need a GitHub account.
  • Visual Studio Code

And then you need Azure DevOps. That’s where the free pretty much stops and you need to acquire either per-user/plan licensing or get it via MSDN/Visual Studio licensing.

Opinion

I came into this pretty open minded. Damian’s a smart guy and I had long conversations with one of our managers about these kinds of methodologies after he attended Scrum master training.

Some of the stuff in DevOps is nasty. The terminology doesn’t help, but I hope the above helps. Pipelines is still a mystery to me. Microsoft shared a doc to show how to integrate a JSON release via Pipelines and it’s a big ol’ mess of things to be done. I’ll be honest … I don’t go near that stuff.

I don’t think that Damian and I could have collaborated the way we have without DevOps. We’ve written many thousands of lines of code, planned tasks, and fought bugs. It’s been done without a project manager – we discuss/record ideas, prioritize them, and then pull (assign to ourselves) the tasks when we have time. At times, we have worked in the same spaces and been able to work as one. And importantly, when it comes to pull requests, we peer review. The methodology has allowed other colleagues to participate, and we’re already looking at how we can grow that more in the organization to bring more skills/experience into the product. Without (Azure) DevOps we could not have done that … certainly storing code on some file storage in the cloud would have been a total disaster and lacked the structure that we have had.

This Subscription Is Not Registered With The Microsoft.Insights Resource Provider

It is possible to get the error “This subscription is not registered with the Microsoft.Insights resource provider”, sometimes with a new Azure subscription. The latest example I had of this was when using Azure Monitor.


A provider is an element in the backend of Azure Resource Manager – think of it as wait staff in the Azure restaurant that takes your order and passes it back to the chef who figures out how to make it happen. Sometimes, a provider that is normally registered with a subscription … isn’t. You can fix this with PowerShell:

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights

It can take several minutes for the provider to register. You can check the status with:

Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights

Or you can just use the Azure Portal. Browse to Subscriptions > select your subscription > Resource Providers (under Settings). Here you can see the registration status of the provider, and you can register the provider in the GUI:


Click Register and the status will switch from NotRegistered to Registering. Give it 5-15 minutes, refresh the blade, and see if it’s registered. Your problem will be fixed then.


Did you Find This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in London on July 5-6, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Azure Template DSC Never Starts

In this post, I’ll explain how I figured out a problem where I couldn’t get the Azure Resource Manager (ARM) JSON template DSC extension to execute. The problem below might explain why your DSC extension never appears to start, assuming that you have uploaded your DSC pack (zip file) to an accessible Internet location, and enter the URL and module names correctly in your template.

In my scenario, I wanted to deploy a domain controller as a VM on a virtual network. Normally, when you do this you would configure the DNS settings of the VNet to point at the desired static IP of the DC. For example, you’d create a NIC for the DC, set that NIC to have a static IP (10.0.0.4, for example), and then edit the DNS settings of the VNet to use the IP address of the DC’s NIC. In an ARM template, the resource dependencies would order that process as below:

[Image: FailedDcDscAzureJSON]

I configured my ARM template as above and everything was deploying … or so it appeared. The DSC extension appeared in the Portal and had a status of Created. However, when I used PowerShell to query things, I found it still had a status of Creating, and when I logged into the DC VM I found that nothing had happened. I don’t know how many hours I spent trying to figure out what I had done wrong. My emphasis on DNS above should give you a clue.

The virtual network had been configured to use the VM as its own DNS server, but the VM was not yet a DNS server because the DSC extension hadn’t added the roles or done the DCPROMO. So when I tried to download the DSC pack (zip file) from the Internet, it wasn’t downloading; in fact, I couldn’t resolve any DNS names. I went looking at some of the sample ARM templates that do a DCPROMO and noticed a trend. They did the following using nested templates:

[Image: WorkingDcDscAzureJSON]

What changed? A nested template is used to deploy the virtual network using the default Azure DNS addresses (no configuration required). Now the new DC VM can access Internet resources via DNS names – and the DSC pack can be downloaded from the Internet and applied – adding the roles and executing the DCPROMO to make the machine a domain controller. The final step is to fix up the virtual network – so another nested template is executed to modify the VNet’s DNS settings to use the static IP address of the DC.
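
To illustrate that final step, here is a rough sketch (with invented resource names and an assumed DSC extension name of ConfigureDC) of a nested deployment that re-applies the virtual network with its DNS servers set to the DC’s static IP, depending on the DSC extension so it only runs after the DCPROMO:

{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2019-10-01",
  "name": "UpdateVNetDNS",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines/extensions', 'dc01', 'ConfigureDC')]"
  ],
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": [
        {
          "type": "Microsoft.Network/virtualNetworks",
          "apiVersion": "2019-11-01",
          "name": "p-vnet",
          "location": "[resourceGroup().location]",
          "properties": {
            "addressSpace": {
              "addressPrefixes": [ "10.0.0.0/16" ]
            },
            "subnets": [
              {
                "name": "Servers",
                "properties": {
                  "addressPrefix": "10.0.0.0/24"
                }
              }
            ],
            "dhcpOptions": {
              "dnsServers": [ "10.0.0.4" ]
            }
          }
        }
      ]
    }
  }
}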


Azure: PowerShell versus ARM Templates

In this post, I’m going to make the PowerShell acolytes angry (not hard) by explaining why PowerShell is too slow, and why ARM/JSON is the best way to deploy things in Azure.

The PowerShell Experience

Let’s imagine that you & your significant other go into a restaurant; you order a steak and your other wants to order something else. How does the ordering process go? Is it something like this? Let’s start with your order:

  • Customer: Waiter!
  • <Wait 1 minute>
  • Waiter: What would you like sir?
  • Customer: Could you ask the chef to go to the fridge?
  • <Wait while the chef is asked to go to the fridge>
  • Waiter: Yes?
  • Customer: Would you ask the chef to open the fridge?
  • <Wait while the chef opens the fridge>
  • Waiter: Yes?
  • Customer: Would you ask the chef to take a steak out of the fridge?
  • <Wait while the chef takes a steak out of fridge>
  • Waiter: Yes?
  • Customer: Please ask the chef to put a pan on the cooker.
  • <Wait while the chef puts a pan on the cooker>
  • Waiter: Yes?

You see what’s going on here? Meanwhile your significant other is getting no love from the restaurant. Ouch!

With PowerShell you describe the deployment process, one step at a time, connecting each and every dot. The deployment is serialized, with no parallelism unless you use PowerShell features to run parallel jobs. The result isn’t much faster than you doing all the clicking for yourself.

The ARM Experience

I like to describe ARM as a waiter, and the Azure resource providers as the kitchen cooks. How does the order go?

  • Customer: Waiter, I would like a salmon dish for my wife and a steak for myself.
  • Waiter: Yes, sir, in the meantime, would you like a drink?

That’s a bit better, right?

ARM or JSON templates describe the result, not the process. Once you submit the deployment, ARM divides up the job and orders the deployment based on your dependencies. That means that the deployment can be parallelized. If I need 100 web servers, all 100 will be deployed at once, not in some 1..100 loop, one at a time (or 5 at a time if you are clever).
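
As a small illustration of that parallelism, here is a sketch of ARM’s copy element. I’ve used storage accounts to keep it short, but the same copy block on a VM resource is how you would fan out those 100 web servers; the names are invented, and copies run in parallel by default:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[concat('webdata', copyIndex())]",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "Standard_LRS"
      },
      "kind": "StorageV2",
      "copy": {
        "name": "webLoop",
        "count": 100
      }
    }
  ]
}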

Best of Both Worlds

For some of the training that I do at work, I deploy the training lab in Azure as follows:

  • A PowerShell script asks me how many attendees there are, and then runs a glorified 2-line loop.
  • The loop iterates through different subscriptions, adding a resource group and then doing an ARM deployment.

In other words, PowerShell automates my very fast ARM deployments.

PowerShell Still Required

PowerShell is still very useful for some fiddly deployment things that don’t have ARM options, or are once-offs and don’t have a GUI option. To be honest, I do use GUI for most of my once-offs because it is convenient and gets the end result faster than researching/tweaking/fixing PowerShell examples. When it comes to learning about settings and troubleshooting, PowerShell can be pretty awesome.

But PowerShell is much slower than ARM for deployments. Now let’s hear the screams of outraged PowerShellers!

Was This Post Interesting?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.