
Aidan Finn, IT Pro

A blog covering Azure, Hyper-V, Windows Server, desktop, systems management, deployment, and so on …


Tag: Storage Account

Securing A Storage Account Static Website Using VNet Web Application Firewall


Alternative title: Using the Azure Application Gateway to do content redirection with a storage account static website in a secure way.

I was looking at a scenario where I needed to find a platform method of setting up a website that would:

  • Be cost-effective
  • Be able to easily receive content directly from Azure virtual machines
  • Be secure

This post will describe the solution.

The Storage Account

A resilient storage account is set up with a static website. The content can be uploaded to the $web container. The firewall is enabled so that only traffic from the virtual machine subnet and from the Azure Application Gateway/Web Application Firewall (WAF) subnet is allowed. This means that you get a 404 error when you try to access the website from any other address space.
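
For illustration, here is a minimal PowerShell sketch (Az module) of that storage configuration; the resource group, storage account, VNet, and subnet names are hypothetical placeholders:

# Enable the static website feature; content goes into the $web container
$ctx = (Get-AzStorageAccount -ResourceGroupName "web-rg" -Name "mystaticsite").Context
Enable-AzStorageStaticWebsite -Context $ctx -IndexDocument "index.html" -ErrorDocument404Path "404.html"

# Deny everything by default, then allow only the VM and WAF subnets
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "web-rg" -Name "mystaticsite" -DefaultAction Deny
$vnet = Get-AzVirtualNetwork -ResourceGroupName "web-rg" -Name "web-vnet"
$vmSubnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "vm-subnet"
$wafSubnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "waf-subnet"
Add-AzStorageAccountNetworkRule -ResourceGroupName "web-rg" -Name "mystaticsite" -VirtualNetworkResourceId $vmSubnet.Id
Add-AzStorageAccountNetworkRule -ResourceGroupName "web-rg" -Name "mystaticsite" -VirtualNetworkResourceId $wafSubnet.Id
# Note: subnet rules only take effect on subnets with the Microsoft.Storage service endpoint enabled (see Solution 1)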

The WAF

A WAFv2 is set up. The WAF subnet is protected by an NSG. The WAF is controlled by a WAF policy. Certificates for custom domains are stored in a Key Vault; the WAF uses a user-assigned managed identity that is granted Get/List rights to secrets/certificates in the Key Vault's access policy. A multi-site HTTPS listener is set up for the static website using a custom domain name:

  • The HTTP setting will handle the name translation from the custom domain name to the default storage account URI.
  • The Key Vault will store the certificate for the custom domain name.
  • There is full end-to-end encryption thanks to the storage account using a Microsoft-supplied certificate for the default storage account URI.

The HTTP setting in the WAF will be set up as follows (see the sketch after this list):

  • HTTPS
  • Use Well-Known CA Certificate (Yes)
  • Override with a new hostname: the default URI of the static website
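
As a rough sketch (Az module), the backend HTTP setting could be built like this; the setting name and the default static website URI are hypothetical:

# HTTPS to the storage endpoint, overriding the host header with the default static website URI
$httpSetting = New-AzApplicationGatewayBackendHttpSetting -Name "static-site-https" `
    -Port 443 -Protocol Https -CookieBasedAffinity Disabled `
    -HostName "mystaticsite.z6.web.core.windows.net"
# No trusted root certificate upload is needed because the storage endpoint uses a well-known CA certificate.
# The setting is then attached to the routing rule for the multi-site listener.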

Solution 1 – Service Endpoint

In this case, the WAF subnet has a Microsoft.Storage service endpoint enabled. This means that traffic from the WAF to the storage account hosting the static website falls through a routing “trap door” and travels across the Azure private backbone to the storage account. This keeps the traffic relatively private and reduces latency.
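
A minimal sketch (Az module) of enabling that service endpoint on the WAF subnet, with hypothetical VNet, subnet, and address prefix values:

# Add the Microsoft.Storage service endpoint to the WAF subnet and commit the change
$vnet = Get-AzVirtualNetwork -ResourceGroupName "web-rg" -Name "web-vnet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "waf-subnet" `
    -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage"
$vnet | Set-AzVirtualNetwork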

The backend pool of the WAF is the FQDN of the static website.

Pros:

  • Easy to set up.

Cons:

  • Service Endpoints appear to be a dead-end technology
  • It will require the Microsoft.Storage Service Endpoint to be configured in every subnet that needs to interact with the website/storage account.

Solution 2 – Private Link/Private Endpoint

In this design, the Service Endpoint is dropped and replaced with a Private Endpoint associated with the web (static website) endpoint of the storage account. This Private Endpoint can be in the same VNet as the WAF or even in a different (peered) VNet from the WAF.

The only change to the WAF configuration is that the backend pool must now be the private IP address of the Private Endpoint. Now traffic will route from the WAF subnet to the storage account subnet, even across peering connections.
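
A hedged sketch (Az module) of creating that Private Endpoint for the storage account's web (static website) sub-resource; all of the names and the location are hypothetical:

$storage = Get-AzStorageAccount -ResourceGroupName "web-rg" -Name "mystaticsite"
$plsConnection = New-AzPrivateLinkServiceConnection -Name "static-site-plsc" `
    -PrivateLinkServiceId $storage.Id -GroupId "web"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "web-rg" -Name "web-vnet"
$peSubnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "endpoint-subnet"
New-AzPrivateEndpoint -ResourceGroupName "web-rg" -Name "static-site-pe" -Location "westeurope" `
    -Subnet $peSubnet -PrivateLinkServiceConnection $plsConnection
# The WAF backend pool then targets the private IP address of the endpoint's network interface.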

Pros:

  • Private Link/Private Endpoint is the future for this kind of connectivity.
  • There is no need to configure subnets with anything – they just need to route to the storage account (to modify content) or the WAF (access content).

Cons:

  • A little more complex to set up, but the effort is returned in the long-run with less configuration required.

There is no support for inbound NSG rules for the Private Endpoint but:

  • That is coming in the future
  • The storage account firewall is rejecting unwanted direct traffic
  • The NSG in front of the WAF provides Layer-4 security and the WAF provides Layer-7 security

Want to Learn More?

Why not join me for an ONLINE 1-day training course on securing Azure IaaS and PaaS services? Securing Azure Services & Data Through Azure Networking is my newest Azure training course, designed to give Level 400 training to those who have been using Azure for a while. It dives deep into topics that most people misunderstand and covers a lot of material similar to the above content.

Author: AFinn | Posted on June 15, 2020 | Categories: Azure | Tags: Azure, Networking, Private Endpoint, Private Link, Security, Static Website, Storage Account, Web Application Firewall, Web Application Firewall v2, Web Application Gateway, Web Application Gateway v2

Inter-Region Resiliency for Zone Redundant Storage


Microsoft has added two new kinds of resiliency to general purpose v2 (GPv2) storage accounts called Geo-Zone Redundant Storage (GZRS) and Read-Access Geo-Zone Redundant Storage (RA-GZRS).

The Old ZRS

ZRS, when it originally appeared in Azure several years ago, was a form of general purpose v1 (GPv1) storage account replication with a complex definition. It kept 3 copies of your data: 2 in the region of your choice, and a third either in the same region or in a nearby region. But this was before Azure regions had zones as we know them today.

The concept of ZRS was to get over the availability limitations of LRS and GRS:

  • LRS keeps 3 synchronous copies of the storage account on a single storage cluster, in a single room (co-lo), in a single data centre, in a single region. If that one cluster, co-lo, or data centre goes down, then you lose the storage account until/if it returns.
  • GRS is an extension of LRS, keeping an additional 3 asynchronous copies of the storage account in the paired region (secondary region) of the primary region (the region you deployed the storage account into). However, you cannot use the failover replicas until Microsoft declares a failover, which is an unrecoverable failure of the primary; this event has never occurred, but there have been plenty of local outages that made the accessible (LRS) copies unavailable for periods of time.
  • RA-GRS extends GRS by making the additional copies in the paired region available for read access, useful if you have a custom app that only needs to read the data.

However, the old ZRS still didn't understand how to divide its copies across independent zones, even if it spread the data around 2 to 3 data centres in the same region; those data centres could have had shared dependencies.

Availability Zones

Microsoft is slowly adding availability zones to its Azure regions. When a region (a cluster of closely located data centres that you deploy resources into) is broken up into availability zones, Microsoft creates 4 zones that have completely independent power, networking, etc. The idea is that if one zone goes down because of an internal infrastructure failure, it should have no effect on production systems in the other zones in the same region. As a result, we can get higher SLAs by using zone-redundant deployments.

However, there is a cost. Some resources require higher SKUs, there is a micro inter-zone communications cost, and latency between tiers of a service or services can be increased by using more than one zone.

Note that a region divides its data centres into 4 zones. At any one time, you will see 3 of those zones, selected for you (“round robin” or some other algorithm) and labelled as 1, 2, and 3.

The New ZRS

When Microsoft launched GPv2, they did two things:

  • They shared an end-of-life date for ZRS in GPv1
  • They introduced a new form of ZRS in GPv2

The new ZRS uses the availability zones of an enabled region to place 3 copies of your storage account data across three different storage clusters, across 3 different data centres that do not have shared dependencies. Now if two of those data centres, co-los, or storage clusters go down, the storage account remains available.

Adding Geo-Redundancy To ZRS

It would make sense for ZRS to be used, but it does not have geo-redundancy. So, just as it did with LRS, Microsoft is adding (in preview today in US East) two geo-redundant options:

  • GZRS or Geo-Zone Redundant Storage: ZRS plus 3 asynchronous copies in the paired region.
  • RA-GZRS or Read-Access Geo-Zone Redundant Storage: GZRS where the asynchronous copies can be used for read operations only.

Note that:

  • The replicas in the paired region are stored in LRS, not ZRS. And that means that …
  • The paired region does not need to be in the preview for GZRS or RA-GZRS and it does not need to support availability zones – only the primary region does.

Which means that more people will be able to use GZRS and RA-GZRS.
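
If you want to try it, here is a minimal sketch (Az module) of creating GZRS and RA-GZRS GPv2 accounts; the resource group, account names, and region are placeholders:

New-AzStorageAccount -ResourceGroupName "storage-rg" -Name "mygzrsdata" -Location "eastus" -Kind StorageV2 -SkuName Standard_GZRS
New-AzStorageAccount -ResourceGroupName "storage-rg" -Name "myragzrsdata" -Location "eastus" -Kind StorageV2 -SkuName Standard_RAGZRS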

Is ZRS the New LRS?

For those regions where ZRS is supported, and where GZRS/RA-GZRS will be added, would it make sense to use ZRS as your starting point? My default answer is “yes”, but you need to check that your services will support it. For example, I use ZRS for certain things, but for others, such as virtual machine diagnostics, I cannot, because the IaaS diagnostics agent will not support ZRS! I guess the team responsible for that is more focused on driving revenue into Azure Monitor Logs (Log Analytics) by adding support for Workspace (preview today) in addition to LRS/GRS storage.

Author: AFinn | Posted on August 15, 2019 | Categories: Azure | Tags: Azure, General Purpose v2 Storage Account, Geo-Zone Redundant Storage, GPv2, GZRS, RA-GZRS, Read-Access Geo-Zone Redundant Storage, Storage, Storage Account, Zone Redundant Storage, ZRS

Amsterdam – Starting Azure Infrastructure Course


I am happy to announce that our Cloud Mechanix venture is bringing my custom-written Starting Azure Infrastructure course to Amsterdam on April 19-20.

This course is intended for IT professionals and developers who wish to start working with, or improve their knowledge of, Azure virtual machines. The course starts at the very beginning, explaining what Azure is (and isn't) and the administrative concepts, and then works through the fundamentals of virtual machines before looking at more advanced topics such as security, high availability, storage engineering, backup, disaster recovery, management/alerting, and automation. The frequently updated content is focused on real-world applications: best practices, security, stability, control, and uptime. I've also added lots of useful links to additional reading & how-to's, as well as my own tips and tricks.

Cloud Mechanix has gotten off to a great start. We quickly sold 11 of 10 (!) seats in London for Feb 22-23. The room we booked actually can hold 24 people in classroom format. So we expanded that to 20 seats, and 19 of those seats were booked by early January – one seat is still available. That gave me some confidence to expand our operations, especially with some of the market knowledge I have about Azure’s success in Europe. So the next stop is Amsterdam!

The Location

Why Amsterdam? Obviously, the Netherlands is a big economy, but Amsterdam is also one of Europe's hubs – you can get to Amsterdam with a direct & cheap flight from almost every major city in Europe, and it's within easy driving distance of a large part of western Europe.

The venue is the Radisson Blu hotel at Schiphol. This hotel:

  • Is on public transport.
  • Has a car park, and we are covering 25% of the day rate.
  • Is accessible by direct flights from large parts of Europe (and the world) thanks to being near Schiphol Airport – 866 NOK return from Oslo, 589 DKK from Copenhagen, £105 from London City, €150 from Rome.
  • Is just 9 minutes from the airport by taxi.
  • Provides a hotel shuttle to/from the airport.
  • If you wish to stay in the hotel, check 3rd party sites for prices as low as €114 per night.

All travel & accommodation prices were searched for on Jan 20th for the dates of Apr 18-20 using Skyscanner and Trivago.

The Class Agenda

This course spans two days, running on April 19-20, 2018. The agenda is below.

Day 1 (09:30 – 17:00):

  • Introducing Azure
  • Tenants & subscriptions
  • Azure administration
  • Admin tools
  • Intro to IaaS
  • Storage
  • Networking basics

Day 2 (09:30 – 16:00):

  • Virtual machines
  • Advanced networking
  • Backup
  • Disaster recovery
  • JSON
  • Diagnostics
  • Monitoring & alerting
  • Security Center

By the end of this course you should have new/additional knowledge to help you deploy business services using Azure virtual machines, with best practices on performance, security, and availability … in the real world.

Past Feedback

Here are some recent comments from courses that I have written and delivered:

“Aidan covers all the information and workarounds so that you know what not to do, rather than having to learn the hard way from your mistakes. Also happy to answer question and off-topic questions.”

“I found it very useful for my job. I especially liked that we covered aspects that weren’t in the course but provided useful context, e.g. applying learnings like MS Flow and MigAz. I learned a lot about networking – not just the tools but the aspects that the tools are applied to.”

“Excellent as always.”

Costs & Registration


You can read all about the course, venue, terms & conditions, and register here.

Further Dates & Private Training

Future dates will be announced in the EU. I’m investigating running training in the USA – accounting/tax is the challenge there! If you want private runs of this training, then please contact me.

Author: AFinn | Posted on January 22, 2018 | Categories: Azure | Tags: ASR, Azure, Azure Backup, Azure Monitor, Azure Portal, Backup, Cloud Mechanix, DR, IaaS, JSON, Network Security Groups, Networking, Security, Starting Azure Infrastructure, Storage, Storage Account, Training, Virtual Machines

Hot/Cold/Archive Blob Tiering in Azure


You have the ability to (manually or programmatically) tier blobs (files) in a storage account. This post will explain how.

The Theory

There are three tiers of blob storage in Azure:

  • Hot: The cheapest to access, but the most expensive to store.
  • Cool: Medium price storage, but expensive to access.
  • Archive: Extremely cheap per GB storage (~$2.05 per TB per month)

Archive storage is unique because it does not offer read performance – you cannot download or directly access blobs (files) from archive storage. You can only send items from hot/cool storage to archive storage, and then “rehydrate” the blobs again by restoring them to hot/cool storage – then you can download or read the blobs. Hot and cool storage have a read latency of milliseconds, but rehydrating a blob from archive storage can take up to 15 hours. 15 hours is alright because these are files that aren’t even cool any more – they’re files that you’re keeping for legal reasons. In a legal scenario, a retrieval isn’t a rush operation because you’ll have days/weeks to comply with requests/orders.

Cool and archive storage both have minimum storage durations. For example, if you place a file into cool storage, Azure expects you to keep that file there for a minimum of 30 days. If you delete it or move it out of the tier after 5 days, then there's a pro-rated minimum storage charge for 25 days (30-5). Archive storage expects you to keep files in that tier for at least 180 days. If I delete or move a file after 5 days, then there is a pro-rated charge for 175 days (180-5). In other words, only put things into cool or archive storage if they are either being used infrequently (cool) or not at all (archive).

At the moment, tiering is a manual or scripted/programmed action. There is no auto-tiering of blobs but, at Build (earlier in 2017), Microsoft did say this was something they wanted to try to do after general availability (yesterday).
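
For example, a scripted tier change with the current Az storage cmdlets might look like this; the resource group, account, container, and blob names are hypothetical:

$ctx = (Get-AzStorageAccount -ResourceGroupName "storage-rg" -Name "mygpv2account").Context
$blob = Get-AzStorageBlob -Container "archive-data" -Blob "backup.zip" -Context $ctx
# Send the blob to the archive tier; use "Hot" or "Cool" to change/rehydrate the tier instead
$blob.ICloudBlob.SetStandardBlobTier("Archive")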

Storage Accounts

There are 3 kinds of storage account today. You can deploy cool and hot blob storage accounts. Creating a new blob storage account that is cool sets the default storage tier to cool, and creating a hot blob storage account sets the default tier to hot. You can switch the tiering of individual blobs between hot, cool, and archive as you wish. Blob storage accounts were a stepping stone to tiering, as you'll see in a moment.

General Purpose storage accounts (what used to be just “storage accounts”) are now called General Purpose v1 (GPv1) storage accounts. GPv1 storage accounts support blob storage, but also page blobs (un-managed disks or VHDs), Azure Files (Azure File Sync and file storage for legacy apps), queues (PaaS messaging), and tables (NoSQL, where performance metrics are kept). GPv1 storage accounts do not offer tiering for blobs.

Now we have General Purpose v2 storage accounts, which do offer blob tiering, at the same prices as hot/cool blob storage accounts. The per GB price of blob storage is slightly less than that of GPv1; however, the blob storage transaction costs are quite a bit higher than GPv1.

Note: those of you running Azure virtual machines should now be using managed disks, which are not kept in storage accounts, so your use of storage accounts should already be reduced quite a bit.

GPv2 storage accounts have higher transaction costs than GPv1 storage accounts, 125x in some cases. This could have a big, bad impact on your bill. You will be better off sticking with GPv1 storage accounts for some Azure services, such as Azure Site Recovery (ASR).

The Practice

The process of creating a new storage account has changed slightly. Microsoft recommends, and has set as the default, that we only ever create GPv2 storage accounts. You'll note that GPv2 is the default kind, and you have the option to set the default blob tier to hot or cool. Choose hot if you'll access the blobs frequently, and cool if you'll access the majority of blobs infrequently.
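
The same thing can be scripted; a sketch with the current Az cmdlets, using hypothetical names:

# Create a GPv2 account with hot as the default blob tier (use -AccessTier Cool for infrequently accessed blobs)
New-AzStorageAccount -ResourceGroupName "storage-rg" -Name "mygpv2account" -Location "westeurope" -SkuName Standard_LRS -Kind StorageV2 -AccessTier Hot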


Existing GPv1 (not blob) storage accounts can be easily upgraded to GPv2. Open the GPv1 storage account, browse to Configuration, and hit Upgrade.

I uploaded a blob to a container in a GPv2 storage account. Because the account's default tier was hot, the blob (file) was placed in the hot tier.


If I click the blob, the properties of the blob appear. Note the ability to change the tier of the blob at the bottom of the blade.


I can switch the blob to cool by selecting Cool and clicking Save (at the top). Now if I refresh the view of the container, the blob is in the cool tier.


To move the blob to the archive tier, I'll open the properties again, select the Archive tier, and click Save. I am warned that the blob will be inaccessible while it resides in the archive tier. If I want to use the blob again, I will have to rehydrate it back to the Cool or Hot tier. I'm moving the blob out of the cool tier on day 0 of it being in the cool tier, so there will be a 30-day minimum duration charge.


Now if I hit Refresh, the tier of the blob has changed to Archive.


Note that if I open the blob properties, the option to download this archived blob is greyed out/disabled. I cannot directly access archived blobs, so to download the file, I must rehydrate the blob back to either the cool tier or the hot tier. That's easily done by selecting a tier (hot in my example) and clicking Save. I'm doing this on day 0 of this blob's presence in the archive tier, so there will be a 180-day minimum duration charge.

This rehydration may take up to 15 hours. A notification in the properties of the blob informs you that rehydration is taking place.


Summary

And that's it! The process is pretty easy. I don't envision anyone changing the tiers of lots of individual blobs up and down manually, but I can imagine software taking advantage of this tiering process and doing it on your behalf. Maybe Azure Backup, 3rd party tiering systems, or StorSimple might take advantage of this over time.

Was This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Author: AFinn | Posted on December 14, 2017 | Categories: Azure | Tags: Archive, Azure, Blob, Cold, Hot, Storage, Storage Account, Tiering

Azure General Purpose v2 Storage Accounts


Alongside the general availability of blob tiering (hot, cool, and archive), Microsoft has launched a new kind of storage account, called the General Purpose v2 storage account.

Before, we had two kinds of storage account:

  • General Purpose: Now called the General Purpose v1 (GPv1) storage account, this offered blob, page blob, queue, and table storage. It was the main place for storing virtual hard disks (un-managed disks) until managed disks were launched.
  • Blob: Blob storage accounts could be deployed as hot or cool tiers. The hot tier offered the cheaper access rates, but per-GB capacity billing was priced between General Purpose v1 storage accounts and cool blob storage accounts. Cool blob storage accounts offered the cheapest per GB capacity, but had the highest charge for accessing blobs. Blob storage accounts can only store blobs.

Now we have the General Purpose v2 storage account, which takes the features of the blob storage accounts and combines them with the general purpose storage account, plus tiering.

Tiering means that we can move a blob between 3 tiers within the same storage account (not automatic tiering today):

  • Hot: Lowest access rates, most expensive per GB capacity.
  • Cool: Still low latency, but cheap per GB capacity at higher access rate.
  • Archive: The cheapest per GB capacity (~$2.05 per TB per month!), but it takes up to 15 hours to move a blob back to cool/hot where it can be accessed again.


Today, when you create a new storage account, General Purpose v2 is the default option and is what you should choose, according to Microsoft. Your blob storage accounts live on, but you should stop deploying them. You can choose to upgrade from General Purpose v1 to v2 (open the storage account, go to Configuration, click Upgrade), but you should understand what effect this will have on your bill.
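
The upgrade can also be scripted; a sketch with the current Az cmdlets (the account names are hypothetical):

# Upgrade a GPv1 account to GPv2 and set the default blob tier - note that the upgrade cannot be reversed
Set-AzStorageAccount -ResourceGroupName "storage-rg" -Name "mygpv1account" -UpgradeToStorageV2 -AccessTier Hot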


The per GB cost of blobs will decrease. For example, LRS in v1 costs $0.024 per GB in East US for the first 1 TB. In v2, the cost is $0.0208 per GB. However, the blob access rates are going up. For all of my customers today, these access rates barely even register in the monthly bill! For example, Write Operations in v1 are $0.0004 per 10,000 and this goes up to $0.05 in v2. If you do the math, a customer with $1 of blob write charges in v1 will have a bill of $125 in v2.

Note that un-managed disk prices (page blobs) don’t appear to be changing.

A new charge called “early deletion” is being introduced for cool (GPv2) and archive tiers. This is also known as “minimum storage duration”. If you delete/move a blob from cool (GPv2) or archive before this minimum time, then there is a pro-rated charge. For example, if you move a blob from archive tier after 170 days, then there is a prorated charge for 10 days of storage in that tier (180 minimum – 170).

This should be “fun” to estimate for customers!

Was This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Author: AFinn | Posted on December 13, 2017 | Categories: Azure | Tags: Archive, Azure, Storage, Storage Account, Tiering

Azure VM Could Not Start–Missing Boot Diagnostics Storage Account


I recently had an issue where starting an Azure virtual machine would fail. The reason given was that the Boot Diagnostics storage account was missing. In this post, I'll show you my workaround.

As part of my recent upgrade/migration of the VM hosting this site, I did a big clean-up and re-organization of resources. One piece of that was to deploy a dedicated storage account for diagnostics and logging. I configured the VM Agent to use this new diagnostics storage account, and I changed Boot Diagnostics to use it too – both updates appeared to succeed because the notifications said the changes were successful. I then deleted the old storage account because it was no longer used.

Later in the day, I shut down the VM to resize it. Then I attempted to boot the VM up, and the start job failed. That caused an increase in my heart's BPM! I explored the error and it said that the storage account for Boot Diagnostics could not be found.

FYI, Boot Diagnostics periodically captures a BMP file of a Windows Server VM's “console” and stores it in a container in the storage account. I don't like that failing to capture a screenshot can prevent a VM from starting.

I checked Boot Diagnostics, and … the old storage account that I had replaced and deleted was still specified. I changed it to the new storage account, and saved the change. Once again, I got a notification that the save was successful.

I attempted to start the VM again, and I got the same error. I went back into Boot Diagnostics, and the old storage account was still specified. I'd seen this happen before and I knew how to work around it:

  1. Disable Boot Diagnostics in the VM and save the change.
  2. Start the VM – now it will start.
  3. Enable Boot Diagnostics in the VM, select the new storage account, and save the change.


This gets the VM running as quickly as possible, and gets Boot Diagnostics to use the correct storage account.
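
For reference, the same workaround can be scripted; a sketch with the current Az cmdlets, using hypothetical resource names:

$vm = Get-AzVM -ResourceGroupName "web-rg" -Name "myvm"
Set-AzVMBootDiagnostic -VM $vm -Disable                  # step 1: disable Boot Diagnostics
Update-AzVM -ResourceGroupName "web-rg" -VM $vm
Start-AzVM -ResourceGroupName "web-rg" -Name "myvm"      # step 2: the VM now starts
Set-AzVMBootDiagnostic -VM $vm -Enable -ResourceGroupName "web-rg" -StorageAccountName "newdiagstorage"
Update-AzVM -ResourceGroupName "web-rg" -VM $vm          # step 3: point Boot Diagnostics at the new storage account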

Author: AFinn | Posted on September 4, 2017 | Categories: Azure | Tags: Azure, Boot Diagnostics, Diagnostics, Storage Account, Virtual Machines

How I Upgraded This VM To Azure Resource Manager


In this post, I will explain how I upgraded the Azure virtual machine, that this site is hosted on, from ASM (Azure Service Management, Classic or Azure v1) to ARM (Azure Resource Manager, Resource Manager or Azure v2).

FYI, although I had no plans for changing subscriptions, this migration is also the first step for moving resources from one subscription to another, e.g. Credit Card/Direct to CSP, Open to CSP, or EA to CSP.

Background

I've been running aidanfinn.com on Azure for a few years now. It gave me a vested interest in Azure, beyond my day job where I teach and sell Azure services to MS partners. Over the years, I've applied some of the things that I've learned, including one time where MySQL blew up so badly that I had to use Azure Backup (then in preview) to restore the entire VM – this is why you'll see the name “aidanfinn02” later in the example.

The VM was running in ASM. ASM is effectively deprecated, but I never had a chance to migrate it to ARM. That changed, and I decided to make the switch. In the process, I decided that I wanted to clean up resource groups, migrate to managed disks, and maybe change a few other things.

I had two options available to me:

  • MigAz: Great community tool that allows a complete change, including renaming. There is downtime to move the disks.
  • The “platform supported” method: Using official cmdlets to do an ASM-to-ARM migration, with no downtime during the migration.

I went with the platform supported method. Long-term, it means I have more work to do for renaming, etc., but I've never had “my own” stuff to move using this method, so I wanted to do it and document it with a real example.

Note that the platform supported method has a few approaches. My virtual machine was in a virtual network, so I opted to move the entire virtual network and its associated contents. This is cool because all VMs (I only have one) are moved, endpoints are converted into NAT rules in a load balancer, and any reserved cloud service IPs are converted into static public IPs.

Note, you’ll need to download the latest Azure PowerShell modules (and reboot if it’s your first install) to do either method.

FYI, the below includes copy/pastes of the actual cmdlets that I used. The only thing I have modified, for obvious reasons, is the subscription ID.

Register the Migration Provider

You’ll need to register the provider that allows you to do ASM-ARM migrations. This is done on a per subscription basis. You’ll log into your subscription using ARM:

Login-AzureRmAccount

If your tenant has more than one subscription, like mine does, then you need to query the subscriptions:

Get-AzureRmSubscription

This allows you to get the ID of the subscription that you will sign into, so you can select it as the current subscription to work on:

Select-AzureRmSubscription -SubscriptionId 1234567f-a1b2-1234-1a2b-1234ab123456

Next, you register the migration provider:

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate

This registration can take up to five minutes. You can query the status of the registration:

Get-AzureRmResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate

You can continue once you get a registered status:

ProviderNamespace : Microsoft.ClassicInfrastructureMigrate

RegistrationState : Registered

ResourceTypes : {classicInfrastructureResources}

Locations : {East Asia, Southeast Asia, East US, East US 2…}

Validation

The migration has three phases. The first allows you to validate that you can migrate your machines cleanly. Common issues include unsupported extensions, extension versions, classic backup vault registrations, endpoint ACLs, etc.

To do the validation, you need to sign in using ASM:

Add-AzureAccount

Once again you need to make sure you’re working with the right subscription:

Get-AzureSubscription

And then make that subscription current:

Select-AzureSubscription -SubscriptionId 1234567a-a1b2-1234-1a2b-1234ab123456

I migrated my virtual network, so I queried for the Azure name of the virtual network.

Get-AzureVnetSite | Select -Property Name

My virtual network was called aidanfinn02 so I saved that as a variable – it’ll be used a few times.

$vnetName = "aidanfinn02"

Then I ran the validation against the virtual network, saving the results in a variable called $validate:

$validate = Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName

I could then see the results:

$validate.ValidationMessages

There were two issues:

  • A faulty monitoring (Log Analytics) extension (guest OS agent)
  • The VM was being backed up by a classic Azure Backup vault.

ResourceType : VirtualNetwork

ResourceName : aidanfinn02

Category : Error

Message : Virtual Network aidanfinn02 has encountered validation failures, and hence it is not supported for migration.

VirtualMachineName :

ResourceType : Deployment

ResourceName : aidanfinn02

Category : Error

Message : Deployment aidanfinn02 in Cloud Service aidanfinn02 has encountered validation failures, and hence it is not supported for migration.

VirtualMachineName :

ResourceType : Deployment

ResourceName : aidanfinn02

Category : Error

Message : VM aidanfinn02 in HostedService aidanfinn02 contains Extension MicrosoftMonitoringAgent reporting Status : Error. Hence, the VM cannot be migrated. Please ensure that

the Extension status being reported is Success or uninstall it from the VM and retry migration.,Additional Details: Message=This machine is already connected to another

Log Analytics workspace, please set stopOnMultipleConnections to false in public settings or remove this property, so this machine can connect to new workspaces, also

it means this machine will get billed multiple times for each workspace it report to. (MMAEXTENSION_ERROR_MULTIPLECONNECTIONS) Code=400

VirtualMachineName : aidanfinn02

ResourceType : Deployment

ResourceName : aidanfinn02

Category : Error

Message : VM aidanfinn02 in HostedService aidanfinn02 contains Extension MicrosoftMonitoringAgent reporting Handler Status : Unresponsive. Hence, the VM cannot be migrated.

Please ensure that the Extension handler status being reported is Ready or uninstall it from the VM and retry migration.,Additional Details: Message=Handler

Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent of version 1.0.11049.5 is unresponsive Code=0

VirtualMachineName : aidanfinn02

ResourceType : Deployment

ResourceName : aidanfinn02

Category : Error

Message : VM aidanfinn02 in HostedService aidanfinn02 is currently configured with the Azure Backup service and therefore currently not supported for Migration. To migrate this

VM, please follow the procedure described at https://aka.ms/vmbackupmigration.

VirtualMachineName : aidanfinn02

The solutions to these problems were easy:

  1. I signed into the classic Azure Management Portal and unregistered the virtual machine in the backup vault – DO NOT DELETE THE BACKUP DATA!
  2. Then I switched to the Azure Portal (and stayed there), and I removed the VMSnapshot (Azure Backup) extension from the VM.
  3. And then I removed the Microsoft.EnterpriseCloud.Monitoring (Log Analytics) extension.
  4. The VM had an old version of the Diagnostics agent, so I removed that too, even though it didn't affect validation.

I re-ran the validation:

$validate = Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName

And the result was that the virtual network (and thus the VM) was ready for an ARM migration.

$validate.ValidationMessages

ResourceType : VirtualNetwork

ResourceName : aidanfinn02

Category : Information

Message : Virtual Network aidanfinn02 is eligible for migration.

VirtualMachineName :

ResourceType : Deployment

ResourceName : aidanfinn02

Category : Information

Message : Deployment aidanfinn02 in Cloud Service aidanfinn02 is eligible for migration.

VirtualMachineName :

ResourceType : Deployment

ResourceName : aidanfinn02

Category : Information

Message : VM aidanfinn02 in Deployment aidanfinn02 within Cloud Service aidanfinn02 is eligible for migration.

VirtualMachineName : aidanfinn02

Preparation

The preparation phase is next. This is an interim or trial period where you introduce the ARM API to your resources. For a time, the resources are visible to both ARM and ASM. There is no downtime, but you cannot make any configuration changes. The reasoning for this is that you can validate that everything is still working.

You run the command – in my case, against the virtual network:

Move-AzureVirtualNetwork -Prepare -VirtualNetworkName $vnetName

And then you wait … try not to grind your teeth or chew your gums:

OperationDescription OperationId OperationStatus

——————– ———– —————

Move-AzureVirtualNetwork 2bcb56ce-2330-0824-a376-a3dcc4892e3d Succeeded

Commit

Once preparation is done, double-check everything. My website was still responding, and resources appeared in two different resource groups with -Migrated suffixes – I’ll show you how I tidied that up later in the post. I was ready to commit; this is when you tell Azure that all is good, and to please remove the ASM APIs from the resources.

Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName

More time passes while you forget to breathe, and then it's done! You're in ARM, and your resources are manageable once again. Note that there is an alternative to Commit, which is to abort the process and roll back to ASM-only management.

Network Security Group

My ASM deployment was quite basic, and not best practice. I created a network security group for the subnet, allowing in:

  • RDP to the subnet
  • HTTP to the local static IP of the VM.

Resource Group Clean-up

As I mentioned, my ASM resources migrated into two resource groups as ARM resources. I wanted to:

  • Move the resources into a single resource group.
  • Get rid of the –Migrated suffix.

You can move resources, but you cannot rename resource groups. So I created a third resource group (aidanfinn).


Then I moved the resources from both of the migrated resource groups into the new aidanfinn resource group.
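
For reference, a hedged sketch of scripting that move with the current Az cmdlets; the migrated resource group name and the location are hypothetical:

# Create the new resource group and move everything from one of the -Migrated resource groups into it
New-AzResourceGroup -Name "aidanfinn" -Location "northeurope"
$resources = Get-AzResource -ResourceGroupName "aidanfinn02-Migrated"
Move-AzResource -DestinationResourceGroupName "aidanfinn" -ResourceId $resources.ResourceId -Force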


Migrate Storage to ARM

The official method for migrating storage is to migrate the storage account. That means that you move the storage account with the VHDs within it. Azure offers a new method for handling storage called managed disks:

  • You dispense with the storage account for VHDs
  • The disk becomes a manageable resource in the Portal
  • You get cool new features like Snapshots and easier VM restores from disk

I decided to take a different route – I would convert my ARM VM from un-managed disks to managed disks. I needed a few values:

$rgName = "aidanfinn"
$vmName = "aidanfinn02"

I stopped the VM (my first piece of downtime in this entire process):

Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Force

Then I did the conversion:

ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rgName -VMName $vmName

And then I restarted the VM, after just a few minutes of downtime:

Start-AzureRmVM -ResourceGroupName $rgName -Name $vmName

Re-Introduce Management

I like the management features of Azure, so I re-introduced:

  • Azure Backup of the VM using a recovery services vault with a custom policy – I created a manual backup immediately.
  • Monitoring & diagnostics, to a new dedicated storage account – make sure you verify that the storage account is being used by the VM Agent and Boot Diagnostics.

Cleanup ASM

Lots of stuff can get left behind, especially if you’ve been trying things out. That all needs to be removed. One thing I kept around for a while was the classic backup vault, just in case. I only unregistered the old ASM VM – I did not delete the data. That means I have a way back if all goes wrong in ARM or I screw up in some way. I’ll give it a month, and then I’ll remove the old vault.

By the way, you can upgrade a backup vault to ARM (recovery services vault) if you want to keep your retention.

The End Result

I ended up with everything in ARM and in one resource group.


I don’t like the naming, so I will be cleaning things up. My next mini-project will be to:

  1. Power down the VM.
  2. Create a disk snapshot.
  3. Create a new disk from the snapshot.
  4. Create a new deployment, using the new disk, with names that I like.

I know – I could have done all that more quickly and easily using MigAz, but I wanted to do the platform supported migration on my stuff, and it was a chance to document it too! Hopefully this will be useful for you.

Author: AFinn | Posted on August 31, 2017 | Categories: Azure | Tags: ARM, ASM, Azure, Azure Backup, Managed Disks, Migration, PowerShell, Storage Account, Virtual Machines