Restore An Azure VM to an Availability Set From Azure Backup in the Azure Portal

Microsoft has shared how to restore an Azure VM to an availability set from Azure Backup using PowerShell. It’s nasty-looking PowerShell, and my problem with PowerShell examples of VM creation is that they’re never feature complete.

While writing some Azure VM training recently, I stumbled across a cool option in the Azure Portal that I tried out … and it worked … and it means that I never have to figure out that nasty PowerShell.

The key to all this is to start using Managed Disks. Even if your existing VMs are using unmanaged (storage account) disks, that’s not a problem because you can still use this restore method. The other thing to remember is that the metadata of the VM is irrelevant; everything of value is in the disks.

Restore the Disks of the VM

Using these steps, you can restore the disks of your VM, managed or unmanaged, to a storage location referred to as the staging account. Each disk is restored as a VHD blob, and a JSON file describes the disks so that you can identify which one is the “osDisk”.

Create Managed Disks from the Restored VHDs

In this process, you create a managed disk from each restored VHD blob in the staging location. You have the option to create the disks as Standard (HDD) or Premium (SSD) disks, which gives you some flexibility in your restore (you can switch storage types!). Make sure you identify the osDisk from the JSON file and mark it as either a Windows or Linux OS disk, depending on its contents.
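
If you would rather script this step, here is a rough sketch using the Az PowerShell module. The resource names, location, and disk SKU are placeholders, and the source URI points at one of the restored VHD blobs in your staging account.

  # Placeholders - substitute your own resource group, location, and blob URI
  $rgName   = "demorestore"
  $location = "northeurope"
  $stagingAccountId = (Get-AzStorageAccount -ResourceGroupName "stagingrg" -Name "stagingsa").Id
  $vhdUri   = "https://stagingsa.blob.core.windows.net/vhds/restored-osdisk.vhd"

  # Build a managed disk configuration that imports the restored VHD blob
  $diskConfig = New-AzDiskConfig -Location $location -SkuName Premium_LRS `
      -CreateOption Import -StorageAccountId $stagingAccountId `
      -SourceUri $vhdUri -OsType Windows   # omit -OsType for data disks

  # Create the managed disk, named after the VM that it will belong to
  New-AzDisk -ResourceGroupName $rgName -DiskName "myvm-osdisk" -Disk $diskConfig

Repeat the same pattern (without -OsType) for each restored data disk VHD.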

Create a VM From the OS Managed Disk

The third set of steps brings your VM back online. You use the previously restored/identified osDisk to create a new virtual machine from that managed disk. Make sure you select the availability set that you want to restore the VM to.

Clean Up

The last step is the clean-up. If the original machine had any data disks, then you need to re-attach them to the new virtual machine. You’ll also need to configure the network settings of the Azure NIC resource. For example, if the new VM is replacing the old one, you should enter the IP settings of the old VM into the new NIC resource, and update any NAT/load balancing rules, NSGs, PIPs, etc.
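
You can make that IP change in the portal, but for reference, this is roughly what it looks like with the Az PowerShell module (the names and the IP address are placeholders):

  # Placeholders - the NIC name, resource group, and the old VM's IP address
  $nic = Get-AzNetworkInterface -ResourceGroupName "demorestore" -Name "myvm-nic"

  # Pin the new VM's NIC to the IP address that the old VM used
  $nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
  $nic.IpConfigurations[0].PrivateIpAddress = "10.0.1.10"

  # Push the change back to Azure
  $nic | Set-AzNetworkInterface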

And that’s it! There’s no PowerShell, and it’s all pretty simple clicking in the Azure Portal that won’t take that long to do after the disks are restored from the recovery services vault.

Create a New VM From An Existing Managed Disk

In previous posts I have shown how to restore the disks of a VM to a storage account and how to create managed disks from those VHD blobs. In this post, I will show how to create a new VM from a managed disk. When these three steps are done together, they give you an easy way to restore an Azure virtual machine from backup to an availability set.

I previously created a managed disk from a restored VHD blob, and stored it in a resource group called demorestore. I deliberately named the new managed disk after the VM that I am going to create.

[Screenshot: the restored managed disk in the demorestore resource group]

You can only create a new VM from a managed disk that contains an operating system. In the below screenshot, you can see that this disk contains Windows. If this is an OS disk, then you can click the magic button called + Create VM.

[Screenshot: the managed disk blade, showing the Windows OS type and the + Create VM button]

What you are doing by clicking the button is shortcutting the usual Create Virtual Machine blade/wizard. A blade you probably know appears, but some of the options are greyed out because they are already determined by your choice to create a VM from an existing managed disk.

Enter the name of the new VM, and select the resource group.

[Screenshot: the Basics blade of the Create Virtual Machine wizard]

In the Size blade, choose the size of the new VM. In Settings, choose the availability set (the key to restoring a VM to an availability set), and then configure all the other stuff like network, subnet, extensions, etc.

When you complete the wizard, a VM (which is just metadata) is created using your pre-existing OS managed disk. If you have any data disks to re-use, open Disks in the settings of the VM and add those managed disks with the required host caching mode. And that’s all there is to it!
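
If you ever do want to script the same thing, here is a minimal sketch using the Az PowerShell module. It assumes the managed disks, availability set, and NIC already exist, and every name below is a placeholder.

  # Placeholders - look up the pre-existing resources
  $rgName   = "demorestore"
  $location = "northeurope"
  $osDisk   = Get-AzDisk -ResourceGroupName $rgName -DiskName "myvm-osdisk"
  $avSet    = Get-AzAvailabilitySet -ResourceGroupName $rgName -Name "myvm-avset"
  $nic      = Get-AzNetworkInterface -ResourceGroupName $rgName -Name "myvm-nic"

  # Build the VM configuration: size, availability set, and the existing OS disk
  $vmConfig = New-AzVMConfig -VMName "myvm" -VMSize "Standard_DS2_v2" -AvailabilitySetId $avSet.Id
  $vmConfig = Set-AzVMOSDisk -VM $vmConfig -ManagedDiskId $osDisk.Id -CreateOption Attach -Windows
  $vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id

  # Create the VM (just metadata) around the existing managed disk
  New-AzVM -ResourceGroupName $rgName -Location $location -VM $vmConfig

  # Optionally re-attach a restored data disk with the required caching mode
  $dataDisk = Get-AzDisk -ResourceGroupName $rgName -DiskName "myvm-datadisk1"
  $vm = Get-AzVM -ResourceGroupName $rgName -Name "myvm"
  $vm = Add-AzVMDataDisk -VM $vm -Name $dataDisk.Name -ManagedDiskId $dataDisk.Id `
      -Lun 0 -CreateOption Attach -Caching ReadOnly
  Update-AzVM -ResourceGroupName $rgName -VM $vm

The data disk attach at the end is the scripted equivalent of using the Disks blade in the portal.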

Microsoft Publishes Some Details on the New Azure B-Series VMs

Last week I blogged about how the pricing of a new B-Series (burstable CPU) virtual machine appeared online. At the time, we knew almost nothing about the machines other than their intended workloads: anything with normally low CPU utilization that could temporarily burst, such as test/dev or low-end web/application servers.

While updating an article for Petri.com, I found that the official specs of Azure VMs had been updated to include the B-Series:

The B-Series provides these customers the ability to purchase a VM size with a price conscious baseline performance that allows the VM instance to build up credits when the VM is utilizing less than its base performance. When the VM has accumulated credit, the VM can burst above the VM’s baseline using up to 100% of the CPU when your application requires the higher CPU performance.

That means that this is very similar to the AWS T2 Instances. By default, your machine’s CPU is artificially capped. By underutilizing the CPU, the machine earns and banks credits that can be used later. This bank has a hard limit, depending on the size of the machine. Should the service in the machine need more CPU, those credits can be burned to go beyond the artificial cap and use the underlying physical cores’ potential. In other words, the less you use the CPU, the more horsepower you have banked for those times when you need it.
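
To illustrate the mechanism, here is a toy model in PowerShell. The baseline, bank limit, and demand numbers are invented for illustration only; they are not published B-Series specs.

  # Toy model of burstable CPU credits - all numbers are invented for illustration
  $baselinePercent = 20      # assumed baseline CPU entitlement
  $maxBank         = 100     # assumed hard limit on banked credits
  $bank            = 0

  # Simulated CPU demand (% of the full core) for each time slice
  $demand = 5, 5, 5, 5, 90, 90, 5

  foreach ($d in $demand) {
      if ($d -lt $baselinePercent) {
          # Under the baseline: bank the unused headroom, up to the limit
          $bank = [Math]::Min($maxBank, $bank + ($baselinePercent - $d))
          $used = $d
      }
      else {
          # Over the baseline: burn banked credits to burst above the cap
          $burn = [Math]::Min($bank, $d - $baselinePercent)
          $bank -= $burn
          $used = $baselinePercent + $burn
      }
      "Demand $d% -> got $used% CPU, bank now $bank"
  }

The real baseline percentages and credit limits per size have not been published yet, which is why the numbers above are made up.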

Here are some details on the sizes in the B-Series.

  • All of the machines are S-variants.
  • Each machine has a small amount of SSD temporary storage.
  • Note how the disk stats refer to “max local disk”. Hmm!

[Screenshot: B-Series VM size specifications]

Right now, there is a limited access preview for the B-Series in just a few regions:

  • West Europe
  • West US 2
  • East US
  • Asia Pacific – Southeast

I can see the B-Series in my subscriptions, but I cannot deploy it – the quota is set to 0 and the blade for requesting an increase does not include the B-Series. I guess this is still a private preview for now, and things might change on Sept 25th (Ignite).

Restore an Azure Virtual Machine’s Hard Disks

In this post, I’ll show you how to restore just the disks of an Azure virtual machine. This is useful if you want to restore a virtual machine to an availability set, or restore it as a different series/size.

Restoring to Availability Sets

For some reason that I do not know, we cannot restore a virtual machine to an availability set in Azure. It probably has something to do with the restriction in ARM that prevents a VM from being able to join an availability set after creation (vote for change here).

As a workaround, Azure Backup allows you to restore the disks, and then use those disks to create a new virtual machine (metadata) that is joined to the availability set. On the official docs pages, there is some pretty messy looking PowerShell to re-create the VM from those disks.

Thanks to some features of Managed Disks, if you have used managed disks for the VM, then you don’t need to go anywhere near that nasty PowerShell or JSON! I’ll post about that soon.

Restoring Disks

Browse to the recovery services vault, open it, go to Backup Items > Azure Virtual Machine, and select the VM in question. Below is a screenshot of my web server in Azure. Click Restore VM.

[Screenshot: the backup item for the web server, with the Restore VM button]

A blade with recovery points appears. Choose a restore point, i.e. the point in time that you want to restore from, and click OK.

[Screenshot: the list of recovery points]

The Restore Configuration blade appears. Choose Restore Disks as the Restore Type, and choose a storage account as the Staging Location. Click OK to start the restore job.

[Screenshot: the Restore Configuration blade]

Some time later, the disk(s) of the virtual machine are restored as blobs in a container in the storage account. You’ll also find a JSON file with details of the disk(s) that were restored.

[Screenshot: the restored VHD blobs and JSON file in the staging storage account]
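
If you would rather kick off the same restore-disks job from PowerShell, a rough sketch with the Az Recovery Services cmdlets looks like this (the vault, VM, and staging storage account names are placeholders):

  # Placeholders - substitute your vault, VM, and staging storage account
  $vault = Get-AzRecoveryServicesVault -ResourceGroupName "backuprg" -Name "myvault"

  $container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM `
      -FriendlyName "webserver1" -VaultId $vault.ID
  $item = Get-AzRecoveryServicesBackupItem -Container $container `
      -WorkloadType AzureVM -VaultId $vault.ID

  # Pick a recovery point from the last 7 days
  $rp = Get-AzRecoveryServicesBackupRecoveryPoint -Item $item `
      -StartDate (Get-Date).AddDays(-7).ToUniversalTime() `
      -EndDate (Get-Date).ToUniversalTime() -VaultId $vault.ID

  # Restore the disks as VHD blobs to the staging storage account
  Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] `
      -StorageAccountName "stagingsa" -StorageAccountResourceGroupName "stagingrg" `
      -VaultId $vault.ID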

By the way, if you cannot tell which of the VHD blobs is your OS disk, download the JSON file and open it in Notepad (VS Code refuses to open it for me). The “osDisk” setting will tell you the path of the VHD blob that was the original OS disk.
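
You can also pull that path out with PowerShell once the JSON file is downloaded. The property path below is an assumption based on the config files I have seen, so verify it against your own file:

  # Read the restored VM configuration file (downloaded from the staging container)
  $config = Get-Content -Path ".\vmconfig.json" -Raw | ConvertFrom-Json

  # The osDisk entry points at the restored OS disk VHD blob
  # (property path assumed from config files I have seen - verify it in your file)
  $config.properties.storageProfile.osDisk.vhd.uri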

Microsoft’s solution would have you restore the virtual machine using PowerShell and that JSON file. I’ve read through it, and it’s not pretty! My solution, covered in a later post, is to create managed disks from the VHD blob(s) and then create a VM from the OS disk … and that’s nice and easy using the Azure Portal and a few mouse clicks.

Azure Low Cost “Burstable” CPU Virtual Machines

Microsoft has released pricing for a new kind of virtual machine in Azure, called the B-Series. The key traits of this VM type are:

  • It is 1/4 the price of a similar A_v2-Series machine.
  • The CPU runs at a low rate, and “bursts” on demand for higher capacity jobs.

I’d love to have more information to share, but all we have is what I stumbled upon in the pricing pages last week:

[Screenshot: B-Series pricing]

As you can see in the names, they comply with the “new” format. That S in the names suggests that these machines support Premium (SSD) storage disks.

These are low end machines, as you can see by the entry level 1 core & 1 GB RAM model. The Microsoft VM pricing page says that they are good for:

… development and test servers, low traffic web servers, small databases, micro services, servers for proof-of-concepts, build servers, and code repositories.

The costs are really low. The B2S is just €20.71 per month, compared with €85.33 for the A2_v2 – both having 2 cores and 4 GB RAM. If you want a low end web server, then that’s a seriously cheap offering!

AWS does have something called T2 Instances. These are VMs that offer CPU burstability based on credits earned for low CPU utilization. The rough language about suitable roles is similar to that of the Azure B-Series. However, we have no detailed information on the B-Series yet; my bet is that it will be published on September 25th (Ignite day 1).

Azure VM Sizes Missing When Resizing

When you are resizing a running virtual machine, you might find that many sizes are not available. There is a workaround – shut the VM down! Here’s how I resized the Azure virtual machine that hosts this site, which started the day as an A2_v2 virtual machine.

[Screenshot]

First, I powered down the VM in the Azure Portal. Then I browsed to Size. All of the possible sizes were presented to me then. I selected a DS2_v2 Promo size, knowing that the price will increase to normal DS2_v2 pricing once the D3 is live in North Europe (I’ll upgrade then).

[Screenshot: the Size blade with all sizes available]

I clicked OK, and then powered up the VM.

[Screenshot]
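
If you prefer to do the same resize with the Az PowerShell module, a rough sketch (the resource group, VM name, and size are placeholders) looks like this:

  # Placeholders - substitute your resource group and VM name
  $rgName = "webrg"
  $vmName = "webserver1"

  # Deallocate the VM so that the full list of sizes becomes available
  Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force

  # Change the size and push the update
  $vm = Get-AzVM -ResourceGroupName $rgName -Name $vmName
  $vm.HardwareProfile.VmSize = "Standard_DS2_v2"   # or the Promo variant if offered
  Update-AzVM -ResourceGroupName $rgName -VM $vm

  # Power the VM back up
  Start-AzVM -ResourceGroupName $rgName -Name $vmName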

StorSimple–The Answer I Thought I’d Never Give

Lately I’ve found myself recommending StorSimple for customers on a frequent basis. That’s a complete reversal since February 28th, and I’ll explain why.

StorSimple

Several years ago, Microsoft acquired StorSimple, a physical appliance made in Mexico by Xyratex, a subsidiary of Seagate. That physical appliance sucked for several reasons:

  • It shared storage via iSCSI only, so it didn’t fit well into a virtualization stack, especially Hyper-V, which has moved more towards SMB 3.0.
  • The tiering engine was as dumb as a pile of bricks, working on a first in-first out basis with no measure of access frequency.
  • This was a physical appliance, requiring more rackspace, in an era when we’re virtualizing as much as possible.
  • The cost was, in theory, zero to acquire the box, but you did require a massive enterprise agreement (large enterprise only) and there were sneaky costs (transport and import duties).
  • StorSimple wasn’t Windows, so Windows concepts were just not there.

Improvements

As usual, Microsoft has Microsoft-ized StorSimple over the years. The product has improved. And thanks to Microsoft’s urge to sell more via MS partners, the biggest improvement came on March 1st.

  • Storage is shared via either SMB 3.0 or iSCSI. SMB 3.0 is the focus because you can share much larger volumes with it.
  • The tiering engine is now based on a heat map. Frequently accessed blocks are kept locally. Colder blocks are deduped, compressed, encrypted and sent to an Azure storage account, which can be cool blob storage (ultra cheap disk).
  • StorSimple is available as a virtual appliance, with up to 64 TB (hot + cold, with between 500 GB and 8 TB of that kept locally) per appliance.
  • The cost is very low …
  • … because StorSimple is available on a per-day + per GB in the cloud basis via the Microsoft Cloud Solution Provider (CSP) partner program since March 1st.

You can run a StorSimple on your Hyper-V or VMware hosts for just €3.466 (RRP) per appliance per day. The storage can be as little as €0.0085 per GB per month.

FYI, StorSimple:

  • Backs itself up automatically to the cloud with 13 years of retention.
  • Has its own patented DR system based on those backups. You drop in a new appliance, connect it to the storage in the cloud, the volume metadata is downloaded, and people/systems can start accessing the data within 2 minutes.
  • Requires 5 Mbps of bandwidth per virtual appliance for normal usage.

Why Use StorSimple

It’s a simple thing really:

  • Archive: You need to store a lot of data that is not accessed very frequently. The scenarios I repeatedly encounter are CCTV and medical scans.
  • File storage: You can use a StorSimple appliance as a file server, instead of a classic Windows Server. The shares are the same – the appliance runs Windows Server – and you manage share permissions the same way. This is ideal for small businesses and branch offices.
  • Backup target: Veeam and Veritas support using StorSimple as a backup target. You get the benefit of automatically storing backups in the cloud with lots of long term retention.
  • It’s really easy to set up! Download the VHDX/VHD/VMDK, create the VM, attach the disk, configure networking, provision shares/LUNs from the Azure Portal, and just use the storage (see the sketch below).
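
For the Hyper-V case, creating the virtual appliance is just ordinary VM provisioning. Here is a minimal sketch; the VM name, paths, switch name, and resource sizes are placeholders, so check the official StorSimple requirements for the real numbers.

  # Placeholders - VM name, paths, switch, and sizes; check the official requirements
  $vmName = "StorSimple-VA"

  # Create the VM around the downloaded appliance VHDX
  New-VM -Name $vmName -MemoryStartupBytes 8GB -Generation 1 `
      -VHDPath "D:\VMs\StorSimple\storsimple-appliance.vhdx" -SwitchName "External"

  # Give the appliance some CPU, plus a data disk for the locally tiered storage
  Set-VMProcessor -VMName $vmName -Count 4
  New-VHD -Path "D:\VMs\StorSimple\storsimple-data.vhdx" -SizeBytes 500GB -Dynamic
  Add-VMHardDiskDrive -VMName $vmName -Path "D:\VMs\StorSimple\storsimple-data.vhdx"

  Start-VM -Name $vmName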


So if you have one of those scenarios, and the cost of storage or the complexity of backup and DR is a concern, then StorSimple might just be the answer.

I still can’t believe that I just wrote that!

My Azure Load Balancer NAT Rule Won’t Work (Why & Solution)

I’ve had a bug in Azure bite me in the a$$ every time I’ve run an Azure training course. I thought I’d share it here. The course that I’ve been running recently focuses on VM solutions in a CSP subscription – so it’s all ARM, and the problem might be constrained to CSP subscriptions.

When I create a NAT rule via the portal, most of the time the NAT rule fails to work. For example, I create a VM, enable an NSG to allow RDP inbound, and create a load balancer NAT rule to enable RDP inbound (TCP 50001 -> 3389 for the VM). It appears that there’s a timing issue behind the portal, because eventually the NAT rule starts to work.

There’s actually a variety of issues with load balancer administration in the Azure Portal:

  • The second step in creating a NAT rule is when the target NIC is updated; this fails a high percentage of the time (note the target being set to “–“ in the rule summary).
  • Creating/updating a backend pool can fail, with some/none of the virtual machines being added to the pool.

These problems are restricted to the Azure Portal. I have no such issues when configuring these settings using PowerShell or deploying a new resource group using a JSON template. That’s great, but not perfect – a lot of general administration is done in the portal, and the GUI is how people learn.
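
For reference, this is roughly what the working configuration looks like with the Az PowerShell module; the load balancer, NIC, rule, and resource group names are placeholders:

  # Placeholders - load balancer, NIC, and resource group names
  $rgName = "webrg"

  # Add the inbound NAT rule (TCP 50001 -> 3389) to the load balancer
  $lb = Get-AzLoadBalancer -ResourceGroupName $rgName -Name "weblb"
  $lb | Add-AzLoadBalancerInboundNatRuleConfig -Name "RDP-VM1" `
      -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
      -Protocol Tcp -FrontendPort 50001 -BackendPort 3389
  $lb = $lb | Set-AzLoadBalancer

  # Associate the rule with the target VM's NIC - the step the portal often drops
  $natRule = Get-AzLoadBalancerInboundNatRuleConfig -LoadBalancer $lb -Name "RDP-VM1"
  $nic = Get-AzNetworkInterface -ResourceGroupName $rgName -Name "vm1-nic"
  $nic.IpConfigurations[0].LoadBalancerInboundNatRules.Add($natRule)
  $nic | Set-AzNetworkInterface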