I Am Running My “Starting Azure Infrastructure” Course in London on Feb 22/23

I am delighted to announce the dates of the first delivery of my own bespoke Azure training in London, UK, on February 22nd and 23rd. All the details can be found here.

In my day job, I have been teaching Irish Microsoft partners about Azure for the past three years, using training materials that I developed for my employer. I’m not usually one to brag, but we’ve been getting awesome reviews on that training and it has been critical to us developing a fast-growing Azure market. I’ve tweeted about those training activities and many of my followers have asked about the possibility of bringing this training abroad.

So a new venture has started, with brand new training, called Cloud Mechanix. With this business, I am bringing brand-new Azure training to the UK and Europe.  This isn’t Microsoft official training – this is my real world, how-to, get-it-done training, written and presented by me. We are keeping the classes small – I have learned that this makes for a better environment for the attendees. And best of all – the cost is low. This isn’t £2,000 training. This isn’t even £1,000 training.

The first course is booked and will be running in London (quite central) on Feb 22-23. It’s a 2-day “Starting Azure Infrastructure” course that will get newbies to Azure ready to deploy solutions using Azure VMs. And experience has shown that my training also teaches a lot to those who think they already know Azure VMs. You can learn all about this course, the venue, dates, costs, and more here.

I’m excited by this because this is my business (with my wife as partner). I’ve had friends, such as Mark Minasi, telling me to do this for years. And today, I’m thrilled to make this happen. Hopefully some of you will be too, and will register for this training.

Protect Your Data With Microsoft Azure Backup


  • Vijay Tandra Sistla, Principal PM Manager
  • Aruna Somendra, Senior Program Manager

Aruna is first to speak. It’s a demo-packed session. There was another session on Azure Backup (AB) during the week – that’s probably worth watching as well.

All the attendees are from diverse backgrounds, and we have one common denominator: data. We need to protect that data.

Impact of Data Loss

  • The impact can be direct, e.g. WannaCry hammering the UK’s NHS and patients.
  • It can impact a brand
  • It can impact your career

Azure Backup was built to:

  • Make backups simple
  • Keep data safe
  • Reduce costs

Single Solution

Azure Backup covers on-premises and Azure. It is one solution, with one pricing model no matter what you protect: instance size + storage consumed.

Protecting Azure Resources

A demo will show this in action, plus new features coming this year. They’ve built a website with some content on Azure Web Apps – images in Azure Files and data in SQL in an IaaS VM. Vijay refreshes the site and the icons are ransomwared.

Azure Backup can support:

  • Azure IaaS VMs – the entire VM, disks, or file level recovery
  • Azure Files via Storage account snapshots (NEW)
  • SQL in an Azure IaaS VM (NEW)

Discovery of databases is easy. An agent in the guest OS is queried, and all SQL VMs are discovered. Then all databases are shown, and you back them up based on full / incremental / transaction log backups, using typical AB retention.

For Azure File Share, pick the storage account, select the file share, and then choose the backup/retention policy. It keeps up to 120 days in the preview, but longer term retention will be possible at GA.

When you create a new VM, the Enable Backup option is in the Settings blade. So you can enable backup during VM creation instead of trying to remember to do it later – no longer an afterthought.

Conventional Backup Approaches

What happens behind the scenes in AB? Instead of using on-prem SQL and file servers, you’re starting to use Azure Files and SQL in VMs. Instead of hacking backups into Azure storage (which doesn’t scale, and is messy), you enable Azure Backup, which offers centralized management. In Azure, it is infrastructure-free. Both SQL and VMs are backed up using backup extensions.


Azure File Sync is supported too:

In preview, there is short-term retention using snapshots in the source storage account. After GA they will increase retention and enable backups to be stored in the RSV.



When you backup a Linux VM, you can run a pre-script, do the backup, and then run a post-script. This can enable application-consistent backups in Linux VMs in Azure. Aruna logs into a Linux VM via SSH. There are Linux CLI commands in the guest OS, e.g. az backup. There is a JSON file that describes the pre- and post-scripts. There are some scripts by a company called CapSide for MySQL. The pre-script creates database dumps and stops the databases.
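For reference, here is a sketch of that JSON config, assuming the file name, location, and field names from the preview documentation for application-consistent Linux VM backup. The MySQL script paths are hypothetical placeholders, not the actual CapSide scripts – verify everything against the current docs before relying on it.

```shell
# Illustrative sketch only: drop the plugin config into /etc/azure/ on the
# Linux VM. Script paths below are placeholders.
sudo mkdir -p /etc/azure
sudo tee /etc/azure/VMSnapshotScriptPluginConfig.json > /dev/null <<'EOF'
{
    "pluginName": "ScriptRunner",
    "preScriptLocation": "/scripts/pre-freeze-mysql.sh",
    "postScriptLocation": "/scripts/post-thaw-mysql.sh",
    "preScriptParams": [],
    "postScriptParams": [],
    "preScriptNoOfRetries": 0,
    "postScriptNoOfRetries": 0,
    "timeoutInSeconds": 30,
    "continueBackupOnFailure": true,
    "fsFreezeEnabled": true
}
EOF
```

The pre-script runs before the snapshot (e.g. dumping and quiescing MySQL) and the post-script runs after it, which is how the backup becomes application-consistent rather than just file-consistent.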


az backup recoverypoint list and some flags can be used to list the recovery points for the currently logged in VM. The results show if they are app or file consistent.

az backup restore files and some parameters can be used to mount the recovery point – you then copy files from the recovery point, and unmount the recovery point when done.
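The two commands from the demo can be sketched roughly as follows. All resource names are placeholders, and the flags follow the general az CLI shape rather than the in-VM shorthand used on stage, so check `az backup --help` in your CLI version:

```shell
# List the recovery points for a backed-up VM (names are placeholders).
az backup recoverypoint list \
    --resource-group myRG --vault-name myVault \
    --container-name myLinuxVM --item-name myLinuxVM

# Mount a chosen recovery point; this exposes the point-in-time disks so
# you can copy individual files out.
az backup restore files mount-rp \
    --resource-group myRG --vault-name myVault \
    --container-name myLinuxVM --item-name myLinuxVM \
    --rp-name <recovery-point-name>

# ...copy the files you need, then unmount the recovery point when done.
az backup restore files unmount-rp \
    --resource-group myRG --vault-name myVault \
    --container-name myLinuxVM --item-name myLinuxVM \
    --rp-name <recovery-point-name>
```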


Restore as a Service



Two-thirds of customers are keeping on-premises data.

Two solutions in AB for hybrid backup:

  • Microsoft Azure Backup Server (MABS) / DPM: Backup Hyper-V, VMware, SQL, SharePoint, Exchange, File Server & System State to local storage (short-term retention)  and to the cloud (long term retention)
  • MARS Agent: Files & Folders, and System State backed up directly to the cloud.

System State

Protects Active Directory, the IIS metabase, file server metadata, the registry, COM+, Certificate Services, and cluster services info.

Went live in MARS agent last month.

In a demo, Vijay deletes users from AD. He restores the system state files using MARS. Then you reboot the DC in AD restore mode and use the wbadmin tool to restore the system state (wbadmin start systemstaterecovery). You reboot again, and the users are restored.
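The restore step can be sketched roughly as below (Windows cmd, run in Directory Services Restore Mode). The backup target volume and version timestamp are placeholders, not values from the session:

```shell
:: List the system state versions available on the volume where the MARS
:: restore placed the files (E: is a placeholder).
wbadmin get versions -backupTarget:E:

:: Restore the system state from one of the listed versions, then reboot
:: the DC normally. The version identifier below is a placeholder.
wbadmin start systemstaterecovery -version:09/28/2017-14:30 -backupTarget:E:
```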

Vijay shows MARS deployment, and shows the Project Honolulu implementation.

Next he talks about the ability to do an offline backup instead of an online full backup. This leverages the Azure storage import service, which can leverage the new Azure Data Box – a tamper proof storage solution of up to 100 TB.


Using the cloud isolates backup data from the production data. AB includes a free multi-approval process to protect against destructive operations on hybrid backups. All backup data is encrypted. RBAC offers governance and control over Azure Backup.

There are email alerts (if enabled) for destructive operations.

If data is deleted, it is retained for 14 days so you can still restore your data, just in case.

Hybrid Backup Encryption

Data is encrypted before it leaves the customer site.

Customers want:

  • To be able to change keys
  • Keep the key secret from MS

A passphrase is used to create the key. This is a key encryption key process. And MS never has your KEK.

Azure VM Disk Encryption

You still need to be able to backup your VMs. If a disk is encrypted using a KEK/BEK combination in the Key Vault, then Azure Backup includes the keys in the backup so you can restore from any point in time in your retention policy.

Isolation and Access Control

Two levels of authorization:

  • You can control access/roles to individual vaults for users.
  • There are permissions or roles within a vault that you can assign to users.

Monitoring & Reporting

Typical questions:

  • How much storage am I using?
  • Are my backups healthy?
  • Can I see the trends in my system?

Vijay does a tour of information in the RSV. Next he shows the new integration with OMS Log Analytics. This shows information from many RSVs in a single tenant. You can create alerts from events in Log Analytics – emails, webhooks, runbooks, or trigger an ITSM action. The OMS data model, for queries, is shared on docs.microsoft.com.

For longer term reporting, you can export your tenant’s data to an AB Content Pack in PowerBI – note that this is 1 tenant per content pack import, so a CSP reseller will need 100 imports of the content pack for 100 customers. Vijay shows a custom graphical report showing the trends of data sources over 3 months – it shows growth for all sources, except one which has gone down.

Power BI is free up to 1 GB of data, and then it’s a per-user monthly fee after that.


  • Backup of SQL in IaaS – preview
  • Backup of Azure file – preview
  • Azure CLI
  • Backup of encrypted VMs without KEK
  • Backup of VMs with storage ACLs
  • Backup of large disk VMs
  • Upgrade of classic Backup Vault to ARM RSV
  • Resource move across RG and subscription
  • Removal of vault limits
  • System State Backup

Windows Server Fall Release (1709) Technical Foundation

Speaker: Jeff Woolsey, Principal Program Manager

WS2016 Recap

Design points

  • Layered security for emerging threats:  Jeff has been affected by 4 of the big, well publicised hacks. CEOs are being fired because of this stuff now.
  • Build the software-defined data centre
  • Create a cloud-optimized application platform

Security in WS2016

  • Long laundry list of features: Defender, Control Flow Guard, Device Guard, Credential Guard, Remote Credential Guard.
  • Shielded VMs – you don’t trust the operators
  • vTPM – encrypt the disks
  • JIT Administration


  • Compute: rolling upgrades with no downtime, hot add/remove, more resilient to transient storage, compute, and network issues.
  • Network: Azure code brought to Windows Server 2016: SDN scale and simplicity. L4 load balancer, distributed data centre firewall.

He tells a very funny story on RAM support: 24 TB physical, and 12 TB RAM in Hyper-V VMs.

  • Storage: Hyper-Converged, Storage Replica, cluster wide QoS
  • RDS: Lots there too.

Hyper-Converged Infrastructure

Built into WS2016 Datacenter edition: Storage Spaces Direct (S2D). Uses SATA, SAS, SSD, and NVMe. They are working with the storage industry to add new flash types.

  • Cloud design points: used in Azure Stack
  • RDMA at the core for performance and latency benefits.
  • Simplifying the datacenter: Add servers to add compute and storage capacity. No more SAN network. Storage controller is s/w.

Working on adding NVDIMMs: Intel persistent memory. Not as fast as real memory, but you can add lots of it, e.g. 100 TB of “RAM”. Supported in WS2016 and SQL Server 2017 and later.

SATADOM is supported in WS2016 and later. It’s flash, but it’s attached to a SATA connector (see image below). The idea is to do the “boot from USB” trick to free up a drive bay. This tiny drive plugs directly onto the SATA controller on the motherboard. Faster than USB/SD boot and more reliable.

Cloud Ready Application Platform

  • Windows Server Containers: The next generation of compute, following virtualization. Both are different techs, and going forward, both will probably exist. But containers will be the tech of choice for deploying applications: speed, ease of deployment, better densities, and more performance.
  • Nano Server: Ideal for the microkernel in Hyper-V Containers
  • Automation: PowerShell 5.0 and DSC

Now on to the new stuff

Azure File Sync

Klaas Langhout comes on stage.

I’ve covered this in depth already.

Back to Jeff. He asks Klaas if customers access the shares any differently on prem. Nope – it’s the same old file share and any Azure connectivity/tiering/sync is hidden.

Windows Defender Advanced Threat Protection (WDATP)

Using cloud intelligence to protect Windows.

  • Built into Windows Server
  • Behaviour-based, cloud-powered breach detection
  • Best of breed investigation experience
  • And more

You can sign into the Windows Defender Security Center to analyse activity to do forensics on an attack or suspicious activity, and learn how to remediate the attack.

Modern, Remote Management for Windows Server

I covered Project Honolulu earlier today.

Honolulu will remain a free download outside of Windows Server – expect updates every month.

FAQ on Honolulu

  • Price: Free
  • Edge, Chrome, Safari on Mac and more to be tested.
  • Installs on WS2012 R2 and later, Windows 10.
  • Manages Hyper-V Server 2012 and later and WS2012 and later.
  • Azure is not required.
  • AD is not required either.
  • Security: HTTPS, LAPS, delegation
  • Configuration: No IIS, agents not required, SQL not required. If you are pre-WS2016, you have to install WMF 5.1.
  • Positioning: Evolution of “in-box” tools. Does not replace System Center. Complementary to System Center, OMS, RSAT. Hopefully will eventually replace MMC-based RSAT.
  • Feedback: Via Windows Server UserVoice.
  • Extensions: It’s pluggable, with an alpha SDK today.


On to the next release of Windows Server, coming in October.

Application Innovation

  • Container-optimized Nano Server image increases container density and performance.
  • .NET Core 2.0
  • SMB Support for containers
  • Linux Containers with Hyper-V isolation
  • Windows Subsystem for Linux – to manage the above primarily

Where to Start With Containers

  • Containerize suitable existing applications. GUI-based apps aren’t suitable.
  • Transform monoliths into microservices, with new code and transforming existing code.
  • Accelerate new applications with cloud-app development.

What’s Next

Windows Server Insiders is a program to beta test and learn the new stuff in the semi-annual channel.

Post 1709 Improvements


  • Honolulu integration
  • Shielded Linux VMs
  • Guest RDMA


  • Honolulu integration
  • Encrypted virtual networks
  • NTLM no longer required
  • SMB1 Disabled by default
  • and more


  • S2D Support for NVMe
  • S2D support for NV-DIMMs
  • Dedupe for ReFS
  • Cluster Sets to enable large scale HVI
  • Storage Replica test failover
  • Scoped volumes
  • Something on multi-resilient volumes

Azure Files With Sync


  • Klaas Langhout, Principal Program Manager, Azure Storage
  • Mine Tanrinian Demir, Principal Program Manager, Azure Storage

This is the one feature announced this week that I know for certain will turn into business for my customers, so I’ve been looking forward to it finally going public.


  • Simplify share management using the cloud.
  • Leverage snapshots to backup your data
  • Use files to sync between offices
  • Tier cold storage to the cloud.

Azure is a bunch of Lego blocks that can be assembled to produce services. A keystone is Azure Storage. Hyperscale – more than 30 trillion transactions at the moment, across trillions of objects. It’s durable, secure, highly available, and open-source friendly.

One distributed storage system offers blob, files, disks, tables and queues, across more regions than any other cloud.

Azure Files (Preview)

Originally launched for lift-and-shift. If you had a legacy LOB app that needed a file share, you deployed Azure Files instead of a VM file server. It was not intended for end user access. It offers SMB 2.1 and SMB 3.0, and it offers encryption at rest.

Why File Servers?

People still do not store things in the cloud. OneDrive and SharePoint online aren’t for everyone. Reasons:

  • App compat: file path lengths, etc.
  • Performance: latency to the cloud is an issue for things like AutoCAD.

Customer Pain

They still want to use file servers, but they’re struggling:

  • Cold data that must be kept
  • Capacity management
  • DR
  • Backup/restore

Companies with branch offices have a multiplier effect of the above.

Value Prop

  • Centralize file services in a managed cloud service
  • Reduce complexity associated with server sprawl
  • Preserve the end user experience – keep the file servers and performance

What it Does

For a customer with a file server, managing the disk storage is a problem. Join the file server to a sync group in Azure Files. Older (actually all) files are moved to the cloud (transparent tiering with “stubs” on-prem). If you lose the file server, you build a new one, add it into the existing Files namespace, and the metadata is downloaded. That means users see the shares/data very quickly. Over time, hot data is downloaded as files are used.

You can add another file server and join it to the same sync group, or create more. This synchronizes the files between the file servers via Azure Files (the master now).

Coming soon (not in the current preview), you can synchronize Azure Files from one Azure region to another for DR/performance reasons. You can then hang servers close to that region off of that copy, with inter-region sync if you need it. If one region dies, the file servers associated with it fail over to the other region.

Existing file server access doesn’t change.

If you are using Work Folders (HTTPS access to file shares from Windows, iOS or Android) then this continues to work with the file server.

Users can access file shares over SMB/REST directly via Azure Files.
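As an illustration of that direct access, the standard SMB mounting commands look like this. The storage account name, share name, and key are placeholders:

```shell
# Windows: map the share, using the storage account name as the user and
# the storage account key as the password.
net use Z: \\mystorageacct.file.core.windows.net\myshare /user:AZURE\mystorageacct <storage-account-key>

# Linux: mount the same share over SMB 3.0 with the cifs driver.
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
    -o vers=3.0,username=mystorageacct,password=<storage-account-key>,serverino
```

Note that SMB 3.0 (with encryption) is required when mounting from outside the Azure region, which is why the Linux example pins vers=3.0.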

There is Azure Backup integration so you can backup your file shares in Azure without doing any backup at all on-prem. Killer!

Demo – Setup

He’s in the Azure Portal and searches for Azure File Sync. He clicks Create. Creation is simple: enter a name and resource group. It supports West US, Southeast Asia, Australia East, and West Europe today, but more regions will be added.

He’s already downloaded the MSI for the agent, and installs this on a file server. Today, you must install AzureRM PowerShell, but this will be folded into the agent install later. The file server is registered via an Azure sign-in. Then he picks a subscription, picks a resource group, and selects the Storage Sync Service. This requires another sign-in, and a trust is created between the file server and Azure Files.

Back in the portal, he opens the sync service resource, and the file server is shown as Online, with OS version and agent version info.

He creates a sync group and associates it with a pre-created Azure File Share. At first there are no server endpoints – these are the things we sync to the cloud from a file server, e.g. a path. You can synchronize multiple sets of folders, using sync endpoints as policy objects. You cannot sync the system root.

In the Azure File Share – Storage Account > Files – we can see the contents of the file share are now in Azure. He renames a file on the file server, and 2 seconds later it’s renamed in Azure.


  • Multi-site sync
  • Cloud tiering
  • Direct cloud access
  • Integrated cloud backup
  • Rapid file server DR

Demo – Tiering & Rapid Restore

There are 2 sync groups. One of them has two file servers synchronizing to it. One of them has a policy to keep 95% free space (not realistic, but engineered for demo reasons). This means that you can control tiering, to ensure that there’s always at least a certain amount of free space on a file server. Server 2 has a policy to keep 10% free space.

Tiering takes time to quiesce. Attributes show if a file is offline (O) or in Azure. The icon also shows the file as being offline by being transparent.

Questions from the audience:

  • About synchronized locking. Today, there is no lock sync. It operates like OneDrive. If there are two clashing writes, both will succeed. But, one will be written as a copy. MS knows that lock sync is a hot request.
  • This has nothing to do with DFS-R. It uses something called the Microsoft Sync Framework that is around for over 5 years and is used by SQL Server.
  • How is StorSimple affected? StorSimple is intended as on-prem storage in a single site. It uses blob storage, which isn’t user accessible. Azure File Sync uses Azure Files, which users can access directly.
  • Is this in CSP? He’s not sure, but if it’s not, it will be soon.
  • Are there file size limits, etc? There are limits, but work is being done on them; they’re published in the release notes: 5 file servers per sync group in the preview, 1 TB per file. They’ve tested up to approximately 30 million files. The maximums will grow as they test during the preview.

Back to the demo. He adds a blank server to a sync group that already has contents. Metadata of the share/files appears almost instantly. That’s “rapid restore” in action:

  • Add file share to a new file server
  • DR scenario.

Talon Storage – Charles Foley

Customer: TSK that designs & fits out workplaces. They want as little on-prem IT as possible. Not a huge company. They had people in multiple locations with file servers, collaborating. They used Talon FAST in front of Azure Files, enabling sites to see a single share across sites. And this supports file locks in Azure Files, preventing the overwrite scenario.

Azure Files Use Cases – What’s New

Mine from Microsoft takes over.

Top Use Cases:

  • Highly available FTP server: create load-balanced, stateless FTP servers that use Azure Files to store shared content. The result is a scalable and highly available FTP service.
  • Store scripts in Azure Files instead of on a file server VM. SMB 3.0 encryption should be used in hybrid scenarios. Output sent to Azure Files and can be processed later on-prem.

New in 2017

  • Security: Encryption At Rest using your own key (Key Vault), SMB encryption for Linux.
  • End-to-end integration: data import via the new tamper-proof 100 TB disk device (Azure Data Box) announced yesterday. Getting-started tools for Windows and Linux. Export is coming.

Announcing Today

  • Azure File Sync Preview
  • Network ACLs Preview – secure your storage account with layer 4 firewall rules.
  • Azure Monitor Preview to troubleshoot or manage performance

Coming soon:

  • Share Snapshots Preview – a data consistent share snapshot
  • Azure Backup Integration Preview – create policies to backup a storage account.
  • LRS price reduction of 25%

Demo – Storage Accounts

She opens Files in a storage account. There are some shares. She shows that you can use Net Use (Windows) or sudo mount (Linux) to connect to a file share over the network. She creates a snapshot. Then she views snapshots. There are loads of them there already because Azure Backup is enabled. In the recovery services vault, she opens Backup Items. We can see shares in there. She adds another in the same Backup wizard as usual. A backup policy is selected. We see that we can manually restore a share or a file. On a VM file server, she shows a mounted file share with files in it. She has also mounted a snapshot. Because of this method, Previous Versions in the file share can be used to view/mount snapshots.

Azure Backup is Azure Files Sync aware.

Retention up to 120 days. Storage costs are incremental. You pay per storage account being backed up.


I met with some of the Azure Backup team later in the week to discuss backup of Azure File Sync because the above system worried me. Here’s what I learned. The above system is just for the preview. The system will change when Azure File Sync goes GA:

  • Backups will be to the recovery services vault
  • Longer retention will be possible


  • AD integration and ACLs
  • Larger shares (~100 TB instead of 5 TB)
  • Azure file sync GA
  • Cross region sync of storage
  • ZRS – sync writes across three availability zones


  • Supported OS for File Sync: WS2012 R2 and WS2016. PCs are not affected because they connect to file servers.
  • Expansion of file share max capacity will roll out to all existing shares.
  • Any road map on compliance and legal hold? Bit of a woolly answer.
  • Any character/file path limits? They’re published publicly. Some characters are not supported, but they’re using telemetry to monitor that for future support. Non-compliant files are skipped, and an error is created on the server. The same happens with files that are too large.
  • You can do around 10-20 sync groups per file server … that can be lots of shares.
  • Deduplicated volumes are not supported at this time, but they plan on adding support. They are investigating using dedupe to reduce transmission and storage costs.
  • Egress charges: The Talon guy speaks up. Their customer’s egress charges are under 1% of their total bill, in the 10s or 100s of dollars range.
  • The file sync protocol is REST-based.

AzureStorageFeedback@microsoft.com for any feedback/questions.

Restore An Azure VM to an Availability Set From Azure Backup in the Azure Portal

Microsoft has shared how to restore an Azure VM to an availability set using PowerShell from Azure Backup. It’s nasty-hard looking PowerShell, and my problem with examples of VM creation using PowerShell is that they’re never feature complete.

While writing some Azure VM training recently, I stumbled across a cool option in the Azure Portal that I tried out … and it worked … and it means that I never have to figure that nasty PowerShell out.

The key to all this is to start using Managed Disks. Even if your existing VMs are using un-managed (storage account) disks, that’s not a problem because you can still use this restore method. The other thing you should remember is that the metadata of the VM is irrelevant – everything of value is in the disks.

Restore the Disks of the VM

Using these steps, you can restore the disks of your VM, managed or un-managed, to a storage location, referred to as the staging account. Each disk is restored as a blob VHD file, and a JSON file describes the disks so that you can identify which one is the “osDisk”.

Create Managed Disks from the Restored VHDs

In this process, you create a managed disk from each restored VHD or blob file in the staging location. You have the option to restore the disks as Standard (HDD) or Premium (SSD) disks, which offers you some flexibility in your restore (you can switch storage types!). Make sure you ID the osDisk from the JSON file and mark it as either a Windows or Linux OS disk, depending on the contents.

Create a VM From the OS Managed Disk

The third set of steps bring your VM back online. You use the previously restored/identified osDisk and create a new virtual machine using that managed disk. Make sure you select the availability set that you want to restore the VM to.
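As a rough alternative to the portal clicks, this step can be sketched with the az CLI. The resource names are placeholders, and you should verify the flags against your CLI version:

```shell
# Create a new VM from the restored managed OS disk, placing it into the
# target availability set. No image is specified because the OS disk
# already contains the installed operating system.
az vm create \
    --resource-group myRG \
    --name myRestoredVM \
    --attach-os-disk myRestoredVM-osdisk \
    --os-type windows \
    --availability-set myAvailSet \
    --size Standard_DS2_v2
```

Because a VM cannot join an availability set after creation, specifying the set here, at creation time, is the whole point of the disk-based restore.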

Clean Up

The last step is the clean up. If you had any data disks in the original machine then you need to re-attach them to the new virtual machine. You’ll also need to configure the network settings of the Azure NIC resource. For example, if the new VM is replacing the old one, you should enter the IP settings of the old VM into the new NIC Azure resource, change any NAT/load balancing rules, NSGs, PIPs, etc.

And that’s it! There’s no PowerShell, and it’s all pretty simple clicking in the Azure Portal that won’t take that long to do after the disks are restored from the recovery services vault.

Create an Azure Managed Disk from a VHD Blob

This post will show you how to create a managed disk from a VHD blob file, such as one you’ve uploaded or restored from a virtual machine backup. In my example, I have restored the virtual hard disks of an Azure VM to a storage account called aidanfinnrestore. I am going to create a new managed disk from the VHD blob, and (in another post) create a new VM from the managed disk that I am creating in this post.


Open the Azure Portal, and go to Disks in the navigation bar on the left – this is where all managed disks are listed. Click + Add. A Create Managed Disk blade appears. Enter the following information:

  • Name: Give the new managed disk a name. My naming standard names the disk after the VM with a suffix to denote a role. In my example, it’s an OS disk.
  • Subscription: Select the subscription in your tenant. Note that you must create the managed disk in the same subscription as the storage account that contains the blob – you can always move the disk to a different subscription later.
  • Resource Group: Restore the disk to a new or existing resource group – typically this is where the virtual machine will be.
  • Location: Pick the region of the desired VM, which must also match the storage account.
  • Account Type: What kind of managed disk do you want – Standard (HDD) or Premium (SSD). You can change this later, one of the nice features of managed disks.
  • Source Type: I have selected Storage Blob – this is how the restored (or uploaded) VHD is stored.
  • Source Blob: Click browse, and navigate to & select the VHD blob that was restored/uploaded.
  • OS Type: If this is an OS disk then select either Windows or Linux, depending on the guest OS in the VHD.
  • Size: To make life easy, select the size of the existing blob. I restored a managed disk to a blob, so I went with the original size of 128 GiB.

Once you’re happy with all the settings, click Create. In my case, with a 128 GiB VHD, the creation took just around 30 seconds.
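For completeness, here is a hedged az CLI equivalent of the portal steps above. The blob URL uses my example storage account, but the container and blob names are placeholders; check `az disk create --help` for the current flags:

```shell
# Create a managed disk from the restored/uploaded VHD blob. The disk must
# be created in the same subscription and region as the storage account.
az disk create \
    --resource-group myRG \
    --name myvm-osdisk \
    --source https://aidanfinnrestore.blob.core.windows.net/<container>/<restored-disk>.vhd \
    --sku Premium_LRS \
    --size-gb 128 \
    --os-type Windows
```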


Now you can either create a VM from the disk or attach it as a data disk to an existing VM in the Azure Portal – life is easy with managed disks!

Restore an Azure Virtual Machine’s Hard Disks

In this post, I’ll show you how to restore just the disks of an Azure virtual machine. This is useful if you want to restore a virtual machine to an availability set, or restore it as a different series/size.

Restoring to Availability Sets

For some reason that I do not know, we cannot restore a virtual machine to an availability set in Azure. It probably has something to do with the restriction in ARM that prevents a VM from being able to join an availability set after creation (vote for change here).

As a workaround, Azure Backup allows you to restore the disks, and then use those disks to create a new virtual machine (metadata) that is joined to the availability set. On the official docs pages, there is some pretty messy looking PowerShell to re-create the VM from those disks.

Thanks to some features of Managed Disks, if you have used managed disks for the VM, then you don’t need to go anywhere near that nasty PowerShell or JSON! I’ll post about that soon.

Restoring Disks

Browse to the recovery services vault, open it, go to Backup Items > Azure Virtual Machine, and select the VM in question. Below is a screenshot of my web server in Azure. Click Restore VM.


A blade with recovery points appears. Choose a restore point, i.e. a point in time from when you want to restore from, and click OK.


The Restore Configuration blade appears. Choose Restore Disks as the Restore Type, and choose a storage account as the Staging Location. Click OK to start the backup job.


Some time later, the disk(s) of the virtual machine are restored as blobs in a container in the storage account. You’ll also find a JSON file with details of the disk(s) that were restored.
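The same restore can be sketched with the az CLI instead of the portal. All names are placeholders; check `az backup restore restore-disks --help` for the flags in your CLI version:

```shell
# Restore the disks of a backed-up VM to a staging storage account. The
# VHD blobs and the JSON description file land in a container there.
az backup restore restore-disks \
    --resource-group myRG --vault-name myVault \
    --container-name myWebServerVM --item-name myWebServerVM \
    --rp-name <recovery-point-name> \
    --storage-account mystagingaccount
```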


By the way, if you cannot tell which of the VHD blobs is your OS disk, download the JSON file and open it in Notepad (VS Code refuses to open it for me). The “osDisk” setting will tell you the path of the VHD blob that was the original OS disk.

Microsoft’s solution would have you restore the virtual machine using PowerShell and that JSON file. I’ve read through it – it’s not pretty! My solution, in a later post, would create managed disks from the VHD blob(s), and then create a VM from the OS disk … and that’s nice and easy using the Azure Portal and a few mouse clicks.

StorSimple–The Answer I Thought I’d Never Give

Lately I’ve found myself recommending StorSimple for customers on a frequent basis. That’s a complete reversal since February 28th, and I’ll explain why.


Microsoft acquired StorSimple, a physical appliance that is made in Mexico by a subsidiary of Seagate called Xyratex, several years ago. This physical appliance sucked for several reasons:

  • It shared storage via iSCSI only so it didn’t fit well into a virtualization stack, especially Hyper-V which has moved more to SMB 3.0.
  • The tiering engine was as dumb as a pile of bricks, working on a first in-first out basis with no measure of access frequency.
  • This was a physical appliance, requiring more rackspace, in an era when we’re virtualizing as much as possible.
  • The cost was, in theory, zero to acquire the box, but you did require a massive enterprise agreement (large enterprise only) and there were sneaky costs (transport and import duties).
  • StorSimple wasn’t Windows, so Windows concepts were just not there.


As usual, Microsoft has Microsoft-ized StorSimple over the years. The product has improved. And thanks to Microsoft’s urge to sell more via MS partners, the biggest improvement came on March 1st.

  • Storage is shared by either SMB 3.0 or iSCSI. SMB 3.0 is the focus because you can share much larger volumes with it.
  • The tiering engine is now based on a heat map. Frequently accessed blocks are kept locally. Colder blocks are deduped, compressed, encrypted and sent to an Azure storage account, which can be cool blob storage (ultra cheap disk).
  • StorSimple is available as a virtual appliance, with up to 64 TB (hot + cold, with between 500 GB and 8 TB of that kept locally) per appliance.
  • The cost is very low …
  • … because StorSimple is available on a per-day + per GB in the cloud basis via the Microsoft Cloud Solution Provider (CSP) partner program since March 1st.

You can run a StorSimple on your Hyper-V or VMware hosts for just €3.466 (RRP) per appliance per day. The storage can be as little as €0.0085 per GB per month.

FYI, StorSimple:

  • Backs itself up automatically to the cloud with 13 years of retention.
  • Has its own patented DR system based on those backups. You drop in a new appliance, connect it to the storage in the cloud, the volume metadata is downloaded, and people/systems can start accessing the data within 2 minutes.
  • Requires 5 Mbps data per virtual appliance for normal usage.

Why Use StorSimple

It’s a simple thing really:

  • Archive: You need to store a lot of data that is not accessed very frequently. The scenarios I repeatedly encounter are CCTV and medical scans.
  • File storage: You can use a StorSimple appliance as a file server, instead of a classic Windows Server. The shares are the same – the appliance runs Windows Server – and you manage share permissions the same way. This is ideal for small businesses and branch offices.
  • Backup target: Veeam and Veritas support using StorSimple as a backup target. You get the benefit of automatically storing backups in the cloud with lots of long term retention.
  • It’s really easy to set up! Download the VHDX/VHD/VMDK, create the VM, attach the disk, configure networking, provision shares/LUNs from the Azure Portal, and just use the storage.


So if you have one of those scenarios, and the cost of storage, complexities of backup and DR are questions, then StorSimple might just be the answer.

I still can’t believe that I just wrote that!