Webinar Recording – Clustering for the Small/Medium Enterprise & Branch Office

I recently did another webinar for work, this time focusing on how to deploy an affordable Hyper-V cluster in a small-medium business or a remote/branch office. The solution is based on Cluster-in-a-Box hardware and Windows Server 2012 R2 Hyper-V and Storage Spaces. Yes, it reduces costs, but it also simplifies the solution, speeds up deployment times, and improves performance. Sounds like a win-win-win-win offering!


We have shared the recording of the webinar on the MicroWarehouse site, and that page also includes the slides and some additional reading & viewing.

The next webinar has been scheduled: on August 25th at 2PM UK/Irish time (there is a calendar link on the page) I will be doing a session on what’s new in WS2016 Hyper-V, and I’ll be doing some live demos. Join us even if you don’t want to learn anything about Windows Server 2016 Hyper-V, because it’s live demos using a Technical Preview build … it’s bound to all blow up in my face.

Webinar – What’s New in Windows Server 2016 Hyper-V

We just wrapped up delivering our latest webcast, which will be shared on the MicroWarehouse site in the next few days, along with the deck and digital handout. And we’ve already scheduled the next webinar, which will be the first in a series of webinars focusing on Windows Server 2016 ahead of the launch at Microsoft Ignite on 26th of September – probably followed up by being on the “new sales” price list on October 1st.

The first WS2016 webinar will focus on Hyper-V – further sessions will be scheduled on storage, clustering, networking, etc. And I’m going to be doubly brave. I’m going to do demos based on a technical preview release (what an idiot!), and I’m going to do them live in the webinar (what a moron!). Hey – it’s all fun, right?!?!?

So come on and join us on August 25th at 14:00 (UK/Ireland), 15:00 CET and 09:00 Eastern, to see if it all blows up in my face and maybe learn something new about where virtualization is going in this era of cloud computing.


Register here and download the calendar reminder.

KB3172614 To Replace/Fix Hyper-V Installations Broken By KB3161606

Microsoft released a new update rollup to replace the very broken and costly (our time = our money) June rollup, KB3161606. These issues affected Hyper-V on Windows 8.1 and Windows Server 2012 R2 (WS2012 R2).

It’s sad that I have to write this post, but, unfortunately, untested updates are still being released by Microsoft. This is why I advise that updates are delayed by 2 months.

In the case of the issues in the June 2016 update rollup, the fixes are going to require human effort … customers’ human effort … and that means customers are paying for issues caused by a supplier. I’ll let you judge what you think of that (feel free to comment below).

A month after news of the issues in the update became known (the update rollup was already in the wild for a week or two), Microsoft has issued a superseding update that will fix the issues. At the same time, they finally publicly acknowledged the issues in the June update:


So it took 1.5 months, from the initial release, for Microsoft to get this update right. That’s why I advise a 2 month delay on approving/deploying updates, and I continue to do so.

What Does Microsoft Need To Fix?

  • Change the way updates are created/packaged. This problem has been going on for years. Support are not good at this stuff, and it needs to move into the product groups.
  • Microsoft has successfully reacted to market pressure by placing a special emphasis on change, e.g. The Internet, secure coding, The Cloud. Satya Nadella needs to do the same for quality assurance (QA), something that I learned in software engineering classes was as important as the code. I get that edge scenarios are hard to test, but installing/upgrading ICs in a Hyper-V guest OS is hardly a rare situation.
  • Start communicating. Put your hands up publicly, and say “mea culpa”, show what went wrong and follow it up with progress reports on the fix.


Webinar – Affordable Hyper-V Clustering for the Small/Medium Enterprise & Branch Office

I will be presenting another MicroWarehouse webinar on August 4th at 2PM (UK/Ireland), 3 PM (central Europe) and 9AM (Eastern). The topic of the next webinar is how to make highly available Hyper-V clusters affordable for SMEs and large enterprise branch offices. I’ll talk about the benefits of the solution, and then delve into what you get from this hardware + software offering, which includes better up-time, more affordability, and better performance than the SAN that you might have priced from HPE or Dell.


Interested? Then make sure that you register for our webinar.

Moving Classic Azure VMs To A Different CSP / ARM Subscription Using MigAz

This post will show you how to migrate a cloud service-based virtual machine deployment from Classic Azure (Service Management or SM) to a different Azure subscription as an Azure Resource Manager (ARM) deployment. One example might be where you want to move virtual machines from a Direct/MOSP (credit card, trial, MSDN), Open, or EA subscription to a Cloud Solution Provider (CSP) subscription.

My focus is on migrating to CSP, but you can use this process to move VMs into ARM in any different subscription. Note that Microsoft has an official solution for migrating classic machines into ARM in the same subscription, which can feature zero downtime if you have used classic VNETs.

The Old Deployment

I have deployed a collection of virtual machines in a legacy style subscription. It’s a pretty classic deployment that was managed via the classic portal at https://manage.windowsazure.com. The virtual machines are stored on a single standard LRS storage account, they are connected to a VNet, and a cloud service is used to NAT (endpoints) the virtual machines.


One of the machines has endpoints for SMTP, another has endpoints for HTTP and HTTPS, and all of the machines have the usual RDP and remote management endpoints.



If you browse this deployment in the newer Azure Portal at https://portal.azure.com you’ll see that it’s deployed in resource groups, but the classic portal has no understanding of these groups, so there’s actually a messy collection of 3 default groups.


Migration Strategy

I’ve decided that I’m going to move these resources to my new CSP subscription using the free (unsupported) migAz toolset. I have data transactions happening on some of my machines, so I’m worried that the disk copy will leave me with data loss after a switchover. So here’s my plan:

  1. I will leave my original system running, and let users continue to use the old system during the migration.
  2. The new CSP deployment will be on a different network address.
  3. After the copy, I will create a VNet-to-VNet connection (requires a dynamic/route-based gateway, which might be incompatible with your on-premises VPN device) between the non-CSP and the CSP deployments.
  4. I will use tools like RoboCopy and SQL sync to keep the newer system updated while I test the new system.
  5. I will switch users over to the new system when I am happy with it and can schedule a very brief maintenance window.
  6. I will remove the old deployment after I am satisfied that the migration worked.

Otherwise I could schedule a maintenance window, shut down the older deployment, and do the migration/copy, and redirect users to the new deployment as quickly as I can.

Note that my cloud service has a reserved IP address, but I cannot bring that IP address with me to the CSP subscription. At some point, I am going to have to redirect users to a new static public IP address that is assigned to an ARM load balancer – probably by changing public DNS records. Any ExpressRoute/VPN connections will also have to be rebuilt to connect to a new gateway – I will have to manually deploy the gateway.


First things first: document your deployment and see if you can find anything that isn’t compatible with ARM or that you might need to re-create afterwards. We don’t have a way to migrate an Azure Backup vault at the moment, so document your Azure VM backup policies so that you can recreate them in the CSP subscription using a recovery services vault.

Next, you need to get some tools onto your PC: the latest Azure PowerShell module, and the migAz toolset (a zip file downloaded from GitHub).

Time to start migrating!

Export ARM Template

The migAz tool creates an ARM template (JSON file) that describes how your non-ARM deployment would look if it was deployed in ARM (or CSP). This includes converting a cloud service into a load balancer, and converting endpoints and load balanced endpoints into NAT rules and load balancing rules (it really is quite clever). We can modify this file (optional). Then we import the file into CSP to create the machines, the networking components, and (importantly) the storage account – the disks aren’t copied yet, but we’ll do that later.

Browse to wherever you extracted migAz and run migAz.exe. Then:

  1. Log into your old subscription using suitable admin credentials.
  2. Select the subscription that you want to migrate from.
  3. You can click Options to tweak the export.
  4. Select the virtual network(s), storage account(s), and virtual machine(s) that you want to migrate.
  5. Enter an output folder where you want to store the created JSON files.
  6. Click Export.


The JSON Files

It takes a few minutes for migAz to interrogate your old subscription to build up 2 JSON files:

  • CopyBlobDetails.json: This file contains details of the virtual hard disks that must be copied to the CSP subscription. This includes the source URIs and the storage access keys – so keep this file safe because anyone can use these details to download the disks!
  • Export.json: This file is the meat of the export, containing the template that will be used to redeploy diskless machines with all of their ARM dependencies.


We’ll return to CopyBlobDetails.json later on, so let’s focus on Export.json. If you open this file you’ll find it describes everything that will be created in ARM when you import it into your CSP subscription. You can edit this file to make changes. Maybe you want to tweak NAT rules or add machines. I want to make a few changes to my JSON file. Everything that follows in this section is optional!

Before you go anywhere near an editor, copy the two JSON files to allow you to undo edits and to have a reference to the original configuration.

When I browsed the file I noticed that the load balancer was going to be assigned a dynamic public IP address resource. I want a static IP address for external access and simple public DNS management. I also noticed that the name of the IP address will break my desired naming standard and that I want to change the domainNameLabel.


So I will edit the file and make two changes to the publicIPAddresses resource:
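For illustration, the edited resource ends up looking something like this (the resource name ip-mig1 and the DNS label are my own choices, not anything migAz generates; the property names follow the standard ARM schema for Microsoft.Network/publicIPAddresses):

```json
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Network/publicIPAddresses",
  "name": "ip-mig1",
  "location": "[resourceGroup().location]",
  "properties": {
    "publicIPAllocationMethod": "Static",
    "dnsSettings": {
      "domainNameLabel": "mig1"
    }
  }
}
```

The key change is publicIPAllocationMethod, switched from Dynamic to Static so that the address survives deallocation and public DNS records stay valid.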


While I’m at it, I’m also going to rename the load balancer (under loadBalancers). Note that I also need to change the dependencies to match the new name of the public IP address:
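After my rename, the relevant part of the loadBalancers resource looked roughly like this (ip-mig1 is my renamed public IP address; migAz may emit the dependency in a different concat form, so match whatever is in your own file):

```json
{
  "type": "Microsoft.Network/loadBalancers",
  "name": "lb-mig1",
  "dependsOn": [
    "[resourceId('Microsoft.Network/publicIPAddresses', 'ip-mig1')]"
  ]
}
```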


There are loads of references (load balancing and NAT rules) to the name of the load balancer.

You need to update these references. The easy way is to do a search and replace. My old references were loadBalancers/cs-mig1 so I replaced them with loadBalancers/lb-mig1 to match the new name of the load balancer (above).
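If you prefer to script the replace rather than use an editor, a quick PowerShell one-liner does the job (the path and names here match my example, so adjust them to yours):

```powershell
# Replace every reference to the old load balancer name in the template
(Get-Content "C:\Temp\cs-mig1\export.json" -Raw) `
    -replace 'loadBalancers/cs-mig1', 'loadBalancers/lb-mig1' |
    Set-Content "C:\Temp\cs-mig1\export.json"
```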


A load balancer requires an availability set so I’m renaming the new AV set to match my new naming standards:


There are loads of dependencies on this availability set, so do a find/replace to update those dependencies with the new name.

One possible gotcha is that the storage account won’t have a globally unique name (required). The options of migAz are configured by default to take the original storage account name and add a v2 to it for the ARM deployment. Make sure that this will still be unique. If it’s not, then you can edit the JSON file. You could also opt to change the resiliency level. Make sure that you edit CopyBlobDetails.json to make the same change.
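If you want to verify the proposed name before running the import, the AzureRM module has a cmdlet for exactly this (the storage account name below is illustrative):

```powershell
# NameAvailable will be False if the name is already taken globally
Get-AzureRmStorageAccountNameAvailability -Name "mystoragev2"
```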


I mentioned earlier that one of my plans was to change the network address of my deployment so that I could connect the non-ARM and the CSP deployments together to enable data synchronization before the production switchover. My old network uses the 10.0.0.x address space; I want the new network to use 10.1.0.x, because non-overlapping address spaces will allow routing between the two VNETs if I create a VNET-to-VNET VPN. I will also need to update my subnet(s) and any DNS servers that are on the VNET.


My changes are:
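The edited virtual network resource ends up something like this (I’m assuming a /16 address space with a single /24 subnet for illustration; your prefixes and names will differ):

```json
{
  "type": "Microsoft.Network/virtualNetworks",
  "name": "vnet-mig1",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [ "10.1.0.0/16" ]
    },
    "subnets": [
      {
        "name": "subnet-1",
        "properties": { "addressPrefix": "10.1.0.0/24" }
      }
    ]
  }
}
```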


All of my machines have reserved IP addresses, so I’m going to do a find/replace to change 10.0.0 to 10.1.0.


My naming stuff is almost all completely fixed up. Almost. What’s left? The virtual hard disks in the new CSP deployment are all going to be named after the original cloud service. My cloud service was called cs-mig1. I can see that the disks are called cs-mig1*.vhd.


I am going to change the names to match the name of my new resource group (which I will manually create later):
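In export.json each virtual machine’s disk is referenced by a blob URI, so the renamed entries look something like this (the storage account and disk names here are illustrative, and migAz may structure the URI differently in your export):

```json
"osDisk": {
  "name": "osdisk",
  "vhd": {
    "uri": "https://mystoragev2.blob.core.windows.net/vhds/rg-mig1-vm1-osdisk.vhd"
  }
}
```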


But that’s not enough for the disks. You will also need to edit CopyBlobDetails.json because that file contains instructions on how to name the virtual hard disks’ blobs when they are copied to the new CSP subscription.


Tweak the names to match your changes in export.json.


Now when I search export.json for the old cloud service name (cs-mig1) there are no more references to the cloud service, and I have configured my preferred ARM naming standard for every resource (prefix-name-optional number).

Create the ARM Deployment

Now the fun begins! Launch your Azure PowerShell window and sign into your CSP / ARM subscription using:
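The sign-in cmdlet below assumes the AzureRM PowerShell module that was current at the time of writing:

```powershell
# Sign in with an account that has admin rights on the CSP subscription
Login-AzureRmAccount
```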


View the subscriptions that your account has access to:
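Assuming the same AzureRM module:

```powershell
# Lists the name, ID, and tenant of each subscription the account can see
Get-AzureRmSubscription
```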


Copy the ID of the subscription that you want to deploy the VMs into, and run:

Select-AzureRMSubscription -SubscriptionID xxxxxxxx-yyyy-zzzz-aaaa-bbbbbbbbbbbb

You should then create a new resource group in the Azure region of your choice. My naming standard will have me create a group called rg-mig1, and I’ll create it in Dublin.

New-AzureRmResourceGroup -Location NorthEurope -Name "rg-mig1"

Now is the moment of truth. I am going to import my (heavily modified) export.json file into the CSP subscription to create all of my virtual machines and their dependencies.

New-AzureRmResourceGroupDeployment -Name "rg-mig1" -ResourceGroupName "rg-mig1" -TemplateFile "C:\Temp\cs-mig1\export.json" -Verbose

Note that the disks have not been copied yet, so there will be a bunch of errors at the end of this import. The errors refer to missing virtual hard disks.

Unable to find VHD blob with URI

We will fix those errors later.


Copy Virtual Hard Disks

Browse (in PowerShell) to where you extracted the migAz zip file. You are going to run a script called BlobCopy.ps1, and point it at CopyBlobDetails.json. This script will create a snapshot of the disks in the source subscription, and copy the disks (using the Azure network) directly to the new storage account in the CSP/ARM subscription.

.\BlobCopy.ps1 -ResourcegroupName "rg-mig1" -DetailsFilePath "C:\Temp\cs-mig1\copyblobdetails.json" -StartType StartBlobCopy

You can track the progress of the copy using:

.\BlobCopy.ps1 -ResourcegroupName "rg-mig1" -DetailsFilePath "C:\Temp\cs-mig1\copyblobdetails.json" -StartType MonitorBlobCopy


If you paid attention, you might have noticed that CopyBlobDetails.json had fields for tracking the copy. You can get a bunch of information from that file about each of the disk copy operations.


Fix Up Virtual Machines

The previous creation of the virtual machines had disk-related errors. The disks are in place now, so we can re-run the import to fix up the machines.

New-AzureRmResourceGroupDeployment -Name "rg-mig1" -ResourceGroupName "rg-mig1" -TemplateFile "C:\Temp\cs-mig1\export.json" -Verbose


Verify the CSP/ARM Deployment

You should find that your virtual machines are now running in the ARM / CSP subscription. Note how everything is in the single rg-mig1 resource group and has my preferred naming standard:


The load balancer is configured with a public IP address with static configuration:


The inbound NAT rules have been copied over:


And the network has a new network address as I required to enable a VNET-to-VNET connection with the original deployment.



The migAz tool creates some log files in %USERPROFILE%\appdata\Local. Look for migAz-<YYYYMMDD>.log and migAz-XML-<YYYYMMDD>.log.

If you have issues during the import of the export.json then you need to pay attention to the errors in the PowerShell screen and manually troubleshoot the export file. In my case, my heavily edited export.json had a typo in one of the renamed virtual hard disks so it didn’t match what was copied (details in CopyBlobDetails.json). The fix was easy:

  1. The error was clear that the specified disk (with the wrong name) didn’t exist.
  2. I corrected the JSON file.
  3. I removed the new virtual machine from the CSP subscription.
  4. I re-ran the import, which re-created that machine and attached the disk (no duplicates of existing resources are created).


So what’s next?

  1. Re-deploy Azure Backup using the recovery services vault to protect my VM workloads.
  2. Deploy a gateway subnet and gateway.
  3. Create a VNET-to-VNET VPN with the old deployment to allow data synchronization.
  4. Test the new deployment.
  5. Schedule a maintenance window to switch production over to the new deployment in CSP.
  6. Change DNS, etc, to redirect users to the CSP deployment.
  7. Optionally reverse data synchronization.
  8. Remove the old non-CSP deployment after a suitable waiting period, and remove all inter-VNET comms.


If you want migAz to be easy, then it can be – just don’t modify the json files unless your new storage account name won’t be globally unique. It’s actually a pretty simple process:

  1. Export
  2. Import
  3. Copy disks
  4. Import (fix up)

The only complexity in my migration was caused by my desire to implement naming standards across all of my ARM resources.

The migAz toolset might not be supported, but it is the only practical way to migrate existing classic virtual machine workloads into ARM in a different subscription. It works pretty well, so I’m happy to use and recommend it.


Choosing A Strategy To Migrate Azure VMs to CSP

This post is intended to help you understand how you can migrate your classic (Service Management or SM) Azure virtual machines from an old Azure subscription to a new CSP subscription where the only API available to you is Azure Resource Manager (ARM).

Official Migration Options From Microsoft

This will be a short paragraph. There are no official migration paths to Azure in CSP. The official text that CSP resellers get from Microsoft is nearly as short!

What Options Have You?

Let’s start with the painful options:

  1. You do nothing and leave machines in an old subscription. You can migrate them to ARM within the old subscription using the official migration solution from Microsoft, but it means that you cannot avail of the customer/partner benefits of CSP.
  2. Rebuild the VMs in CSP and migrate your data (using application features), maybe over a VNET to VNET VPN. Eeeek! There’s a lot of work, but you can get into CSP with your data in sync.

And then there’s what I want to talk about: migAz, a solution that a Microsoft employee (still not supported) has shared on GitHub.

The migAz toolset will:

  1. Record your old deployment as what it would look like in ARM using a JSON file.
  2. Create a listing of disks to move in a second JSON file.
  3. Allow you to create the VMs and their dependencies in the CSP subscription.
  4. Create a one-time snapshot of the original disks and copy them (inside of the Azure network) to a new storage account in the CSP subscription.
  5. Fix up the ARM deployment and start your VMs in CSP.
  6. Then you can redirect users to your CSP deployment.

Downtime Versus Data

The disk copy is done using a one-time snapshot. So consider this:

  1. Users are using your services and making data changes.
  2. You copy the disks from old services, which are still running, to the CSP subscription.
  3. Users are continuing to use the old services and making data changes.
  4. You switch users over to the CSP deployment.

That means data changes made after the snapshot in step 2 (i.e., during steps 3 and 4) have been lost. So you have to make a choice from the below options:

  • Switch off the virtual machines in the old subscription before the move.
  • Don’t use migAz with data machines. Find another method.
  • Leave all your machines running while copying with migAz. Deploy the CSP solution with a different network address. Connect the old deployment with the CSP one, maybe using VNET-to-VNET VPN, and use application sync features to keep data synchronized from the old system to the CSP one. Perform a switchover at a time of your choosing.

I’ll show you how to use migAz in a later post.


RunAsRadio Podcast – Hyper-V in Server 2016

I recently recorded an episode of the RunAsRadio podcast with Richard Campbell on the topic of Windows Server 2016 (WS2016) Hyper-V. We covered a number of areas, including containers, nested virtualization, networking, security, and PowerShell.


New F-Series Virtual Machines in Azure

Last week, Microsoft announced a new series of virtual machines called the F-Series. There’s quite a bit in this announcement.

New Sizing

One of the things that has wrecked my head in Azure is that the virtual machines had unusual memory sizes:

  • 1.75 GB RAM
  • 3.5 GB RAM
  • 7 GB RAM
  • 14 GB RAM
  • etc

And someone will ask for pricing assistance with a request for machines with 8 GB RAM … OK … do you want 7 GB or 14 GB? Azure is McDonalds, not a Michelin star restaurant, so you get what’s on the menu, not what you fancy.


Other pieces of the sizing fall in line. So for example:

The F2 has:

  • 2 cores
  • 4 GB RAM (2x cores)
  • Up to 4 data disks (2x cores)

As you go up the size chart, the same pattern emerges. An F16 has:

  • 16 cores
  • 32 GB RAM (2x cores)
  • Up to 32 data disks (2x cores)

This should make sizing easier.

Note that the processor is the same 2.4-GHz Intel Xeon E5-2673 v3 (up to 3.1 GHz with Intel Turbo Boost Technology 2.0) as in the Dv2-Series, but at a lower price per core.

New Naming Standard

While Microsoft is simplifying the sizing, they have decided to change the naming standard to match the sizes. In the past we had:

  • Standard A1
  • Standard A2
  • Standard A3
  • Standard A4

The name was nothing but a label that had no correlation to either the spec or the price – in some cases, there was a drop in price as you moved up the “sizes” (see A4 to A5 or D4 to D11).

The name of the F-Series is tied to the number of cores in the machine. So, an F1 machine has 1 core. An F16 machine has 16 cores.

Before, we showed special features, such as the use of Premium Storage (S is for SSD), by adding a letter to the series of the machine. For example, a D4 virtual machine could be deployed as a DS4 virtual machine.

Starting with the F-Series, any special features are shown by adding a letter to the end of the spec. So, an F4 might be deployed as an F4s.


The F-Series is pretty widely available right now, through Azure V1 and Azure V2. Note that I am seeing some glitches with the displayed pricing in the Azure Portal (via Open). Please get your direct/Open pricing from the official site.



Webinar – What’s New In Windows Server 2016 Hyper-V

I’ll be joining fellow Cloud and Datacenter Management (Hyper-V) MVP Andy Syrewicze for a webcast by Altaro on June 14th at 3PM UK/Irish time, 4PM CET, and 10AM Eastern. The topic: What’s new in Windows Server 2016 Hyper-V (and related technologies). There’s quite a bit to cover in this new OS that we expect to be released around Microsoft Ignite 2016. I hope to see you there!


Cloud & Datacenter Management 2016 Videos

I recently spoke at the excellent Cloud and Datacenter Management conference in Dusseldorf, Germany. There were 5 tracks full of expert speakers from around Europe, and a few Microsoft US people, talking Windows Server 2016, Azure, System Center, Office 365 and more. Most of the sessions were in German, but many of the speakers (like me, Ben Armstrong, Matt McSpirit, Damian Flynn, Didier Van Hoye and more) were international and presented in English.


You can find my session, Azure Backup – Microsoft’s Best Kept Secret, and all of the other videos on Channel 9.

Note: Azure Backup Server does have a cost for local backup that is not sent to Azure. You are charged for the instance being protected, but there is no storage charge if you don’t send anything to Azure.