Upcoming Webinar: 4 Important Azure IaaS features for building your Hybrid Cloud

I’m going to be presenting in a webinar by Altaro on July 18th. Azure is an interesting topic because lots of IT pros are wondering if and how they’ll use it, and how they should get started. As an IT pro, your first ventures into The Cloud will probably be infrastructure, so I’ll talk about a few topics that will hopefully get you better prepared.

Altaro has a big audience around the world, so the webinar will be run twice:

  • EU attendees: 2pm CEST
  • US attendees: 10am PDT / 1pm EDT


Azure Backup Central Reporting – Pay Attention MS Partners!

Microsoft has launched a preview of Azure Backup Reporting, a solution that exports backup data to Power BI, allowing you to consume, visualize, and subscribe to information about backups from many Recovery Services vaults across many subscriptions.

The way the system works is that you configure the Recovery Services vault to export data regularly to a storage account (which must be in the same tenant as the vault).

Configure storage account step 3
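If you prefer scripting the export step above, it can also be enabled as a diagnostics setting on the vault using Azure PowerShell. This is a sketch assuming the AzureRM modules; the resource group, vault, and storage account names are hypothetical examples.

```powershell
# Sign in to Azure (AzureRM modules)
Login-AzureRmAccount

# Hypothetical names - the storage account must be in the same tenant as the vault
$vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "rg-backup" -Name "MyVault"
$storage = Get-AzureRmStorageAccount -ResourceGroupName "rg-backup" -Name "mybackupreports"

# Enable the AzureBackupReport diagnostics category, pushing data to the storage account
Set-AzureRmDiagnosticSetting -ResourceId $vault.ID -StorageAccountId $storage.Id -Enabled $true -Categories AzureBackupReport
```

The same setting can be made in the Azure Portal under the vault’s diagnostics blade, as shown in the screenshot.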

You then sign into Power BI (a free subscription can be used, but it is limited to 1 GB of data) and import the Azure Backup content pack.

Import content pack

Data is exported as JSON files into a folder (container) in the storage account, and Power BI will consume and process that data. The timing varies with the amount of data, but Microsoft advises that it can take 24 hours for your first data sets to be consumed.

Azure Backup Reports data push frequency

The default screens show lots of useful information:

  • Summary of job health
  • Cloud storage usage
  • Quantities of instances
  • Cloud storage growth trends

Azure Backup dashboard

While the solution is not perfect yet (read more and vote here), it can be used today. Note that DPM, MABS, and Azure VM backup are not supported yet by the preview. I have set up 3 demo subscriptions (each in a different tenant, as is normal for deployments by MS partners), each with a MARS backup job. I imported the Azure Backup content pack 3 times, once for each tenant. I made a custom report for each subscription and pinned them to a single dashboard. Now I can see the results of every backup job on one screen. I can also create daily/weekly email subscriptions to each report – that means I can send these reports to my customers!


I can also publish the reports either to a web site or a private SharePoint (including Online) site – here’s an example that I did for work.


The end result is that we finally have a centralized reporting solution for Azure Backup. With one quick scroll, I can easily see the health of all of my customers’ backups.

Year 10 as an MVP – Adding The Azure Expertise

Today was a stressful day – it was the annual date of my MVP renewal. The program has changed quite a bit in the last year, and this is now the only renewal date, so you might have seen more MVPs than usual sharing their nerves online.

I was extremely nervous, especially because my profile on the MVP directory went offline. I was sure that I was a goner. But later in the day my profile re-appeared, with a change.

NewMVPStatus

To mark year 10 as a Microsoft Most Valuable Professional, I have been awarded a double expertise:

  • Cloud & Datacenter Management (Hyper-V)
  • Microsoft Azure

And a little later in the afternoon, the notification email arrived:

MVP2017Email

My eldest daughter, who is 10 years old, had noticed my stress and wanted to congratulate me. I was banished from the kitchen and later I was presented with this cake – I’m a proud Dad:

MVP10Cake

 

There are fun times ahead for IT pros. My double status, spanning on-premises virtualization and public cloud, mirrors what’s going on in many of our careers, either already or pretty soon. My career has changed so much over the years:

  • UNIX programmer
  • Have-a-go-hero Windows consultant
  • Re-inventing myself to be a better Microsoft engineer
  • Senior sysadmin in an international company
  • MVP in SCCM
  • Virtualization engineer
  • MVP in Hyper-V
  • Author
  • Technical sales
  • Writer
  • Lead on Azure IaaS
  • MVP in Azure

And now I can see somewhat of a return to development. I don’t see myself coding, but I’m heading to Ignite with the intention of spending as much time as possible learning PaaS stuff, while trying to figure out what’s happening in Windows Server 1709, Azure IaaS developments, and soooo much more!

Microsoft Azure Backup Server v2 Launched

Microsoft has launched version 2 of MABS, the Microsoft Azure Backup Server v2, with support for Windows Server 2016 and vSphere 6.5.


So far we’ve had 2 versions of MABS (v1 and v1 Update 1), the freely licensed (but you pay Azure Backup pricing) slightly modified version of System Center Data Protection Manager. MABS v1 was based on DPM 2012 R2, and MABS v2 is based on DPM 2016, inheriting the cool features of DPM 2016:

  • Modern Storage, which improves performance and reduces consumption by leveraging ReFS Block Cloning, VHDX, and Deduplication.
  • Improved Hyper-V backup, supporting WS2016 hosts and using the built-in (WS2016 Hyper-V) Resilient Change Tracking (RCT) for incremental backups, without third-party software being placed into the kernel of the host’s management OS.
  • Support for Shielded Virtual Machines, the ultra-secure platform on WS2016 Hyper-V.
  • Support for Storage Spaces Direct (S2D).
  • The ability to install MABS v2 on WS2016.

MABS v1 Update 1 added support for VMware vCenter & ESXi 5.5 and 6.0. MABS v2 adds vCenter & ESXi 6.5 to the list. Note that if you install MABS v2 on WS2016 then VMware protection will be in preview mode, while we wait for VMware to release support for VDDK 6.5 on WS2016. You can learn more from this video.

You can download MABS v2 from here or from a recovery services vault in the Azure Portal.

The supported backup server configuration is:

  • Operating system: Windows Server 2012 R2 or Windows Server 2016
  • Processor: Minimum: 1 GHz dual-core CPU. Recommended: 2.33 GHz quad-core CPU
  • RAM: Minimum: 4 GB. Recommended: 8 GB
  • Hard drive space (program files): Minimum: 3 GB. Recommended: 3 GB
  • Disks for backup storage pool: 1.5 times the size of the data to be protected

Microsoft Azure Backup MARS Agent Supports System State

Microsoft has announced that the Azure Backup MARS agent will support the protection of System State on Windows Server. This is a preview release.

I started talking about Azure Backup 3 years ago, and one of the “we’re not doing it” questions was “does it back up System State?”. The answer was no. The Azure Backup team listened, and now you can back up your System State to Azure using the MARS agent.

Scenarios discussed by the Azure Backup team include Active Directory, file server configurations, and IIS server configurations, where restoring files & folders is not enough; the metadata that makes those files & folders useful is stored in System State, so the ability to restore that metadata is also important.

Supported versions of Windows Server in this preview release are:

  • WS2008 R2
  • WS2012
  • WS2012 R2
  • WS2016

Do you want support for Windows Server 2003? Let me sell you some Ace of Base and Vanilla Ice cassettes!

This is good news, a part of the continuous improvement of Azure Backup driven by your feedback.

StorSimple–The Answer I Thought I’d Never Give

Lately I’ve found myself recommending StorSimple for customers on a frequent basis. That’s a complete reversal since February 28th, and I’ll explain why.

StorSimple

Microsoft acquired StorSimple, a physical appliance that is made in Mexico by a subsidiary of Seagate called Xyratex, several years ago. This physical appliance sucked for several reasons:

  • It shared storage via iSCSI only so it didn’t fit well into a virtualization stack, especially Hyper-V which has moved more to SMB 3.0.
  • The tiering engine was as dumb as a pile of bricks, working on a first-in, first-out basis with no measure of access frequency.
  • This was a physical appliance, requiring more rackspace, in an era when we’re virtualizing as much as possible.
  • The cost was, in theory, zero to acquire the box, but you did require a massive enterprise agreement (large enterprises only) and there were sneaky costs (transport and import duties).
  • StorSimple wasn’t Windows, so Windows concepts were just not there.

Improvements

As usual, Microsoft has Microsoft-ized StorSimple over the years. The product has improved. And thanks to Microsoft’s urge to sell more via MS partners, the biggest improvement came on March 1st.

  • Storage is shared by either SMB 3.0 or iSCSI. SMB 3.0 is the focus because you can share much larger volumes with it.
  • The tiering engine is now based on a heat map. Frequently accessed blocks are kept locally. Colder blocks are deduped, compressed, encrypted and sent to an Azure storage account, which can be cool blob storage (ultra cheap disk).
  • StorSimple is available as a virtual appliance, with up to 64 TB (hot + cold, with between 500 GB and 8 TB of that kept locally) per appliance.
  • The cost is very low …
  • … because StorSimple is available on a per-day + per GB in the cloud basis via the Microsoft Cloud Solution Provider (CSP) partner program since March 1st.

You can run a StorSimple on your Hyper-V or VMware hosts for just €3.466 (RRP) per appliance per day. The storage can be as little as €0.0085 per GB per month.

FYI, StorSimple:

  • Backs itself up automatically to the cloud with 13 years of retention.
  • Has its own patented DR system based on those backups. You drop in a new appliance, connect it to the storage in the cloud, the volume metadata is downloaded, and people/systems can start accessing the data within 2 minutes.
  • Requires 5 Mbps of bandwidth per virtual appliance for normal usage.

Why Use StorSimple

It’s a simple thing really:

  • Archive: You need to store a lot of data that is not accessed very frequently. The scenarios I repeatedly encounter are CCTV and medical scans.
  • File storage: You can use a StorSimple appliance as a file server, instead of a classic Windows Server. The shares are the same – the appliance runs Windows Server – and you manage share permissions the same way. This is ideal for small businesses and branch offices.
  • Backup target: Veeam and Veritas support using StorSimple as a backup target. You get the benefit of automatically storing backups in the cloud with lots of long-term retention.
  • It’s really easy to set up! Download the VHDX/VHD/VMDK, create the VM, attach the disk, configure networking, provision shares/LUNs from the Azure Portal, and just use the storage.

 

So if you have one of those scenarios, and the cost of storage or the complexities of backup and DR are concerns, then StorSimple might just be the answer.

I still can’t believe that I just wrote that!

Speaking At European SharePoint, Office 365 & Azure Conference 2017

I will be speaking at this year’s European SharePoint, Office 365, and Azure Conference, which is being held in the National Conference Center in Dublin from 13-16 November. I’ll be talking about Azure Site Recovery (ASR):


It’s a huge event with lots of tracks, content and speakers from around the world.

 

For those of you in Ireland, this is a rare opportunity to attend a Microsoft-focused conference of such a scale.

My Experience at Cloud & Datacenter Conference Germany

Last week I was in Munich for the Cloud & Datacenter Germany conference. I landed in Munich on Wednesday for a pre-conference Hyper-V community event, and 2 hours later I was talking to a packed room of over 100 people about implementing Azure Site Recovery with Windows Server 2016 Hyper-V. This talk was very different to my usual “When Disaster Strikes” talk; I wanted to do something different, so instead of an hour of PowerPoint I had 11 slides, half of which were the usual title and who-I-am slides. Most of my time was spent doing live demos and whiteboarding using Windows 10 Ink on my Surface Book.


Photo credit: Carsten Rachfahl (@hypervserver)

On Friday I took the stage to do my piece for the conference, and I presented my Hidden Treasures in Windows Server 2016 Hyper-V talk. This was slightly evolved from what I did last month in Amsterdam – I chopped out lots of redundant PowerPoint and spent more time on live demos. As usual with this talk, which I’d previously done on WS2012 R2 for TechEd Europe 2014 and Ignite 2015, I ran all of my demos using PowerShell scripts.


Photo credit: Benedikt Gasch (@BenediktGasch)

 

One of the great things about attending these events is that I get to meet up with some of my Hyper-V MVP friends. It was great to sit down for dinner with them, and a few of us were still around for a quieter dinner on the Friday night. Below you can see me hanging out with Tudy Damian, Carsten Rachfahl, Ben Armstrong (Virtual PC Guy), and Didier Van Hoye.


As expected, CDC Germany was an awesome event with lots of great speakers sharing knowledge over 2 days. Plans have already started for the next event, so if you speak German and want to stay up to speed with Hyper-V, private & public cloud in the Microsoft world, then make sure you follow the news on https://www.cdc-germany.de/

Template For ASR Recovery Plan Runbook

I’m sharing a template PowerShell-based Azure Automation runbook that I’ve written, to enable advanced automation in the Recovery Plans that can be used in Azure Site Recovery (ASR) failover.

ASR allows us to orchestrate the failover and failback of virtual machines. This can be a basic ordering of virtual machine power-up and power-down actions, but we can also inject runbooks from Azure Automation. This allows us to do virtually anything inside of Azure during a failover or failback.

I needed to write a runbook for ASR recently and I found that I needed this runbook to be able to work differently in different scenarios:

  • Planned failover
  • Unplanned failover
  • Test failover
  • Cleanup after a test failover

That last one is tricky, but I’ll come back to it. The key to the magic is a parameter in the runbook called $RecoveryPlanContext. When ASR executes a runbook, it passes some information into the script via this parameter. The parameter has multiple attributes, as documented by Microsoft here. Some of the interesting values we can query are:

  • FailoverType: test, unplanned, or planned.
  • FailoverDirection: whether the destination is primary or secondary.
  • VmMap: An array listing the VMs included in the recovery plan.
  • ResourceGroupName: The name of the resource group that the failed-over VMs are in.
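For example, a runbook can discover which VMs are in the plan by walking VmMap. This is a sketch based on the documented shape of $RecoveryPlanContext; VmMap is an object whose properties are keyed by VM identifier, and the RoleName/ResourceGroupName property names below follow Microsoft’s documentation, so treat them as assumptions for your scenario.

```powershell
# VmMap's properties are keyed by VM identifier, so enumerate them first
$vmMap = $RecoveryPlanContext.VmMap
$vmIDs = $vmMap | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name

foreach ($vmID in $vmIDs)
{
    # Look up each VM's entry and report its name and resource group
    $vm = $vmMap.$vmID
    Write-Output ("Recovery plan includes VM " + $vm.RoleName + " in resource group " + $vm.ResourceGroupName)
}
```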

My template currently only queries for FailoverType, because my goal was to automate failover to Azure. However, there is as much value in dealing with FailoverDirection, because the runbook can be used to orchestrate a failback.

Let’s assume that I use a runbook to start up some machine(s) during a failover. Such a machine could be a jump box, enabling remote access into that machine, and from there we can jump to the failed-over machines. That would be useful in all kinds of failover, including a test failover. But what happens if I run a test, am happy with everything, and run a cleanup task? Guess what … our recovery plan does not execute in reverse order, and the runbook is not executed. So what I’ve come up with is a way to tell the runbook that I want to do a cleanup. The runbook will typically be executed before the cleanup (some tasks must be done before VMs are removed to make scripting easier, e.g. finding the public IP addresses used by VMs before the VMs and their NICs are deleted). The runbook still expects a parameter; when you start it manually you are prompted to enter a value, so I enter “cleanup”. Once my script sees that value, it runs a cleanup function.

My template has 4 functions, one for each of the above 4 scenarios. An if statement looks for cleanup; if that’s not the value, a switch statement checks $RecoveryPlanContext.FailoverType to see what the scenario is. If an unexpected value is found, the runbook exits.

param (
    [Object]$RecoveryPlanContext
)

function DoTestCleanup
{
    Write-Output "This is a cleanup after a test failover"
    # Do stuff here
}

function DoTestFailover
{
    Write-Output "This is a test failover"
    # Do stuff here
}

function DoPlannedFailover
{
    Write-Output "This is a planned failover"
    # Do stuff here
}

function DoUnplannedFailover
{
    Write-Output "This is an unplanned failover"
    # Do stuff here
}

# The runbook starts here
Start-Sleep -Seconds 10
$connectionName = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

    "Logging in to Azure using $connectionName ..."
    Add-AzureRmAccount -ServicePrincipal -TenantId $servicePrincipalConnection.TenantId -ApplicationId $servicePrincipalConnection.ApplicationId -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch
{
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    }
    else
    {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

Write-Output "The failover type parameter is $RecoveryPlanContext"

if ($RecoveryPlanContext -eq 'cleanup')
{
    DoTestCleanup
}
else
{
    switch ($RecoveryPlanContext.FailoverType)
    {
        "Test"      { DoTestFailover }
        "Planned"   { DoPlannedFailover }
        "Unplanned" { DoUnplannedFailover }
        default     { Write-Output "Runbook aborted because no valid failover type was specified" }
    }
}
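For the cleanup scenario, the runbook can be started manually from the Azure Portal (entering “cleanup” at the parameter prompt) or from PowerShell. This is a sketch assuming the AzureRM.Automation module; the resource group, automation account, and runbook names are hypothetical examples.

```powershell
# Start the runbook manually, passing "cleanup" as the $RecoveryPlanContext
# parameter so the script runs its DoTestCleanup function
Start-AzureRmAutomationRunbook `
    -ResourceGroupName "rg-automation" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "ASRRecoveryPlanRunbook" `
    -Parameters @{ RecoveryPlanContext = "cleanup" }
```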