Generation 2 Virtual Machines Make Their First Public Appearance in Microsoft Azure

Microsoft has revealed that the new preview series of confidential computing virtual machines, the DC-Series, which went into public preview overnight, is based on Generation 2 (Gen 2) Hyper-V virtual machines. This is the first time that a non-Generation 1 (Gen 1) VM has been available in Azure.

Note that ASR allows you to migrate/replicate Generation 2 machines into Azure by converting them into Generation 1 at the time of failover.

These confidential compute VMs use Intel Software Guard Extensions (SGX) in the processor to provide secure enclaves that isolate the processing of sensitive data.

The creation process for a DC-Series machine is a little different from usual – you have to search for Confidential Compute VM Deployment in the Marketplace and then work through a (legacy blade-based) customised deployment that is not as complete as a normal virtual machine deployment. In the end, a machine appears.

I’ve taken a screenshot from a normal Azure VM, showing Device Manager in Windows Server 2016 with the OS disk visible.

[Screenshot: Device Manager on a normal (Generation 1) Azure VM]

Note that both the OS disk and the Temp Drive are IDE drives on a Virtual HD ATA controller. This is typical of a Generation 1 virtual machine. Also note the IDE/ATA controller in the device listing.

Now have a look at a DC-Series machine:

[Screenshot: Device Manager on a DC-Series (Generation 2) Azure VM]

Note how the OS disk and the Temp Drive are listed as Microsoft Virtual Disk on SCSI controllers? Ah – definitely a Generation 2 virtual machine! Also note that the IDE/ATA controller is missing from the device listing. If you expand System Devices, you will find that the list is much smaller. For example, the Hyper-V S3 Cap PCI bus video controller (explained here by Didier Van Hoye) of Generation 1 is gone.
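By the way, if you don’t want to eyeball Device Manager, the firmware type gives the generation away – Gen 2 VMs boot from UEFI while Gen 1 VMs boot from BIOS. A quick check from inside the guest (PowerShell 5.1 on WS2016; just a convenience of mine, not how Microsoft documents it):

# Returns Uefi on a Generation 2 VM, Bios on a Generation 1 VM
(Get-ComputerInfo).BiosFirmwareType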

Did you Find This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Frankfurt on December 3-4, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Physical Disks are Missing in Disk Management

In this post, I’ll explain how I fixed a situation where most of my Storage Spaces JBOD disks were missing in Disk Management and Get-PhysicalDisk showed their OperationalStatus as being stuck on “Starting”.

I’ve had some interesting hardware/software issues with an old lab at work. All of the hardware is quite old now, but I’ve been trying to use it in what I’ll call semi-production. The WS2016 Hyper-V cluster hardware consists of a pair of Dell R420 hosts and an old DataON 6 Gbps SAS Storage Spaces JBOD.

Most of the disks disappeared in Disk Management and thus couldn’t be added to a new Storage Spaces pool. I checked Device Manager and they were listed. I removed the devices and rebooted but the disks didn’t appear in Disk Management. I then ran Get-PhysicalDisk and this came up:

[Screenshot: Get-PhysicalDisk output showing OperationalStatus stuck on “Starting”]

As you can see, the disks were there, but their OperationalStatus was hung on “Starting” and their HealthStatus was “Unknown”. If this was a single disk, I could imagine that it had failed. However, this was nearly every disk in the JBOD, and it spanned both HDD and SSD. Something else was up – probably Windows Server 2016 or some firmware had thrown a wobbly and wasn’t wrapping up some task.

The solution was to run Reset-PhysicalDisk. The example on docs.microsoft.com was incorrect, but adding a foreach loop fixed things:

# Find every disk that is stuck with an unknown health status
$phydisk = Get-PhysicalDisk | Where-Object -FilterScript { $_.HealthStatus -eq "Unknown" }

# Reset each stuck disk - this loop is what the docs example was missing
foreach ($item in $phydisk)
{
    Reset-PhysicalDisk -FriendlyName $item.FriendlyName
}

A few seconds later, things looked a lot better:

[Screenshot: Get-PhysicalDisk output after running Reset-PhysicalDisk]

I was then able to create the new pool and virtual disks (witness + CSVs) in Failover Cluster Manager.
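If you prefer PowerShell to Failover Cluster Manager, the pool and volume creation can be scripted too. A rough sketch – the pool, volume, and subsystem names here are examples, not what I used:

# Pool every now-healthy disk that is available for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Clustered*" -PhysicalDisks $disks

# Carve a mirrored CSV out of the new pool
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV1" -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 500GB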

(SOLUTION) Azure File Sync–Tiering & Synchronisation Won’t Work

I recently had a problem where I could not get Azure File Sync (AFS) to work correctly for me. The two issues I had were:

  • I could not synchronise a share to a new file server (new office or disaster recovery) when I set the new server endpoint to be tiered.
  • When I enabled tiering to an existing server endpoint, the cloud tiering never occurred.

I ran FileSyncErrorsReport.ps1 from the sync agent installation folder. The error summary was:

0x80c80203 – There was a problem transferring a file but sync will try again later

Each file in the share had an additional message of:

0x80c80203 There was a problem transferring a file but sync will try again later.

Both problems seemed to indicate that there was an issue with tiering. I suspected that an old bug from the preview v2.3 sync agent had returned – I was wrong because it was something different. I decided to disable tiering on a new server endpoint that wasn’t synchronising – and the folder started to synchronise.

When this sort of thing happens in AFS, you suspect that there’s a problem with the storagesync filter, which you can investigate using fltmc.exe. I reached out to the AFS product group and they investigated over two nights (time zone differences). Eventually the logs identified the problem.
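If you want to do that first check yourself, fltmc.exe lists the loaded minifilter drivers and their per-volume instances from an elevated PowerShell prompt – the AFS filter should show up with a storagesync-style name, though the exact name may vary by agent version:

# List all loaded minifilter drivers - look for the storage sync filter
fltmc filters

# List the filter instances attached to the synced volume (D: is an example)
fltmc instances -v D: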

In my lab, I deployed 3 file servers as Hyper-V virtual machines. Each machine had Dynamic Memory enabled:

  • Startup Memory: 1024MB
  • Minimum Memory: 512MB
  • Maximum Memory: 4096MB

This means that each machine has access to up to 4 GB RAM. The host was far from contended, so there should not have been an issue. But it turns out, there was an issue. The AfsDiag traces that I created showed that one of the machines had only 592 MB RAM free of 1907 MB assigned… remember, that’s RAM free from the currently assigned RAM, not from the possible maximum RAM.
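As an aside, you can see what a Dynamic Memory guest currently has versus what is free by querying the guest OS – a quick check to run inside the VM (both values are reported in KB):

# Inside the guest: currently assigned RAM vs free RAM, in KB
Get-CimInstance Win32_OperatingSystem | Select-Object TotalVisibleMemorySize, FreePhysicalMemory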

The storagesync filter requires more than that – the release notes for the sync agent state that the agent requires 2 GB of RAM. The team asked me to modify the Dynamic Memory settings of one of the file servers to test. I shut down the VM and modified the memory settings to:

  • Startup Memory: 2048MB
  • Minimum Memory: 2048MB
  • Maximum Memory: 4096MB

I started up the VM and things immediately started to work as expected. The new server endpoints populated with files and the tiered endpoints started replacing cold files with reparse pointers to the cloud replicas.

The above settings might not work for you. Remember that the storage sync agent requires 2 GB RAM; your settings might require more. You’ll have to tune things specifically to your file server, particularly if you are using Dynamic Memory; it might be worth exploring the memory buffer setting to ensure that there’s always enough free RAM for the sync agent, e.g. if the VM is set up as above, set the buffer to 50% to add an extra 1 GB to the startup amount.
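If you want to script that change on the Hyper-V host, Set-VMMemory does the job while the VM is shut down. A sketch – the VM name is made up, and as I said above, your numbers may need to be bigger:

# VM name is an example. 2 GB startup/minimum for the sync agent,
# a 4 GB cap, and a 50% buffer to keep free RAM available.
Set-VMMemory -VMName "FS01" -DynamicMemoryEnabled $true -StartupBytes 2GB -MinimumBytes 2GB -MaximumBytes 4GB -Buffer 50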

Thanks to Will, Manish, and Jeff in the AFS team for their help in getting to the bottom of this.

Not A Hyper-V MVP Anymore

It’s with some sadness that I have to report that I am no longer a Hyper-V MVP.

11 years ago, I got an email to say that I had been awarded MVP status … in System Center Configuration Manager. Yes, I used to do a lot of stuff on ConfigMgr. But by the time I’d been awarded, that had all stopped and I had refocused on server stuff, particularly virtualization and especially Hyper-V. A year later, my expertise was changed to Hyper-V, which later merged into a larger grouping of Cloud & Datacenter Management.

Being a Hyper-V MVP changed my career. I had early access to information and I was able to pose questions about things to my fellow MVPs and the program managers of Hyper-V, Failover Clustering, networking, and Windows Server storage. I learned an incredible amount, and the many posts on this site and my books all had input from my time as an MVP. Job openings appeared because of the knowledge I obtained, and I got to write for Petri.com. And being an MVP opened up speaking opportunities at many events around the world, including TechEd Europe and the very first Ignite.

There are so many people to thank from over the years. I won’t name names because I’ll surely forget someone and cause offence. My (ex-)fellow Hyper-V MVPs are an awesome bunch. We all found our niche areas, and I can remember many times we’d meet at a user group event and pool our knowledge to make each other better. In particular, I remember speaking at an event in Barcelona during the build-up to WS2012 and spending hours in a meeting room, going over things that we’d learned in that dizzyingly huge release.

I want to thank the Program Managers in Windows Server, Hyper-V, Failover Clustering & Storage, and Networking for the many hours of deep dive sessions, the answers they’ve given, the time they’ve taken to explain, the tips given, and the opportunity to contribute. Yes, I got a lot out of being a Hyper-V MVP, and I love looking at the feature list and thinking to myself, “me and <person X> were the ones that asked for that”. The PMs are a patient bunch … they have to be to deal with the likes of me … but they’re the ones that make the MVP program work. I’d love to tell stories, but you know … NDAs 🙂

I knew that the day I’d stop being a Hyper-V MVP was coming. Actually, that suspicion started back in the WS2012 era when I saw where MS was going with Hyper-V. The product was evolving for a market that is very small in Ireland. I knew I had to change, and that was triggered when Microsoft Ireland came to our office at work and asked us to help develop the Azure business with Microsoft Partners. 4.5 years ago, I made the change, and I started to work with the largest Hyper-V clusters around.

Last year I was made a dual-expertise MVP with Azure being added. I work nearly 100% on Azure, and I have always written about what I work with. Anytime I find a solution, or learn something cool (that I can talk about) I write about it. I was re-awarded yesterday as an Azure MVP, but my Cloud & Datacenter Management expertise was dropped. I expected it because I simply had not earned the privilege over the last year to be re-awarded. I have a full and happy family life and I don’t have enough time to give a dual-expertise status what I think it deserves from me. I was not surprised, but I was a bit sad because being a Hyper-V MVP was a career changer for me and I made lots of great friends.

For those of you who are new to the program or who want to get involved in being an MVP, I have some advice: Make the most of it. The opportunity is awesome but you only get from it what you put in. Take part, learn, contribute, and share. It’s a virtuous cycle, and the more you do, the more you get out from it.

Being a part of the community hasn’t ended for me. I’ll still be writing and speaking about Azure. In fact, my employers are running a big community event on October 17th in Dublin (details to come soon) on Azure, Windows Server 2019, and more. And who knows … maybe I’ll still write about Hyper-V every now and then 🙂

Left to right: Tudor Damian, me, Carsten Rachfahl, Ben Armstrong (Hyper-V), Didier Van Hoye – Hyper-V MVPs with Ben at Cloud & Datacenter Conference Germany 2017.

Feedback Required By MS – Storage Replica in WS2019 STANDARD

Microsoft is planning to add Storage Replica into the Standard Edition of Windows Server 2019 (WS2019). In case you weren’t paying attention, Windows Server 2016 (WS2016) only has this feature in the Datacenter edition – a large number of us campaigned to get that changed. I personally wrecked the head of Ned Pyle (@NerdPyle) who, when he isn’t tweeting gifs, is a Principal Program Manager in the Microsoft Windows Server High Availability and Storage group – he’s one of the people responsible for the SR feature and he’s the guy who presents it at conferences such as Ignite.

What is SR? It’s volume-based replication in Windows Server. The main idea was to enable replication of LUNs when companies couldn’t afford SAN replication licensing. Some SAN vendors charge a fortune to enable LUN replication for disaster recovery, and SR is a solution for this.

A by-product of SR is a scenario for smaller businesses. With the death of cluster-in-a-box (manufacturers are focused on larger S2D customers), the small-medium business is left looking for a new way to build a Hyper-V cluster. You can do 2-node S2D clusters, but they have single points of failure (4 nodes are required to get over this) and require at least 10 GbE networking. If you use SR, you can create an active/passive 2-node Hyper-V cluster using just internal RAID storage in your Hyper-V hosts. It’s a simpler solution … but it requires Datacenter Edition today, and in the SME & branch office scenario, Datacenter only makes financial sense when there are 13+ VMs per host.

Ned listened to the feedback. I think he had our backs 🙂 and understood where we were coming from. So SR has been added to WS2019 Standard in the preview program. Microsoft wants telemetry (people to use it) and feedback – there’s a survey here. SR in Standard will be limited. Today, those limits are:

  • SR replicates a single volume instead of an unlimited number of volumes.
  • Servers can have one partnership instead of an unlimited number of partners.
  • Volume size limited to 2 TB instead of an unlimited size.
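To put those limits in context, this is what a basic server-to-server partnership looks like in PowerShell – the computer, replication group, and volume names below are examples. On Standard, you’d get one of these, replicating one volume of up to 2 TB:

# Names are examples - replicate D: from SRV1 to SRV2, with log volumes on L:
New-SRPartnership -SourceComputerName "SRV1" -SourceRGName "RG01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName "SRV2" -DestinationRGName "RG02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"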

Microsoft really wants feedback on those limitations. If you think those limitations are too low, then TALK NOW. Don’t wait for GA when it is too late. Don’t be the idiot at some event who gives out shite when nothing can be done. ACT NOW.

If you cannot get the hint, complete the survey!

Online Windows Server Mini-Conference – June 26th

Microsoft wants to remind you that they have this product called Windows Server, and that it has a Windows Server 2016 release, a cool new administration console, and a future (Windows Server 2019). In order to do that, Microsoft will be hosting an online conference, the Windows Server Summit, on June 26th, featuring some of the big names behind the creation of Windows Server.

This event will have a keynote featuring Erin Chapple, Director of Program Management, Cloud + AI (which includes Windows Server). Then the event will break out into a number of tracks with multiple sessions each, covering things like:

  • Hybrid scenarios with Azure
  • Security
  • Hyper-converged infrastructure (Storage Spaces Direct/S2D)
  • Application platform (containers on Windows Server)

The event, on June 26th, starts at 5pm UK/Irish time (12:00 US Eastern) and runs for 4 hours. Don’t worry if this time doesn’t suit; the sessions will be available to stream afterwards. Those who tune in live will also have the opportunity to participate in Q&A.

Q&A Webinar with Ben Armstrong (Microsoft/Hyper-V)

Altaro are hosting an “AMA” webinar where you will get the chance to ask your burning questions to Ben Armstrong (previously known as The Virtual PC Guy), Principal Program Manager at Microsoft, and one of the brains behind Hyper-V … and thus the platform of Azure!

If you’ve ever wondered where some of my uber-detailed posts on odd little Hyper-V details came from … it was from Ben. He’s got tonnes of stories and lots of info, and this shouldn’t be missed if you have the chance to tune in.

Windows Server 2019 Announced for H2 2018

Last night, Microsoft announced that Windows Server 2019 would be released, generally available, in the second half of 2018. I suspect that the big bash will be Ignite in Orlando at the end of September, possibly with a release that week, but maybe in October – that’s been the pattern lately.

LTSC

Microsoft is referring to WS2019 as a “long term servicing channel release”. When Microsoft started the semi-annual channel – a Server Core build of Windows Server released every 6 months to Software Assurance customers that opt into the program – they promised that the normal builds would continue every 3 years. These LTSC releases would be approximately the sum of the previous semi-annual channel releases plus whatever new stuff they cooked up before the launch.

First, let’s kill some myths that I know are being spread by “someone I know that’s connected to Microsoft” … it’s always “someone I know” that is “connected to Microsoft” and it’s always BS:

  • The GUI is not dead. The semi-annual channel release is Server Core, and Nano Server has been containers-only since last year, but the GUI is an essential element of the LTSC.
  • This is not the last LTSC release. Microsoft views (and recommends) LTSC for non-cloud-optimised application workloads such as SQL Server.
  • No – Windows Server is not dead. Yes, Azure plays a huge role in the future, but Azure Stack and Azure are both powered by Windows, and hundreds of thousands, if not millions, of companies are still powered by Windows Server.

Let’s talk features now …

I’m not sure what’s NDA and what is not, so I’m going to stick with what Microsoft has publicly discussed. Sorry!

Project Honolulu

For those of you who don’t keep up with the tech news (that’s most IT people), Project Honolulu is a huge effort by MS to replace the Remote Server Administration Tools (RSAT) that you might know as “Administrative Tools” on Windows Server or on an admin PC. These ancient tools were built on MMC.EXE, which was deprecated with the release of W2008!

Honolulu is a whole new toolset built on HTML5 for today and the future. It’s not finished – being built with cloud practices, it never will be – but it’s getting there!

Hybrid Scenarios

Don’t share this secret with anyone … Microsoft wants more people to use Azure. Shh!

Some of the features we (at work) see people adopt first in the cloud are the hybrid services, such as Azure Backup (cloud or hybrid cloud backup), Azure Site Recovery (disaster recovery), and soon I think Azure File Sync (seamless tiered storage for file servers) will be a hot item. Microsoft wants it to be easier for customers to use these services, so they will be baked into Project Honolulu. I think that’s a good idea, but I hope it’s not a repeat of what was done with WS2016 Essentials.

ASR needs more than just “replicate me to the cloud” enabled on the server; that’s the easy part of the deployment that I teach in the first couple of hours in a 2-day ASR class. The real magic is building a DR site, knowing what can be replicated and what cannot (see domain controllers & USN rollback, clustered/replicating databases & getting fired), orchestration, automation, and how to access things after a failover.

Backup is pretty easy, especially if it’s just MARS. I’d like MARS to add backup-to-local storage so it could completely replace Windows Server Backup. For companies with Hyper-V, there’s more to be done with Azure Backup Server (MABS) than just downloading an installer.

Azure File Sync also requires some thought and planning, but if they can come up with some magic, I’m all for it!

Security

In Hyper-V:

  • Linux will be supported with Shielded VMs.
  • VMConnect support is being added to Shielded VMs for support reasons – it’s hard to fix a VM if you cannot log into it via “console” access.
  • Encrypted Network Segments can be turned on with a “flip of a switch” for secure comms – that could be interesting in Azure!

Windows Defender ATP (Advanced Threat Protection) is a Windows 10 Enterprise feature that’s coming to WS2019 to help stop zero-day threats.

DevOps

The big bet on Containers continues:

  • The Server Core base image will be reduced from 5 GB by (they hope) 72% to speed up deployment time of new instances/apps.
  • Kubernetes orchestration will be natively supported – the container orchestrator that originated at Google appears to be the industry winner versus Docker Swarm and Mesos.

In the heterogeneous world, Linux admins will be getting the Windows Subsystem for Linux (WSL) for a unified scripting/admin experience.

Hyper-Converged Infrastructure (HCI)

Storage Spaces Direct (S2D) has been improved, and more changes will be coming to mature the platform in WS2019. In case you don’t know, S2D is a way to use local (internal) disks in 2+ (preferably 4+) Hyper-V hosts across a high-speed network (a virtual SAS bus) to create a single cluster with fault tolerance at the storage and server levels. By using internal disks, they can use cheaper SATA disks, as well as new flash formats that don’t natively support sharing, such as NVMe.
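If you haven’t seen it, the storage side of an S2D deployment is remarkably terse. A minimal sketch with made-up node names (WS2016, and assuming the hardware and networking are already in order):

# Node names are examples - validate, create the cluster, then enable S2D
Test-Cluster -Node "Node1","Node2","Node3","Node4" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2DCluster" -Node "Node1","Node2","Node3","Node4" -NoStorage
Enable-ClusterStorageSpacesDirect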

The platform is maturing in WS2019, and Project Honolulu will add a new day-to-day management UI for S2D that is natively lacking in WS2016.

The Pricing

As usual, I will not be answering any licensing/pricing questions. Talk to the people you pay to answer those questions, i.e. the reseller or distributor that you buy from.

OK; let’s get to the messy stuff. Nothing has been announced other than:

It is highly likely we will increase pricing for Windows Server Client Access Licensing (CAL). We will provide more details when available.

So it appears that User CALs will increase in pricing. That is probably good news for anyone licensing Windows Server via processor (don’t confuse this with Core licensing).

When you acquire Windows Server through volume licensing, you pay for every pair of cores in a server (with a minimum of 16 cores per server, which matched the pricing of WS2012 R2), PLUS you buy User CALs for every user authenticating against the server(s). For example, a two-socket server with 10 cores per socket must be licensed for all 20 cores – ten 2-core packs – plus the CALs.

When you acquire Windows Server via Azure or through a hosting/leasing (SPLA) program, you pay for Windows Server based only on how many cores that the machine has. For example, when I run an Azure virtual machine with Windows Server, the per-minute cost of the VM includes the cost of Windows Server, and I do not need any Windows Server CALs to use it (RDS is a different matter).

If CALs are going up in price, then it’s probably good news for SPLA (hosting/leasing) resellers (hosting companies) and Azure where Server CALs are not a factor.

The Bits

So you want to play with WS2019? The first preview build (17623) is available as of last night through the Windows Server Insider Preview program. Anyone can sign up.


Would You Like To Learn About Azure Infrastructure?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Speaking at Cloud & Datacenter Conference Germany in May

Lots of air miles this year! I will be travelling to Hanau, Germany, to present at the CDC Germany conference, which is running May 15-16.

About The Conference

If you are not familiar with this conference, it’s a professionally run Microsoft-centric event with expert speakers from around Europe (and further) talking about on-premises and cloud technologies, and aimed at attendees from the DACH (German-speaking) region. The speakers are mostly MVPs, who are known for their expertise in their chosen areas, and are veteran speakers/trainers. For example:

  • Didier Van Hoye & Carsten Rachfahl are both Cloud & Datacenter Management (Hyper-V) MVPs known for their knowledge of virtualization, storage, and networking.
  • Martina Grom is a well-known Office 365 MVP.
  • Thomas Maurer, also a Hyper-V MVP, has been doing lots on containers and Azure Stack (on-premises Azure).
  • Dr. Benny Tritsch, MVP, is the best RDS person I know.
  • Jan Kappen (MVP), along with Carsten, is the best at Storage Spaces Direct (Hyper-V hyper-converged infrastructure) there is in Europe.
  • Tudor Damian (Hyper-V MVP) is the go-to guy for Linux on Hyper-V – that’s why Microsoft have him at their booths!
  • Florian Klaffenbach, ex-MVP but now working at Microsoft, knows more about connecting to Azure than anyone else I know.

And that’s just a small sample of the speakers.

The Content

The cloud & on-premises content is balanced to reflect the attendees. There’s plenty of on-premises content because that’s where people are, but there’s also lots of cloud content because migrations & deployments are happening or have happened.

My Session

I will be presenting a session called “Azure PaaS For The IT Pro”:

Does the phrase “platform-as-a-service” scare you? Do you want to hit back or scroll to the next session in your browser? If so, then this session is for you! Aidan Finn, an IT pro, has been learning about Azure’s platform for developers. If you come to this session, you’ll learn what these services are, why your business or customers might like them, why you might like them, and why PaaS isn’t the end of the IT pro.

Why You Should Go

Why should you go to this conference? To be honest, you’d be dumb not to! Microsoft doesn’t run big conferences in Europe anymore, and there’s never been a greater need to learn … and re-learn. My career is defined by relearning and adapting to the changing environment. Cloud changes at an incredible pace, and now we see the semi-annual channel bringing this rate of change to on-premises. Those who refuse to learn and adapt will become redundant to needs – a threat to their employers, even! Those who want to excel and boost their careers will decide that learning is important … and where else are you going to find a collection of expert speakers & trainers than at an event like Cloud & Datacenter Germany?

Register Here

Application-Aware Disaster Recovery For VMware, Hyper-V, and Azure IaaS VMs with Azure Site Recovery

Speaker: Abhishek Hemrajani, Principal Lead Program Manager, Azure Site Recovery, Microsoft

There’s a session title!

The Impact of an Outage

The aviation industry has suffered massive outages over the last couple of years, costing millions to billions. Big sites like GitHub have gone down. Only 18% of DR investors feel prepared (Forrester, July 2017, The State of Business Technology Resiliency). Much of this is due to immature core planning and very limited testing.

Causes of Significant Disasters

  • Forrester says 56% of declared disasters are caused by h/w or s/w.
  • 38% are because of power failures.
  • Only 31% are caused by natural disasters.
  • 19% are because of cyber attacks.

Sourced from the above Forrester research.

Challenges to Business Continuity

  • Cost
  • Complexity
  • Compliance

How Can Azure Help?

The hyper-scale of Azure can help.

  • Reduced cost – OpEx utility computing and benefits of hyper-scale cloud.
  • Reduced complexity: Service-based solution that has weight of MS development behind it to simplify it.
  • Increased compliance: More certifications than anyone.

DR for Azure VMs

Something that AWS doesn’t have. Some mistakenly think that you don’t need DR in Azure. A region can go offline. People can still make mistakes. MS does not replicate your VMs unless you enable/pay for ASR for selected VMs. The service is highly certified for compliance, including PCI, EU Data Protection, ISO 27001, and many, many more.

  • Ensure compliance: No-impact DR testing. Test every quarter or, at least, every 6 months.
  • Meet RPO and RTO goals: Backup cannot do this.
  • Centralized monitoring and alerting

Cost effective:

  • “Infrastructure-less” DR sites.
  • Pay for what you consume.

Simple:

  • One-click replication
  • One-click application recovery (multiple VMs)

Demo: Typical SharePoint Application in Azure

3 tiers in availability sets:

  • SQL cluster – replicated to a SQL VM in a target region or DR site (async)
  • App – replicated by ASR – nothing running in DR site
  • Web – replicated by ASR – nothing running in DR site
  • Availability sets – built for you by ASR
  • Load balancers – built for you by ASR
  • Public IP & DNS – abstract DNS using Traffic Manager

One-Click Replication is new and announced this week. Disaster Recovery (Preview) is an option in the VM settings. All the pre-requisites of the VM are presented in a GUI. You click Enable Replication and all the bits are built and the VM is replicated. You can pick any region in a “geo-cluster”, rather than being restricted to the paired region.

For more than one VM, you might enable replication in the recovery services vault (RSV) and multi-select the VMs for configuration. The replication policy includes recovery point retention and app-consistent snapshots.

New: Multi-VM consistent groups. In preview now, up to 8 VMs. 16 at GA. VMs in a group do their application consistent snapshots at the same time. No other public cloud offers this.

Recovery Plans

Orchestrate failover. VMs can be grouped, and groups are failed over in order. You can also demand manual tasks to be done, and execute Azure Automation runbooks to do other things like creating load balancer NAT rules, re-configuring DNS abstraction in Traffic Manager, etc. You run the recovery plan to fail over … and to do test failovers.

DR for Hyper-V

You install the Microsoft Azure Recovery Services (MARS) agent on each host. That connects you to the Azure RSV, and you can replicate any VM on that host. No on-prem infrastructure required. No connection broker required.

DR for VMware

You must deploy the ASR management appliance in the data centre. MS learned that the setup experience for this is complex: there were a lot of pre-reqs and configurations to install this in a Windows VM. MS will deliver this appliance as an OVF template from now on – a familiar format for VMware admins – and the appliance is configured from the Azure Portal. From then on, you replicate Linux and Windows VMs to Azure, just as with Hyper-V.

Demo: OVF-Based ASR Management Appliance for VMware

A web portal is used to onboard the downloaded appliance:

  1. Verify the connection to Azure.
  2. Select a NIC for outbound replication.
  3. Choose a recovery services vault from your subscription.
  4. Install any required third-party software, e.g. PowerCLI or MySQL.
  5. Validate the configuration.
  6. Configure vCenter/ESXi credentials – this is never sent to Azure, it stays local. The name of the credential that you choose might appear in the Azure portal.
  7. Then you enter credentials for your Windows/Linux guest OS. This is required to install a mobility service in each VMware VM. This is because VMware doesn’t use VHD/X, it uses VMDK. Again, not sent to MS, but the name of the credential will appear in the Azure Portal when enabling VM replication so you can select the right credentials.
  8. Finalize configuration.

This will start rolling out next month in all regions.

Comprehensive DR for VMware

Hyper-V can support all Linux distros supported by Azure. On VMware, they’re close to all. They’ve added Windows Server 2016, Ubuntu 14.04 and 16.04, Debian 7/8, managed disks, and 4 TB disk support.

Achieve Near-Zero Application Data Loss

Tips:

  • Periodic DR testing of recovery plans – leverage Azure Automation.
  • Invoke BCP before a disaster if you know it’s coming, e.g. a hurricane.
  • Take the app offline before the event if it’s a planned failover – minimize risks.
  • Failover to Azure.
  • Resume the app and validate.

Achieve 5x Improvement in Downtime

Minimize downtime: https://aka.ms/asr_RTO

He shows a slide. One VM took 11 minutes to fail over. Others took around/less than 2 minutes using the above guidance.

Demo: Broad OS Coverage, Azure Features, UEFI Support

He shows Ubuntu, CentOS, Windows Server, and Debian replicating from VMware to Azure. You can fail over from VMware to Azure with UEFI VMs now – but you CANNOT fail back. The process converts the VM to BIOS in Azure (Generation 1 VMs). OK if there’s no intention to fail back, e.g. migration to Azure.

Customer Success Story – Accenture

They deployed ASR. Increased availability. 53% reduction in infrastructure cost. 3x improvement in RPO. Savings in work and personal time. Simpler solution and they developed new cloud skills.

They get a lot of alerts at the weekend when there are network glitches – it could be 500 email alerts.

Demo: New Dashboard & Comprehensive Monitoring

Brand new RSV experience for ASR. Lots more graphical info:

  • Replication health
  • Failover test success
  • Configuration issues
  • Recovery plans
  • Error summary
  • Graphical view of the infrastructure: Azure, VMware, Hyper-V. This shows the various pieces of the solution, and a line goes red when a connection has a failure.
  • Jobs summary

All of this is on one screen.

He clicks on an error and sees the hosts that are affected. He clicks on “Needs Attention” in one of the errors. A blade opens with much more information.

We can see replication charts for a VM and disk – useful to see if VM churn is too much for the bandwidth or the target storage (Standard vs Premium). The disk-level view might help you identify churn-heavy storage, like a page file, that can be excluded from replication.

A message digest will be sent out at the end of the day. This data can be fed into OMS.

Some guest speakers come up from Rackspace and CDW. I won’t be blogging this.

Questions

  • When are things out: News on the ASR blog in October
  • The Hyper-V deployment planner is out this week, along with new cost planners for Hyper-V and VMware.
  • Failback of managed disks is there for VMware and will be out by end of year for Hyper-V.