Microsoft News Summary-29 April 2014

There is a lot of reading material this morning.

Microsoft News Summary–24 April 2014

Here are some interesting bits from the last few days that I have not blogged:

Build 2014 Keynote 2 – Azure

The first presenter is Scott Guthrie, executive VP of cloud and enterprise, in a red t-shirt as usual. He wants to talk about a strategy that uses IaaS and PaaS together to give customers a best-of-breed service. 44 new features and services will be announced in this keynote, along with 2 new regions in Shanghai and Beijing:


Huge growth:


Titanfall was a huge multiplayer game launch, powered by Azure. The game cannot be played without the cloud. More than 100,000 Azure VMs powered this thing on launch day. That’s incredible; I’d love to see the virtual network design for that. We get some stuff about NBC using Azure. Tuning out for a while – most people do that with NBC.

New enhancements in IaaS:

Virtual machines:

  • This week Visual Studio will allow devs to create/destroy/debug VMs in Azure
  • New support for capturing images with any number of drives. You can then deploy easily from that image.
  • Can configure VM images using DSC, Puppet (?), and PowerShell.

Mark Russinovich comes out. He demos Visual Studio to create VMs. Very easy wizard. He then runs PowerShell to create an image from a VM.
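The keynote doesn’t show the exact command, but with the Azure (Service Management) PowerShell module of the time, capturing an image from a VM looked roughly like this – the service, VM, and image names here are placeholders:

```powershell
# Capture a generalized (sysprepped) VM as a reusable image.
# ServiceName, Name and ImageName below are placeholder values.
Save-AzureVMImage -ServiceName "MyCloudService" -Name "MyVM" `
    -ImageName "MyBaseImage" -OSState Generalized
```

The -OSState parameter was part of the same update that enabled capturing VM images with data disks attached, as mentioned in the bullets above.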


He then shows a Puppet puppet master VM image from the gallery. Luke Kanies of Puppet Labs comes out and gives a demo. It looks like it’s doing a lot of the service template concept that you get from SCVMM in the private cloud. A Getty Images (huge pro stock library) dude comes out. They’re moving to Azure. They use Puppet for automation & configuration management. Now they can burst from their own data centre into Azure. Azure gives them Puppet Labs integration and support for Windows & Linux VMs.

Guthrie is back out. Also announcing:

  • GA of auto-scaling: Great for creating automated elasticity for services based on demand.
  • Dynamic routing: I wonder if this is the “iBGP dynamic routing with best path selection” that was talked about at TechEd in 2013?
  • Point-to-site VPN GA
  • Subnet migration
  • Static internal IP addresses: This is a big simplification for hybrid cloud deployments.

Moving on to PaaS. The Azure Web Sites service is one of the most popular services in Azure. And other PaaS stuff. I tune out.

Looks like the IT pro stuff is done, as am I.


How My New Azure VM Web Server Is Configured

Following yesterday’s “I’ve moved to Azure” post, I decided to write a bit more about what I’ve done. For obvious reasons, I will not get into deep specifics.

The first step was to create a cloud service. Each cloud service in Azure should be seen as an external point of contact … a public IP address if you want to think of it that way.

I then created a single subnet virtual network.

A storage account was created in Azure to store the VHD files (blobs) of the new virtual machine.

A small spec VM (single core, 1.7 GB RAM) was created. An endpoint was created for HTTP in the Azure portal to allow incoming web traffic. I don’t need HTTPS and I don’t use the FTP functionality of WordPress.

I then created a WS2012 R2 Datacenter virtual machine. I configured patching using GPEDIT.MSC, and a few other things. I added IIS and ran the Web Platform Installer to install MySQL, PHP and a few other WordPress prerequisites. I also installed MySQL Workbench … I can’t be bothered googling for MySQL commands.
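As an aside, adding the IIS role mentioned above doesn’t require clicking through Server Manager; on WS2012 R2 it is one line of PowerShell in an elevated session:

```powershell
# Add the Web Server (IIS) role plus the IIS management tools
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
```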

Two websites were created in IIS and two databases/service accounts were created in MySQL. I have this blog and my photography website to host. I downloaded and extracted 2 copies of the WordPress files, and configured each blog.

I’ve only migrated this site so far – the photography site will be next (more complex because of galleries). I decided against exporting the database from the old server; this was an opportunity to go with whole new versions of everything. So I did a WordPress export/import. The export file was bigger than the 2 MB upload maximum, so I split it using a free tool called WXR File Splitter. Even 2 MB files caused the import to time out, so I went with 512 KB. Apparently hacking PHP’s upload limit would have been an alternative, but I want to avoid hacks.
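For context, the 2 MB ceiling is PHP’s default upload limit; the “hack” would presumably be raising it in php.ini, something like the following (values are illustrative):

```ini
; Raise the upload ceiling for WordPress imports (PHP defaults: 2M / 8M)
upload_max_filesize = 16M
post_max_size = 16M
```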

I added all my WordPress plug-ins and configured them, making sure that my advertisers were OK. And then I tested a bit. And then came the next step: changing the A records for my domains to point at the new server. That’s the REAL test – will this server work for you?

The last steps were to configure backup. I configured a MySQLDump job, using Task Scheduler and a batch file, to export all databases to a folder called Backup. I then configured an Azure Recovery Services backup vault for Azure Online Backup. I created a 3-year, 2048-bit certificate using the CA in the lab, uploaded the public key to Azure Backup, and imported the private key into the Local Computer – Personal store in the guest OS of the VM. I downloaded the Azure Backup agent and configured a daily backup job to back up the Inetpub and Backup folders. That’s the data of the two WordPress sites saved.
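The post doesn’t include the batch file itself; a minimal sketch of the idea, with placeholder install path and credentials, might be:

```bat
@echo off
rem Export all MySQL databases to the folder that Azure Online Backup protects.
rem The install path, user name and password below are placeholders.
"C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqldump.exe" --all-databases --user=backupuser --password=CHANGEME > "C:\Backup\all-databases.sql"
```

Scheduling that daily in Task Scheduler, just before the Azure Backup job runs, means the dump file is always fresh when it is sent to the cloud.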

And that’s the lot!

There’s a new Basic VM configuration coming this week. I’ll consider migrating again, to a higher-spec one of those.

The one question I’ve gotten over and over is “how much does this cost?”. The answer: nothing. I’m using the benefits of my personal MSDN subscription (€75 of Azure credit per month). The other one (which I answered in the previous post) was “why not use an Azure web site?”. Simple: it does not offer enough disk capacity.


My AidanFinn.Com Blog Has Moved To Microsoft Azure

Tonight I completed the migration of this WordPress blog to Windows Azure.



I was having performance and health issues with the VM that I was renting from a local hosting company. The admin portal was proving to be a nightmare. I had upgraded the VM, but the upgrade was never actually applied. The hard disk was filling up frequently and killing MySQL, and therefore killing the WordPress blog.

Why was I on a VM? Because I needed more processor & bandwidth capacity.

A failure last week led me to look at my options. I’ve grown comfortable with Microsoft Azure so this was the place that I decided to move to. My free €75 credit per month thanks to my MSDN account doesn’t hurt either!

I looked at the website hosting options but they provide too little disk space. The VMs, even the smaller ones, give you loads of disk space. I decided to fire up a cloud service, blob, virtual network and a small VM instance just for my new web server VM. I installed IIS, added the sites, installed PHP, WordPress, MySQL, and a few other bits and bobs and started the laborious process of migrating from the old VM.

I could have cheated but I decided to do a fresh install. It was more time-consuming, especially when I had to split the WordPress export file into 40 smaller export files (the import of 2 MB files was timing out). I added and configured all the plugins. And then the final steps:

  • After some tests, I configured each website’s bindings, and
  • I changed the DNS A records for the two URLs to point at the public IP of the Azure cloud service.

My next steps will be:

  • Configure MySQL automated export
  • Deploy Windows Azure Online Backup to back up the IIS Inetpub folder and the MySQL export

And maybe I’ll configure the endpoint monitoring option in the Azure portal.

Set A Static IP Address For An Azure VM

Windows Azure (errr Microsoft Azure) has a weird system for assigning IP addresses to VMs in virtual networks. Like VMM, it uses a pool of IP addresses. And that’s where the similarities end. Azure’s method appears to be more like DHCP.

For example:

  • When you log into the guest OS, the VM is configured to use DHCP
  • The address is not reserved, as it would be with a DHCP reservation. It is possible that a VM could be offline, come back, and get a new IP address.

The latter bit is bad, especially for services such as Active Directory and DNS where a predictable IP address is required.

Note: The first step in configuring a valid network configuration is to set the DNS servers and subnet masks for your virtual network in the Azure portal.

There is no nice GUI method for reserving an IP address. There is a PowerShell method, which gives you a clue as to how this stuff works under the hood.

The first step is to get your VM:

$VM = Get-AzureVM -ServiceName "Demo-MWH-A" -Name "Azure-DC1"

As you can see above, I am configuring a static IP address for a domain controller. Next, I set the static IP. Note that we are configuring a static virtual network IP for the VM.

Set-AzureStaticVNetIP -VM $VM -IPAddress "<static IP from your subnet>" | Update-AzureVM

Also note that, in my tests, most of the times that I ran Update-AzureVM, the VM was restarted. It doesn’t happen every time with these two cmdlets, but it happens most of the time.

Armed with these two cmdlets, you could set up a CSV file with Service/VM names and IP addresses, and run a loop to configure lots of VMs at once.
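That loop might look something like this (the CSV file name and column names are assumed for illustration):

```powershell
# Assumes a CSV with the header: ServiceName,VMName,IPAddress
Import-Csv -Path .\StaticIPs.csv | ForEach-Object {
    $VM = Get-AzureVM -ServiceName $_.ServiceName -Name $_.VMName
    Set-AzureStaticVNetIP -VM $VM -IPAddress $_.IPAddress | Update-AzureVM
}
```

Remember the earlier warning: each Update-AzureVM is likely to restart the VM, so run this in a maintenance window.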


To be clear, the above steps do not configure a static IP inside the guest OS – you should not do that. The above steps simply configure the virtual network to assign the same IP to your VM’s vNIC every time the VM starts up. You are manipulating the system to get the results you need.


Hyper-V Recovery Manager Is Generally Available – The Pros & The Cons

Microsoft announced the general availability of Hyper-V Recovery Manager (HRM) overnight. HRM is an Azure-based subscription service that allows you to manage and orchestrate your Hyper-V Replica disaster recovery between sites.

As you can see in the below diagram, HRM resides in Azure. You have an SCVMM-managed cloud in the primary site.  You have another SCVMM-managed cloud in a secondary site; yes, there is a second SCVMM installation – this probably keeps things simple to be honest. Agents are downloaded from HRM to each SCVMM install to allow both SCVMM installations to integrate with HRM in the cloud. Then you manage everything through a portal. Replication remains direct from the primary site to the secondary site; replication traffic never passes through Azure. Azure/HRM are only used to manage and orchestrate the process.

There is a big focus on failover orchestration in HRM, including the ability to tier and build dependencies, just as real-world applications require.

I’ve not played with the service yet. I’ve sat through multiple demos and read quite a bit. There are nice features but there is one architectural problem that concerns me, and an economic issue that Microsoft can and must fix or else this product will go the way of Google Reader.

The Pros:
  • Simple: It’s a simple product. There is little to set up (agents) and the orchestration process has a pretty nice GUI. Simple is good in these days of increasing infrastructure & service complexity.
  • Orchestration: You can configure nice and complex orchestration. The nature of this interface appears to lend itself to being quite scalable.
  • Failover: The different kinds of failover, including test, can be performed.

The Cons:
  • Price: HRM is stupid expensive. I’ve talked to a good few people who knew about the pricing and they all agreed that they wouldn’t pay €11.92/month per virtual machine for a replication orchestration tool. That’s €143.04 per year per VM – just for orchestration!!! Remember that the replication mechanism (Hyper-V Replica) is built into Hyper-V (a free hypervisor) for free.
  • Reliance on System Center: Microsoft touts the possibility of hosting companies using HRM in multi-tenant DR services. Let’s be clear here; the majority of customers that will want a service like this will be small-to-medium enterprises (SMEs). Larger enterprises will either already have their own service or have already shifted everything into public cloud or co-location hosting (where DR should already exist). Those SMEs mostly have been priced out of the System Center market. That means that service providers would be silly to think that they can rely on HRM to orchestrate DR for the majority of their customers – the many small ones that need the most automation because of the high engineering time versus profit ratio.
  • Location! Location! Location!: I need more than a bullet point for this most critical of problems. See below.

I would never rely on a DR failover/orchestration system that resides in a location that is outside of my DR site. I can’t trust that I will have access to that tool. Those of us who were working during 9/11 remember what the Internet was like – yes, even 3,000 miles away in western Europe: the Internet ground to a halt. Imagine a disaster on the scale of 9/11 that drew the same level of immediate media and social interest. Now imagine trying to invoke your business continuity plan (BCP) and logging into the HRM portal. If the Net was stuffed like it was on 9/11 then you would not be able to access the portal, and would not be able to start your carefully crafted and tested failover plan. And don’t limit this to just 9/11; consider other scenarios where you just don’t have remote access because ISPs have issues or even the Microsoft data centre has issues.

In my opinion, and I’m not alone here, the failover management tool must reside in the DR site as an on-premises appliance where it can be accessed locally during a disaster. Do not depend on any remote connections during a disaster. Oh, and at least halve the price of HRM.

Windows Azure Backup Is Generally Available & Other Azure News

The following message came in an email overnight:

Windows Azure Backup is now generally available, Windows Azure AD directory is created automatically for every subscription, and Hyper-V Recovery Manager is in preview.

What does that mean? Some backup services charge you based on the amount of data that you are protecting. Personally, I prefer that approach because it is easy to predict – I have 5 TB of data and it’s going to cost me 5 × Y to protect it. Azure Online Backup has gone with the more commonly used approach of charging based on how many GB of storage you consume per month on Microsoft’s cloud. That makes it easy for a service provider to create bills, but hard for the consumer to estimate their cost … because you have elements like deduplication and compression to account for.

The pricing of Azure Online Backup looks very competitive to me. 

Windows Azure Backup is billed in units based on your average daily amount of compressed data stored during a monthly billing period.

Some plans get the first 5 GB free and then it’s €0.3724 per GB per month.  In the USA, it will be $0.50 per GB per month.  Back when I worked in backup, €1/GB per month was considered economic.
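As a rough worked example at that Euro rate (assuming the 5 GB free allowance applies):

```powershell
# Estimated monthly cost for 100 GB of average compressed backup data
$storedGB  = 100
$freeGB    = 5
$ratePerGB = 0.3724
$monthly   = [Math]::Max(0, $storedGB - $freeGB) * $ratePerGB
$monthly   # 95 GB x 0.3724 = 35.378, i.e. about 35 euro/month
```

The catch, as noted above, is that you can’t easily predict the post-compression size before you start backing up.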

In other Azure news:

A Windows Azure AD directory is created automatically for every subscription:

Starting today, every Windows Azure subscription is associated with an autocreated directory in Windows Azure Active Directory (AD). By using this enterprise-level identity management service, you can control access to Windows Azure resources.

To accommodate this advancement, every Windows Azure subscription can now host multiple directories. Additionally, Windows Azure SDK will no longer rely on static management certificates but rather on user accounts in Active Directory. Existing Active Directory tenants related to the same user account will be automatically mapped to a single Windows Azure subscription. You can alter these mappings from the Windows Azure Management Portal.

Take advantage of the new Windows Azure Hyper-V Recovery Manager preview.

Windows Azure Hyper-V Recovery Manager helps protect important applications by coordinating the replication of Microsoft System Center clouds to a secondary location, monitoring availability, and orchestrating recovery as needed.

The service helps automate the orderly recovery of applications and workloads in the event of a site outage at the primary data center. Virtual machines are started in an orchestrated fashion to help restore service quickly.

The Euro GA pricing for Hyper-V Recovery Manager was included in the email.  It will cost €11.9152 per virtual machine per month to use this service.  The website has not been updated with GA pricing.

Oracle Software Will Be Supported On Hyper-V & Azure

Up to now, the line on Oracle software was that it was only supported by Oracle on Oracle virtualisation.  Prepare to be stunned … Microsoft Corp. and Oracle Corp. today announced a partnership.

Customers will be able to deploy Oracle software — including Java, Oracle Database and Oracle WebLogic Server — on Windows Server Hyper-V or in Windows Azure and receive full support from Oracle. Terms of the deal were not disclosed.

Damn.  BTW, where’s Oracle’s partnership with VMware for the same support?  Oh yeah, VMware will “support” your Oracle software on their virtualisation.  Before the vFanboys start barfing: sure, Larry Ellison will be at VMworld to announce a partnership there too …

Bzzz Bzzz Bzzz

Back to the serious stuff, I’m gobsmacked by this.  It makes sense for both parties.  Sure MSFT wants to push MSFT BI solutions, but there’s a hardcore set of customers who have deeply embedded Oracle software.  You don’t cut off your nose to spite your face; instead you get over the past and figure out a way where one hand can wash the other.  Microsoft wants Oracle customers running on Microsoft’s Cloud OS.  Oracle sees the writing on the wall about hybrid cloud computing and doesn’t want to be left behind.  Is this an everyone-is-a-winner deal for customer/Microsoft/Oracle?

Server Posterpedia – Windows Server Poster App

A new app that features the feature posters for a number of server products, not just Hyper-V, has been released. You can download this app from the Windows Store for Windows 8.


Click on a poster, and it’s displayed for you:


You can zoom and scroll through the poster. Cleverly, the actions that you can run from the app will link you to additional information on TechNet. And there is even a link to download the original poster.  What a handy way to start learning the features of server products.  This is worth installing Windows 8 for!

Ben Armstrong posted about the app overnight, including a video of the app in action.