What Courier Can You Use For Azure Drive Import?

Azure offers a service where you can do out-of-band data transfers to/from Azure by sending physical disks via courier. Sounds great – right? Except….


OK, I have had some pretty awful experiences with FedEx – including receiving parcels from Microsoft, funnily enough – but I can get over that.

Let’s go over to FedEx and see how much it will cost me to send a parcel of disks from my office to the nearby Azure North Europe region, here in Ireland.

[Screenshot: the FedEx rate quote]

Hmm. So I cannot ship disks from a location in Ireland, where Azure is officially sold by Microsoft, to Azure North Europe, which is located “somewhere” in Ireland (use Google and it won’t take you long to find what I am not allowed to share).

So I reached out to @AzureSupport on Twitter, and they came back with a response:

[Tweet: @AzureSupport’s reply about couriers for shipping disks to Azure]

So you don’t need to use FedEx to ship disks to Microsoft, but you do need a FedEx account to get your disk(s) back.

Windows 10 Being Pushed Out To Domain-Joined PCs

Brad Sams (my boss at Petri.com) published a story last night about how Microsoft has started to push out Windows 10 upgrades to domain-joined PCs.

Note that the PC doesn’t silently upgrade via Windows Update; the user is prompted to update, and then a deliberately confusing screen “encourages” them to accept.

Brad notes that the environment must meet certain requirements:

  • The machine must be running and licensed for Windows 7 Pro or Windows 8.1 Pro (Enterprise editions don’t get this treatment).
  • There is no WSUS, ConfigMgr, etc. – the machine gets its updates directly from Microsoft, which means smaller businesses for the most part.
  • The machine must be a domain member.

As you can see, this affects SMEs with a domain (no WSUS, etc.). But I’d be surprised if larger businesses weren’t targeted at a later point to help Microsoft hit its goal of 1 billion Windows 10 devices.

In my opinion, this decision to push upgrades to businesses is exactly the sort of action that gives Microsoft such a bad name with customers. Most SMEs won’t know this is coming. A lot of SMEs run systems that need to be tested or upgraded, or that won’t support or work on newer operating systems. So Microsoft opting to force change and uncertainty on the businesses that are least ready for it is downright dumb. Brad reports that Microsoft claims people asked for this upgrade. Right – fine – then let those businesses opt into the upgrade via GPO instead of the other way around. Speaking of which …

There is a blocker, and I’ve deployed it in the small business where I work. A Windows update added new Group Policy options to our domain controllers, and I enabled the policy that blocks Windows upgrades via Windows Update:

[Screenshot: the Group Policy setting that blocks the upgrade via Windows Update]

We will upgrade to Windows 10 (it’s already started), but we will continue to do it at our own pace, because we cannot afford to have people offline for 2 hours during the work day while Windows upgrades.
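For those who want to script the block instead of clicking through GPMC, here is a minimal sketch of setting the registry value that the policy controls, based on Microsoft’s KB3080351 guidance on managing upgrade options – test it on one machine before pushing it out:

  # Block the upgrade offer delivered via Windows Update (the value the GPO controls, per KB3080351)
  $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
  if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
  Set-ItemProperty -Path $key -Name 'DisableOSUpgrade' -Value 1 -Type DWord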

The Genuine Need for Disaster Recovery In Ireland/EU

How many times have you watched or read the news, seen a story about an earthquake, hurricane, typhoon, or some other disaster, and thought “that will never happen here”? Stop kidding yourself; disasters can happen almost anywhere.

I’ve always considered Ireland to be relatively safe. We don’t have earthquakes (none that you’d notice), typhoons, or tornadoes; our cattle and sheep don’t need flying licenses. Our weather is dominated by the Gulf Stream, which keeps Ireland temperate. It doesn’t get hot here (we are quite northerly), and our winters consist of cloud, rain, and normally about half a day of snow. We get the tail end of some of those hurricanes that hit the east coast of the US, but there’s not much left by the time they reach us – some trees get knocked over, some tiles get knocked off our roofs, but it’s not too bad. Even when we look at our neighbours in England, we see how their more extreme climate causes disasters that we don’t get. Natural disasters just don’t happen here. Or do they?

The last month or so has revealed that to be a lie. Ireland has been battered by 6 storms in the past month. The latest, Storm Frank, was preceded by warnings that the country was saturated. That means the ground has absorbed all of the water it can; any further rainfall will not be absorbed, and it will pool, flow, and flood.

This morning, I woke to these scenes:

[Image: flooding in Enniscorthy, Co. Wexford – source: Paddy Banville]

[Image: flooding in Graignamanagh, Co. Kilkenny – source: Graignamanagh G.A.A.]

[Image: flooding in Midleton, Co. Cork – source: Fiona Donnelly]

Frank isn’t finished. It’s still blowing outside my office and more rain is sure to fall. There are stories of communities being evacuated to hotels, and the above photos are just the easy ones for the media to access.

This isn’t a case of cows trapped in fields, or something you can sort with a sandbag, or somewhere far away. This is local. And Ireland is a relatively safe place – we’re not Oklahoma, a place that some deity has decided should be subject to EF5 tornadoes every time you’re not looking. The point, Dorothy, is that disasters happen everywhere, including in the EU, where we think it’s safe.

Let’s bring this back to business. Businesses have been put out of action by these floods. Odds are that any computers or servers were either on the ground floor or in the basement. Those machines are dead. That means those businesses are dead. They might be lucky enough to have tapes (let’s leave that for another time) stored offsite, but how reliable are they, and will a bare-metal restore work, or will it take forever? How much money will those businesses lose, or more critically, will those businesses survive the loss of customers?

This is exactly why these businesses need a disaster recovery (DR) solution – and remember, fires and other unnatural disasters can happen anywhere. There are several reasons why they don’t have one now:

  • They couldn’t afford one
  • The business owners didn’t think there was a need for one
  • Some resellers didn’t think there was demand for one, so they never brought it up with their customers

The need is there, as we can clearly see above. And thanks to Microsoft Azure, DR has never been so affordable. FYI, it comes in at a small fraction of the cost of solutions from Irish companies such as KeepITSafe – I’ve done the competitive pricing – and it opens the customer up to more technical opportunities with hybrid cloud solutions.

Microsoft Azure Site Recovery (ASR) is a disaster recovery-as-a-service (DRaaS), or cloud DR site, offering from Microsoft. The beauty of it is that it’s there for everyone, from the small business to the large enterprise. It works with Hyper-V, vSphere, or physical machines, and it works with Windows or Linux as long as the OS is supported by Azure (W2008 R2 or later on the Windows side).

Note: There is a cost overhead for vSphere or physical machines, to allow for on-premises conversion and forwarding, and for in-cloud management and storage, so you need a certain scale to absorb that cost. This is why I describe ASR as being perfect for SMEs with Hyper-V, and for mid-large companies with Hyper-V, vSphere, or physical machines.

If I had ASR in place, and I had a business on the quayside in Cork, near the Slaney in Enniscorthy, or anywhere else where the rivers were close to bursting their banks, then I would perform a planned failover, requiring about 2 minutes of my time to start a pre-engineered and tested one-click failover. My machines would shut down in the desired order, flush the last bit of replication to Azure, and start up as VMs in Azure in the desired order; my machines and data would be safe. If the disaster wipes out my servers, I can fail back to new equipment or stay in Azure; and if the disaster doesn’t happen, I can easily fail back, or choose to stay in Azure and not worry about local floods again.
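To give you an idea of what those 2 minutes look like: with a recovery plan already engineered and tested, the failover boils down to a few lines of Azure PowerShell. This is a sketch using the ASM-era Site Recovery cmdlets and a hypothetical recovery plan name, so verify the commands against the current documentation:

  # Kick off a planned failover of a pre-built ASR recovery plan (sketch)
  $rp = Get-AzureSiteRecoveryRecoveryPlan -Name "FloodFailover"    # hypothetical plan name

  # Shuts down on-premises VMs in order, flushes replication, and starts the VMs in Azure
  $job = Start-AzureSiteRecoveryPlannedFailoverJob -RecoveryPlan $rp -Direction PrimaryToRecovery

  # Monitor the failover job
  Get-AzureSiteRecoveryJob -Job $job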


Happy New Year

Hi all,

I’m winding down for Christmas so I wanted to wrap up the blog for the rest of the year.

2015 has been an amazing year for me. Tech was fun – Windows 10 came along and we got to start playing with WS2016. I did lots more work with Azure, and had the pleasure of seeing that work turn into adoption – even if it’s just the trickle ahead of the flood. I spoke at Ignite – finally achieving the ambition of speaking at a big Microsoft conference in the USA.

It was great to see friends from afar at the various conference and user group events, including MVP Summit, E2EVC, Future Decoded, and Experts Live.

But most important of all, I got married to the amazing Nicole, and became a dad to a bouncing, cartwheeling 8-year-old girl.

How do I top 2015? 2016 is shaping up to be a different kind of fantastic 😀 I hope you’ve all had a great Christmas (or whatever you celebrate or do this season) and that 2016 will be a great year for you.

Aidan.

Broadcom & Intel Network Engineers Need A Good Beating

Your virtual machines lost network connectivity.

Yeah, Aidan Smash … again.

READ HERE: I’m tired of having to tell people to:

Disable VMQ on 1 GbE NICs … no matter what … yes, that includes you … I don’t care what your excuse is … yes; you.

That’s because VMQ on 1 GbE NICs is:

  • On by default, despite the requests and advice of Microsoft
  • Known to break Hyper-V networking

Here’s what I saw on a brand new Dell R730, factory fresh with a NIC firmware/driver update:

[Screenshot: NIC advanced properties showing Virtual Machine Queues enabled]

Now what do you think is the correct action here? Let me give you the answer:

  1. Change Virtual Machine Queues to Disabled
  2. Click OK
  3. Repeat on each 1 GbE NIC on the host (or script it – see the sketch below).
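If you’d rather script those steps than click through NIC property dialogs, the in-box NetAdapter cmdlets will do the same job across every 1 GbE NIC on a host. A sketch – check that the reported speeds match your NICs before running it:

  # Review the current VMQ state of all adapters
  Get-NetAdapterVmq

  # Disable VMQ on every physical 1 GbE NIC (Speed is reported in bits per second)
  Get-NetAdapter -Physical | Where-Object { $_.Speed -eq 1000000000 } |
      ForEach-Object { Disable-NetAdapterVmq -Name $_.Name }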

Got any objections to that? Go to READ HERE above. Still got questions? Go to READ HERE above. Got some objections? Go to READ HERE above. Want to comment on this post? Go to READ HERE above.

This BS is why I want Microsoft to disable all hardware offloads by default in Windows Server. The OEMs cannot be trusted to deploy reliable drivers/firmware, and many of you cannot be trusted to test/configure the hosts correctly. If the offloads are off by default, then turning one on is your deliberate choice, it’s up to you to test it, and all blame goes on your shoulders.

So what modification do you think I’m going to make to these new hosts? See READ HERE above 😀

EDIT:

FYI, basic 1 GbE networking was broken on these hosts when I installed WS2012 R2 with all Windows Updates – the 10 GbE NICs were fine. I had to deploy firmware and driver updates from Dell to get the R730 to talk reliably on the network … before I did what is covered in READ HERE above.

My WS2016 Hyper-V Session at Future Decoded

I had fun presenting at this Microsoft UK event in London. Here’s a recording of my session on Windows Server 2016 (WS2016) Hyper-V, featuring failover clustering, storage, and networking:

[Video: recording of the WS2016 Hyper-V session]

More sessions can be found here.

Windows Server 2016 Licensing is Announced

Some sales/marketing/channel type in Microsoft will get angry reading this. Good. I am an advocate of Microsoft tech; I speak out when things are good, and I speak out when things are bad. Friends criticise each other when one does something stupid. So don’t take the criticism personally, get angry, and send off emails to moan about me. Trying to censor me won’t solve the problem. Hear the feedback. Fix the issue.

We’re still many months away from the release of Windows Server 2016 (my guess: September, the week of Ignite 2016), but Microsoft has released the details of how the licensing of WS2016 will be changing. Yes; changing; a lot.

In 2011, I predicted that the growth in cores per processor would trigger Microsoft to switch from per-socket licensing of Windows Server to per-core. Well, I was right. Wes Miller (@getwired) tweeted a link to a licensing FAQ on WS2016 – this paper confirms that WS2016 and System Center 2016 will be out in Q3 2016.

[Screenshot: extract from the WS2016 licensing FAQ]

There are two significant changes:

  • Switch to per-core licensing
  • Standard and Datacenter editions are not the same anymore

Per-Core Licensing

The days when processors got more powerful by becoming faster are over. We are in a virtualized multi-threaded world where capacity is more important than horsepower – plus the laws of physics kicked in. Processors now grow by adding cores.

The largest processor that I’ve heard of from Intel (not claiming that it’s the largest ever) has 60 (SIXTY!) cores!!! Imagine you deploy a host with 2 of those Xeon Phi processors … you can license a huge number of VMs with just 2 copies of WS2012 R2 Datacenter (no matter what virtualization you use). Microsoft is losing money at the upper end of the market because of the scale-out of core counts, so a change was needed.

I hoped that Microsoft would preserve the price for normal customers – it looks like they have, for many customers, but probably not all.

Note – this is per PHYSICAL CORE licensing, not virtual core, not logical processor, not hyperthread.

[Screenshot: the per-core licensing rules from the FAQ]

Yes, the language of this document is horrendous. The FAQ needs a FAQ.

It reads like you must purchase a minimum of 8 cores per physical processor, and then purchase additional licenses in increments of 2 cores to cover your physical core count. The customer that is hurt most is the one with a small server, such as a micro-server – you must purchase a minimum of 16 cores per server.
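To make the minimums concrete, here’s a hypothetical helper that applies the rules as I read them from the FAQ (at least 8 cores per proc, at least 16 per server, sold in 2-core packs) – my interpretation, not official licensing guidance:

  # Hypothetical calculator for the minimum WS2016 core licenses per host
  function Get-RequiredCoreLicenses {
      param([int]$Sockets, [int]$CoresPerSocket)
      $perProc = [Math]::Max(8, $CoresPerSocket)         # at least 8 cores per proc
      $total = [Math]::Max(16, $Sockets * $perProc)      # at least 16 cores per server
      if ($total % 2 -ne 0) { $total++ }                 # round up to a 2-core pack
      return $total
  }

  Get-RequiredCoreLicenses -Sockets 2 -CoresPerSocket 4   # micro-server: still 16
  Get-RequiredCoreLicenses -Sockets 2 -CoresPerSocket 24  # dense host: 48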


One of the marketing lines is that on-premises licensing will align with cloud licensing – anyone deploying Windows Server in Azure or with any other hosting company is used to the core model. A Software Assurance benefit was allegedly announced in October on the very noisy Azure blog (I can’t find it): you can move your Windows Server (with SA) license to the cloud, and deploy it in a blank VM without the OS charge. I have no further details – it doesn’t appear on the benefits chart either. More details are due in Q1 2016.

CALs

The switch to core-focused licensing does not do away with CALs. You still need to buy CALs for privately owned licenses – Windows Server CALs aren’t needed in hosting, e.g. Azure.

System Center

System Center is switching to per-core licensing too.


Nano?

Nano Server is just an installation type, not a separate edition, so it doesn’t change the licensing.

Editions?

We know about the “core” editions of WS2016: Standard and Datacenter – more on those later in this post.

As for Azure Stack, Essentials, Storage Server, etc., we’re told to wait until Q1 2016, when someone somewhere in Redmond is going to have to eat some wormy crow. Why? Keep reading.

Standard is not the same as Datacenter

I found out about the licensing announcement after getting an email from Windows Server User Voice telling me that the following feedback of mine was rejected:

[Screenshot: the rejected User Voice feedback item]

I knew that some stuff was probably going to end up in Datacenter edition only. Many of us gave feedback: “your solutions for reducing infrastructure costs make no sense if they are in Datacenter only because then your solution will be more expensive than the more mature and market-accepted original solution”.


The following are Datacenter Edition only:

  • Storage Spaces Direct
  • Storage Replica
  • Shielded Virtual Machines
  • Host Guardian Service
  • Network Fabric

I don’t mind the cloud stuff being Datacenter only – that’s all for densely populated virtualization hosts, which is what Datacenter should be used on. But it’s freaking stupid to put the storage stuff in this SKU only. Let’s imagine a 12-node S2D cluster. Each node has:

  • 2 * 800 GB flash
  • 8 * 8 TB SATA

That’s 65.6 TB of raw capacity per node. We have roughly 787 TB of raw capacity in the cluster, and we’ll guesstimate 314 TB of usable capacity. If licensing each node costs $6,155, then the licensing cost alone (forget the RDMA network switches, NICs, servers, and flash/HDD) will be $73,860. Licensing for storage will be $73,860. Licensing. How much would a SAN cost you instead? Where is the cost benefit in going with commodity hardware there, may I ask?
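For anyone checking my maths, here’s the arithmetic (the $6,155 per-node figure used above is an estimate, so treat the output the same way):

  # Sanity-checking the cluster numbers above
  $perNodeTB = (2 * 0.8) + (8 * 8)     # 1.6 TB flash + 64 TB SATA = 65.6 TB raw per node
  $rawTB = 12 * $perNodeTB             # roughly 787 TB raw across the cluster
  $licensing = 12 * 6155               # $73,860 in Datacenter licensing alone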

This is almost as bad a cock-up as VMware charging for vRAM.

As for Storage Replica, I have a hard time believing that licensing 4 storage controllers for synchronous replication will cost more than licensing every host/application server with Datacenter for Storage Replica.

S2D is dead. Storage Replica is irrelevant. How are technologies that are already viewed with suspicion by customers going to gain any traction if they cannot compete with the incumbent solutions? It’s a pity that some marketing bod can’t use Excel, because the storage team did what looks like an incredible engineering job.

If you agree that this decision was stupid then VOTE here.


How Much Does X In Azure Cost?

If you want to make me angry/ridicule you, this is the question to ask me.

Let me ask you a question. How much does it cost to buy a car?

Well, sir/mam, that depends. Would you like this fine classic?

Or would you like something more mobile but on the basic end?

How about something with a bit more oomph?

What sort of features would you like added?

Or would you like to go all out?

The answer to “how much is a car?” is anywhere between $0 and (last recorded at auction) $52,000,000.

So, when there are so many variations in Azure VMs, storage types, ways to connect to Azure, options for backing up, enabling DR, etc., how do you expect to price up a cloud solution with a question like “how much is a VM in the cloud?”

Azure is a technical sale.

Azure is a technical sale.

Azure is a technical sale.

Azure is a technical sale.

Azure is a TECHNICAL sale.

Let me repeat that one more time …

Azure is a TECHNICAL sale.

No design = no pricing. Simples.

In a technical pre-sale, you’ll do something called a “design”. This “design” allows you to do something called a “specification”. In the “design” you figure out which bits you need. The “specification” allows you to figure out the sizes of those bits. Strangely enough, there is this thing called “Google” that allows you to search for available specs and pricing, such as:

  • Azure sizes VM
  • Azure pricing VM
  • Azure pricing storage
  • Azure pricing gateway
  • Azure pricing data transfer
  • Azure pricing backup
  • AZURE PRICING site recovery
  • AZURE PRICING RemoteApp

I really doubt that I have some unique intellect that has identified a search pattern that no one else can find. But some days … I do wonder.

If you are a potential Azure customer, I have two tips for working with consultants/sales people:

  • Technical requirement: NO ONE can price Azure without a technical engagement. As an end customer, I never took a meeting without a technical pre-sales person present. If there was no techie in reception, then I didn’t come down. Was I being a d*ck? Yeah, but I made it clear how I bought.
  • Give them a test: Ask the consulting company to price up a solution … in front of you … right there … with no notice. Informed questions that look to pin down a design/spec are indicators of knowledge. Signs of panic, procrastination, non-vibrating phones suddenly being “answered” … they’re bad signs.

For you consulting companies, the advice is simple: cop on and skill up. Take advantage of every opportunity there is to skill up – there are lots, and many of them are free: “we’re too busy” is bull$h1t. Classic sales people do have a role: sell the concepts to the managers. After that, you need to bring in the techs and design/spec up a solution. Failing to do so will get you nowhere, other than looking like an idiot trying to build castles on sand.


Russinovich on Hyper-V Containers

We’ve known since Ignite 2015 that Microsoft was going to have two kinds of containers in Windows Server 2016 (WS2016):

  • Windows Server Containers: Providing OS and resource virtualization and isolation.
  • Hyper-V Containers: The hypervisor adds security isolation on top of that OS and resource isolation.

Beyond that general description, we knew almost nothing about Hyper-V Containers, other than to expect them in preview during Q4 of 2015 – Technical Preview 4 (TPv4) – and that they are the primary motivation for Microsoft giving us nested virtualization.

That also means that nested virtualization will come to Windows Server 2016 Hyper-V in TPv4.

We have remained in the dark since then, but Mark Russinovich appeared on Microsoft Mechanics (a YouTube webcast by Microsoft), where he explained a little more about Hyper-V Containers and did a short demo.

Some background first. Normally, a machine has a single user mode running on top of kernel mode. This is what restricts us to the “one app per OS” best practice/requirement, depending on the app. When you enable Containers on WS2016, an enlightenment in the kernel allows multiple user modes. This gives us isolation:

  • Namespace isolation: Each container sees its own file system and registry (the hives are in the container’s hosted files).
  • Resource isolation: How much CPU, memory, and other resources a container can use.

Kernel mode is already running when you start a new container, which improves the time to start up a container, and thus its service(s). This is great for deploying and scaling out apps, because a containerised app can be deployed and started in seconds from a container image with no long-term commitment, versus minutes for an app in a virtual machine with a longer-term commitment.
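For context, in the technical previews this is driven by a Containers PowerShell module (later retired in favour of Docker tooling). A sketch of the TP-era workflow, with hypothetical image and switch names, so expect the commands to differ in later builds:

  # TP-era sketch: create and start a Windows Server Container from a base OS image
  Get-ContainerImage                            # list the available base images

  $container = New-Container -Name "web01" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"

  $container | Start-Container                  # starts in seconds – no OS boot needed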


But Russinovich goes on to say that while containers are great for some things that Microsoft wants to do in Azure, they also have to host “hostile multi-tenant code” – code uploaded by Microsoft customers that Microsoft cannot trust and that could be harmful or risky to other tenants. Windows Server Containers, like their Linux container cousins, do not provide security isolation.

In the past, Microsoft has placed such code into Hyper-V (Azure) virtual machines, but that comes with a management and direct cost overhead. Ideally, Microsoft wants to use lightweight containers with the security isolation of machine virtualization. And this is why Microsoft created Hyper-V Containers.

Hyper-V provides excellent security isolation (far fewer vulnerabilities have been found in it than in vSphere) that leverages hardware isolation. DEP is a requirement. WS2016 is introducing IOMMU support, VSM (Virtual Secure Mode), and Shielded Virtual Machines, with a newly hardened hypervisor architecture.

Hyper-V Containers use the exact same code and container images as Windows Server Containers. That makes your code interchangeable – Russinovich shows a Windows Server Container being switched into a Hyper-V Container by using PowerShell to change the run type (the container attribute RuntimeType).

The big difference between the two types, other than the presence of Hyper-V, is that Hyper-V Containers get their own optimized instance of Windows running inside of them, as the host for the single container that they run.


The Hyper-V Container is not a virtual machine – Russinovich demonstrates this by searching for VMs with Get-VM. It is a container, and is manageable by the same commands as a Windows Server Container.

In his demos he switches a Windows Server Container to a Hyper-V Container by running:

Set-Container -Name <Container Name> -RuntimeType HyperV

And then he queries the container with:

Get-Container -Name <Container Name> | fl Name, State, RuntimeType

So the images and the commands are common across Hyper-V Containers and Windows Server Containers. Excellent.

It looked to me like starting this Hyper-V Container was a slower operation than starting a Windows Server Container. That would make sense, because the Hyper-V Container requires its own operating system.

I’m guessing that Hyper-V Containers either require or work best with Nano Server. And you can see why nested virtualization is required: a physical host will run many VM hosts, and a VM host might need to run Hyper-V Containers – therefore the VM host needs to run Hyper-V and must have virtualized VT-x instructions.
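For what it’s worth, exposing VT-x to a guest is a one-liner in the preview builds that introduced nested virtualization – run it on the physical host while the VM is powered off (“VMHost01” is a hypothetical VM name):

  # Expose virtualization extensions to a guest so that it can run Hyper-V itself
  Set-VMProcessor -VMName "VMHost01" -ExposeVirtualizationExtensions $true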

Russinovich demonstrates the security isolation. Earlier in the video he queries the processes running in a Windows Server Container. There is a single CSRSS process in the container. He shows that this process instance is also visible on the VM host (same process ID). He then does the same test with a Hyper-V Container – the container’s CSRSS process is not visible on the VM host because it is contained and isolated by the child boundary of Hyper-V.

What about Azure? Microsoft wants Azure to be the best place to run containers – he didn’t limit this statement to Windows Server or Hyper-V, because Microsoft wants you to run Linux containers in Azure too. Microsoft announced the Azure Container Service, with investments in Docker and Mesosphere for the deployment and automation of Linux, Windows Server, and Hyper-V containers. Russinovich mentions that Azure Automation and Machine Learning will leverage containers – this makes sense, because it will allow Microsoft to scale out services very quickly, in a secure manner, with less resource and management overhead.

That was a good video, and I recommend that you watch it.


Microsoft News – 19 October 2015

It turns out that Microsoft has been doing some things that are not Surface-related. Here’s a summary of what’s been happening in the last while …

Hyper-V


Windows Server

Windows Client

Azure

Office 365

Miscellaneous