Video – Azure File Sync

I’ve produced and shared a short video (12:33) to explain what Azure File Sync is and what it can do for you, with a quick demo at the end. If you want to:

  • Synchronise file shares between offices
  • Fix problems with full file servers by using tiered storage in the cloud
  • Use online backup
  • Get a DR solution for file servers, e.g. small business or branch office

… then Azure File Sync is for you!

Was This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Webinar Recording – Clustering for the Small/Medium Enterprise & Branch Office

I recently did another webinar for work, this time focusing on how to deploy an affordable Hyper-V cluster in a small-medium business or a remote/branch office. The solution is based on Cluster-in-a-Box hardware and Windows Server 2012 R2 Hyper-V and Storage Spaces. Yes, it reduces costs, but it also simplifies the solution, speeds up deployment times, and improves performance. Sounds like a win-win-win-win offering!

image

We have shared the recording of the webinar on the MicroWarehouse site, and that page also includes the slides and some additional reading & viewing.

The next webinar has been scheduled: on August 25th at 2PM UK/Irish time (there is a calendar link on the page), I will be doing a session on what’s new in WS2016 Hyper-V, and I’ll be doing some live demos. Join us even if you don’t want to learn anything about Windows Server 2016 Hyper-V, because it’s live demos using a Technical Preview build … it’s bound to all blow up in my face.

KB3172614 To Replace/Fix Hyper-V Installations Broken By KB3161606

Microsoft released a new update rollup to replace the very broken and costly (our time = our money) June rollup, KB3161606, whose issues affected Hyper-V on Windows 8.1 and Windows Server 2012 R2 (WS2012 R2).

It’s sad that I have to write this post but, unfortunately, Microsoft is still releasing untested updates. This is why I advise delaying updates by 2 months.

In the case of the issues in the June 2016 update rollup, the fixes are going to require human effort … customers’ human effort … and that means customers are paying for issues caused by a supplier. I’ll let you judge what you think of that (feel free to comment below).

A month after news of the issues in the update became known (the update rollup had already been in the wild for a week or two), Microsoft issued a superseding update that will fix the issues. At the same time, they finally publicly acknowledged the issues in the June update:

image

So it took 1.5 months from the initial release for Microsoft to get this update right. That’s why I advise a 2-month delay on approving/deploying updates, and I will continue to do so.

What does Microsoft need to fix?

  • Change the way updates are created/packaged. This problem has been going on for years. Support is not good at this work, and it needs to move into the product groups.
  • Microsoft has successfully reacted to market pressure before by placing special emphasis on change, e.g. The Internet, secure coding, The Cloud. Satya Nadella needs to do the same for quality assurance (QA), something that I learned in software engineering classes is as important as the code. I get that edge scenarios are hard to test, but installing/upgrading ICs in a Hyper-V guest OS is hardly a rare situation.
  • Start communicating. Put your hands up publicly, and say “mea culpa”, show what went wrong and follow it up with progress reports on the fix.


Webinar – Affordable Hyper-V Clustering for the Small/Medium Enterprise & Branch Office

I will be presenting another MicroWarehouse webinar on August 4th at 2PM (UK/Ireland), 3PM (Central Europe), and 9AM (Eastern). The topic of the next webinar is how to make highly available Hyper-V clusters affordable for SMEs and large-enterprise branch offices. I’ll talk about the benefits of the solution, and then delve into what you get from this hardware + software offering: better uptime, lower cost, and better performance than the SAN that you might have priced from HPE or Dell.

image

Interested? Then make sure that you register for our webinar.

Don’t Deploy KB3161606 To Hyper-V Hosts, VMs, or SOFS

Numerous sources have reported that KB3161606, an update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 (WS2012 R2), is breaking the upgrade of Hyper-V VM integration components (ICs). This has been confirmed, and Microsoft is aware of the situation.

As many of the comments below note, Microsoft eventually released a superseding update to resolve these issues.

The scenario is:

  1. You deploy the update to your hosts, which upgrades the ISO for the Hyper-V ICs.
  2. You deploy the update to your VMs because it contains many Windows updates, not just the ICs.
  3. You attempt to upgrade the ICs in your VMs to stay current. The upgrade will fail.

Note that if you upgrade the ICs before deploying the update rollup inside of the VM, then the upgrade works.
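
If you want to check where your VMs stand, the Hyper-V PowerShell module reports the IC version and update state per VM. A minimal sketch, run on the host:

  # List each VM's integration components version and update status
  Get-VM | Format-Table Name, IntegrationServicesVersion, IntegrationServicesState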

My advice is the same as it has been for a while now. If you have the means to manage updates, then do not approve them for 2 months (I used to say 1 month, but System Center Service Manager decided to cause havoc a little while ago). Let someone else be the tester that gets burned and fired.

Here’s hoping that Microsoft re-releases the update in a way that doesn’t require uninstalls. Those who have done the deployment already in their VMs won’t want another painful maintenance window that requires uninstall-reboot-install-reboot across all of their VMs.
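
If you’ve already deployed the rollup inside VMs and need to back it out, the standard wusa removal applies. A sketch, run inside the guest OS (a reboot will be required):

  # Uninstall the June 2016 update rollup from the guest OS
  wusa.exe /uninstall /kb:3161606 /quiet /norestart
  # Restart when your maintenance window allows
  Restart-Computer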

EDIT (6/7/2016)

Microsoft is working on a fix for the Hyper-V IC issue. After multiple reports of issues on scale-out file servers (SOFS), it’s become clear that you should not install KB3161606 on SOFS clusters either.

DataON CiB-9112 V12 Cluster-in-a-Box

In this post, I’ll tell you about the Cluster-in-a-Box solution from DataON Storage that allows you to deploy a Hyper-V cluster for a small/mid-sized business or branch office in just 2U, at a lower cost than you’ll pay to the likes of Dell/HP/EMC/etc., and with more performance.

Background

So you might have noticed on social media that my employers are distributing storage/compute solutions from both DataON and Gridstore. While some might see them as competitors, I see them as complementary solutions in our portfolio that are aimed at two different markets:

  • Gridstore: Their hyper-converged infrastructure (HCI) products remove fear and risk by giving you a pre-packaged solution that is easy and quick to scale out.
  • DataON: There are two offerings, in my opinion. SMEs want HA at a budget they can afford – I’ll focus on that area in this article. And then there are the scaled-out Storage Spaces offerings, which, with some engineering and knowledge, allow you to build out a huge storage system at a fraction of the cost of the competition – assuming you buy from distributors that aren’t more focused on selling EMC or NetApp 🙂

The Problem

There is a myth out there that the cloud has or will remove servers from SMEs. The category “SME” covers a huge variety of companies. Outside of the USA, it’s described as a business with 5-250 users. I know that some in Microsoft USA describe it as a company with up to 2,500 users. So, sure, a business with 5-50 users might go server-less pretty easily today (assuming broadband availability), but other organizations might continue to keep their Hyper-V (more likely in SME) or vSphere (less likely in SME) infrastructures for the foreseeable future.

These businesses have the same demands for applications, and HA is no less important to a 50 user business than it is for a giant corporation; in fact, SMEs are hurt more when systems go down because they probably have a single revenue operation that gets shut down when some system fails.

So why isn’t the Hyper-V (or vSphere) cluster the norm in an SME? It’s simple: cost. It’s one thing to go from one host to two, but throw in the cost of a modest SAS/iSCSI SAN and that solution just became unaffordable – in case you don’t know, the storage companies allegedly make 85% margin on the list price of storage. SMEs just cannot justify the cost of SAN storage.

Storage Spaces

I was at the first Build conference in LA when Microsoft announced Windows 8 and Windows Server 2012. WS2012 gave us Storage Spaces, and Microsoft implored the hardware vendors to invest in this new technology, mainly because Microsoft saw it as the future of cluster storage. A Storage Spaces-certified JBOD can be used instead of a SAN as shared cluster storage, and this could greatly bring down the cost of Hyper-V storage for customers of all sizes. Tiered storage (SSD and HDD), which combines the speed of SSD with the economy of large hard drives (now up to 10 TB) via transparent, automatic, demand-based block tiering, means that economy doesn’t come with a drop in performance – it actually increases performance!
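
To make that concrete, here’s a minimal PowerShell sketch of building a tiered, mirrored virtual disk from a Storage Spaces pool. The pool, tier, and disk names are my own placeholders, and the tier sizes are arbitrary:

  # Find the Storage Spaces subsystem and the disks that can be pooled
  $ss    = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
  $disks = Get-PhysicalDisk -CanPool $true

  # Create a single pool from all poolable disks
  New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks $disks

  # Define the SSD (performance) and HDD (capacity) tiers
  $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
  $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

  # A 2-way mirrored virtual disk with 100 GB of SSD tier and 1 TB of HDD tier
  New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk1" `
      -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,1TB -ResiliencySettingName Mirror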

Cluster-in-a-Box

One of the sessions, presented by Microsoft Clustering Principal PM Lead Elden Christensen, focused on a new type of hardware solution that MSFT wanted to see vendors develop. A Cluster-in-a-Box (CiB) would provide a small storage or Hyper-V cluster in a single pre-packaged and tested enclosure. That enclosure would contain:

  • Two or four independent blade servers
  • Shared storage in the form of a Storage Spaces “JBOD”
  • Built-in cluster networking
  • Fault tolerant power supplies
  • The ability to expand via SAS connections (additional JBODs)

I loved this idea; here was a hardware solution that was perfect for a Hyper-V cluster in an SME or a remote office/branch office (ROBO), and the deployment could be really simple – there are few decisions to make about the spec, performance would be awesome via storage tiering, and deployment could be really quick.

DataON CiB-9112 V12

This is the second generation of CiB that I have worked with from DataON, a company that specialises in building state-of-the-art, Microsoft-certified Storage Spaces hardware. My employers, MicroWarehouse Ltd. (an Irish company that has nothing to do with an identically named UK company), distribute DataON hardware to resellers around Europe – everywhere from Galway in the west of Ireland to Poland so far.

The CiB concept is simple. There are two blade servers in the 2U enclosure. Each has the following spec:

  • Dual Intel® Xeon® E5-2600v3 (Haswell-EP)
  • DDR4 Reg. ECC memory up to 512GB
  • Dual 1G SFP+ & IPMI management “KVM over IP” port
  • Two PCI-e 3.0 x8 expansion slots
  • One 12Gb/s SAS x4 HD expansion port
  • Two 2.5” 6Gb/s SATA OS drive bays

Networking-wise, there are 4 NICs per blade:

  • 2 x LAN-facing Intel 1 GbE NICs, which I team for a virtual switch with management OS sharing enabled (and QoS enabled).
  • 2 x internal Intel 10 GbE NICs, which I use for cluster communications and SMB 3.0 Live Migration. These NICs are internal copper connections, so you do not need an external 10 GbE switch. I do not team these NICs, and they should be on 2 different subnets for cluster compatibility.

You can use the PCI-e expandability to add more SAS or NIC interfaces as required, e.g. DataON works closely with Mellanox for RDMA networking.

The enclosure also has:

  • 12-bay 3.5”/2.5” shared drive slots (with caddies)
  • 1023W (1+1) redundant power

image

Typically, the 12 shared drive bays are used as a single storage pool with 4 x SSDs (performance) and 8 x 7200 RPM HDDs (capacity). Tiering in Storage Spaces works very well. Here’s an anecdote I heard while in a pre-sales meeting with one of our resellers:

They put a CiB (6 Gb SAS, instead of 12 Gb as on the CiB-9112) into a customer site last year. That customer had a regular batch job that would normally take hours to run, and they had gotten used to working around that dead time. Things changed when the VMs were moved onto the CiB. The batch job ran so quickly that the customer was sure that it hadn’t run correctly. The reseller double-checked everything, and found that Storage Spaces tiering and the power of the CiB blades had greatly improved the performance of the database in question; everything was actually fine – great, actually!

And here was the kicker – that customer got a 2 node Hyper-V cluster with shared storage in the form of a DataON CiB for less than the cost of a SAN, let alone the cost of the 2 Hyper-V nodes.

How well does this scale? I find that CPU/RAM are rarely the bottlenecks in the SME. There are plenty of cores/logical processors in the E5-2600v3, and 512 GB RAM is more than enough for any SME. Disk is usually the bottleneck. With a modest configuration (not the max) of 4 x 200 GB SSDs and 8 x 4 TB drives, you’re looking at around 14 TB of usable 2-way mirrored (like RAID 10) storage; mirroring halves the raw capacity, and pool overhead takes a slice too. Or you could have 4 x 1.6 TB SSDs and 8 x 8 TB HDDs and have around 32 TB of usable 2-way mirrored storage. That’s plenty!

And if that’s not enough, then you can expand the CiB using additional JBODs.

My Hands-On Experience

Lots of hardware goes through our warehouse that I never get to play with. But on occasion, a reseller will ask for my assistance. A couple of weeks ago, I got to do my first deployment of the 12 Gb SAS CiB-9112. We got it out of the box, and I was immediately impressed. The hardware was clearly engineered for the admins who will manage it. It really is a very clever and modular design.

image

The two side-bezels on the front of the 2U enclosure have a power switch and USB port for each blade server.

On the top, you can easily access the replaceable fans via a dedicated hinged panel. At the back, both fault-tolerant power supplies are in the middle, away from the clutter at the sides of a rack. The blades can be removed separately from their SAS controllers. And each of the RAID1 disks for the blades’ OS (the management OS of the Hyper-V cluster) can be replaced without removing the blade.

Racking a CiB is a simple task – the entire Hyper-V cluster is a single 2U enclosure, so there are no SAN controllers, SAN switches, SAN cables, or multiple servers. You slide the single 2U enclosure into its rail kit, plug in power, networking, and KVM, and you’re done.

Windows Server is pre-installed and you just need to modify the installation type (from eval) and enter your product key using DISM. Then you prep the cluster – DataON pre-installs MPIO, Hyper-V, and Failover Clustering to make your life easy.
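
If you haven’t converted an eval install before, it’s a two-command job with DISM. A sketch, with a placeholder product key (substitute your own):

  # See which editions the eval install can be converted to
  DISM /Online /Get-TargetEditions

  # Convert the eval to a licensed edition, supplying your real product key
  DISM /Online /Set-Edition:ServerStandard /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula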

My design is simple (there’s a PowerShell sketch after this list):

  • The 1 GbE NICs are teamed, connected to a weight-based QoS Hyper-V switch, and shared with the parent. A weight of 50 is assigned to the default bucket QoS rule, and 50 is assigned to the management OS virtual NIC.
  • The 10 GbE NICs are on 2 different subnets.
  • I enable SMB 3.0 Live Migration on both nodes in Hyper-V Manager.
  • MPIO is configured with the LB policy.
  • I ensure that VMQ is disabled on the 1 GbE NICs and enabled on the 10 GbE NICs.
  • I form the cluster with no disks, and configure the 10 GbE NICs for Live Migration.
  • A single clustered storage pool is created in Failover Cluster Manager.
  • A 1 GB (it’s always bigger) 2-way mirrored virtual disk is created and configured as the witness disk in the cluster.
  • I create 2 virtual disks to be used as CSVs in the cluster, with 64 KB interleaves and formatted with 64 KB allocation unit size. The CSVs are tiered with some SSD and some HDD … I always leave free space in the pool to allow expandability of one CSV over the other. HA VMs are balanced between the 2 CSVs.
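
Here’s roughly what that looks like in PowerShell. This is a sketch rather than a runbook: the NIC names, cluster name, and IP address are placeholders that I’ve made up, and the weights match the design above:

  # Team the two LAN-facing 1 GbE NICs and build a weight-based QoS virtual switch
  New-NetLbfoTeam -Name "LANTeam" -TeamMembers "LAN1","LAN2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
  New-VMSwitch -Name "External1" -NetAdapterName "LANTeam" -MinimumBandwidthMode Weight -AllowManagementOS $true

  # 50 for the default bucket, 50 for the management OS virtual NIC
  Set-VMSwitch -Name "External1" -DefaultFlowMinimumBandwidthWeight 50
  Set-VMNetworkAdapter -ManagementOS -Name "External1" -MinimumBandwidthWeight 50

  # VMQ: disabled on the 1 GbE NICs, enabled on the internal 10 GbE NICs
  Disable-NetAdapterVmq -Name "LAN1","LAN2"
  Enable-NetAdapterVmq -Name "10GbE1","10GbE2"

  # Use SMB 3.0 for Live Migration (set on both nodes)
  Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

  # Form the cluster with no disks; the clustered pool and CSVs come afterwards
  New-Cluster -Name "CiB1" -Node "Node1","Node2" -NoStorage -StaticAddress 192.168.1.50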

What about DCs? If the customer is keeping external DCs, then everything is done. If they want DCs running on the CiB, then I always deploy them as non-HA DCs that are stored on the C: of each CiB blade. I know that, since WS2012, we are supposed to be able to run DCs as HA VMs on the cluster, but I’ve experienced issues with that.
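
To be clear on the mechanics: a non-HA DC is just a VM created on the blade’s local C: drive instead of on a CSV, so it is never added to the cluster. A sketch with made-up names and sizes:

  # Create the DC on local storage so it is not a clustered (HA) VM
  New-VM -Name "DC1" -Generation 2 -MemoryStartupBytes 2GB `
      -Path "C:\LocalVMs" -NewVHDPath "C:\LocalVMs\DC1\DC1.vhdx" -NewVHDSizeBytes 60GB `
      -SwitchName "External1"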

With some PowerShell, the above process is very quick, and to be honest, the slowest bit is always the logistics of racking the CiB. I’m usually done in the early afternoon, and that includes some show’n’tell.

Summary

If you want a tidy, quick and easy to deploy, and affordable HA solution for an SME or ROBO, then the DataON CiB-9112 V12 is an awesome option. If I were doing our IT from scratch, this is what I would use (we had existing servers and added a DataON JBOD, and recently replaced the servers while retaining the JBOD). I love how tidy the solution is, and how simple it is to set up, especially with some fairly basic PowerShell. So check it out, and see what it can do for you.

Broadcom & Intel Network Engineers Need A Good Beating

Your virtual machines lost network connectivity.

Yeah, Aidan Smash … again.

READ HERE: I’m tired of having to tell people to:

Disable VMQ on 1 GbE NICs … no matter what … yes, that includes you … I don’t care what your excuse is … yes; you.

That’s because VMQ on 1 GbE NICs is:

  • On by default, despite the requests and advice of Microsoft
  • Known to break Hyper-V networking

Here’s what I saw on a brand new Dell R730, factory fresh with a NIC firmware/driver update:

image

Now what do you think is the correct action here? Let me give you the answer:

  1. Change Virtual Machine Queues to Disabled
  2. Click OK
  3. Repeat on each 1 GbE NIC on the host.
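
Or do the same with PowerShell, which scales better across hosts than clicking through NIC properties. A sketch (the adapter names are examples):

  # See the current VMQ state of every NIC
  Get-NetAdapterVmq

  # Disable VMQ on the 1 GbE NICs
  Disable-NetAdapterVmq -Name "NIC1","NIC2","NIC3","NIC4"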

Got any objections to that? Go to READ HERE above. Still got questions? Go to READ HERE above. Got some objections? Go to READ HERE above. Want to comment on this post? Go to READ HERE above.

This BS is why I want Microsoft to disable all hardware offloads by default in Windows Server. The OEMs cannot be trusted to deploy reliable drivers/firmware, and neither can many of you be trusted to test/configure the hosts correctly. If the offloads are off by default then you’ve opted to change the default, and it’s up to you to test – all blame goes on your shoulders.

So what modification do you think I’m going to make to these new hosts? See READ HERE above 😀

EDIT:

FYI, basic 1 GbE networking was broken on these hosts when I installed WS2012 R2 with all Windows Updates – the 10 GbE NICs were fine. I had to deploy firmware and driver updates from Dell to get the R730 to reliably talk on the network … before I did what is covered in READ HERE above.

How Much Memory Does My Hyper-V Host Require?

If you are trying to figure out how much RAM you have left for virtual machines then this is the post for you.

When Microsoft launched Dynamic Memory with W2008 R2 SP1, we were introduced to the concept of a host reserve (nothing to do with the SCVMM concept); the hypervisor would keep a certain amount of memory for the management OS, and everything else was fair game for the VMs. The host reserve back then was a configurable entry in the registry (MemoryReserve [DWORD] in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization). Things changed with WS2012, when we were told that Hyper-V would look after the reserve and we should stay away from it. That means we don’t know how much memory is left for VMs; I could guess roughly, but I had no hard facts.
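
For the record, on W2008 R2 SP1 you could set that reserve yourself. A sketch of the legacy approach, which you should not use on WS2012 or later; I believe the value was specified in MB, but treat that as an assumption and test:

  # W2008 R2 SP1 only: set the legacy host memory reserve (value assumed to be in MB)
  Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" `
      -Name MemoryReserve -Value 2048 -Type DWord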

And then I saw a KB article from about a month ago that deals with a scenario where it appears that a host has free memory but VMs still cannot start.

There are two interesting pieces of information in that post. The first is how to check how much RAM is actually available for VMs. Do not use Task Manager or similar metrics. Instead, use PerfMon and check Hyper-V Dynamic Memory Balancer\Available Memory (instance: System Balancer). This counter shows how much memory is available for starting virtual machines.
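
You can query that counter with PowerShell instead of opening the PerfMon UI; a minimal sketch:

  # Memory available for starting VMs, from the System Balancer instance
  Get-Counter -Counter "\Hyper-V Dynamic Memory Balancer(System Balancer)\Available Memory"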

The second fact is the size of the host reserve, which is based on the amount of physical RAM in the host. The following table is an approximation of the results of the algorithm:

image

Microsoft goes on to give an example. You have a host with 16 GB RAM:

  • The Management OS uses 2 GB.
  • The host reserve is up to 2.5 GB.
  • That leaves you with 11.5 GB RAM for VMs.

So think about it:

  1. You log into the host with 16 GB RAM, and fire up Task Manager.
  2. There you see maybe 13.5 GB RAM free.
  3. You create a VM with 13 GB RAM, but it won’t start, because the management OS uses 2 GB and the host reserve is between 2 and 2.5 GB, leaving you with 11.5-12 GB RAM for VMs.

Microsoft News – 30 September 2015

Microsoft announced a lot of stuff at AzureCon last night, so there are lots of “launch” posts to describe the features. I also found a glut of WS2012 R2 Hyper-V related KB articles & hotfixes from the last month or so.

Hyper-V

Windows Server

Azure

Office 365

EMS

DataON Gets Over 1 Million IOPS using Storage Spaces With A 2U JBOD

I work for a European distributor of DataON storage. When Storage Spaces was released with WS2012, DataON was one of the two leading implementers, and to this day, despite the efforts of HP and Dell, I think DataON gives the best balance of:

  • Performance
  • Price
  • Stability
  • Up-to-date solutions

A few months ago, DataON sent us a document on some benchmark work that was done with their new 12 Gb SAS JBOD. Here are some of the details of the test and the results.

Hardware

  • DNS-2640D (1 tray) with 24 x 2.5” disk slots
  • Servers with 2 x E5-2660v3 CPUs, 32 GB RAM, 2 x LSI 9300-8e SAS adapters, and 2 x SSDs for the OS – they actually used the server blades from the CiB-9224, but this could have been a DL380 or a Dell R7x0
  • Windows Server 2012 R2, Build 9600
  • MPIO configured for the Least Blocks (LB) policy (see the sketch after this list)
  • 24 x 400GB HGST 12G SSD
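
For reference, setting that MPIO policy is a one-liner once the feature is installed; a sketch:

  # Install the MPIO feature if it isn't already present
  Install-WindowsFeature -Name Multipath-IO

  # Set the system-wide default MPIO load-balance policy to Least Blocks
  Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LB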

Storage Spaces

A single pool was created. Virtual disks were created as follows:

image

Test Results

IOMeter was run against the aggregate storage in a number of different scenarios. The results are below:

image

The headline number is 1.1 million 4K reads per second. But even if we stick to 8K, the JBOD was offering 700,000 reads or 300,000 writes per second.

I bet this test rig cost a fraction of what an equivalently performing SAN would!