Windows Server 2012 R2 Hyper-V brings us a new storage feature called Storage QoS. You can optionally turn on quality of service management on selected virtual hard disks. You then have two settings, both of which default to 0 (unmanaged):
- Minimum: This is the setting you are least likely to use in WS2012 R2. Unlike the Minimum setting in networking QoS, it is not a guaranteed reservation. Instead, it acts as an alerting threshold: you enter the number of IOPS the virtual hard disk requires, and the host can alert you if that disk cannot get them.
- Maximum: Here you can specify the maximum number of IOPS that a virtual hard disk can use from the physical storage. This is the setting you are most likely to use in Storage QoS in WS2012 R2, because it allows you to limit overly aggressive VM activity on your physical storage.
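Both settings can be configured per virtual hard disk in Hyper-V Manager or via the Set-VMHardDiskDrive cmdlet, which gained -MinimumIOPS and -MaximumIOPS parameters in WS2012 R2. A minimal sketch, assuming a VM called "TestVM" with the data disk attached at SCSI controller 0, location 1 (both names are examples, not from the lab above):

```powershell
# Enable Storage QoS on one virtual hard disk.
# MinimumIOPS is the alerting threshold; MaximumIOPS is the cap. 0 means unmanaged.
Set-VMHardDiskDrive -VMName "TestVM" `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 `
    -MinimumIOPS 100 -MaximumIOPS 500
```

Setting either value back to 0 returns that virtual hard disk to unmanaged.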
This is a host-level feature, so the guest OS is irrelevant. The settings are available for both VHD (which you should have stopped deploying) and VHDX (which you should be deploying).
What Storage QoS Looks Like
I’ve set up a test lab to demonstrate this. A VM has two additional 10 GB virtual hard disks, both fixed (for a fair comparison), in the same folder on the host. I formatted the drives as P: and Q: in the guest OS and created an empty file called testfile.dat in each volume. I then downloaded and installed SQLIO in the guest OS of the VM; this tool lets me stress/benchmark storage. I started PerfMon on the host and added the Read Operations/Sec metric from Hyper-V Virtual Storage Device for the two virtual hard disks in question.
I opened two command prompt windows and ran:
- sqlio.exe -s1000 -t10 -o16 -b8 -frandom p:testfile.dat
- sqlio.exe -s1000 -t10 -o16 -b8 -frandom q:testfile.dat
That gives me 1,000 seconds (-s1000) of random (-frandom) 8 KB (-b8) read activity against the P: drive (the first data virtual hard disk) and the Q: drive (the second data virtual hard disk), each using 10 threads (-t10) with 16 outstanding I/Os per thread (-o16). Immediately I saw that both virtual hard disk files were doing over 300 read IOPS.
I then configured the second virtual hard disk (containing Q:) to be restricted to 50 IOPS.
PerfMon responded before the settings screen could even refresh after I clicked OK. The read activity on the virtual hard disk dropped to around 50 IOPS (highlighted in black), usually just under and sometimes creeping just over 50, though never for long before QoS clawed it back down.
The non-restricted virtual hard disk benefited immediately from the freed-up capacity, its read IOPS (highlighted in black) rising to over 560.
Usage of Storage QoS
I think this is going to be a weird woolly area. The only best practice I know of is that you should know what you are doing first. Few people understand (A) what IOPS is, and (B) how many IOPS their applications need. This is why Microsoft added the Hyper-V metrics for measuring read and write operations per second of a virtual hard disk (see above). This gives you the ability to gather information (I don’t know if a System Center Operations Manager management pack has been updated) and determine regular usage patterns.
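Beyond the per-disk PerfMon counters, WS2012 R2’s resource metering can also report normalized storage IOPS per VM, which is useful for building up those usage patterns over time. A sketch, again assuming a VM called "TestVM" (an example name), and assuming the aggregated storage properties that Measure-VM reports on WS2012 R2:

```powershell
# Turn on resource metering for the VM, let the workload run for a while,
# then read back the aggregated storage metrics
Enable-VMResourceMetering -VMName "TestVM"
# ... run the workload over a representative period ...
Measure-VM -VMName "TestVM" |
    Select-Object VMName, AggregatedAverageNormalizedIOPS, AggregatedAverageLatency
```

Reset-VMResourceMetering clears the counters if you want to start a fresh measurement window.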
Once you know what usage to expect, you can set limits to keep that virtual hard disk from misbehaving.
I personally think that Storage QoS will be a reactive measure for out-of-control virtual machines in traditional virtualization deployments and most private clouds. However, those adopting the hands-off, self-service model of a true cloud (such as a public cloud) may decide to limit every virtual hard disk by default. Who knows!
Anyway, the feature is there; just be sure you know what you’re doing if you decide to use it.