Fellow MVP, Carsten Rachfahl, just retweeted an interesting article on the Ask PFE blog (Microsoft Premier Field Engineering – a consulting support service offered to customers with lots of money) that discusses best practices for Windows Server 2012 Hyper-V. A friend of mine is a PFE and I know how deep into the weeds they can get in their jobs. That means this could be a very interesting article. I’ve read it. Most of it I 100% agree with. A small bit of it I don’t agree with. Some of it I’d like to expand on a bit.
On Server Core
PFEs work for Microsoft so I expected and got the company line. As you probably know, I prefer a full install because (a) it’s easier to troubleshoot when things go wrong and (b) third party management and configuration software (such as that from your h/w vendor) often relies on not just a GUI but also the presence of IE on the local machine. The ability to switch between full, Core, and Minimal UI is not there yet, in my opinion, because it requires a reboot.
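For what it’s worth, the switch itself is just a feature change plus that reboot. A quick sketch using the WS2012 feature names (removing only Server-Gui-Shell gets you the Minimal Server Interface halfway house):

```powershell
# Full GUI -> Server Core: remove the shell and management infrastructure (reboots)
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

# Server Core -> full GUI: put them back (also reboots)
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
```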
I don’t care about the number of patches; I care about the number of reboots, which is still going to be around once per month. And thanks to Live Migration (on clusters and on SMB 3.0 enabled non-clustered hosts), I don’t even care about the reboots, because I’ll patch during the workday with no service downtime.
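On a cluster, that pattern is a one-liner each way: drain the node, patch, reboot, resume. A rough sketch (the host name is a placeholder):

```powershell
# Live migrate all roles off the node before patching (the -Drain switch is new in WS2012)
Suspend-ClusterNode -Name "Host1" -Drain

# ... install updates and reboot the host ...

# Bring the node back into service and fail the VMs back
Resume-ClusterNode -Name "Host1" -Failback Immediate
```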
As for memory: you’ll save a few MB with Core. When your hosts have 48 GB+ (all the way up to 4 TB) of RAM, a few MB is meaningless. You might save 4 GB of disk space. When the smallest LUN I can put in a host for the management OS is 300 GB (that’s the smallest disk you can get delivered from HP these days), I really couldn’t give a flying monkey about a 6 GB Windows install versus a 12 GB one.
100% agree on being up to date. Some h/w vendors, such as IBM, will screw around with you to delay shipment of a replacement for a dead disk (firmware updates, gathering logs, analysis of said logs by support, etc.), so minimise the risks by staying current. Didier Van Hoye (MVP) has done some blogging and presenting on how to use Cluster Aware Updating to install firmware/drivers on clustered Dell servers.
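If Cluster Aware Updating is new to you, a one-off updating run looks roughly like this (the cluster name is a placeholder, and CAU can also be configured as a self-updating clustered role):

```powershell
# Patch the whole cluster, draining and updating one node at a time
Invoke-CauRun -ClusterName "HVC1" -MaxFailedNodes 0 -RequireAllNodesOnline -Force
```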
On selection of h/w, I’m not alone in recommending that you find a mix of components that you like and are happy with, and stick to them as much as possible. Not all h/w, drivers, and firmwares are made equal, even by the same manufacturer! You’ll have a lot of eggs in these baskets and you want these baskets to be well made.
Use of GPO
I like and use this. I put my hosts, even in the lab, in their own OU and have a GPO just for those hosts. Some of it is for overrides (e.g. not forcing patch installs onto my hosts as happens with other physical servers) and some of it can be for other customisations. I like the PFE’s power plan setting idea. You could also use this GPO to push out your firewall settings, AV configs, manage services, etc.
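Creating and linking that GPO is scriptable too; a minimal sketch with the GroupPolicy module (the OU path and names are just examples):

```powershell
Import-Module GroupPolicy

# A dedicated GPO for the hosts, linked to their own OU
New-GPO -Name "Hyper-V Hosts" -Comment "Host-specific overrides and customisations"
New-GPLink -Name "Hyper-V Hosts" -Target "OU=HyperVHosts,DC=demo,DC=internal"
```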
Store VM Files On Non-System Drive
This is important for non-HA VMs (typically not on a cluster). The goal is to avoid Dynamic VHDs, snapshots (AVHD/AVHDX), and Hyper-V Replica logs (HRL) growing to the point of filling the system drive, pausing the VMs and rendering the host dead. Do you really want to have to boot the host up off a WinPE USB disk to resolve this issue? The most common offenders here will be small businesses, especially ones where uneducated field engineers are deploying their first hosts.
Place the VMs on a dedicated LUN – I don’t care how small the company or host is. We advise this for a very valid reason! I don’t care about nor value your “virtualisation experience” on your laptop!
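Redirecting the host’s default VM locations at that LUN takes one cmdlet; a sketch assuming the LUN is mounted as D::

```powershell
# Point the default VM configuration and virtual hard disk paths at the data LUN
Set-VMHost -VirtualMachinePath "D:\VMs" -VirtualHardDiskPath "D:\VMs\Virtual Hard Disks"
```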
The BIN File
There’s a good reminder there that VMs with the “save state” automatic host shutdown action will maintain a BIN file. This used to be the case for all VMs; now, only those VMs maintain this placeholder file for writing the memory to disk. The file matches the amount of RAM currently assigned to the VM, so VMs with Dynamic Memory enabled will see it grow and shrink, and you need to account for how big it can get.
TIP: a host with 96 GB RAM can never assign more than 96 GB RAM, and therefore cannot generate more than 96 GB of BIN files on its storage. Likewise, if your VMs with the “save state” shutdown action have a combined maximum RAM (Dynamic Memory setting) of X GB, you cannot have more than X GB of BIN files.
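You can see which of your VMs will leave a BIN file behind, and change the behaviour; a quick sketch (the VM name is a placeholder):

```powershell
# List VMs that keep a BIN placeholder (save state on host shutdown)
Get-VM | Where-Object { $_.AutomaticStopAction -eq "Save" } |
    Select-Object Name, MemoryAssigned, AutomaticStopAction

# Example: switch a VM to a clean guest shutdown instead - no BIN file needed
Set-VM -Name "VM01" -AutomaticStopAction ShutDown
```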
I’d never heard of this tool. Well worth noting – I have heard very interesting stories about the abilities of PFEs to troubleshoot problems based on perfmon metrics alone!
There’s much more to VMQ than just enabling it. BE VERY CAREFUL! You need to know what you are doing, especially if implementing RSS as well or doing converged fabrics or NIC teaming.
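Before you touch anything, at least look at what VMQ is doing; the NIC name and processor numbers in this sketch are purely illustrative, not a recommendation:

```powershell
# See which adapters have VMQ enabled and how the queues are assigned
Get-NetAdapterVmq
Get-NetAdapterVmqQueue

# Example only: pin a NIC's VMQ processing to a specific range of cores
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4
```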
I wouldn’t be so liberal about recommending Jumbo Frames for iSCSI. Consult your h/w vendor first.
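And if your vendor does bless them, verify Jumbo Frames end to end rather than trusting the NIC property page; a quick sketch (the NIC name and address are placeholders):

```powershell
# Check the jumbo packet setting on the iSCSI NIC
Get-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket"

# Prove a 9000-byte frame survives the whole path (8972 payload + 28 bytes of headers)
ping -f -l 8972 192.168.50.20
```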
iSCSI and NIC Teaming
Correct: iSCSI NICs should not be NIC teamed. It’s not supported and it will end badly.
HOWEVER, there is a subtle exception to this in converged fabrics. Note that the iSCSI virtual NICs in this design are not NIC teamed, and MPIO is used instead. The actual NIC team is abstracted beneath the virtual switch. But you should still check with your SAN manufacturer for support of this option.
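To be clear about what that looks like: the redundancy comes from two iSCSI virtual NICs plus MPIO, not a team. A rough sketch (the switch name is a placeholder, and your SAN vendor’s DSM may replace the Microsoft one):

```powershell
# Two iSCSI vNICs in the management OS on the converged switch - no teaming
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI1" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI2" -SwitchName "ConvergedSwitch"

# MPIO provides the multipathing across those two vNICs
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
```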
Recommended Networking on Hosts
There is something subtle here that most are missing.
1) You only need iSCSI if you are using iSCSI. That should seem obvious to everyone … but there are always a few people …
2) Note that the poster talks about the recommended number of networks. They are not talking about the recommended number of physical NICs. I can quite happily create these networks using a single 10 GbE NIC. See converged fabrics, and the rough sketch after this list.
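As a hedged illustration of that converged approach (the switch name, adapter name, and weights are all examples, not gospel):

```powershell
# One virtual switch on the 10 GbE NIC (or a team), with QoS by relative weight
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "10GbE-1" `
    -AllowManagementOS $false -MinimumBandwidthMode Weight

# Carve the recommended "networks" out as management OS virtual NICs
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Guarantee each network a share of the bandwidth
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
```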
I like that they recommend fixed VHD/X files for production. That’s what I recommend. Yes, Microsoft are back on the “Dynamic VHDs are just as good” bandwagon, just as they were with W2008 R2. And many of us found that fragmentation caused read performance issues, particularly for relational databases.
BTW, there is a near-religious split in the MVP world over Dynamic versus Fixed VHD/X. Some of the optimisations in VHDX (TRIM and UNMAP) muddy the waters, but I always come back to fragmentation. Storage (particularly databases) only ever grows, and tiny growth increments lead to fragmentation. Fragmentation leads to read performance issues, and that slows down queries and user interaction with applications. And that leads to helldesk calls.
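Creating fixed files, or converting the dynamic ones you already have, is trivial; the paths and sizes here are just examples:

```powershell
# Create a fixed VHDX - space is allocated up front, so no growth fragmentation
New-VHD -Path "D:\VMs\SQL01-Data.vhdx" -SizeBytes 200GB -Fixed

# Convert an existing dynamic VHDX to fixed (do this with the VM shut down)
Convert-VHD -Path "D:\VMs\App01.vhdx" -DestinationPath "D:\VMs\App01-Fixed.vhdx" -VHDType Fixed
```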
As for passthrough disks: I hate passthrough disks. If you find an engineer or consultant who says you should use passthrough disks for scalability or performance, then I want you to do this:
Kick them in the balls. Repeatedly.
Fixed VHDX will run (read and write) at nearly the same speed as the underlying physical disk. Either way, there will be contention across the physical spindles on your storage: more spindles = more IOPS. Creating a passthrough disk on the same disk group as a CSV is therefore pointless and shows how dumb the engineer really is. And VHDX scales out to 64 TB; few people need virtual LUNs bigger than that.
The PFE blog tells us to set the paging file to 4 GB. That was my advice … for W2008 and W2008 R2 Hyper-V. However, we have been told not to do this for WS2012 Hyper-V; it is intelligent enough to manage its own paging file.
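If you hard-set 4 GB on a host that has since been upgraded to WS2012, you can hand the paging file back to Windows; a sketch using WMI (run elevated):

```powershell
# Return the paging file to system-managed on WS2012
$cs = Get-WmiObject -Class Win32_ComputerSystem -EnableAllPrivileges
$cs.AutomaticManagedPagefile = $true
$cs.Put()
```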
Management OS Memory Reserve
The PFE blog tells us to configure the MemoryReserve registry key. I also used to tell people to do this on W2008 R2 to reserve memory on the host against the needs of Dynamic Memory, because the default reservation algorithm might not do enough. We are told not to use MemoryReserve in WS2012 Hyper-V unless Microsoft Support specifically instructs you otherwise. The memory management has changed under the hood and the default reservation algorithm should be enough.
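If a tweaked W2008 R2 host was upgraded in place, it’s worth checking that the value is gone. A sketch, assuming the documented Dynamic Memory registry location; only remove it if Microsoft Support hasn’t told you otherwise:

```powershell
$key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization"

# Check for a leftover MemoryReserve value, and remove it if present
Get-ItemProperty -Path $key -Name MemoryReserve -ErrorAction SilentlyContinue
Remove-ItemProperty -Path $key -Name MemoryReserve -ErrorAction SilentlyContinue
```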
I need to disagree with the following:
Enlightened OS’s (Server 2008 or higher, Windows 7 or higher) don’t need IS installed manually.
Yes, they contain Hyper-V ICs … as they were at the release of the media … YEARS AGO. Even the built-in ICs in WS2012 and Windows 8 are already out of date (a Windows Update late last year refreshed them). You should always update the ICs to (a) get bug/security fixes and (b) gain access to new features. This can be painful if you are doing it manually (it requires a VM reboot). This is why I like ConfigMgr: I can distribute the new ICs as a custom update or as a piece of software, and schedule the install/reboot during a maintenance window (possibly configured in my collections).
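Spotting the stale ones from the host is easy; a quick sketch:

```powershell
# Report the IC version and state each VM is presenting to the host
Get-VM | Select-Object Name, IntegrationServicesVersion, IntegrationServicesState
```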
The post author, Roger Osborne (PFE), and I have chatted offline. I’ve also reached out to the product group for advice on the paging file and MemoryReserve. The last we (the authors of the new book) had heard from Redmond, what I posted above was correct.