The Performance Tuning Guidelines for Windows Server 2012 document is available and I’m reviewing and commenting on notable text in it.
There are 3 types of storage controller:
- IDE
- SCSI
- Virtual HBA
There are 2 IDE controllers (0 and 1), with each one having 2 channels. In other words, you can have 4 devices attached to IDE. Two quick notes:
- Your boot drive must be attached to IDE. You have no choice in this. Before the VMware fanboy shite starts, this is a software controller and has no relevance to the hardware. Your hosts don’t need IDE controllers. It is a simulated virtual device and performs just as well as SCSI, as Ben Armstrong pointed out years ago.
- The virtual CD/DVD drive is mounted to an IDE controller, usually IDE 1. Microsoft states that you can save host resources by removing this device if it is not used. Be careful: you still need it for the usual manual Integration Services upgrade and for installing software from an ISO (as you would in a cloud via the VMM library).
Adding or removing IDE devices requires the VM to be powered down.
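The IDE device budget is easy to tally from the figures above. Here's a quick Python sketch of the arithmetic (the counts come from the text, not from any Hyper-V API; the "reserved" figure assumes you keep both the boot disk and the virtual DVD drive):

```python
# IDE layout per the text: 2 controllers (0 and 1), each with 2 channels.
IDE_CONTROLLERS = 2
CHANNELS_PER_CONTROLLER = 2

total_ide_slots = IDE_CONTROLLERS * CHANNELS_PER_CONTROLLER  # 4 devices max

# The boot disk must sit on IDE, and the virtual DVD drive usually takes
# a slot on IDE 1, leaving just 2 slots for any further IDE-attached disks.
reserved = 2  # boot disk + virtual DVD drive (assumed kept)
free_ide_slots = total_ide_slots - reserved

print(total_ide_slots)  # 4
print(free_ide_slots)   # 2
```

So if you remove the DVD drive as Microsoft suggests, you free up one more slot — but as noted above, you'll want it back for ISO installs.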
The virtual SCSI controller has nothing to do with hardware either. It is a simulated virtual device. A benefit is that it allows hot add of storage to a running VM. In WS2012, a SCSI-attached VHDX supports UNMAP to save physical disk space. A single SCSI controller supports up to 64 attached disks, and you can have up to 4 SCSI controllers per VM. That’s 256 SCSI-attached disks. That’s a lot of storage if you use 64 TB VHDX!
You can virtualise your host’s physical HBA ports to create virtual HBAs in the VMs on that host. This allows your VMs to have their own WWNs and connect directly to the SAN, using NPIV (which is required in the physical HBA and fabric). If your SAN vendor supports it, you can run their DSM/MPIO to use multiple virtual HBAs in a VM. This gives greater storage IO performance and provides fault tolerance if you design the virtual SANs correctly.
Microsoft says you can use pass-through disks for large LUNs. I strongly urge you to use VHDX instead if you need up to 64 TB in your large LUN:
- More flexible/mobile (storage migration), unlike LUNs that are physically bound to the SAN
- Can be backed up at the host/storage level, unlike physical LUNs that can only be backed up by an agent in the VM
A really good reason is virtual guest clusters, where you need some shared storage between the (up to 64) nodes in the guest cluster.