WS2012 Hyper-V – Managing Virtual CPU Performance

The Performance Tuning Guidelines for Windows Server 2012 document is available and I’m reviewing and commenting on notable text in it.

Microsoft makes a very interesting point.  Just because you can allocate virtual CPUs (vCPUs) to a VM – up to 64 if your hardware is up to it – doesn't mean that you should.

Virtual machines that have loads that are not CPU intensive should be configured to use one virtual processor. This is because of the additional overhead that is associated with multiple virtual processors, such as additional synchronization costs in the guest operating system.

Using up-to-date integration components improves CPU scalability.

You can also improve the performance of your overall virtual machine estate by conserving CPU cycles.  In other words, disable or tune down CPU-utilising background activity and free up cycles for VMs that are engaged in providing services.  Microsoft suggests (I've sketched a few of these in code after the list):

· Install the latest version of the virtual machine Integration Services.

· Remove the emulated network adapter through the virtual machine settings dialog box (use the Microsoft Hyper-V-specific adapter).

· Remove unused devices such as the CD-ROM and COM port, or disconnect their media.

· Keep the Windows guest on the sign-in screen when it is not being used.

· Use Windows Server 2012, Windows Server 2008 R2, or Windows Server 2008 for the guest operating system.

· Disable the screen saver.

· Disable, throttle, or stagger periodic activity such as backup and defragmentation.

· Review the scheduled tasks and services that are enabled by default.

· Improve server applications to reduce periodic activity (such as timers).

· Use the default Balanced power plan instead of the High Performance power plan.
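
Some of those items lend themselves to scripting from the management OS.  Here's a minimal sketch (mine, not from the guide) that applies three of them – swapping the emulated NIC for the synthetic adapter, disconnecting DVD media, and listing the Integration Services components – by shelling out to the Hyper-V PowerShell module from Python.  The VM name FILE01 and the switch name External1 are placeholders; run it elevated on the host.

```python
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command on the management OS and return its output."""
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

vm = "FILE01"  # placeholder VM name

# Swap the emulated (legacy) NIC for the Hyper-V-specific synthetic adapter.
ps(f"Get-VMNetworkAdapter -VMName {vm} | Where-Object IsLegacy | Remove-VMNetworkAdapter")
ps(f"Add-VMNetworkAdapter -VMName {vm} -SwitchName 'External1'")  # placeholder switch name

# Disconnect any mounted ISO rather than deleting the drive outright.
ps(f"Set-VMDvdDrive -VMName {vm} -Path $null")

# List the Integration Services components so you can confirm they are present and enabled.
print(ps(f"Get-VMIntegrationService -VMName {vm} | Format-Table Name, Enabled -AutoSize"))
```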

Some of those tips will be pipe dreams in a true cloud.  For example, I’d never disable the virtual CD-ROM because that’s an endless run of help desk calls waiting to happen. Reviewing scheduled tasks in a true cloud is ambitious considering that the compute cluster admin has no idea what’s going on in the VMs. Disabling the screen saver … I like that one, but that’s going to be a fun conversation with IT security officers who are known for being aware of current technology … oh … 😛

For client guest OSs (VDI), they also suggest (the first two are sketched after the list):

· Disable background services such as SuperFetch and Windows Search.

· Disable scheduled tasks such as Scheduled Defrag.

· Disable Aero glass and other user interface effects (through the System application in Control Panel).
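
If you want to script the first two of those inside the guest image, a rough sketch (my own, not from the document) looks like this.  SysMain is the SuperFetch service and WSearch is Windows Search; verify the names in your own build before baking this into an image, and run it elevated inside the guest.

```python
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command inside the guest."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# SysMain = SuperFetch, WSearch = Windows Search.
for service in ("SysMain", "WSearch"):
    ps(f"Stop-Service -Name {service} -Force; Set-Service -Name {service} -StartupType Disabled")

# Disable the Scheduled Defrag task (schtasks works on both Windows 7 and Windows 8 guests).
subprocess.run(
    ["schtasks.exe", "/Change", "/TN", r"\Microsoft\Windows\Defrag\ScheduledDefrag", "/Disable"],
    check=True,
)
```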

You can use weights and reserves to control how each vCPU in a VM uses the host's logical processors, e.g. a 100% reserve gives a vCPU 100% of a logical processor.  A handy tip is to use this for certain CPU-intensive workloads, such as Exchange or SharePoint, to guarantee a certain SLA for application performance.  Microsoft says:

Highly intensive loads can benefit from adding more virtual processors instead, especially when they are close to saturating an entire core.

That makes sense.  Two vCPUs are not the same as two physical CPUs.  Two physical CPUs (2 * quad core) might present 8 logical processors.  Therefore you might want to give the VM the equivalent 8 vCPUs – if the ASSESSMENT (yes, I'm back on that again) says that it is required.
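
For what it's worth, here's a hedged sketch of how you'd apply that from the management OS with Set-VMProcessor.  The VM names EXCH01 and TEST01 are placeholders, and the vCPU count, reserve, and weights are examples, not recommendations.

```python
import subprocess

def ps(command: str) -> None:
    """Run a Hyper-V PowerShell cmdlet on the management OS (run elevated)."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Give the Exchange VM 8 vCPUs (only if the assessment says so) and reserve a full
# logical processor per vCPU.  The VM must be powered off to change -Count.
# RelativeWeight defaults to 100; a higher value wins more CPU under contention.
ps("Set-VMProcessor -VMName 'EXCH01' -Count 8 -Reserve 100 -Maximum 100 -RelativeWeight 200")

# A low-priority VM keeps its defaults but yields first when the host is busy.
ps("Set-VMProcessor -VMName 'TEST01' -RelativeWeight 50")
```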

WS2012 Hyper-V – VM & Host Performance Monitoring

The Performance Tuning Guidelines for Windows Server 2012 document is available and I’m reviewing and commenting on notable text in it.

You cannot get true performance monitoring of a VM by running Performance Monitor or Task Manager from within a guest OS.  In fact, you can't even get truly accurate monitoring on the management OS using the normal metrics.  You should use either PerfMon or Logman.exe from the management OS to monitor the Hyper-V counter objects.

If you want to monitor the CPU usage of the physical host or of the VMs, then you use the Hyper-V hypervisor performance counters (a sampling sketch follows the list):

  • Hyper-V Hypervisor Logical Processor (*) % Total Run Time: The counter represents the total non-idle time of the logical processor(s).
  • Hyper-V Hypervisor Logical Processor (*) % Guest Run Time: The counter represents the time spent running cycles within a guest or within the host.
  • Hyper-V Hypervisor Logical Processor (*) % Hypervisor Run Time: The counter represents the time spent running within the hypervisor.
  • Hyper-V Hypervisor Root Virtual Processor (*) *: The counters measure the CPU usage of the management OS.
  • Hyper-V Hypervisor Virtual Processor (*) *: The counters measure the CPU usage of guest partitions.
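
As a rough illustration (my own sketch, not from the document), you can sample those counters from the management OS with Get-Counter – Logman.exe would do the same job for longer collections.  The interval and sample count below are arbitrary.

```python
import subprocess

# Counter paths as listed above; the (*) wildcard returns one sample per instance.
counters = [
    r"\Hyper-V Hypervisor Logical Processor(*)\% Total Run Time",
    r"\Hyper-V Hypervisor Logical Processor(*)\% Guest Run Time",
    r"\Hyper-V Hypervisor Logical Processor(*)\% Hypervisor Run Time",
    r"\Hyper-V Hypervisor Root Virtual Processor(*)\% Total Run Time",
    r"\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time",
]

# Take 12 samples, 5 seconds apart, and print each counter path with its cooked value.
command = (
    "Get-Counter -Counter @('" + "','".join(counters) + "') "
    "-SampleInterval 5 -MaxSamples 12 | "
    "ForEach-Object { $_.CounterSamples | Format-Table Path, CookedValue -AutoSize }"
)
subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)
```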

WS2012 – Enabling Services To Start When A VM Can’t Get Enough Dynamic Memory

The Performance Tuning Guidelines for Windows Server 2012 document is available and I’m reviewing and commenting on notable text in it.

I normally advise that the startup amount of memory in a Dynamic Memory virtual machine be set to whatever is required to get the services up and running.  This Microsoft document has another approach:

  • Set the VM to start with some small amount
  • Configure the paging file to be able to live up to any additional requirements – the paging file maximum size being 3 times the paging file’s initial size

Now if a VM cannot get enough physical RAM, it’ll page internally.  Before the questions come in, this is different to how VMware pages; host level paging has no knowledge of prioritisation or usage of memory pages inside a VM. Using the Microsoft approach, the guest OS has complete knowledge of how to prioritise and page in/out memory to suit what is going on.
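
Configuring the host side of that approach is a one-liner.  Here's a hedged sketch – APP01 and the byte values are placeholders, not recommendations from the guide, and remember that StartupBytes can only be changed while the VM is off.  The guest's paging file is sized inside the guest and isn't shown here.

```python
import subprocess

# Small startup/minimum, generous maximum; the guest's paging file absorbs demand
# that Dynamic Memory can't satisfy immediately.
command = (
    "Set-VMMemory -VMName 'APP01' -DynamicMemoryEnabled $true "
    "-StartupBytes 512MB -MinimumBytes 512MB -MaximumBytes 4GB -Buffer 20"
)
subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)
```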

If you are sizing hosts/VMs appropriately then VMs should always get enough RAM.  But I suppose there might be rare circumstances where a number of hosts in a cluster are offline and you have to squeeze more out of your physical RAM.

Performance Tuning Guidelines for Windows Server 2012

The Performance Tuning Guidelines for Windows Server 2012 document is available. I’ve previously read the performance doc for Windows Server 2008 and Windows Server 2008 R2, focusing on the Hyper-V piece. Let’s look at some notable sections for Hyper-V in the new 2012 version of the doc.  I had started writing a post with notes from this document but … well … the post was nearly as long as the document itself and that’s a bit pointless.  Read the document for yourself.  There are some very detailed notes on advanced configurations that you should be aware of, even if you might not use them.

I’ll post highlights over the coming days/weeks/months.

Performance Tuning Guidelines for Windows Server 2008 R2

Microsoft has updated the Performance Tuning Guidelines document to include W2008 R2.  It covers all aspects of the server operating system but I'm going to focus on Hyper-V here.

The guidance for memory sizing for the host has not changed.  The first 1GB in a VM has a potential host overhead of 32MB.  Each additional 1GB has a potential host overhead of 8MB.  That means a 1GB VM potentially consumes 1056MB on the host, not 1024MB.  A 2GB VM potentially costs 2088MB on the host, not 2048MB.  And a 4GB VM potentially costs 4152MB, not 4096MB.
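
That arithmetic generalises to a simple rule of thumb – 32 MB of potential overhead for the first GB assigned to a VM plus 8 MB for each additional GB.  A quick check in code:

```python
def potential_host_cost_mb(vm_gb: int) -> int:
    """Potential host memory cost (MB) of a VM assigned vm_gb GB of RAM:
    the VM's own RAM plus 32 MB for the first GB and 8 MB per additional GB."""
    overhead_mb = 32 + 8 * (vm_gb - 1)
    return vm_gb * 1024 + overhead_mb

for gb in (1, 2, 4):
    print(f"{gb} GB VM -> {potential_host_cost_mb(gb)} MB on the host")
# 1 GB VM -> 1056 MB on the host
# 2 GB VM -> 2088 MB on the host
# 4 GB VM -> 4152 MB on the host
```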

The memory savings for a Server Core installation are listed as 80MB.  That’s seriously not worth it in my opinion given the difficulty in managing it (3rd party software and hardware management) and troubleshooting it when things go wrong. “Using Server Core in the root partition leaves additional memory for the VMs to use (approximately 80 MB for commit charge on 64-bit Windows)”.

RAM is first allocated to VMs.  “The physical server requires sufficient memory for the root and child partitions. Hyper-V first allocates the memory for child partitions, which should be sized based on the needs of the expected load for each VM. Having additional memory available allows the root to efficiently perform I/Os on behalf of the VMs and operations such as a VM snapshot”.

There is lots more on storage, I/O and network tuning in the virtualization section of the document.  Give it a read.