Memory Page Combining

My reading of the Windows Server 2012 R2 (WS2012 R2) Performance and Tuning Guide continues and I’ve just read about a feature that I didn’t know about. Memory combining is a feature that was added in Windows 8 and Windows Server 2012 (WS2012) to reduce memory consumption. There isn’t too much text on it, but as I read it, memory combining stores a single instance of identical pages if:

  • The memory is pageable
  • The memory is private

Enabling page combining may reduce memory usage on servers which have a lot of private, pageable pages with identical contents. For example, servers running multiple instances of the same memory-intensive app, or a single app that works with highly repetitive data, might be good candidates to try page combining.

Bill Karagounis talked briefly about memory combining in the old Sinofsky Building Windows 8 blog (where it was easy to be lost in the frequent 10,000 word posts):

Memory combining is a technique in which Windows efficiently assesses the content of system RAM during normal activity and locates duplicate content across all system memory. Windows will then free up duplicates and keep a single copy. If the application tries to write to the memory in future, Windows will give it a private copy. All of this happens under the covers in the memory manager, with no impact on applications. This approach can liberate 10s to 100s of MBs of memory (depending on how many applications are running concurrently).

The feature therefore does not improve things for every server:

Here are some examples of server roles where page combining is unlikely to give much benefit:

  • File servers (most of the memory is consumed by file pages which are not private and therefore not combinable)
  • Microsoft SQL Servers that are configured to use AWE or large pages (most of the memory is private but non-pageable)

You can enable (memory) page combining using Enable-MMAgent and query the status using Get-MMAgent.
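A minimal sketch, run from an elevated PowerShell prompt:

# Check whether page combining is currently enabled
Get-MMAgent

# Enable page combining
Enable-MMAgent -PageCombining

# Turn it back off if the CPU cost outweighs the memory savings
Disable-MMAgent -PageCombining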

You’ll find that memory combining is enabled by default on Windows 8 and Windows 8.1.  That makes these OSs even more efficient for VDI workloads. It is disabled by default on servers – analyse your services to see if it will be appropriate.

There is a processor penalty for using memory combining. The feature is also not suitable for all workloads (see above).  So be careful with it.

KB2868279 – Moving A VM From WS2012 R2 Hyper-V To WS2012 Hyper-V Is Not Supported

I have to admit that I find this KB article and support statement to be quite baffling.  It states that:

Moving a virtual machine (VM) from a Windows Server 2012 R2 Hyper-V host to a Windows Server 2012 Hyper-V host is not a supported scenario under any circumstances. 
When you try to import a VM that is exported from a Windows Server 2012 R2 Hyper-V host into a Windows Server 2012 Hyper-V host, you receive the following error message: 

Hyper-V did not find virtual machines to import from the location <folder location>.
The operation failed with error code ‘32784’.

I am going to raise this with the product group. I see it as a genuine issue because anyone doing an upgrade-migration will require a rollback plan that will work and is supported.

You can move a VM from a Windows Server 2012 Hyper-V host to a Windows Server 2012 R2 Hyper-V host. This is a supported scenario and can even be done with zero downtime using Cross-Version Live Migration.

Recommended Updates Lists For WS2012 R2 Hyper-V, Failover Clustering & File Services

Microsoft has published pages that list the hotfixes that have been published for Windows Server 2012 R2 Hyper-V and Failover Clustering. You can find them here:

Make sure you wait a month before deploying (let someone else do the testing for you) and then use tools like Cluster Aware Updating (CAU) to do the heavy lifting.

EDIT: I just added the link to the updates for file services. You’ll want those if implementing SMB 3.0 storage. You’ll find the updates for R2 at the bottom of the page (shared with WS2012).

How Much RAM & CPU Does Windows Server Deduplication Optimization Require?

I’ve been asked about the resource requirements for the dedupe optimization job before, but until now I did not have the answer.

Processor

The CPU side is … not clear.  The dedupe subsystem will schedule one single-threaded job per volume. That means a machine with 8 logical processors is only 1/8th utilized if there is a single data volume. Microsoft says:

To achieve optimal throughput, consider configuring multiple deduplication volumes, up to the number of CPU cores on the file server.

That seems pretty dumb to me. “Go ahead and complicate volume management to optimize the dedupe processing”. Uhhhhh, no thanks.

Memory

Microsoft tells us that 1-2 GB RAM is used per 1 TB of data per volume.  They clarify this with an example:

Volume                  Volume size   Memory used
Volume 1                1 TB          1-2 GB
Volume 2                1 TB          1-2 GB
Volume 3                2 TB          2-4 GB
Total for all volumes   4 TB          (1+1+2) × 1-2 GB = 4-8 GB RAM

By default a server will limit the RAM used by the optimization job to 50% of total RAM in the server.  So if the above server had just 4 GB RAM, then only 2 GB would be available for the optimization job.  You can manually override this:

Start-DedupJob <volume> -Type Optimization -Memory <50 to 80>
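For example, to let the job on a hypothetical E: volume use up to 80% of system RAM:

# Run an optimization job on E:, allowing it up to 80% of total RAM
Start-DedupJob -Volume "E:" -Type Optimization -Memory 80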

There is an additional note from Microsoft:

Machines where a very large amount of data change between optimization jobs is expected may require up to 3 GB of RAM per 1 TB of disk space.

So you might see RAM become a bottleneck or increase pressure (in a VM with Dynamic Memory) if the optimization job hasn’t run in a while or if lots of data is dumped into a deduped volume.  Example: you have deployed lots of new personal (dedicated) VMs for new users on a deduped volume.

How Many SSDs Do I Need For Tiered Storage Spaces?

This is a good question.  The guidance I had been given was 4 to 8 SSDs per JBOD tray.  I’ve just found guidance that is a bit more precise.  This is what Microsoft says:

When purchasing storage for a tiered deployment, we recommend the following number of SSDs in a completely full disk enclosure of different bay capacities in order to achieve optimal performance for a diverse set of workloads:

Disk enclosure slot count   Simple space   2-way mirror space   3-way mirror space
12 bay                      2              4                    6
24 bay                      2              4                    6
60 bay                      4              8                    12
70 bay                      4              8                    12

Minimum number of SSDs Recommended for Different Resiliency Settings

My Most Popular Articles In 2013

I like to have a look at what people are reading on my blog from time to time.  It gives me an idea of what is working and, sometimes, what is not – for example, I still get lots of hits on outdated articles.  Here are the 5 most viewed articles of the last year, from 5 to 1.

5) Windows Server 2012 Hyper-V Replica … In Detail

An oldie kicks off the charts … this trend continues throughout the top 5.  At least this one is a good subject that is based on WS2012 and is still somewhat relevant to WS2012 R2.  Replica is one of, if not the, most popular features in WS2012 (and later) Hyper-V.

4) Rough Guide To Setting Up A Hyper-V Cluster

I wrote this article in 2010 for Windows Server 2008 R2 and it’s still one of my top draws.  I really hope you folks are not still deploying W2008 R2 Hyper-V!  Join us in this decade with a much better product version.

Please note that the networking has changed significantly (see converged networks/fabrics).  The quorum stuff has changed a bit too (it’s much simpler now).

3) Windows Server 2012 Licensing In Detail

Licensing!!! Gah!

2) Comparison of Windows Server 2012 Hyper-V Versus vSphere 5.1

There’s nothing like kicking a hornet’s nest to generate some web hits.  We saw VMware’s market share slide in 2013 (IDC) while Hyper-V continued the march forward.  More and more people want to see how these products compare.

And at number one we have … drumroll please …

1) Windows Server 2012 Virtualisation Licensing Scenarios

Wow! I still cannot believe that people don’t understand how easy the licensing of Windows Server on VMware, Xen, Hyper-V, etc, actually is.  Everyone wants to overthink this subject.  It’s really simple: it’s 2 or unlimited Windows Server VMs per license assigned to a host, people!  This page accounted for 2.8% of all views in the last 12 months.

Sadly, not a single post from the last year makes it into the top 10.  I guess that folks aren’t reading about WS2012 R2.  Does this indicate that there is upgrade fatigue?

Linux Integration Services Version 3.5 For Hyper-V Is Released

Microsoft has released version 3.5 of the Hyper-V integration components for Linux.  This download is intended for versions of Linux that do not have the Linux Integration Services (LIS) for Hyper-V already installed in the kernel.

Version 3.5 of the LIS supports:

  • Red Hat Enterprise Linux (RHEL) 5.5-5.8, 6.0-6.3 x86 and x64
  • CentOS 5.5-5.8, 6.0-6.3 x86 and x64

Hyper-V from 2008 R2 onwards is supported, including Windows 8 and 8.1.

The below matrix describes which Hyper-V features are supported in which version of the LIS and distro/version of Linux:

[Images: LIS 3.5 feature support matrix]

Notes

  1. Static IP injection might not work if Network Manager has been configured for a given Hyper-V-specific network adapter on the virtual machine. To ensure smooth functioning of static IP injection, ensure that Network Manager is either turned off completely or turned off for the specific network adapter through its ifcfg-ethX file.
  2. When you use Virtual Fibre Channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If LUN 0 has not been populated, a Linux virtual machine might not be able to mount Virtual Fibre Channel devices natively.
  3. If there are open file handles during a live virtual machine backup operation, the backed-up virtual hard disks (VHDs) might have to undergo a file system consistency check (fsck) when restored.
  4. Live backup operations can fail silently if the virtual machine has an attached iSCSI device or a physical disk that is directly attached to a virtual machine (“pass-through disk”).
  5. LIS 3.5 only provides Dynamic Memory ballooning support—it does not provide hot-add support. In such a scenario, the Dynamic Memory feature can be used by setting the Startup memory parameter to a value which is equal to the Maximum memory parameter. This results in all the requisite memory being allocated to the virtual machine at boot time—and then later, depending upon the memory requirements of the host, Hyper-V can freely reclaim any memory from the guest. Also, ensure that Startup Memory and Minimum Memory are not configured below distribution recommended values.
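A minimal PowerShell sketch of that ballooning-only configuration, run on the host (the VM name and sizes are hypothetical):

# Set Startup equal to Maximum so all memory is allocated at boot;
# Hyper-V can then reclaim (balloon) memory from the guest as the host needs it.
Set-VMMemory -VMName "RHEL01" -DynamicMemoryEnabled $true `
    -StartupBytes 4GB -MinimumBytes 2GB -MaximumBytes 4GB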

The following features are not available in this version of LIS:

  • Dynamic Memory hot-add support
  • TRIM support
  • TCP offload
  • vRSS

PowerShell Deployment Toolkit (PDT) For System Center

It takes time to deploy System Center.  It takes a long time to deploy the entire suite.  So you can imagine that I only ever have bits of System Center deployed (if that).  That’s why it was great to see that Microsoft’s Rob Willis had written a “hydration” kit, called the PowerShell Deployment Toolkit, to deploy a complete System Center demo environment using PowerShell scripts and XML metadata files.

I want to stress that word: DEMO.  This kit is not to be used for deploying a production system.  Out of the so-called-box (a zip file really) it deploys an architecture that should never ever be used in production.  It’s designed to be able to run on a laptop (a large one) and it does things that any System Center expert would choke at.  But it will deploy, with very little effort, an environment that is fit for performing demonstrations.

In the zip you’ll find a few files:

  • Variable.xml: This file describes the System Center installation.  You can customize this as required (time zones, domains, passwords, etc) – and that’s probably a good idea after you’ve done a test install to see what the PDT does.
  • Downloader.ps1: This script will download most of the required pieces to deploy your System Center suite.  The newest version even pulls down the new Windows Azure Pack!  You will be manually downloading System Center and Windows Server 2012 R2, as pointed out by Reidar Johansen here.
  • VMCreator.ps1: This script will create the Hyper-V VMs required for your demo environment.
  • Installer.ps1: This script will deploy and configure System Center from your downloads.
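Assuming the defaults in Variable.xml, the run order is a simple sequence from an elevated PowerShell prompt in the extracted folder:

.\Downloader.ps1   # fetch the installation media and prerequisites
.\VMCreator.ps1    # create the Hyper-V VMs for the demo environment
.\Installer.ps1    # install and configure the System Center components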

Before you ask: yes, the kit deploys WS2012 R2 and System Center 2012 R2, plus all of the dependencies (about 11,000 MB at the time of writing).  It’s a monumental piece of work that should be a time saver for those wanting to quickly build new demo environments.

I’m running this kit for the first time right now.  I’ll blog about my experience as time goes by.

Recommended Updates For WS2012 R2 Hyper-V

A wiki page has been posted to list the hotfixes for Windows Server 2012 R2 Hyper-V.  At this point, it only contains the GA update, but that is sure to change.

I have had a look for the equivalent page for Failover Clustering but nothing has come up in my results yet.  I’ll update this post if I find something.

Migrating Two Non-Clustered Hyper-V Hosts To A Failover Cluster (With DataOn & Storage Spaces)

At work we have a small number of VMs to operate the business.  Actually, for our headcount we have a lot of VMs, because distribution requires lots of systems for lots of vendors.  I generally have very little to do with our internal IT, but I’ll get involved with some engineering stuff from time to time.

Two non-clustered hosts (HP DL380 G6) were set up before I joined the company.  I upgraded/migrated those hosts to WS2012 earlier this year (networking = 4 × 1 GbE NIC team with virtualized converged networking for the management OS and Live Migration).

We decided to migrate the non-clustered hosts to create a Hyper-V cluster.  This was made affordable thanks to Storage Spaces running on a shared JBOD.  We distribute DataOn, so we went with a single DNS-1640, attached to both servers using LSI 9207-8e dual-port SAS cards.

Yes, we’re doing the small biz option where two Hyper-V hosts are directly connected to a JBOD where Storage Spaces is running.  If we had more than 2 hosts, we would have used the SMB 3.0 architecture of Scale-Out File Server (SOFS).  Here is the process we have followed so far (all going perfectly up to now):

Step 1 – Upgrade RAM

Each host had enough RAM for its solo workload.  In a cluster, a single node must be capable of handling all VMs after a failover.  In our case, we doubled the RAM in each of the two servers.

Step 2 – Drain VMs from Host1

Using Shared-Nothing Live Migration, we moved VMs from Host1 to Host2.  This allows us to operate on a host for an extended period without affecting production VMs.
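A sketch of one such move (the VM name and destination path are hypothetical):

# Shared-Nothing Live Migration: moves the running VM and its storage
Move-VM -Name "VM01" -DestinationHost "Host2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"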

Note that this only worked because we had already upgraded the RAM (step 1) and we had sufficient free disk space in Host2.

Step 3 – Connect Host1

We added an LSI card into Host1.  We racked the JBOD.  And then we connected Host1 to the JBOD, one SAS cable going to port1/module1 in the JBOD, and the other SAS cable going to port1/module2 in the JBOD (for HA).

Host1 was booted up.  I downloaded the drivers, firmware, and BIOS from LSI for the adapter (never, ever use the drivers for anything that come on the Windows media if there is an OEM driver) and installed them.

Step 4 – Create Cluster

I installed two Windows features on Host1:

  • Failover Clustering
  • MPIO

I added SAS support in MPIO, which required a reboot.
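For reference, a sketch of those operations in PowerShell:

# Install the two features, claim SAS-attached disks for MPIO, then reboot
Install-WindowsFeature Failover-Clustering, Multipath-IO -IncludeManagementTools
Enable-MSDSMAutomaticClaim -BusType SAS
Restart-Computer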

An additional vNIC called Cluster2 was added to the management OS.  I then renamed the Live Migration vNIC to Cluster1.  QoS was configured so that the virtual switch has 25% in the default bucket, and each of the 3 management OS vNICs gets 25%.
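A hedged sketch of that configuration (the switch name and existing vNIC names are assumptions):

# Add the second cluster vNIC and rename the Live Migration vNIC
Add-VMNetworkAdapter -ManagementOS -Name "Cluster2" -SwitchName "ConvergedSwitch"
Rename-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -NewName "Cluster1"
# 25% default bucket on the switch, 25% minimum weight per management OS vNIC
Set-VMSwitch "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 25
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 25
Set-VMNetworkAdapter -ManagementOS -Name "Cluster1" -MinimumBandwidthWeight 25
Set-VMNetworkAdapter -ManagementOS -Name "Cluster2" -MinimumBandwidthWeight 25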

SMB Multichannel constraints were configured for Cluster1 and Cluster2 on all servers.  That’s to control which NICs are used by SMB Multichannel (used by redirected IO).
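Something along these lines, run for each server (the server name is an assumption; interface aliases follow the usual vEthernet naming):

# Restrict SMB Multichannel to the two cluster vNICs
New-SmbMultichannelConstraint -ServerName "Host2" `
    -InterfaceAlias "vEthernet (Cluster1)", "vEthernet (Cluster2)"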

I then created a single-node cluster and configured it.
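The cluster creation itself is a one-liner (the cluster name and IP are hypothetical):

# Create the single-node cluster
New-Cluster -Name "HVC1" -Node "Host1" -StaticAddress "192.168.1.50"

Then it was time for more patching from Windows Update.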

Step 5 – Hotfixes

I downloaded the recommended updates for WS2012 Hyper-V and Failover Clustering (not found on Windows Update) using a handy PowerShell script.  Then I installed them on Host1 and rebooted.

Step 6 – Storage Spaces

In Failover Cluster Manager I configured a new storage pool.  We’re still on WS2012, so a single hot spare disk was assigned.  Note that I strongly recommend WS2012 R2 and not assigning a hot spare; parallelized restore is a much faster and better option.

3 virtual disks (LUNs) were created:

  • Witness for the cluster
  • CSV1
  • CSV2

Rule of thumb: create 1 CSV per node in the cluster that is connected by SAS to the Storage Pool.
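A hedged PowerShell sketch of the pool and virtual disks (friendly names and sizes are assumptions; the WS2012 hot spare assignment is not shown):

# Pool every poolable JBOD disk, then carve the three mirrored virtual disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Witness" `
    -ResiliencySettingName Mirror -Size 1GB
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV1" `
    -ResiliencySettingName Mirror -Size 2TB
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV2" `
    -ResiliencySettingName Mirror -Size 2TB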

Step 7 – Configure Cluster Disks

The cluster is still single-node, so configuring a witness disk for quorum will cause alerts.  You can do it, but be aware of the alerts.

Each of the two data virtual disks was converted to CSV and renamed to CSV1 and CSV2, including the mount points.
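In PowerShell this is roughly (the cluster resource names are assumptions):

# Convert the two data disks to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 2"
Add-ClusterSharedVolume -Name "Cluster Disk 3"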

Step 8 – Test

Using Shared-Nothing Live Migration, a VM was moved to the cluster and placed on a CSV. 

This is where we are now, and we’re observing the performance/health of the new infrastructure.

Step 9 – Shared-Nothing Live Migration From Host2

All of the VMs will be moved from the D: of Host2 to the cluster, running on Host1, and spread evenly across the two CSVs.  This will leave Host2 drained.

Remember to reconfigure backups to back up VMs from the cluster!

Step 10 – Finish The Job

We will:

  1. Reconfigure the networking of Host2 as above (I’ve saved the PowerShell)
  2. Insert the LSI card in Host2 and connect it to the JBOD
  3. Install all the LSI drivers & updates on Host2 as we did on Host1
  4. Add the Failover Cluster and MPIO roles to Host2
  5. Add Host2 as a node in the cluster
  6. Patch up Host2
  7. Test Live Migration
  8. Plan out VM failover prioritization
  9. Configure Cluster Aware Updating self-updating for lunchtime on the second Monday of every month (sketched below) – that’s a full month after Patch Tuesday, giving MSFT plenty of time to fix any broken updates (I’m thinking of Cumulative Updates/Update Rollups).
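A sketch of that CAU schedule (the cluster name is hypothetical):

# Self-updating on the second Monday of every month
Add-CauClusterRole -ClusterName "HVC1" -DaysOfWeek Monday -WeeksOfMonth 2 -Force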

And that should be that!