When you run VMs with large amounts of memory, NUMA topology becomes important. Hyper-V can present the underlying physical NUMA topology to the VM so that the guest OS and NUMA-aware applications (such as SQL Server) can assign memory and schedule processes efficiently within those node boundaries.
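If you want to see what physical topology a large VM could be shown, you can query the host. A minimal sketch using the Hyper-V PowerShell module, assuming you have admin rights on the host; the host name "HV01" is just a placeholder:

```powershell
# Sketch: list the physical NUMA nodes of a Hyper-V host, along with the
# processors and memory behind each node. "HV01" is a placeholder host name.
Get-VMHostNumaNode -ComputerName "HV01" |
    Select-Object NodeId, ProcessorsAvailability, MemoryAvailable, MemoryTotal |
    Format-Table -AutoSize
```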
There is something important to note: enabling Dynamic Memory in a VM's settings disables virtual NUMA. That means the vast majority of VMs will not have virtual NUMA. To squeeze the best processor/memory performance out of larger VMs you will need to use static RAM, as noted here under Virtual NUMA:
Virtual NUMA and Dynamic Memory features cannot be used at the same time. A virtual machine that has Dynamic Memory enabled effectively has only one virtual NUMA node, and no NUMA topology is presented to the virtual machine regardless of the virtual NUMA settings.
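In practice, keeping virtual NUMA for a large VM means switching it to static memory. A minimal sketch of that change, assuming a VM named "SQL01" that can be shut down for the reconfiguration; the name, 64 GB of RAM, and 16 vCPUs are placeholder values:

```powershell
# Sketch: configure static memory so Hyper-V presents virtual NUMA to the guest.
# "SQL01", 64GB and 16 vCPUs are placeholders; the VM must be off for the change.
Stop-VM -Name "SQL01"
Set-VMMemory    -VMName "SQL01" -DynamicMemoryEnabled $false -StartupBytes 64GB
Set-VMProcessor -VMName "SQL01" -Count 16
Start-VM -Name "SQL01"

# Verify that Dynamic Memory is now off (DynamicMemoryEnabled should be False).
Get-VMMemory -VMName "SQL01" | Select-Object DynamicMemoryEnabled, Startup
```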
So you have a balancing act to do:
- Applications and large VMs that might benefit from virtual NUMA should probably use static memory. Enabling Dynamic Memory would indirectly reduce the potential performance of the services that VM provides, because virtual NUMA would be disabled.
- Workloads that are not NUMA-aware cannot make use of virtual NUMA, so enabling Dynamic Memory will not hurt their performance and it makes sense to optimize the RAM assignment instead (see the sketch after this list).
- Maybe service performance isn’t a big deal (!?!?!?) but the cost of RAM is. In that case you would always enable Dynamic Memory (assuming the application and guest OS support it).
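For those non-NUMA-aware VMs, the change in the other direction might look like this. Again only a hedged sketch: "Web01" and the startup/minimum/maximum sizes are placeholders you would size for the actual workload:

```powershell
# Sketch: enable Dynamic Memory on a VM whose workload is not NUMA-aware.
# "Web01" and the byte values are placeholders; the VM must be off to switch
# from static to dynamic memory.
Stop-VM -Name "Web01"
Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 8GB
Start-VM -Name "Web01"
```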
This is not ideal. Introducing a human decision into a cloud where uneducated “users” are deploying their own VMs makes things less efficient. Hopefully MSFT will overcome the Dynamic Memory versus virtual NUMA conflict in a future version, but when you think about it, this would be difficult to do.