Delegation of Administration in Hyper-V

If you’re like me, you like to restrict as much as possible and delegate selected rights only where needed.  I’ve only just found out that this is possible with Hyper-V without using VMM 2008.

The Virtual PC Guy describes the process on his blog.  This will allow you to grant selected rights to VMs and Hyper-V to non-administrators on the Hyper-V server.  To do this, you edit an authorisation store using Authorization Manager (AzMan).

Note that this is in the Hyper-V release notes:

"If the Hyper-V authorization store is located in Active Directory, then the removal of a user from a role does not take immediate effect. Either the server running Hyper-V (the computer that runs the Virtual Machine Management Service (VMMS)) or Active Directory needs to be rebooted to apply the changes. To avoid this issue, use an XML file as the store type. To fix this issue, reboot the Hyper-V server hosting VMMS, restart VMMS and Network Virtual Service Provider Windows Management Instrumentation (NVSPWMI) services or reboot Active Directory".

Lesson: Use groups, not users, to grant rights.

Credit: Virtual PC Guy.

Using the Hyper-V Integration Components in WinPE

I just found a few links on adding the Integration Components to WinPE.  Why would you want to do this?  Simple: say you want to deploy operating systems to VMs via SCCM, WDS or ImageX.  There are plenty more tools out there that will use a WinPE boot image too.  You will need drivers, especially for the NIC, for any of this to work, and to get those you’ll need the Integration Components.  Mike Sterling has documented the process.
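
Mike’s post has the authoritative script.  Purely as an illustration of the general driver-injection step, here’s a hedged Python sketch that wraps DISM (a later servicing tool than the WinPE 2.x-era utilities Mike used); the paths, image index and driver folder are all assumptions:

```python
import subprocess

# Hypothetical illustration: inject the Hyper-V synthetic device drivers
# (extracted from the Integration Components) into a WinPE boot image.
# All paths below are assumptions, not values from Mike's script.
BOOT_WIM = r"C:\winpe\boot.wim"        # WinPE image to service
MOUNT_DIR = r"C:\winpe\mount"          # scratch folder to mount the image in
DRIVER_DIR = r"C:\drivers\hyperv-ics"  # folder holding the extracted .inf drivers

def dism(*args):
    """Run a DISM command and fail loudly if it errors."""
    cmd = ["dism"] + list(args)
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

dism("/Mount-Wim", f"/WimFile:{BOOT_WIM}", "/Index:1", f"/MountDir:{MOUNT_DIR}")
dism(f"/Image:{MOUNT_DIR}", "/Add-Driver", f"/Driver:{DRIVER_DIR}", "/Recurse")
dism("/Unmount-Wim", f"/MountDir:{MOUNT_DIR}", "/Commit")
```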

Credit: Mike Sterling.

EDIT:

Mike has updated the script that he used in the original post.

Update: Hyper-V RAM Loading

I’ve previously talked about my observations of how my 9GB RAM Hyper-V lab box used its memory, how Hyper-V has a RAM overhead, and how you can calculate the maximum.

Last night I upgraded the lab box from RC1 to RTM.  Today, I’ve noticed that I have a whole lot more RAM to play with!  In fact, 1.5GB of RAM was freed up on my 9GB RAM server.  Before, I could only get 7GB worth of VMs up.  Today, Hyper-V is looking much more efficient.

Just Upgraded To Hyper-V RTM

I’ve just upgraded my Hyper-V lab box to RTM.  It took about 10 minutes (most of that was POST during reboots) to install the update and another 5 to install the updated integration components in 8 VMs.  It’s a very easy process, as John Howard describes on his blog.

EDIT:

Snapshots are supported between RC1 and RTM.  They are not supported between either the beta or RC0 and RTM.

Hyper-V Has Been Released

Bink is reporting that Hyper-V has been released to manufacturing.  You can expect it to be available as a download and via Windows Update.  There should be a smooth migration from the RC1 release to RTM.

Credit: Bink.

EDIT:

This has been confirmed.

Credit: Willem Kasdorp.

EDIT

The Hyper-V team has released some details.  The update will be available via Windows Update on July 8th.  The direct download will be available from sometime later today (probably midday Seattle time, or 20:00 GMT).

Hyper-V RAM Requirements Update

I’ve gotten feedback on my last blog update on this subject and here’s the scenario:

  • The parent OS requires 512MB (MINIMUM) but you should allow for 2GB (recommended).
  • Hyper-V itself requires 300MB but it’s likely the 2GB assignment will compensate for this (in most cases).
  • Drivers and agents on the parent might push your requirement for more than 2GB.
  • Each VM requires at least 32MB of overhead for its first 1GB of RAM, plus (at most) 8MB for each additional GB.  MS say 8MB to be conservative; only if the VM is thrashing RAM will you hit the full 8MB.
  • The second VM likewise requires 32MB for its first GB plus another 8MB (max) for each additional GB of RAM.

So here’s a typical scenario (worked through in the sketch after this list):

  • Parent Partition: 2GB (but you might need another ~300MB for Hyper-V itself and more for drivers/agents)
  • 2GB RAM VM: 32MB + 8MB
  • 4GB RAM VM: 32MB + (3 * 8)MB
  • 1GB RAM VM: 32MB
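
To make the arithmetic concrete, here’s a minimal Python sketch of the worst-case RAM budget for that scenario.  The 2GB parent reservation and the 32MB-plus-8MB-per-GB rule come from the list above; everything else is illustrative:

```python
# Worst-case host RAM budget for the scenario above: a 2GB parent partition
# reservation plus three VMs of 2GB, 4GB and 1GB.
PARENT_MB = 2048          # recommended parent partition reservation
vm_sizes_gb = [2, 4, 1]   # the three VMs in the scenario

def vm_footprint_mb(ram_gb):
    """RAM assigned to the VM plus its worst-case overhead."""
    overhead = 32 + 8 * (ram_gb - 1)  # 32MB for the first GB, 8MB per extra GB
    return ram_gb * 1024 + overhead

total = PARENT_MB + sum(vm_footprint_mb(gb) for gb in vm_sizes_gb)
print(f"Worst-case host RAM: {total}MB (~{total / 1024:.1f}GB)")
# -> Worst-case host RAM: 9344MB (~9.1GB)
```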

Thanks again to Dave Northey and John Howard, who took the time to dig internally at MS to help me with this problem.

Credit: Dave Northey and John Howard.

Hyper-V RAM Requirements

One of my tasks today was a bit on the tough side.  Following up on yesterday, I had to be able to calculate the amount of RAM required for each VM, whether it was assigned 1GB, 2GB or 4GB.  From this, based purely on RAM, I had to be able to calculate the number of VMs I could get on a host – CPU, storage and I/O are well in hand.

This proved tough.  At this point, Hyper-V is still RC1 so there’s little information out there.  The only result from a lot of searching was an MS page about how they virtualised 3 VMs, each with 10GB RAM, on a 32GB host.  They said they reserved 2GB RAM for the parent partition.  Not very useful, to be honest.

I couldn’t find any more so I sent out some mails requesting some help.  In the meantime I decided to do some observational testing.  I ran VMs on my test host and used PerfMon to measure "Hyper-V VM VID Partition – Overhead bytes".  The overhead was as follows (rounded up):

  • 0.5GB: .0039% of assigned RAM
  • 1GB: .0049% of assigned RAM
  • 2GB: .0015% of assigned RAM
  • 3GB: .0023% of assigned RAM
  • 4GB: .0022% of assigned RAM

OK.  I loaded my VMs to 100% RAM utilisation and that overhead didn’t change.  That gave me something to work with, but I was wondering about that 2GB for the parent.  Was that official?  Did the overhead for the 3 * 10GB machines come from that?  Maybe it did.

This evening I got some replies.  Dave Northey sent me a link to a document (Performance Tuning Guidelines for Windows Server 2008) that didn’t turn up in my searches.  It says:

"You should size VM memory as you typically do for server applications on a physical machine. You must size it to reasonably handle the expected load at ordinary and peak times because insufficient memory can significantly increase response times and CPU or I/O usage. In addition, the root partition must have sufficient memory (leave at least 512 MB available) to provide services such as I/O virtualization, snapshot, and management to support the child partitions.

A good standard for the memory overhead of each VM is 32 MB for the first 1 GB of virtual RAM plus another 8 MB for each additional GB of virtual RAM. This should be factored in the calculations of how many VMs to host on a physical server. The memory overhead varies depending on the actual load and amount of memory that is assigned to each VM".

So I read this two ways (assuming a 2GB RAM per VM scenario):

  • First machine charge = 32MB overhead + 8MB.  Second machine charge = 16MB overhead.
  • First machine charge = 32MB overhead + 8MB.  Second machine charge = 32MB overhead + 8MB. 

I’m assuming it’s the second scenario for now.  I’ll chase this down next week.
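
For what it’s worth, here’s a quick sketch of what the two readings cost for two 2GB VMs; the 16MB in the first reading is just 8MB per GB with no 32MB base charge for the second machine:

```python
# Total overhead for two VMs with 2GB RAM each, under the two readings above.

def reading_one(vm_count=2, ram_gb=2):
    # Only the first VM pays the 32MB base; later VMs pay just 8MB per GB.
    first = 32 + 8 * (ram_gb - 1)       # 40MB for the first VM
    rest = (vm_count - 1) * 8 * ram_gb  # 16MB per additional 2GB VM
    return first + rest

def reading_two(vm_count=2, ram_gb=2):
    # Every VM pays 32MB for its first GB plus 8MB per additional GB.
    return vm_count * (32 + 8 * (ram_gb - 1))

print(reading_one())  # 56
print(reading_two())  # 80
```

(The follow-up post above settles this: each VM pays the full 32MB plus 8MB per additional GB, i.e. the second reading.)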

I also got a response from John Howard.  He said:

"Our general recommendations will be the same as for Windows Server 2008. … minimum and recommended RAM requirements which are 512MB minimum, 2GB recommended. This is for the parent partition. Our general requirements for just the Hypervisor being launched are a little under 300MB. Any driver stacks, management applications and virtual machine memory are on top of that. In the parent partition, we consume … RAM per virtual machine".

So being fairly conservative, it sounds like we allow 2GB for the parent, another 300MB for Hyper-V, a bit for the drivers in the parent (probably safe within that 2GB) and then our overhead of RAM for each VM.  That gives me something like this:

VM RAM (GB)    Overhead (MB)    Total MB Used
0.5            32               544
1              32               1056
2              40               2088
4              56               4152
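
As a sanity check, here’s a tiny Python sketch that reproduces the table from the 32MB-plus-8MB-per-GB rule:

```python
# Reproduce the table above: overhead is 32MB for the first GB of VM RAM
# plus 8MB for each additional full GB; total is assigned RAM plus overhead.
for ram_gb in (0.5, 1, 2, 4):
    extra_gb = max(0, int(ram_gb) - 1)  # additional full GB beyond the first
    overhead = 32 + 8 * extra_gb
    total = int(ram_gb * 1024) + overhead
    print(f"{ram_gb:>4} GB VM: {overhead}MB overhead, {total}MB used")
```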

Hyper-V and NIC Teaming

NIC teaming, bonding, load balancing, A+B networking, or whatever you want to call it, is a core concept in highly available server computing.  In Windows, we’ve been able to create a virtual NIC, on which we configure TCP/IP, by using 3rd party drivers from the likes of HP or Intel.

The virtual network is no different.  ESX can do this by using two host physical NICs to connect a virtual switch.  A VM has one virtual NIC that connects to this virtual switch and gets the benefit of A+B networking.  The Hyper-V virtual switch can only use one NIC, and Hyper-V relies on drivers in the parent partition.  You’d think "OK, let’s team two physical NICs in the parent partition and use the resulting virtual NIC to connect the virtual switch".  Right now, that’s not possible.  HP says:

"IMPORTANT

Windows Server 2008 Hyper-V RC1 does not support the Network Configuration Utility (NIC Teaming). Deselect this component before installing the PSP components".

This is not good, not good at all.  We cannot do A+B networking in Hyper-V until this changes.  I’m told that MS is relying on partners, e.g. HP and Intel, to resolve this like they have done for Windows up to now.  I’m really hoping that they do.

EDIT:

I got a response from Microsoft when I sent in some feedback on this issue.  Officially, MS does not support any kind of NIC teaming on Windows.  Currently, this is no different with Hyper-V.  They are relying 100% on partners such as HP, Broadcom and Intel to provide updated versions of their teaming drivers for Hyper-V.  There is an opening in the parent partition to allow partners to accomplish NIC teaming and present one virtual (teamed) NIC to a virtual switch in Hyper-V.

Hyper-V VM Loading

I’m using a lab box with 9GB RAM in it for testing to see what sort of load I can get out of a Hyper-V host.  Remember that Hyper-V does not do memory over-commitment like you get on ESX – who really wants paging both in the virtual machine’s virtual disk and on the physical host?  That sounds like server admin hell to me!

Anyway, I managed to get a series of virtual machines up and running, each with a varying RAM assignment to reflect a production environment.  The result was that I got 7GB of VMs up and running on the 9GB host.  It appears that a full installation of Windows Server 2008 with RC1 of Hyper-V consumes 2GB RAM.  That probably comes down if you use a core installation instead, which I’d recommend in a Hyper-V farm.

The cool thing here is that I noticed no drop in performance.  The VMs all run quite smoothly – we’re doing all sorts of things with the VMs, so they are actually doing real (albeit just lab) work.

EDIT:

I followed this post up with some research on the memory overhead requirements and how Hyper-V uses RAM.