Hyper-V RAM Requirements Update

I’ve gotten feedback on my last blog update on this subject and here’s the scenario:

  • The parent OS requires 512MB (minimum) but you should allow for 2GB (recommended).
  • Hyper-V itself requires a little under 300MB, but it’s likely the 2GB assignment will compensate for this (in most cases).
  • Drivers and agents on the parent might push your requirement beyond 2GB.
  • Each VM requires 32MB of overhead for its first 1GB of RAM, plus (at most) 8MB for each additional GB of RAM assigned to the VM.  MS say to budget the full 8MB to be conservative; you’ll only hit it if the VM is thrashing its RAM.
  • Each subsequent VM is charged the same way: 32MB for its first GB plus up to 8MB for each additional GB of RAM.

So here’s a typical scenario:

  • Parent partition: 2GB (but you might need another ~300MB for Hyper-V and more for drivers/agents)
  • 2GB RAM VM: 32MB + 8MB = 40MB overhead
  • 4GB RAM VM: 32MB + (3 * 8)MB = 56MB overhead
  • 1GB RAM VM: 32MB overhead
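The scenario above can be totted up with a short script.  This is just a sketch using the figures from the bullets (the 2GB parent allowance and the 32MB + 8MB-per-extra-GB overhead rule); the function name is mine, not anything official:

```python
def vm_cost_mb(vm_ram_gb):
    """RAM one VM consumes on the host: its assigned RAM plus the
    estimated overhead (32MB for the first GB, 8MB per additional GB)."""
    overhead = 32 + 8 * (vm_ram_gb - 1)
    return vm_ram_gb * 1024 + overhead

PARENT_MB = 2048  # recommended parent partition allowance

# The three VMs from the scenario: 2GB, 4GB and 1GB
vms_gb = [2, 4, 1]
total = PARENT_MB + sum(vm_cost_mb(gb) for gb in vms_gb)
print(total)  # → 9344, i.e. just over 9GB of host RAM for this mix
```

So that mix of VMs needs a shade over 9GB of physical RAM before you even think about headroom.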

Thanks again to Dave Northey and John Howard who took the time to dig internally in MS to help me with this problem.


Hyper-V RAM Requirements

One of my tasks today was a bit on the tough side.  Following up on yesterday, I had to be able to calculate the amount of RAM required for each VM, whether it was assigned 1GB, 2GB or 4GB.  From this, based purely on RAM, I had to be able to calculate the number of VM’s I could get on a host – CPU, storage and I/O are well in hand. 

This proved tough.  At this point, Hyper-V is still RC1 so there’s little information out there.  The only result from a lot of searching was an MS page about how they virtualised 3 VM’s, each with 10GB RAM, on a 32GB host.  They said they reserved 2GB RAM for the parent partition.  Not very useful, to be honest.

I couldn’t find any more so I sent out some mails requesting help.  In the meantime I decided to do some observational testing.  I ran VM’s on my test host and used PerfMon to measure "Hyper-V VM VID Partition – Overhead bytes".  The overhead was as follows (rounded up):

  • 0.5GB: .0039% of assigned RAM
  • 1GB: .0049% of assigned RAM
  • 2GB: .0015% of assigned RAM
  • 3GB: .0023% of assigned RAM
  • 4GB: .0022% of assigned RAM

OK.  I loaded my VM’s to 100% RAM utilisation and that overhead didn’t change.  That gave me something to work with but I was wondering about that 2GB for the parent.  Was that official?  Did the overhead for the 3 * 10GB machines come from that?  Maybe it did?

This evening I got some replies.  Dave Northey sent me a link to a document (Performance Tuning Guidelines for Windows Server 2008) that didn’t turn up in my searches.  It says:

"You should size VM memory as you typically do for server applications on a physical machine. You must size it to reasonably handle the expected load at ordinary and peak times because insufficient memory can significantly increase response times and CPU or I/O usage. In addition, the root partition must have sufficient memory (leave at least 512 MB available) to provide services such as I/O virtualization, snapshot, and management to support the child partitions.

A good standard for the memory overhead of each VM is 32 MB for the first 1 GB of virtual RAM plus another 8 MB for each additional GB of virtual RAM. This should be factored in the calculations of how many VMs to host on a physical server. The memory overhead varies depending on the actual load and amount of memory that is assigned to each VM".

So I read this two ways (assuming 2GB RAM per VM scenario):

  • First machine charge = 32MB overhead + 8MB.  Second machine charge = 16MB overhead.
  • First machine charge = 32MB overhead + 8MB.  Second machine charge = 32MB overhead + 8MB. 

I’m assuming it’s the second scenario for now.  I’ll chase this down next week.

I also got a response from John Howard.  He said:

"Our general recommendations will be the same as for Windows Server 2008. … minimum and recommended RAM requirements which are 512MB minimum, 2GB recommended. This is for the parent partition. Our general requirements for just the Hypervisor being launched are a little under 300MB. Any driver stacks, management applications and virtual machine memory are on top of that. In the parent partition, we consume … RAM per virtual machine".

So, being fairly conservative, it sounds like we allow 2GB for the parent, another 300MB for Hyper-V, a bit for the drivers in the parent (probably safe within that 2GB) and then our RAM overhead for each VM.  That gives me something like this:

VM RAM (GB)    Overhead (MB)    Total MB Used
0.5            32               544
1              32               1056
2              40               2088
4              56               4152
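The table can be reproduced with the rule from the tuning guide.  A sketch, with one assumption of mine: fractional-GB VMs are charged the flat 32MB, which matches the 0.5GB row above but isn’t spelled out in the document:

```python
import math

def overhead_mb(vm_ram_gb):
    """Estimated per-VM overhead: 32MB for the first GB of virtual RAM,
    plus 8MB for each additional GB (rounded up to whole GBs)."""
    additional_gb = max(0, math.ceil(vm_ram_gb) - 1)
    return 32 + 8 * additional_gb

for gb in (0.5, 1, 2, 4):
    total = int(gb * 1024) + overhead_mb(gb)
    print(f"{gb}GB VM: {overhead_mb(gb)}MB overhead, {total}MB used")
```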

Sometimes the Good Things Never Change

It’s rare in this business that a good product doesn’t become rubbish.  I’ve spent a little while looking at one of my favourite AV solutions, Trend Micro Office Scan (TM OS).  The last time I looked at TM OS it was at V5.58 and it only managed clients.  Now it can do both servers and desktops. 

When I first saw TM OS I didn’t like it.  I was working a 6 month contract and I was tasked with deploying this new AV solution.  It seemed different to me.  All I’d used up to that point were MuckAfee and Sinmantec.  What was different was that Server Protect and OfficeScan were simple to install, simple to deploy, simple to manage and simple to remove (i.e. the uninstall works – imagine that!?!?!?).  They were malware scanners and didn’t try to be a complete security suite that would eventually break your desktops and servers.

I quickly learned the error of my ways (thanks Thorsten) and I ended up buying in and deploying Trend Micro to manage the anti-malware for the global network I designed, deployed and managed after that contract.  There were bugs here and there (probably 3 over 2 years) but support fixed them quickly.  Updates were reliable, reports were simple and usable and it actually stopped infections.  What was best was that the management console was simple to use.  That’s something that MS have implemented with the first version of ForeFront Client Security.  Upgrading the software was easy – update the management server and the clients got updated automatically.  Deploying the client?  You could do it from a website, from a file share, via a login script or create an MSI to deploy automatically via GPO or SMS/SCCM.

I started looking at TM OS this week and was pleasantly surprised.  I knew they’d introduced firewall functionality.  Guess what – I can reliably disable it permanently from the management console!  Wow!  That shouldn’t sound impressive at all, but some "yellow pack" software just can’t get this right.

And working with firewalls is simple.  Identify your web console port (TCP 8080) and your agent port (a TCP port randomly generated during installation, but changeable) and you’re laughing:

  • Open both ports on the management server.
  • Open the agent port on the agent computer.

No AD integration is used.  That’s actually a very good thing – hard as it is for me to say.

I’m glad to say that the anti-malware I’ve always liked best is still king of the hill:

  • Simple
  • Reliable
  • Hands-off
  • Effective

EDIT:

It’s a pity about the cost of TM.  They priced themselves out of my market today.  I couldn’t stop laughing when the salesman told me how much they wanted.

Hyper-V and NIC Teaming

NIC teaming, bonding, load balancing, A+B networking, or whatever you want to call it, is a core concept in highly available server computing.  In Windows, we’ve been able to create a virtual NIC – on which we configure TCP/IP – by using 3rd party drivers from the likes of HP or Intel.

The virtual network is no different.  ESX can do this by using two host physical NIC’s to connect a virtual switch.  A VM has one virtual NIC that connects to this virtual switch and gets the benefit of A+B networking.  The Hyper-V virtual switch can only use one NIC, and Hyper-V relies on drivers in the parent partition.  You’d think "OK, let’s team two physical NIC’s in the parent partition and use the resulting virtual NIC to connect the virtual switch".  Right now, that’s not possible.  HP says:

"IMPORTANT

Windows Server 2008 Hyper-V RC1 does not support the Network Configuration Utility (NIC Teaming). Deselect this component before installing the PSP components".

This is not good, not good at all.  We cannot do A+B networking in Hyper-V until this changes.  I’m told that MS is relying on partners, e.g. HP and Intel, to resolve this like they have done for Windows up to now.  I’m really hoping that they do.

EDIT:

I got a response from Microsoft when I sent in some feedback on this issue.  Officially, MS does not support any kind of NIC teaming on Windows.  Currently, this is no different with Hyper-V.  They are relying 100% on partners such as HP, Broadcom and Intel to provide updated versions of their teaming drivers for Hyper-V.  There is an opening in the parent partition to allow partners to accomplish NIC teaming and present one virtual (teamed) NIC to a virtual switch in Hyper-V.

Hyper-V VM Loading

I’m using a lab box with 9GB RAM in it for testing to see what sort of load I can get out of a Hyper-V host.  Remember that Hyper-V does not do memory over-commitment like you get on ESX – who really wants paging both in the virtual machine’s virtual disk and on the physical host?  That sounds like server admin hell to me!

Anyway, I managed to get a series of virtual machines up and running, each with a different RAM assignment to reflect a production environment.  The result was that I got 7GB of VM’s up and running on the 9GB host.  It appears that a full installation of Windows Server 2008 with RC1 of Hyper-V consumes 2GB RAM.  That probably comes down if you use a core installation instead, which I’d recommend in a Hyper-V farm.

Cool thing here is that I noticed no drop in performance.  The VM’s all run quite smoothly – we’re doing all sorts of things with the VM’s so they are actually doing real (albeit just lab) work.

EDIT:

I followed this post up with some research on the memory overhead requirements and how Hyper-V uses RAM.

Windows Server 2008 SCW and SCCM 2007 SP1

Service Pack 1 of System Center Configuration Manager 2007 adds support for running the systems management product on Windows Server 2008.  This update from Microsoft adds the required information for SCCM 2007 in the Security Configuration Wizard so that you can correctly lock down site systems.

"The Security Configuration Wizard (SCW) is an attack-surface reduction tool for the Microsoft Windows Server® 2008 operating system. SCW determines the minimum functionality required for a server’s role or roles, and disables functionality that is not required. The Configuration Manager 2007 SP1 Windows Server 2008 SCW template supports both new and updated site system definitions and the required services and ports.

Feature Bullet Summary:

The Configuration Manager 2007 SP1 Windows Server 2008 SCW template adds support for the following new site systems:

  • Out of Band Service Point
  • Asset Intelligence Synchronization Point

The Configuration Manager 2007 SCW template renews support for the following site systems:

  • Fallback Status Point (FSP)
  • State Migration Point (SMP)
  • PXE Service Point (PSP)
  • Software Update Point (SUP)
  • System Health Validator (SHV)
  • Primary Site Server (PSS)
  • Secondary Site Server (SSS)
  • Server Locator Point (SLP)
  • Management Point (MP)
  • Reporting Point (RP)"

Microsoft SCOM 2007 Management Packs: New Location

If you’re using SCOM 2007 then you’ve surely been on the MS System Center Catalog web page.  It’s a pain to use – not nearly as bad as the new TechNet download page, but it’s horrible anyway.  The OpsMgr team is now placing Microsoft management packs on the public TechNet site.  You can download the operating system technologies (Base OS, DNS, DHCP, etc.) and the other server products (Exchange, SQL, etc.) without having to endure drop-down list box reset hell.

Hyper-V Test Lab Continued

I set up a few templates over the last few evenings: Windows 2008, Vista and Windows 2003.  I sysprepped them today and exported the VM’s.  I’ve noticed some funnies in the exports – the config.xml retains the path to the VHD of the original machine.  This caused some problems when I copied the exports and re-imported them as new machines.

My lab machine has only so much disk and I’ve way more testing that I need to do.  I decided to use the VHD’s of my sysprepped templates as parents for differencing disks.  These differencing disks are what my VM’s use.  The idea is that the child disk stores only the differences between the VM that I run and the parent disk of the template.

I deployed them like this:

  • I created a template VM, sysprepped it and exported it.
  • I put the exported VHD somewhere safe.
  • I created a new VM but did not create a disk in the wizard.
  • I returned to the settings of the VM and started creating a new disk.
  • The new differencing VHD was stored in the Virtual Hard Disks folder of the new VM.  Its parent was the VHD of the exported template.
  • I powered up the new VM and it ran the mini-setup wizard, i.e. I named the machine.

Simple and economical on disk space.  I can deploy more VM’s and the core OS is stored only once.  It’s perfect for a lab.  Obviously it’s going to be slower if you’re running plenty of VM’s off the one parent.  You wouldn’t do this for production but it’s fine for a lab.
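The space saving is easy to put numbers on.  A back-of-the-envelope sketch – the 10GB parent size and 2GB of per-VM changes are illustrative assumptions, not measurements from my lab:

```python
def full_clones_gb(n_vms, base_gb):
    """Space used if every VM gets its own full copy of the template VHD."""
    return n_vms * base_gb

def differencing_gb(n_vms, base_gb, delta_gb):
    """Space with differencing disks: the parent VHD is stored once
    and each child VHD holds only that VM's own changes."""
    return base_gb + n_vms * delta_gb

# e.g. 5 VMs from a 10GB template, ~2GB of changes each
print(full_clones_gb(5, 10), differencing_gb(5, 10, 2))  # → 50 20
```

So five lab VMs cost 20GB instead of 50GB, and the gap widens with every VM you add off the same parent.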

The Response From Dabs.ie

I just got this lame response from Dabs.ie:

"I can confirm we do adhere to the data protection act therefore any data will be removed from the drive.

Apologies for any inconvenience this may cause.

Regards,
Stella
Customer services escalation team"

This says to me "Hey, we’re proud that we destroy your data and screw you very much!".  Man!  These folks never learn.  The campaign has now begun 🙂

Lacie and Dabs.ie SUCK!

Back in March I decided to buy a Lacie BigDisk 1TB with RAID1 – it’s a chassis with two removable disks. I wanted something to keep all of my data on with RAID. I hated not having something more secure. Tape and online backup are way out of budget here.

Yesterday the device failed. The power connection within the chassis isn’t seating properly. If you wiggle the power "just so" it powers up … after some sparks fly out of the chassis!

I logged a call with Lacie and here’s what they had to say:

"Please contact your reseller directly for assistance in obtaining warranty service. Resellers have direct access to our returns system through the Reseller Zone on our website and are able to book a return for the item there.

Please note that our repair process does not retain the data on the drive. Any data on a working mechanism will be erased in the testing process. An unworking mechanism will be replaced".

Huh!?!? A storage company will wipe your data! I guess I’d better talk to the reseller, dabs.ie. Finding their support contact details was a NIGHTMARE. No phone details .. uh-oh .. someone’s trying to hide something? I got through to an agent on live chat.

"Aidan Finn: Hi, I bought a Lacie RAID1 disk unit in March and it’s chassis has failed

Sanjay 1422: Have you the order number please?

Aidan Finn: I need the chassis to be replaced under warrantee. I cannot give you the disks because my data is on them

Aidan Finn: Receipt xxxxxxxx for sales order xxxxxxxxx

Sanjay 1422: thank you one moment please

Sanjay 1422: sorry if this item is to be returned to us we would need the full unit back . the manufacture may be able to arrange a return for just the chassis to be replaced. sorry for this would like Lacie details or would you like to return the item to us ?

Aidan Finn: I’ve talked to lacie and they say I must talk to you. I cannot give you the disks because my data is on them. You will need to give me something to put my data on first.

Aidan Finn: This is no fault of mine.

Aidan Finn: I do not expect to lose 3 years of data.

Aidan Finn: Lacie have said disks are wiped during testing.

Sanjay 1422: I am sorry but if I a return is to be issued the complete unit would need to be sent to us . unfortunately i could not offer a unit to store your data on ."

So, both the manufacturer and the reseller have no concept of storage. And they’ve no concern about me losing my data. The correct process here should be to replace the chassis while I keep my two disks with my data intact.

I thought both companies deserved a little bit of publicity for their magnificent service *tongue firmly in cheek*.

I’ve sent a mail off to the customer service manager at dabs.ie. We’ll see how that goes and I’ll update the post when I can.

My advice for now:

  • Steer clear of Lacie
  • Steer clear of Dabs.ie

I’ve just checked out the Small Claims Court procedures so we’ll give the customer service manager a chance before I take the next step.  Last year’s adventures with Eircom taught me plenty.