Notes: Enabling Disaster Recovery for Hyper-V Workloads Using Hyper-V Replica

I’m taking notes from VIR302 in this post.  I won’t be repeating stuff I’ve blogged about previously.


Outage Information in SMEs

Data from Symantec SMB Disaster Preparedness Survey, 2011.  1288 SMBs with 5-1000 employees worldwide.

  • Average number of outages per year? 6
  • What does this outage cost per day? $12,500

That’s an average cost of $75,000 per year!  To an SME!  That could be two people’s salaries for a year.
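For the spreadsheet-averse, the maths is trivial (Python, assuming the survey’s per-day cost applies to each outage):

```python
# Back-of-the-envelope annual outage cost from the survey averages.
# Assumption (mine, not the survey's): each of the 6 outages costs
# one day's worth of downtime.
outages_per_year = 6
cost_per_outage = 12_500  # USD per day, from the survey

annual_cost = outages_per_year * cost_per_outage
print(annual_cost)  # 75000
```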

  • % that do not have a recovery plan: 50%.  I think even more businesses in this space don’t have DR.
  • What is their plan? Scream help and ask for pity.

Hyper-V Replica IS NOT Clustering And IT IS NOT a Cluster Alternative

Hyper-V Replica IS ALSO NOT Backup Replacement

It is a replication solution for replicating VMs to another site.  I just know someone is going to post a comment asking if they can use it as a cluster alternative [if this is you – it will be moderated to protect you from yourself so don’t bother.  Just re-read this section … slowly].

  • Failover Clustering HA: Single copy, automated failover within a cluster.  Corruption loses the single copy.
  • Hyper-V Replica: Dual asynchronous copy with recent changes, manual failover designed for replication between sites.  Corruption will impact original immediately and DR copy within 10 minutes.
  • Backup: Historical copy of data, stored locally and/or remotely, with the ability to restore a completely corrupted VM.

Certificates

Certificate-based authentication is for machines that are non-domain joined or members of non-trusted domains.  In the hosted DR scenario, the hoster should issue certs to the customer.

Compression

You can disable compression for WAN optimizers that don’t work well with pre-optimised traffic.

Another Recovery History Scenario

The disaster brought down VMs at different points.  So VMA died at time A and VMB died at time C.  Using this feature, you can reset all VMs back to time A to work off of a similar set of data.

You can keep up to 15 recovery points per day.  Each recovery point is an hour’s worth of data. 

The VSS option (application consistent recovery) fires every two hours.  At that point in the cycle (or whatever interval you set with the VSS slider), VSS is triggered: all the writes in the guest get flushed, and that replica is then sent over.

Note that the Hyper-V VSS action will not interfere with backup VSS actions.  Interoperability testing has been done.

So if you’re keeping recovery snapshots, you’ll have standard replicas and application consistent (VSS) replicas.  They’ll all be an hour apart, and alternating (if every 2nd hour).  Every 5 minutes the changes are sent over, and every 12th one is collapsed into a snapshot (12 × 5 minutes is where the 1 hour comes from).

Every 4 hours appears to be the sweet spot because VSS does have a performance impact on the guests.
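Here’s how I picture that cadence – a rough Python model of the cycle as I understand it (the function and the 12-cycles-per-point logic are my reading of the session, nothing official):

```python
def recovery_points(hours, vss_interval_hours=4):
    """Label each hourly recovery point as standard or application-consistent.

    Changes ship every 5 minutes; every 12th cycle (one hour's worth) is
    collapsed into a recovery point.  Every vss_interval_hours, that point
    is an application-consistent (VSS) replica instead of a standard one.
    """
    points = []
    for hour in range(1, hours + 1):
        kind = "vss" if hour % vss_interval_hours == 0 else "standard"
        points.append((hour, kind))
    return points

# With the 4-hour VSS sweet spot, a day gives 18 standard + 6 VSS points.
day = recovery_points(24)
print(sum(1 for _, kind in day if kind == "vss"))  # 6
```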

Clusters

You can replicate to/from clusters.  You cannot replicate from one node to another inside the same cluster (you can’t have duplicate VM GUIDs, and the nodes already share storage).

Alerting

If 20% of cycles in the last hour are missed then you get a warning.  This will self-close when replication is healthy again. 

PowerShell

24 Hyper-V Replica cmdlets:

  • 19 of them via Get-Command -Module Hyper-V | Where-Object {$_.Name -like "*replication*"}
  • 5 more via Get-Command -Module Hyper-V | Where-Object {$_.Name -like "*failover*"}

Measure-VMReplication will return status/health of Hyper-V Replica on a per-VM basis.

Measure-VMReplication | Where-Object {$_.ReplicationHealth -eq "Critical"}

You could use that as part of a scheduled script, and then send an email with details of the problem.

Replica Mechanism

The HRL (Hyper-V Replica Log) process is referred to as a write splitter.  HTTP(S) is used for WAN traffic robustness; it’s also hosting-company friendly.  Before sending, the HRL is swapped out for a new HRL.

The HRL cannot exceed half the VHD’s size.  If the WAN/storage goes down and that threshold is breached, HVR goes into a “resync state” (resynchronisation).  When the problem goes away, HVR automatically re-establishes replication.

VM Mobility

HVR policy follows the VM in any kind of migration scenario.  Remember that replication is host-to-host.  When the VM is moved from host A to host B, replication for the VM from host A is broken, and replication for the VM starts on host B.  Host B must already be authorized on the replica host(s) – this is easier with the cluster Hyper-V Replica Broker.

IP Addressing VMs In DR Site

  1. Inject static address – Simplest option IMO
  2. Auto-assignment via DHCP – Worst option IMO because DHCP on servers is messy
  3. Preserve the IP address via Network Virtualisation – The most scalable option for DR clouds IMO, with seamless failover for customers with VMs on a corporate WAN.  It’s the only one with seamless name resolution, I think, unless you spend lots on IP virtualisation in the WAN.

Failover Types

Planned Failover (downtime during failover sequence):

  1. Shutdown primary VM
  2. Send last log – run planned failover action from primary site VM.  That’ll do the rest for us.
  3. Failover replica VM
  4. Reverse replication

Test Failover (no downtime):

Can test any recovery point without affecting replication on isolated test network.

  1. Start test failover, selecting which copy to test with (if enabled).  It does the rest for you.
  2. Copies VM (new copy called “<original VM name> – test”) using a snapshot
  3. Connects VM to test virtual switch
  4. Starts up test VM

Network Planning

  • Capacity planning is critical.  Designed for low bandwidth
  • Estimate rate of data change
  • Estimate for peak usage and effective network bandwidth

My idea is to analyse incremental backup size, and estimate how much data is created every 5 minutes.
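To turn that idea into numbers, something like this works (hypothetical Python; the churn figure and the 3× peak factor are made up for illustration):

```python
def required_mbps(daily_churn_gb, peak_factor=3.0, cycle_seconds=300):
    """Rough WAN bandwidth needed to ship the churn every 5 minutes.

    daily_churn_gb: data changed per day, e.g. from incremental backup sizes.
    peak_factor: how much busier the busiest cycle is versus the average one.
    """
    bytes_per_day = daily_churn_gb * 1024**3
    cycles_per_day = 86_400 / cycle_seconds       # 288 five-minute cycles
    avg_bytes_per_cycle = bytes_per_day / cycles_per_day
    peak_bps = avg_bytes_per_cycle * peak_factor * 8 / cycle_seconds
    return peak_bps / 1_000_000                   # megabits per second

# e.g. 20 GB of daily churn with a 3x peak needs roughly 6 Mbps at peak
print(round(required_mbps(20), 1))  # 6.0
```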

Use WS2012 QoS to throttle replication traffic.


Replicating multiple VMs in parallel:

  • Higher concurrency leads to resource contention and latency
  • Lower concurrency leads to underutilisation and less protection for the business

Manage initial replication through scheduling.  Don’t start everything at once for online initial synchronisation.

What they have designed for was shown in a slide (not reproduced here).

Server Impact of HVR

On the source server:

  • Storage space: proportional to the writes in the VM
  • IOPS is approx 1.5 times write IOPS

On the replica server:

  • Storage space: proportional to the write churn.  Each additional recovery point approx 10% of the base VHD size.
  • Storage IOPS: 0.6 times write IOPS to receive and convert.  3-5 times write IOPS to receive, apply, merge, for additional recovery points.
  • There is a price to pay for recovery points.  RECOMMENDATION by MSFT: Do not use replica servers for normal workloads if using additional recovery points because of the IOPS price.

Memory: Approx 50 MB per replicating VM

CPU impact: <3%
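Pulling those multipliers together into one rough calculator (Python; the function and the example workload are mine, the ratios are the ones quoted above):

```python
def replica_impact(write_iops, base_vhd_gb, recovery_points=0):
    """Rough replica-server impact using the session's quoted multipliers.

    Receive/convert costs ~0.6x the primary's write IOPS; with additional
    recovery points it climbs to 3-5x (the pessimistic 5x is used here).
    Each additional recovery point adds ~10% of the base VHD in storage.
    """
    iops = 0.6 * write_iops if recovery_points == 0 else 5 * write_iops
    extra_storage_gb = recovery_points * 0.10 * base_vhd_gb
    memory_mb = 50  # approx per replicating VM
    return {"iops": iops, "extra_storage_gb": extra_storage_gb,
            "memory_mb": memory_mb}

# A VM doing 200 write IOPS on a 100 GB VHD, keeping 4 recovery points:
impact = replica_impact(200, 100, 4)
print(impact["iops"], round(impact["extra_storage_gb"]))  # 1000 40
```

You can see from numbers like those why MSFT say not to run normal workloads on a replica server that keeps additional recovery points.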

System Center 2012 Visio Management Pack Designer for System Center Operations Manager

Want to design your own simple management packs for SCOM (OpsMgr) from scratch but, like me, found the authoring kit to be like a mythical Greek maze filled with monsters?  Well I have great news … the Visio Management Pack Designer (VMPD) is finally here!!!!

I blogged about this tool at MMS earlier this year.  You drag and drop what you want done, and it’ll do all the hard work for you.  It’ll be a great addition to any OpsMgr admin/consultant toolkit.

The System Center 2012 Visio MP Designer—VMPD—is an add-in for Visio 2010 Premium that allows you to visually design a System Center Operations Manager Core Monitoring Management Pack. VMPD generates a Management Pack that is compliant to the MP Authoring Best Practices by simply dragging and dropping Visio shapes to describe your application architecture and configuring them through the Shape Properties.

– Visually describe your application architecture which generates classes and discoveries.

– Visually add monitoring to create monitors and rules.

– Start quickly from our pre-canned single server and multi-server patterns.

I Sold My iPad Today

I found myself using my iPad really only for two things:

  • Reading – but I since bought a more convenient smaller Kindle that I could read in a wildlife photography hide without scaring off the subject
  • Watching a little bit of TV when I went to bed

The reason I bought it originally was to have lots of battery life at conferences.  But I couldn’t type with it.  The screen keyboard is OK but not fast enough.  The attachable keyboards weren’t rigid and you never have a desk at these events.  So I ended up buying an Ultrabook.

So my iPad became dispensable.  Even though it was an iPad 1, there were no shortage of buyers.  And I didn’t even have to advertise it.  So it’s not like I’m claiming it’s a dead platform or anything.

So what’s my future on the device front?  In my personal lab, it’s a bunch of tower PCs.  That’s tied up for a while with work. 

My Ultrabook is going strong.  It’ll stay on Windows 7 until Windows 8 RTM, and maybe later depending on my work schedule.  My work laptop (the Beast) is already running Windows 8 so I can have a mobile Hyper-V base for demos.

My “tablet” for now is the Build slate, a revved-up version of the Samsung slate that you can buy in retail at the moment.  The Release Preview is running nicely on there.  It’s not ideal – it runs hot and the battery life is poor for a tablet-style device.  Maybe I’ll sell it later in the year before I get a Windows 8 device.  Or maybe I’ll sell it as a collectible on Pawn Stars 🙂

I will look at designed-for-Windows 8 devices later in the year.  I work for a Sony and Toshiba distributor so obviously I’ll look at what they have coming.  I haven’t seen anything about Sony’s plans in that space yet.  Toshiba have an interesting slider in the works, but I’d want to try it out.  I’m not sure about it as a machine for your lap.

The Asus Transformer goes a more portable route.  It’s a laptop and a tablet with an i7 CPU.  I like that as an iPad and Ultrabook replacement. 

The one making the headlines is the Microsoft Surface.  The problem is … what do I want?  If I want a tablet, then either the Pro or the RT would suffice.  The Pro would be great for things like Photoshop and be dock-able as a normal PC.  But I can’t let myself fall into the same trap as I did with the iPad.  That keyboard isn’t rigid – so it will suck at conferences and events, constantly flopping.

I don’t know.  That’s why I will wait and see.


Notes: Continuously Available File Server – Under The Hood

Here are my notes from TechEd NA session WSV410, by Claus Joergensen.  A really good deep session – the sort I love to watch (very slowly, replaying bits over).  It took me 2 hours to watch the first 50 or so minutes 🙂


For Server Applications

The Scale-Out File Server (SOFS) is not for direct sharing of user data.  MSFT intend it for:

  • Hyper-V: store the VMs via SMB 3.0
  • SQL Server database and log files
  • IIS content and configuration files

It required a lot of work by MSFT: changing old things and creating new things.

Benefits of SOFS

  • Share management instead of LUNs and Zoning (software rather than hardware)
  • Flexibility: Dynamically reallocate server in the data centre without reconfiguring network/storage fabrics (SAN fabric, DAS cables, etc)
  • Leverage existing investments: you can reuse what you have
  • Lower CapEx and OpEx than traditional storage

Key Capabilities Unique to SOFS

  • Dynamic scale with active/active file servers
  • Fast failure recovery
  • Cluster Shared Volume cache
  • CHKDSK with zero downtime
  • Simpler management

Requirements

Client and server must be WS2012:

  • SMB 3.0
  • It is for application workloads, not user workloads.

Setup

I’ve done this a few times.  It’s easy enough:

  1. Install the File Server and Failover Clustering features on all nodes in the new SOFS
  2. Create the cluster
  3. Create the CSV(s)
  4. Create the File Server role – a clustered role that has its own CAP (including an associated computer object in AD) and IP address.
  5. Create file shares in Failover Clustering Management.  You can manage them in Server Manager.

Simple!

Personally speaking: I like the idea of having just 1 share per CSV.  Keeps the logistics much simpler.  Not a hard rule from MSFT AFAIK.

The session also showed the equivalent PowerShell (slide not reproduced here).

CSV

  • Fundamental and required.  It’s a cluster file system that is active/active.
  • Supports most of the NTFS features.
  • Direct I/O support for file data access: whichever node you come in via has direct access to the back-end storage
  • Caching of CSVFS file data (controlled by oplocks)
  • Leverages SMB 3.0 Direct and Multichannel for internode communication

Redirected IO:

  • Metadata operations – hence not for direct access of end-user data
  • Data operations when a file is being accessed simultaneously by multiple CSVFS instances

CSV Caching

  • Windows Cache Manager integration: Buffered read/write I/O is cached the same way as NTFS
  • CSV Block Caching – a read-only cache using RAM from the nodes.  Turned on per CSV.  The distributed cache is guaranteed to be consistent across the cluster.  Huge boost for pooled VDI deployments – esp. during a boot storm.

CHKDSK

Seamless with CSV.  Scanning is online and separated from repair.  CSV repair is online.

  • Cluster checks once/minute to see if chkdsk spotfix is required
  • Cluster enumerates NTFS $corrupt (contains listing of fixes required) to identify affected files
  • Cluster pauses the affected CSVFS to pend I/O
  • Underlying NTFS is dismounted
  • CHKDSK spotfix is run against the affected files for a maximum of 15 seconds (usually much quicker)  to ensure the application is not affected
  • The underlying NTFS volume is mounted and the CSV namespace is unpaused

The only time an application is affected is if it had a corrupted file.

If it could not complete the spotfix of all the $corrupt records in one go:

  • Cluster will wait 3 minutes before continuing
  • Enables a large set of corrupt files to be processed over time with no app downtime – assuming the apps’ files aren’t corrupted, in which case they obviously would have had downtime anyway

Distributed Network Name

  • A CAP (client access point) is created for an SOFS.  It’s a DNS name for the SOFS on the network.
  • Security: creates and manages AD computer object for the SOFS.  Registers credentials with LSA on each node

The actual cluster nodes are used in an SOFS for client access.  All of them are registered with the CAP.

DNN & DNS:

  • DNN registers the node IPs of all nodes.  A virtual IP is not used for the SOFS (unlike previous versions)
  • DNN updates DNS when: the resource comes online, and every 24 hours; a node is added to/removed from the cluster; a cluster network is enabled/disabled as a client network; node IP addresses change.  Use dynamic DNS … static DNS means a lot of manual work.
  • DNS will round robin DNS lookups: The response is a list of sorted addresses for the SOFS CAP with IPv6 first and IPv4 done second.  Each iteration rotates the addresses within the IPv6 and IPv4 blocks, but IPv6 is always before IPv4.  Crude load balancing.
  • When a client does a lookup, it gets the list of addresses and will try each address in turn until one responds.
  • A client will connect to just one cluster node per SOFS.  Can connect to multiple cluster nodes if there are multiple SOFS roles on the cluster.
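The round-robin behaviour is simple enough to model – my sketch of it in Python (not how DNS actually implements it, just the rotation described above):

```python
from collections import deque

def make_responder(ipv6_addresses, ipv4_addresses):
    """Produce successive DNS responses for the SOFS CAP.

    Every response lists all addresses with the IPv6 block always first;
    each lookup rotates the order within each block (crude load balancing).
    """
    v6, v4 = deque(ipv6_addresses), deque(ipv4_addresses)
    def lookup():
        response = list(v6) + list(v4)
        v6.rotate(-1)  # rotate within the IPv6 block for the next caller
        v4.rotate(-1)  # and within the IPv4 block
        return response
    return lookup

lookup = make_responder(["fd00::1", "fd00::2"], ["10.0.0.1", "10.0.0.2"])
print(lookup())  # ['fd00::1', 'fd00::2', '10.0.0.1', '10.0.0.2']
print(lookup())  # ['fd00::2', 'fd00::1', '10.0.0.2', '10.0.0.1']
```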

SOFS

Responsible for:

  • Online shares on each node
  • Listen to share creations, deletions and changes
  • Replicate changes to other nodes
  • Ensure consistency across all nodes for the SOFS

It can take the cluster a couple of seconds to converge changes across the cluster.

SOFS implemented using cluster clone resources:

  • All nodes run an SOFS clone
  • The clones are started and stopped by the SOFS leader – why am I picturing Homer Simpson in a hammock while Homer Simpson mows the lawn?!?!?
  • The SOFS leader runs on the node where the SOFS resource is actually online – it is just the orchestrator.  All nodes run independently – a move or crash doesn’t affect the shares’ availability.

Admin can constrain what nodes the SOFS role is on – possible owners for the DNN and SOFS resource.  Maybe you want to reserve other nodes for other roles – e.g. asymmetric Hyper-V cluster.

Client Redirection

SMB clients are distributed at connect time by DNS round robin.  No dynamic redistribution.

SMB clients can be redirected manually to use a different cluster node.

Cluster Network Planning

  • Client Access: clients use the cluster nodes’ client-access-enabled (public) networks

CSV IO redirection traffic occurs with:

  • Metadata updates – infrequent
  • CSV is built using mirrored storage spaces
  • A host loses direct storage connectivity

Redirected IO:

  • Prefers cluster networks not enabled for client access
  • Leverages SMB Multichannel and SMB Direct
  • iSCSI Networks should automatically be disabled for cluster use – ensure this is so to reduce latency.

Performance and Scalability

(Performance and scalability charts were shown in the session; not reproduced here.)

SMB Transparent Failover

Zero downtime with small IO delay.  Supports planned and unplanned failovers.  Resilient for both file and directory operations.  Requires WS2012 on client and server with SMB 3.0.


Client operation replay – if a failover occurs, the SMB client reissues certain operations.  Others, like a delete, are not replayed because they are not safe.  The server maintains persistence of file handles.  All write-throughs happen straight away – this doesn’t affect Hyper-V.


The Resume Key Filter fences off file handle state after failover to prevent other clients grabbing files that the original clients expect to still have access to once the witness process fails them over.  It protects against namespace inconsistency – e.g. a file rename in flight.  Basically, it deals with handles for activity that might be lost/replayed during failover.

Interesting: when a CSV comes online, initially or after failover, the Resume Key Filter locks the volume for a few seconds (less than 3) while a database (state info stored in a system volume folder) is loaded.  Namespace protection then blocks all rename operations for up to 60 seconds to allow local file handles to be re-established; create operations are blocked for up to 60 seconds as well, to allow remote handles to be resumed.  After all this (up to 60 seconds total), all unclaimed handles are released.  Typically the entire process takes around 3-4 seconds.  The 60 seconds is a per-volume configurable timeout.

Witness Protocol (do not confuse with Failover Cluster File Share Witness):

  • Faster client failover.  The normal SMB timeout could be 40-45 seconds (TCP-based) – that’s a long time without IO.  Instead, the cluster informs the client to redirect as soon as the cluster detects a failure.
  • Witness does redirection at client end.  For example – dynamic reallocation of load with SOFS.

Client SMB Witness Registration

  1. Client SMB connects to share on Node A
  2. Witness on client obtains list of cluster members from Witness on Node A
  3. The Witness on the client excludes Node A (the node it connected to) and selects Node B as the witness
  4. Witness registers with Node B for notification of events for the share that it connected to
  5. The Node B Witness registers with the cluster for event notifications for the share

Notification:

  1. Normal operation … client connects to Node A
  2. Unplanned failure on Node A
  3. Cluster informs Witness on Node B (thanks to registration) that there is a problem with the share
  4. The Witness on Node B notifies the client Witness that Node A went offline (no SMB timeout)
  5. Witness on client informs SMB client to redirect
  6. SMB on client drops the connection to Node A and starts connecting to another node in the SOFS, e.g. Node B
  7. Witness starts all over again to select a new Witness in the SOFS. Will keep trying every minute to get one in case Node A was the only possibility
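To make the sequence concrete, here’s a toy model of it (Python; the class and method names are mine, and this is a behavioural sketch, not the real Witness protocol):

```python
class WitnessDemo:
    """Toy model of SMB Witness registration and failover notification."""

    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.connected = None  # node the SMB client is using
        self.witness = None    # node watching the cluster for us

    def connect(self, node):
        self.connected = node
        # register with some node OTHER than the one we connected to
        others = self.nodes - {node}
        self.witness = min(others) if others else None

    def node_fails(self, failed_node):
        if failed_node == self.connected and self.witness:
            # the witness notifies us immediately - no 40-45s SMB timeout -
            # and the client reconnects to another node (here: the witness)
            self.connect(self.witness)

client = WitnessDemo(["NodeA", "NodeB", "NodeC"])
client.connect("NodeA")
print(client.witness)    # NodeB - a node other than NodeA
client.node_fails("NodeA")
print(client.connected)  # NodeB - redirected without waiting for a timeout
```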

Event Logs

All under Applications and Services Logs – Microsoft – Windows:

  • SMBClient
  • SMBServer
  • ResumeKeyFilter
  • SMBWitnessClient
  • SMBWitnessService

Notes: Microsoft Virtual Machine Converter Solution Accelerator

These are my notes from the TechEd NA recording of WCL321 with Mikael Nystrom.

Virtual Machine Converter (VMC)

VMC is a free-to-download Solution Accelerator that is currently in beta.  Solution Accelerators are glue between MSFT products to provide a combined solution; MAP and MDT are other examples.  They are supported products from MSFT.

The purpose of the tool is to convert VMware VMs into Hyper-V VMs.  It can be run as standalone or it can be integrated into System Center, e.g. Orchestrator Runbooks.

It offers a GUI and command line interface (CLI).  Nice quick way for VMware customers to evaluate Hyper-V – convert a couple of known workloads and compare performance and scalability.  It is a low risk solution; the original VM is left untouched.

It will uninstall the VMware tools and install the MSFT Integration components.

The solution also fixes drive geometries to sort out possible storage performance issues – basic conversion tools don’t do this.

VMware Support

It supports:

  • vSphere 4.1 and 5.0
  • vCenter 4.1 and 5.0
  • ESX/ESXi

Disk types from VMware supported include:

  • VMFS Flat and Sparse
  • Stream optimised
  • VMDK flat and sparse
  • Single/multi-extent

Microsoft Support

Beta supports Windows VMs:

  • Server 2003 SP2 x64/x86
  • 7 x64/x86
  • Server 2008 R2 x64
  • Server 2008 x64 (RC)
  • Vista x86 (RC)

Correct; no Linux guests can be converted with this tool.

In the beta the Hyper-V support is:

  • Windows Server 2008 R2 SP1 Hyper-V
  • VHD Fixed and Dynamic

In the RC they are adding:

  • Windows Server 2012 and Windows 8 Hyper-V
  • VHDX (support to be added in RTM)

Types of Conversion

  • Hot migration: no downtime to the original VM.  Not what VMC does.  But check the original session recording to see how Mikael uses scripts and other MSFT tools to get one.
  • Warm: start with running VM.  Create a second instance but with service interruption.  This is what VMC does.
  • Cold: Start with offline VM and convert it.

VMC supports Warm and Cold.  But there are ways to use other MSFT tools to do a Hot conversion.

Simplicity

MSFT deliberately made it simple and independent of other tools.  This is a nice strategy.  Many VMware folks want Hyper-V to fail.  Learning something different/new = “complexity”, “Microsoft do it wrong” or “It doesn’t work”.  Keeping it simple defends against this attitude from the stereotypical chronic denier. 

Usage

Run it from a machine.  Connect to ESXi or vCenter machine (username/password).  Pick your VM(s).  Define the destination host/location.  Hit start and monitor.

  1. The VM is snapshotted. 
  2. The VMware Tools are removed. 
  3. The VM is turned off. 
  4. The VMDK is transferred to the VMC machine
  5. The VMDK is converted.  You will need at least twice the size of the VMDK file … plus some space (VHD will be slightly larger).  Remember that Fixed VHD is full size in advance.
  6. The VHD is copied to the Hyper-V host. 
  7. The new Hyper-V VM is built using the VM configuration on the VMware host.
  8. The drive is added to the VM configuration.
  9. The VM is started. 
  10. The Hyper-V integration components are installed.
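Worth a sanity check on step 5’s disk requirement – a rough calculation (hypothetical Python; the example sizes and the 5% overhead figure are my assumptions):

```python
def staging_space_gb(vmdk_gb, fixed_vhd=False, provisioned_gb=None,
                     overhead=1.05):
    """Rough disk needed on the VMC machine for a conversion: the copied
    VMDK plus the converted VHD.  A fixed VHD is allocated at the full
    provisioned size up front; a dynamic VHD is roughly the VMDK's size."""
    if fixed_vhd and provisioned_gb:
        vhd_gb = provisioned_gb * overhead
    else:
        vhd_gb = vmdk_gb * overhead
    return vmdk_gb + vhd_gb

# A 40 GB VMDK converted to a fixed VHD provisioned at 60 GB:
print(round(staging_space_gb(40, fixed_vhd=True, provisioned_gb=60)))  # 103
```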

The conversion will create the Hyper-V VM without a NIC.  This is supposed to prevent a split-brain situation where the source and target VMs are both online at the same time.  I’d rather have a tick box.

If a snapshot is being used, then you will want any services on that VM offline – file shares, databases, etc.  But offline doesn’t mean powering down the VM; we need it online for the VMware tools removal.

The Wizard

A VM must have an FQDN to be converted.  Installing the VMware tools makes the VM convertible.  This is required to make it possible to … uninstall the VMware tools 🙂

It will ask for your credentials to log into the guest OS for the VMware tools uninstall. 

Maybe convert the VM on an SSD to speed things up.

Application Compatibility and API Support for SMB 3.0, CSVFS, and ReFS

Microsoft just published this document with details on compatibility for SMB 3.0, CSVFS (cluster shared volume for Hyper-V and SOFS), and the new server file system ReFS.

The Application Compatibility with Resilient File System document provides an introduction to Resilient File System (ReFS) and an overview of changes that are relevant to developers interested in ensuring application compatibility with ReFS. The File Directory Volume Support spreadsheet provides documentation for APIs support for SMB 3.0, CSVFS, and ReFS that fall into the following categories: file management functions, directory management functions, volume management functions, security functions, file and directory support codes, volume control code, and memory mapped files.

It is very much aimed towards developers.  There is a little bit of decipherable text in there to describe what ReFS is, something MSFT is not talking about much, not even at TechEd.  My take so far: it’s a file system for the future that will eventually supplant NTFS.

Sections 1.1-1.3 are interesting to us IT Pros, then jump ahead to section 1.11.


Windows Assessment and Deployment Kit for Windows 8 Release Preview

With Windows 7, Microsoft released a bunch of individual tools and toolkits, each as individual downloads, to aid in our assessment, deployment, and application compatibility testing/reconciliation.  With Windows 8, Microsoft are continuing with the free support tools, but it appears that they will be released in a single kit called the Windows Assessment and Deployment Kit (Windows ADK).

The tools in the Windows ADK include:

Application Compatibility Toolkit (ACT): The Application Compatibility Toolkit (ACT) helps IT Professionals understand potential application compatibility issues by identifying which applications are or are not compatible with the new versions of the Windows operating system. ACT helps to lower costs for application compatibility evaluation by providing an accurate inventory of the applications in your organization. ACT helps you to deploy Windows more quickly by helping to prioritize, test, and detect compatibility issues with your apps. By using ACT, you can become involved in the ACT Community and share your risk assessment with other ACT users. You can also test your web applications and web sites for compatibility with new releases of Internet Explorer. For more information, see Application Compatibility Toolkit.

Deployment Tools: Deployment tools enable you to customize, manage, and deploy Windows images. Deployment tools can be used to automate Windows deployments, removing the need for user interaction during Windows setup. Tools included with this feature are Deployment Imaging Servicing and Management (DISM) command line tool, DISM PowerShell cmdlets, DISM API, Windows System Image Manager (Windows SIM), and OSCDIMG. For more information, see Deployment Tools.

User State Migration Tool (USMT): USMT is a scriptable command line tool that IT Professionals can use to migrate user data from a previous Windows installation to a new Windows installation. By using USMT, you can create a customized migration framework that copies the user data you select and excludes any data that does not need to be migrated. Tools included with the feature are ScanState, Loadstate, and USMTUtils command line tools. For more information, see User State Migration Tool.

Volume Activation Management Tool (VAMT): The Volume Activation Management Tool (VAMT) enables IT professionals to automate and centrally manage the activation of Windows, Windows Server, Windows ThinPC, Windows POSReady 7, select add-on product keys, and Office for computers in their organization. VAMT can manage volume activation using retail keys (or single activation keys), multiple activation keys (MAKs), or Windows Key Management Service (KMS) keys. For more information, see Volume Activation Management Tool.

Windows Performance Toolkit (WPT): Windows Performance Toolkit includes tools to record system events and analyze performance data in a graphical user interface. Tools available in this toolkit include Windows Performance Recorder, Windows Performance Analyzer, and Xperf. For more information, see Windows Performance Toolkit.

Windows Assessment Toolkit: Tools to discover and run assessments on a single computer. Assessments are tasks that simulate user activity and examine the state of the computer. Assessments produce metrics for various aspects of the system, and provide recommendations for making improvements. For more information, see Windows Assessment Toolkit.

Windows Assessment Services: Tools to remotely manage settings, computers, images, and assessments in a lab environment where Windows Assessment Services is installed. This application can run on any computer with access to the server that is running Windows Assessment Services. For more information, see Windows Assessment Services.

Windows Preinstallation Environment (Windows PE): Minimal operating system designed to prepare a computer for installation and servicing of Windows. For more information, see Windows PE Technical Reference.

If OS deployment is your thing or in your future then this kit and you are going to be close friends.

KB2715472: Virtual Machine Configuration Resources Online Failure During Migration or Startup

Microsoft has just posted a new KB article for a clustered Hyper-V host scenario:

Assume you have a 4-node Hyper-V cluster with more than 200 virtual machines and 10 physical network adapters installed on each cluster node, and each virtual machine is configured with 2 virtual network adapters.  If you start 50 virtual machines on a single node at the same time, or fail over 50 virtual machines to another node, you will find that virtual machine configuration resources fail to come online after a pending state.

When a virtual machine configuration resource comes online, multiple WMI queries are sent to query the network properties.  The number of queries is determined by the number of virtual machines in the cluster and the physical network adapters on the cluster node.  In the scenario described in the Symptoms section, it takes more than 10 minutes for all virtual machine configuration resources to come online.  However, the default resource deadlock timeout is 5 minutes, so you will see resource online failures due to timeout.

The solution is:

Change the virtual machine configuration resource DeadlockTimeout and PendingTimeout value. The exact value depends on the cluster environment.

Microsoft Is a Virtualisation Leader – Gartner

I saw something about this last week but didn’t pay much attention until this morning.  Gartner has ranked Microsoft as a leader in their Magic Quadrant for x86 Server Virtualization Infrastructure.

Figure 1: Magic Quadrant for x86 Server Virtualization Infrastructure

They are just behind VMware.  Here’s the fun bit: this is based on Windows Server 2008 R2 Hyper-V and System Center “2007” versus vSphere 5.0.  Wait until they get a load of System Center 2012 and Windows Server 2012 Hyper-V.

The cautions that Gartner have for the Microsoft platform are all competition and market-awareness based, rather than technical.  And whereas Microsoft have gone heterogeneous in System Center 2012, Gartner has a caution about the homogeneous virtualisation nature of VMware’s management/cloud vision … customers are concerned about vendor lock-in.

Roll on next year.  By the way, who owns Netscape now?