Windows Server 2012: Help us defluff Microsoft’s ‘cloud’ OS – With The Register

I have been asked to participate in a live chat hosted by The Register (5 November, 2pm GMT / 9am Eastern) to talk about Windows Server 2012’s role as a cloud operating system. Expect lots of Hyper-V, storage, and networking chat.
Say “cloud”, and you think Amazon, Google, VMware – but Microsoft?
Yet Windows Server 2012 brings changes in scalability, management and flexibility that are helping turn the tide on one member of that triumvirate – VMware. This is because customers are starting to switch to Hyper-V, the Windows hypervisor-based virtualisation system which comes bundled with Windows Server 2012, rather than paying extra for VMware.

Join All About Microsoft’s Mary Jo Foley; Reg regular and ITwriting author Tim Anderson; Aidan Finn, MVP and co-author of the Great Big Hyper-V Survey, representing Microsoft; and fellow Reg readers in a Live Chat to talk about what’s in Windows Server 2012 and why exactly it’s starting to turn the tide on market-leader VMware.

We’ll be looking at why enterprises are likely to upgrade now rather than later and working out where Windows Server 2012 fits into a Redmond cloud story that also contains Windows Azure.
Is this a “software data centre”, a “cloud OS” or server marketing BS?
Speak your Brains at 2pm GMT, 9am Eastern, 6am Pacific, on 5 November. Ahead of that, register in the Live Chat window below.
It should be fun. Hopefully I’ll see you there.

Thinking About Cloud, Self-Service, and Hyper-V Dynamic Memory

I’ve been doing some thinking about how to configure Dynamic Memory (DM) in a Windows Server 2012 Hyper-V cloud.  One of the traits of a cloud is self-service.  And that’s why some thought is required.

If you’re not familiar with Dynamic Memory, then have a look at the paper I wrote on the feature as it was in W2008 R2.  Then take a few minutes to update your knowledge of the changes to DM in WS2012. 

I’ve previously stated that you had to be careful with the Startup RAM setting in a cloud/hosting scenario.  The guest OS can only see the high-water mark of allocated RAM since the VM booted up.  For example:

  • A VM is configured with 512 MB Startup RAM and 8 GB Maximum RAM
  • The VM boots up and the guest OS sees 512 MB RAM
  • The DMVSC integration component starts up
  • Pressure for RAM increases in the guest OS and the VM’s allocation grows to, say, 768 MB
  • Pressure reduces and the VM is now using 612 MB

Imagine a customer who owns this VM, logs into it, downloads the SQL installer and runs setup.  Setup will fail because the installer requires 1 GB of RAM for SQL to run.  The guest OS can only see 768 MB – the high-water mark since the VM booted up.  That’s a helpdesk call.  Now scale that out to hundreds or thousands of VMs.  Imagine all the helpdesk calls.  Sure, you’ve saved some money on RAM, but you’ve had to hire more people to close calls.  Trust me … no wiki or knowledge base article will sort it out.  I’ve been on the hosting service provider side of the fence and I’m a hosting customer too.
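
By the way, on WS2012 you can watch this divergence from the host using the Hyper-V PowerShell module.  Here’s a minimal sketch (the VM name is made up):

  # Host's view of a Dynamic Memory VM: current allocation vs. current demand
  Get-VM -Name "TenantVM01" |
      Select-Object Name, DynamicMemoryEnabled, MemoryStartup, MemoryAssigned, MemoryDemand, MemoryMaximum

  # Inside the guest, WMI (like Task Manager) reports the high-water mark since boot, in MB
  (Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory / 1MB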

So my advice for W2008 R2 was to set Startup RAM to 1 GB.  Sure, lots of VMs remain idle in a cloud – you’d be amazed how many might never be logged into, even if there is a monthly invoice for them.  You’ve reduced the helldesk calls but you’re still using up RAM.

Enter Windows Server 2012 Hyper-V (and Hyper-V Server 2012) …

We have a new setting called Minimum RAM, which allows a VM to balloon down below the Startup RAM when the VM is idle.  All that idle RAM returns to the host for reuse.  How about this now:

  • The VM is configured with 1 GB Startup RAM and 8 GB Maximum RAM
  • Minimum RAM is set to 256 MB
  • The VM powers up with 1 GB RAM, goes idle and balloons down to 320 MB RAM
  • After a week, the customer logs into the VM and attempts to install SQL Server.  The RAM high-water mark is 1 GB, so the SQL setup has no problems.

No helldesk calls there!  And it’s done without the loss of performance associated with RAM over-commitment and second-level paging.
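
For completeness, here’s what that configuration looks like with the WS2012 Hyper-V PowerShell module.  A minimal sketch only: the VM name is made up, and these settings are easiest to change while the VM is shut down.

  # Dynamic Memory settings matching the scenario above ("TenantVM01" is hypothetical)
  Set-VMMemory -VMName "TenantVM01" `
      -DynamicMemoryEnabled $true `
      -StartupBytes 1GB `
      -MinimumBytes 256MB `
      -MaximumBytes 8GB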

F5 Hyper-V Network Virtualization Gateway

F5 recently announced that they will be releasing a Network Virtualization Gateway product for Microsoft Network Virtualization.  A common question about Network Virtualisation (aka Software Defined Networking) is: “How do non-migrated client devices continue to find migrated VMs in a cloud with virtualised IP addresses?”

[Diagram: the Network Virtualization Gateway sitting between the cloud VMs and the client site]

As in the diagram, the Network Virtualization Gateway sits between the cloud VMs and the client site.  It does the translation for the clients, using your virtualisation policies to find their desired destination servers.  And that appears to be what F5 has announced.

It is a WS2012 Hyper-V appliance, and it seems to have integration support for System Center 2012 SP1, which is strongly recommended for Network Virtualisation deployment and management.  Expect the F5 solution in Q1 2013, according to RedmondMag.

Network Virtualization is a very important solution:

  • Use it in DR-as-a-Service so you don’t need IP injection or VLAN stretching.  VMs continue to use their original IPs and remain accessible
  • Use it in public cloud to eliminate VLAN complexities and restrictions
  • Abstract networks in enterprise data centres so that VMs can move from one network footprint to another without downtime or need for reconfiguration, kicking down another barrier to Live Migration or vMotion.
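
To give a flavour of what those virtualisation policies look like under the hood, here is a minimal sketch using the WS2012 network virtualisation cmdlets.  Every address, MAC and subnet ID below is made up, and in a real deployment you would let System Center 2012 SP1 VMM maintain these records rather than hand-crafting them:

  # Map a VM's customer address (CA) to the provider address (PA) of the host it runs on,
  # using NVGRE encapsulation
  New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
      -ProviderAddress "192.168.10.21" `
      -VirtualSubnetID 5001 `
      -MACAddress "00155D010105" `
      -Rule "TranslationMethodEncap"

  # Give the virtual subnet a route in its routing domain (GUID and prefix are examples)
  New-NetVirtualizationCustomerRoute -RoutingDomainID "{11111111-2222-3333-4444-000000000001}" `
      -VirtualSubnetID 5001 `
      -DestinationPrefix "10.0.0.0/24" `
      -NextHop "0.0.0.0" `
      -Metric 255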

F5 also has Hyper-V support for the BIG-IP LTM VE:

Local Traffic Manager (LTM) Virtual Edition (VE) takes your Application Delivery Network virtual. You get the agility you need to create a mobile, scalable, and adaptable infrastructure for virtualized applications. And like physical BIG-IP devices, BIG-IP LTM VE is a full proxy between users and application servers, providing a layer of abstraction that secures, optimizes, and load balances application traffic.


Get some info here:

One of the interesting things about the LTM is the support for System Center.  This converges the functions, deployment, and management of your cloud:

  • Monitor the LTM
  • Load balancing deployment, i.e. VIP automation in service templates
  • Orchestration via System Center 2012
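
I haven’t had hands on the F5 bits yet, but assuming the integration surfaces through the standard VMM 2012 SP1 load balancer cmdlets, a quick check from the VMM PowerShell console might look something like this (a sketch only; it simply lists what VMM already knows about):

  # List the load balancers and VIP templates that VMM is aware of
  Get-SCLoadBalancer | Select-Object Name, Manufacturer, Model
  Get-SCLoadBalancerVIPTemplate | Select-Object Name, Description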

Build A WS2012 Hyper-V Cloud Using PowerShell

Fellow MVP David Lachari found a great series of Microsoft blog posts on how to use PowerShell scripts (with examples) to build a Windows Server 2012 Hyper-V cloud.  It’s some impressive stuff.

You might want to check out my series of posts on converged fabrics to understand some of the stuff in this series.
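
If you haven’t read those converged fabric posts, this is the sort of thing the scripts are doing.  A minimal sketch only: the NIC, team, and switch names are made up, and the bandwidth weights are examples.

  # Team two physical NICs (names are hypothetical)
  New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
      -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

  # Create a converged virtual switch with weight-based QoS and no default management vNIC
  New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
      -MinimumBandwidthMode Weight -AllowManagementOS $false

  # Add management OS virtual NICs and reserve bandwidth for each function
  Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
  Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
  Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
  Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20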

Three Quarters of Datacenter Managers Admit To Failing On Performance

There is an alarming story on TechCentral.ie this morning reporting that the majority of IT managers admit that they do not adequately manage the quality of service that their data centres (or clouds) are delivering.

A survey of over 400 European data centre managers found that while 93% of them acknowledged the criticality of optimising application performance across their data centres and networks, the large majority said they were failing to do so.

Sounds like they need to start using System Center Operations Manager to monitor networks, storage, hardware (servers/blades/chassis/etc.), operating systems, applications, code, services, and service level agreements from both a component and a service perspective.

Embracing automation (System Center Orchestrator) and self-service (System Center Service Manager and the entire suite) frees up engineer/operator time in the cloud, where data centres are filled with centralised, broadly available, and measured/controlled/secured infrastructure and services.  It is the responsibility of the data centre, as the “hosting company” of this cloud, to guarantee SLAs.  SLAs cannot be measured or met without adequate systems management.

So here’s my advice if you are setting company strategy for the cloud:

  • If you’re implementing a private cloud then ask your tech staff, IT Manager, CIO (or whatever) what complete and deeply integrated/automated systems management solution they are using.  Nagios is not the correct answer because it meets none of the criteria (completeness, deep integration, automation, etc.).  Make sure you’re going to see quarterly/annual reports appearing automatically in your inbox or on a SharePoint site for you to review.
  • If you’re about to place your services in a public cloud, ask the same question.  And make sure you have visibility of the monitoring for yourself.