Microsoft News Summary – 3 July 2014

After a month of neglect, I have finally caught up with all of my feeds via various sources. Here are the latest bits of news, mixed up with other Microsoft happenings from the last month.

Microsoft News Summary – 2 July 2014

It’s been a long time since I posted one of these! I’ve just trawled my feeds for interesting articles and come up with the following. I’ll be checking news and Twitter for more.

Microsoft News Summary – 21 May 2014

I took a break from these posts last week while I was at TechEd, and then had work to catch up on this week. Let’s get back to rockin’. There is a distinct tendency towards cloud and automation in the news of the last week. That should be no surprise.

SCVMM – Setting Up Remote SQL Database Is Hanging Or Failing

If you are deploying lots of System Center products, then it’s not uncommon to use a single SQL server/cluster, with one instance per component (Service Manager is a whole other ball of wax, but I stay away from that game). This means setting up a remote SQL database for VMM. It’s no big deal, and it increases scalability for the truly large deployments. It also makes clustering VMM a realistic possibility – and that’s a must-do if you’re creating a cloud.

[Image: the Database Configuration screen in VMM setup]

At the above screen, the connection to the remote server (which allows you to select an instance) can freeze or fail if you have not configured the Windows Firewall on the remote SQL server. Configure the firewall, and away you go.

Note: the lazy and less secure method is to open the firewall completely. Don’t do that if you can help it.
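
For example, here’s a minimal sketch using the WS2012 firewall cmdlets on the remote SQL server.  It assumes a default instance listening on TCP 1433, plus UDP 1434 for the SQL Server Browser service (named instances will need their specific ports):

  # Run on the remote SQL server.  Ports assume a default instance;
  # adjust for named instances or non-default ports.
  New-NetFirewallRule -DisplayName "SQL Server" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
  New-NetFirewallRule -DisplayName "SQL Browser" -Direction Inbound -Protocol UDP -LocalPort 1434 -Action Allow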

How Do I Plan And Size A Hyper-V Deployment – MAP 9.0

You measure and assess.  And Microsoft gives you a tool to do that called MAP (the Microsoft Assessment and Planning Toolkit).  They’ve been giving us this tool for many years, and it’s now on version 9.0 (just released).

When planning a traditional Hyper-V conversion (not a new bare-metal cloud), you can run MAP to identify the physical or virtual (VMware) servers that you want to convert to Hyper-V, measure their resource utilization, and enter potential Hyper-V host specifications; MAP will then produce reports that size your environment.  It’s something you kick off and let measure for a period you choose (maybe a week) while you do something else, and then you run the reports.

There’s some new stuff in MAP 9.0:

    • New Server and Cloud Enrollment scenario helps to simplify adoption: Server and Cloud Enrollment (SCE) is a new offering under the Microsoft Enterprise Agreement that enables subscribers to standardize broadly on one or more Microsoft Server and Cloud technologies.  The MAP Toolkit 9.0 features an assessment scenario to identify and inventory SCE supported products within an enterprise and help streamline enrollment.
    • New Remote Desktop Services Licensing Usage Tracking scenario creates a single view for enterprise wide licensing: With an increase in enterprises deploying Remote Desktop Services (RDS) across wider channels, RDS license management has become a focus point for organizations.  With the new RDS Licensing scenario, the MAP Toolkit rolls up license information enterprise-wide into a single report, providing a simple alternative for assessing your RDS licensing position.
    • Support for software inventory via Software ID tags now available: As part of the Microsoft effort to support ISO 19770-2, the MAP Toolkit now supports inventory of Microsoft products by Software ID (SWID) tag.  SWID enhanced reports will provide greater accuracy and assist large, complex environments to better manage their software compliance efforts by simplifying the software identification process and lowering the cost of managing software assets.
    • Improved Usage Tracking data collection for SQL Server Usage Tracking scenario: As part of our ongoing improvement initiatives, Usage Tracking for SQL Server 2012 has been enhanced to use User Access Logging (UAL).  UAL is a standard protocol in Windows Server 2012 that collects User Access information in near real time and stores the information in a local database, eliminating the need for log parsing to perform Usage Tracking assessments.  UAL vastly improves the speed and helps to eliminate long lead times for environment preparation associated with running Usage Tracking assessments.

If you want to plan and size desktop deployment, Office deployment, RDS, Azure, Hyper-V, SQL Server, and more, then you need to be checking out the FREE (yes FREE!!!!) MAP 9.0.

Best Practices for Virtualizing & Managing SQL Server On Hyper-V

This Microsoft-written guide provides high-level best practices and considerations for deploying and managing SQL Server 2012 on a Microsoft virtualization infrastructure. The recommendations and guidance in this document aim to:

  • Complement the architectural design of an organization’s specific environment.
  • Help organizations take advantage of the key platform features in SQL Server 2012 to deliver the highest levels of performance and availability.

There are lots of tips, requirements, and recommendations, such as this one for SQL Server VMs that are replicated using Hyper-V Replica.  Yes, SQL Server is supported with Hyper-V Replica – ya hear that, Exchange!?!?!

[Image: the guide’s Hyper-V Replica recommendation for SQL Server VMs]

The setting in question (shown above) can be enabled when modifying the replication of a VM using PowerShell.  It:

Determines whether all virtual hard disks selected for replication are replicated to the same point in time. This is useful if the virtual machine runs an application that saves data across virtual hard disks (for example, one virtual hard disk dedicated for application data, and another virtual hard disk dedicated for application log files).

Long story short: it maintains consistency of an application across disks.
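
Here’s a minimal sketch of turning that on with the Hyper-V PowerShell module; the VM name is a hypothetical example, and the VM is assumed to already be replicating:

  # Replicate all selected VHDs to the same point in time, preserving write
  # order across disks (e.g. SQL data on one VHD, logs on another).
  # "SQL01" is a hypothetical VM name; run this on the primary host.
  Set-VMReplication -VMName "SQL01" -EnableWriteOrderPreservationAcrossDisks $true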

Azure Services For Windows Server

Microsoft likes to talk about how they are the only company offering both public (Azure) and private (Windows Server and System Center) cloud solutions.  What about hosting partners?  Can they implement Azure?  In the immortal words of Vicky Pollard: no but yeah.

You can’t buy Azure appliances.  They were supposed to come via the likes of Fujitsu and Dell but they never emerged.  But there is another way.  You can build a public cloud based on Azure Services for Windows Server, formerly codenamed Katal.  A lot of people actually prefer to refer to ASWS as Katal.

Uh oh!  Is this yet another incomplete hosting pack from Microsoft that is forgotten almost as soon as it is released?  The answer: no.  This is something very important to Microsoft, as you can tell by the strategic reuse of the Azure name.  As for the incomplete question: this is a pretty (not 100%) complete solution.

What do you get?  Well, you get a solution that uses VMM and the Service Provider Foundation (SPF). This allows you to build a multi-tenant cloud.  Sticking Katal in front of SPF gives you tenant (customer) and management (cloud admin) portals.  You can build service plans for web hosting (IIS 8.0), database (MySQL and SQL Server) hosting, and IaaS (VM hosting).  Those plans are then made available to tenants who can register via the externally facing tenant portal (and API – both hopefully load balanced).

The tenant experience is amazingly similar to the real Azure.  This is indicative of how important this product is to Microsoft, and how it should be treated differently to past hosting “solutions”.  I’ve paid near no attention to those past offerings – and I used Hyper-V and System Center in hosting!  But I’m paying attention to this release.

Importantly for hosting companies, you can rebrand Katal to suit the company.  The solution is mostly complete.  It comes with the modular source code.  You can add on extra functionality that hosting companies usually build for themselves such as:

  • DNS reselling – there’s a built-in pack for reselling GoDaddy
  • Tenant onboarding – maybe you want to capture and validate payment data before completing the new customer registration
  • Billing – you’ll need to work with a partner or develop your own add-on for automated billing

At first you might question the lack of these features.  However, most hosting companies already have these services in place and Katal will have to fit in around them.

Be careful with customization; do it in a documented and modular way so that future upgrades from Microsoft don’t break your cloud (always test before upgrades).

The Katal portals do not integrate with the real Azure.

Katal is aimed at the hosting community but I think the enterprise should pay attention too.  Katal is a superb self-service portal, providing a very user-friendly essential element to the cloud recipe.

If you want to learn more then:

Deploying Application Virtual Machines Just Got A Whole Lot Quicker

Several years ago, I first heard Mark Minasi talk about accidental DBAs.  The term refers to server administrators/engineers who find that the vast majority of their Windows Servers either have or use a SQL Server installation.  We were mostly still in a physical world back then, with virtualisation just in its infancy in the industry (as a whole).  Things have moved on since then.  Anyone deploying servers now should be looking at the virtual option first (be it some open cloud, Xen, VMware, or Hyper-V).  Virtualisation seems to encourage server sprawl and that means lots more servers.

My last experience as a hands-on “own it” engineer was in hosting.  Here’s how a deployment looked:

  1. Time to deploy a VM: about 30 seconds in a wizard, then do something else while the files copied
  2. Customize the OS: about 1-10 minutes
  3. Install SQL Server: 30-45 minutes (longer if SQL 2008 R2 Reporting was required)
  4. Install the SQL Server service pack (if not already slipstreamed): 30-45 minutes
  5. Install the SQL Server service pack cumulative update (if not already slipstreamed): 30-45 minutes

In my experience, I could lose the guts of a day installing SQL Server if I didn’t have a slipstreamed package, while the VM deployment itself took very little time.
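
SQL Server 2012 setup can pull the service pack/CU in at install time (the Product Update feature), which is what kills those extra passes.  A minimal sketch, assuming the update packages were downloaded to a local folder (all paths and names here are hypothetical examples):

  # Install SQL Server with the SP/CU applied during setup instead of after.
  # .\Updates is a hypothetical folder holding the downloaded SP/CU packages.
  .\setup.exe /QS /ACTION=Install /IACCEPTSQLSERVERLICENSETERMS `
      /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER `
      /SQLSYSADMINACCOUNTS="DEMO\SQL Admins" `
      /UpdateEnabled=TRUE /UpdateSource=".\Updates"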

“Why, in a cloud, shouldn’t the user install SQL Server?”

LMAO!  Clouds are like hosting, and I left the hosting business because 80% of the customers made me want to scream at them.  They were clueless: e.g. the guy who opened a helpdesk call to get a DR replication application written for his new SaaS business (selling DR).

Not that all of them were like that.  I learned from a few and some were doing very interesting and innovative things.

When it comes to things like SQL Server, the infrastructure people (or system) must do the installation.  But we want to minimize that time.  SQL Server 2012 SP1 CU2 has expanded support for Sysprep.  This means that you can optimise the deployment of virtual machines with SQL Server (including service pack and related cumulative update).  For more information you can see:
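
To give you an idea, here’s a rough sketch of the two-phase Sysprep flow; the instance names, accounts, and paths are hypothetical examples.  You prepare the image in the template VM before sysprep/capture, then complete the installation in each VM deployed from that template:

  # Phase 1 - in the template VM, before sysprep/capture.  From SP1 CU2,
  # PrepareImage should also be able to pull in updates (Product Update).
  .\setup.exe /QS /ACTION=PrepareImage /FEATURES=SQLENGINE `
      /INSTANCEID=MSSQLSERVER /UpdateEnabled=TRUE /UpdateSource=".\Updates"

  # Phase 2 - in each deployed VM, to finish the installation.
  .\setup.exe /QS /ACTION=CompleteImage /IACCEPTSQLSERVERLICENSETERMS `
      /INSTANCENAME=MSSQLSERVER /INSTANCEID=MSSQLSERVER `
      /SQLSVCACCOUNT="DEMO\svc-sql" /SQLSVCPASSWORD="<password>" `
      /SQLSYSADMINACCOUNTS="DEMO\SQL Admins"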

Survey On SQL Usage For System Center

I know … Surveys! … Some others and I are running The Great Big Hyper-V Survey of 2012 (still open), and now there’s another one:

System Center MVP Paul Keely (one of the authors of Mastering System Center 2012 Operations Manager) is running a survey on how people are using SQL Server in their System Center implementations.  The purpose is to gather data to enable him to write a white paper on the subject – Paul is a very smart guy on the subject of System Center in the enterprise.

Take a couple of minutes, grab a tea/coffee/whatever, and answer a few questions in the survey.

Rough Guide To Setting Up A Scale-Out File Server

You’ll find much more detailed posts on the topic of creating a continuously available, scalable, transparent failover application file server cluster by Tamer Sherif Mahmoud and Jose Barreto, both of Microsoft.  But I thought I’d do something rough to give you an overview of what’s going on.

Networking

First, let’s deal with the host network configuration.  The diagram below shows 2 nodes in the SOFS cluster, and this could scale up to 8 nodes (think 8 SAN controllers!).  There are 4 NICs per node:

  • 2 for the LAN, to allow SMB 3.0 clients (Hyper-V or SQL Server) to access the SOFS shares.  Having 2 NICs enables SMB Multichannel over both NICs.  It is best that both NICs are teamed for quicker failover (see the teaming sketch after the diagram).
  • 2 cluster heartbeat NICs.  Having 2 gives fault tolerance, and also enables SMB Multichannel for CSV redirected I/O.

[Image: network diagram – 2-node SOFS cluster with LAN and heartbeat NICs]
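
A minimal sketch of creating the LAN team on each node with the WS2012 NIC teaming cmdlets (the team and NIC names are hypothetical examples):

  # Team the two LAN-facing NICs for quicker failover; SMB Multichannel
  # still works across the team.
  New-NetLbfoTeam -Name "LAN-Team" -TeamMembers "LAN1","LAN2" `
      -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts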

Storage

A WS2012 cluster supports the following storage:

  • SAS
  • iSCSI
  • Fibre Channel
  • JBOD with SAS Expander/PCI RAID

If you had SAS, iSCSI or Fibre Channel SANs then I’d ask why you’re bothering to create a SOFS for production; you’d only be adding another layer and more management.  Just connect the Hyper-V hosts or SQL servers directly to the SAN using the appropriate HBAs.

However, you might be like me and want to learn this stuff or demo it, and all you have is iSCSI (either a software iSCSI target like the WS2012 iSCSI target or an HP VSA like mine at work).  In that case, I have a pair of NICs in each of my file server cluster nodes, connected to the iSCSI network and using MPIO.

[Image: storage diagram – SOFS cluster nodes connected to iSCSI storage with MPIO]
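
Here’s a rough sketch of that iSCSI/MPIO plumbing on each node (the portal address is a hypothetical example, and a reboot may be required after adding MPIO):

  # Add MPIO and let it claim iSCSI devices.
  Install-WindowsFeature Multipath-IO
  Enable-MSDSMAutomaticClaim -BusType iSCSI

  # Connect to the iSCSI target; 10.0.1.10 is a hypothetical portal address.
  New-IscsiTargetPortal -TargetPortalAddress "10.0.1.10"
  Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true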

If you do deploy SOFS in the future, I’m guessing (we don’t know yet because SOFS is so new) that you’ll most likely do it with a CiB (cluster-in-a-box) solution with everything pre-hard-wired in a chassis, using (probably) a wizard to create mirrored storage spaces from the JBOD and configure the cluster/SOFS role/shares.

Note that in my 2 server example, I create three LUNs in the SAN and zone them for the 2 nodes in the SOFS cluster:

  1. Witness disk for quorum (512 MB)
  2. Disk for CSV1
  3. Disk for CSV2

Some have tried to be clever, creating lots of little LUNs on iSCSI to try to simulate JBOD and Storage Spaces.  This is not supported.

Create The Cluster

Prereqs:

  • Windows Server 2012 is installed on both nodes.  Both machines are named and joined to the AD domain.
  • In Network Connections, rename the networks according to role (as in the diagrams).  This makes things easier to track and troubleshoot.
  • All IP addresses are assigned.
  • NIC1 and NIC2 are top of the NIC binding order.  Any iSCSI NICs are bottom of the binding order.
  • Format the disks, ensuring that you label them correctly as CSV1, CSV2, and Witness (matching the labels in your SAN if you are using one) – see the formatting sketch below.
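
Bringing a LUN online and formatting it might look like this (the disk number is a hypothetical example – confirm yours with Get-Disk first):

  # Online, initialize, and format one of the LUNs with a matching label.
  Set-Disk -Number 2 -IsOffline $false
  Initialize-Disk -Number 2 -PartitionStyle GPT
  New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
      Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1" -Confirm:$false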

Create the cluster (a PowerShell sketch follows the steps):

  1. Enable Failover Clustering in Server Manager
  2. Also add the File Server role service in Server Manager (under File And Storage Services – File Services)
  3. Validate the configuration using the wizard.  Repeat until you remove all issues that fail the test.  Try to resolve any warnings.
  4. Create the cluster using the wizard – do not add the disks at this stage.  Call the cluster something that refers to the cluster, not the SOFS. The cluster is not the SOFS; the cluster will host the SOFS role.
  5. Rename the cluster networks, using the NIC names (which should have already been renamed according to roles).
  6. Add the disk (in Storage in FCM) for the witness disk.  Remember to edit the properties of the disk and rename it from the anonymous default name to Witness in FCM Storage.
  7. Reconfigure the cluster to use the Witness disk for quorum if you have an even number of nodes in the SOFS cluster.
  8. Add CSV1 to the cluster.  In FCM Storage, convert it into a CSV and rename it to CSV1.
  9. Repeat step 8 for CSV2.

Note: Hyper-V does not support SMB 3.0 loopback.  In other words, the Hyper-V hosts cannot be a file server for their own VMs.
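
Roughly the same steps with the failover clustering cmdlets.  The node, cluster, and IP values are hypothetical examples, and the disk resource names assume you renamed them as in steps 6-8:

  # On each node: add failover clustering and the File Server role service.
  Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeManagementTools

  # Validate, then create the cluster without storage.
  Test-Cluster -Node "Node1","Node2"
  New-Cluster -Name "FSCluster1" -Node "Node1","Node2" -NoStorage -StaticAddress "10.0.0.50"

  # Add the disks, then point quorum at the witness (even node count).
  Get-ClusterAvailableDisk | Add-ClusterDisk
  Set-ClusterQuorum -NodeAndDiskMajority "Witness"

  # Convert the data disks (renamed CSV1/CSV2) into Cluster Shared Volumes.
  Add-ClusterSharedVolume -Name "CSV1"
  Add-ClusterSharedVolume -Name "CSV2"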

Create the SOFS

  1. In FCM, add a new clustered role.  Choose File Server.
  2. Then choose File Server For Scale-Out Application Data; the other option is the traditional active/passive clustered file server.
  3. You will now create a Client Access Point or CAP.  It requires only a name.  This is the name of your “file server”.  Note that the SOFS uses the IPs of the cluster nodes for SMB 3.0 traffic rather than CAP virtual IP addresses.

That’s it.  You now have an SOFS.  A clone of the SOFS is created across all of the nodes in the cluster, mastered by the owner of the SOFS role in the cluster.  You just need some file shares to store VMs or SQL databases.
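
Or, as a one-cmdlet sketch (the CAP name is a hypothetical example):

  # Create the SOFS role and its Client Access Point.
  Add-ClusterScaleOutFileServerRole -Name "Demo-SOFS1"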

Create File Shares

Your file shares will be stored on CSVs, making them active/active across all nodes in the SOFS cluster.  We don’t have best practices yet, but I’m leaning towards 1 share per CSV.  But that might change if I have lots of clusters/servers storing VMs/databases on a single SOFS.  Each share will need permissions appropriate for their clients (the servers storing/using data on the SOFS).

Note: place any Hyper-V hosts into security groups.  For example, if I had a Hyper-V cluster storing VMs on the SOFS, I’d place all nodes in a single security group, e.g. HV-ClusterGroup1.  That’ll make share/folder permissions stuff easier/quicker to manage.

  1. Right-click on the SOFS role and click Add Shared Folder
  2. Choose SMB Share – Server Applications as the share profile
  3. Place the first share on CSV1
  4. Name the first share as CSV1
  5. Permit the appropriate servers/administrators to have full control if this share will be used for Hyper-V.  If you’re using it for storing SQL files, then give the SQL service account(s) full control.
  6. Complete the wizard, and repeat for CSV2 (a PowerShell sketch of these steps follows).
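
A rough PowerShell equivalent of those wizard steps, run on the node that owns CSV1 (the domain, group, and folder names are hypothetical examples):

  # Create a folder on CSV1 and share it for application data, scoped to the CAP.
  New-Item -Path "C:\ClusterStorage\Volume1\Shares\CSV1" -ItemType Directory
  New-SmbShare -Name "CSV1" -Path "C:\ClusterStorage\Volume1\Shares\CSV1" `
      -ScopeName "Demo-SOFS1" -FullAccess "DEMO\HV-ClusterGroup1"

  # Mirror the share permissions onto the folder's NTFS ACL.
  Set-SmbPathAcl -ShareName "CSV1" -ScopeName "Demo-SOFS1"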

You can view/manage the shares via Server Manager under File Server.  If my SOFS CAP was called Demo-SOFS1 then I could browse to \\Demo-SOFS1\CSV1 and \\Demo-SOFS1\CSV2 in Windows Explorer.  If my permissions are correct, then I can start storing VM files there instead of using a SAN, or I could store SQL database/log files there.

As I said, it’s a rough guide, but it’s enough to give you an overview.  Have a read of the above-linked posts to see much more detail.  Also check out my notes from the Continuously Available File Server – Under The Hood TechEd session to learn how a SOFS works.