Beware When Using Descriptive Names For VMM

Over the years I’ve seen lots of computer naming standards.  Some have used Simpsons or Tolkien character names, football player surnames, etc.  That’s mainly because of laziness, but sometimes it’s to do with security-by-obscurity because “hackers then can’t figure the network out”.  Ooooooo-k then!  No need for defensive comments on that topic.

On the other extreme I’ve seen the likes of Dub-Lab-DC-1.  It couldn’t get much more descriptive without including the spec of the server.  You’ll need to be careful if creating a VMM server in this kind of network.  There’s a small, but important, note in the TechNet article that describes the system requirements of System Center 2012 Virtual Machine Manager (VMM) with/without Service Pack 1 (SP1).

In addition to the normal rule of the computer name not exceeding 15 characters:

The computer name cannot contain the character string of –SCVMM-, but you can use the character string of SCVMM in the computer name. For example, the computer name can be SEASCVMMLAB, but the computer name cannot be SEA-SCVMM-LAB.

In other words:

  • Dub-Lab-SCVMM-1 is BAD.
  • Dub-Lab-SCVMM1 is good.  A single hyphen can be the difference between a successful day and a world of hurt.
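If you’re scripting your deployments, the rule is trivial to validate up front.  Here’s a quick sketch in Python (my own helper, nothing shipped with VMM) that checks a candidate name against both the 15-character limit and the -SCVMM- restriction:

```python
def is_valid_vmm_computer_name(name: str) -> bool:
    """Check a candidate VMM server computer name against the documented
    rules: no more than 15 characters, and the substring '-SCVMM-' is
    forbidden (checked case-insensitively, since computer names are
    case-insensitive)."""
    if not name or len(name) > 15:
        return False
    return "-SCVMM-" not in name.upper()

# Dub-Lab-SCVMM-1 fails; Dub-Lab-SCVMM1 and SEASCVMMLAB pass.
```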

Interestingly, neither Bing nor Google returns any results for -SCVMM- for me. 

Azure Services For Windows Server

Microsoft likes to talk about how they are the only company offering both public (Azure) and private (Windows Server and System Center) cloud solutions.  What about hosting partners?  Can they implement Azure?  In the immortal words of Vicky Pollard: no but yeah.

You can’t buy Azure appliances.  They were supposed to come via the likes of Fujitsu and Dell but they never emerged.  But there is another way.  You can build a public cloud based on Azure Services For Windows Server, formerly codenamed Katal.  A lot of people actually prefer to refer to ASWS as Katal.

Uh oh!  Is this yet another incomplete hosting pack from Microsoft that is forgotten almost as soon as it is released?  The answer: no.  This is something very important to Microsoft, as you can tell by the strategic reuse of the Azure name.  As for the incomplete question: this is a pretty (not 100%) complete solution.

What do you get?  Well, you get a solution that uses VMM and the Service Provider Foundation (SPF). This allows you to build a multi-tenant cloud.  Sticking Katal in front of SPF gives you tenant (customer) and management (cloud admin) portals.  You can build service plans for web hosting (IIS 8.0), database (MySQL and SQL Server) hosting, and IaaS (VM hosting).  Those plans are then made available to tenants who can register via the externally facing tenant portal (and API – both hopefully load balanced).

The tenant experience is amazingly similar to the real Azure.  This is indicative of how important this product is to Microsoft, and how it should be treated differently to past hosting “solutions”.  I’ve paid near no attention to those past offerings – and I used Hyper-V and System Center in hosting!  But I’m paying attention to this release.

Importantly for hosting companies, you can rebrand Katal to suit the company.  The solution is mostly complete.  It comes with the modular source code.  You can add on extra functionality that hosting companies usually build for themselves such as:

  • DNS reselling – there’s a built-in pack for reselling GoDaddy
  • Tenant onboarding – maybe you want to capture and validate payment data before completing the new customer registration
  • Billing – you’ll need to work with a partner or develop your own add-on for automated billing

At first you might question the lack of these features.  However, most hosting companies already have these services in place and Katal will have to fit in around them.

Be careful with customization; do it in a documented and modular way so that future upgrades from Microsoft don’t break your cloud (always test before upgrades).

The Katal portals do not integrate with the real Azure.

Katal is aimed at the hosting community but I think the enterprise should pay attention too.  Katal is a superb self-service portal, providing a very user-friendly essential element to the cloud recipe.

Managing Apple iOS Devices From Windows Intune

This was the most exciting thing I saw at MMS 2012.  I knew what System Center was capable of, but I wasn’t expecting to see iPhones and iPads (as well as Android, etc) being managed by Microsoft from the cloud, using the same solution for managing PCs.

This week I’ve been setting up a demo environment in Windows Intune “Wave D” (thanks to my colleagues at work for the help in setting up the “partner”).  It’s one thing to manage PCs, but you really score points with customers when you can show a Microsoft product managing the rivals.  I use Ubuntu as my guest OS when showing off Hyper-V.  I want to show off an iPad Mini being managed by Windows Intune.

The process is “documented” on TechNet, with links from the Windows Intune console.  I use “documented” very loosely.  The information is incomplete, in my opinion.  So here are my notes:

A step I missed in this documentation is choosing your mobile device management solution.  I chose the Windows Intune option (instead of using System Center with Windows Intune), which is found under Tasks in Administration > Mobile Device Management.

The Push Notification Certificate

The first requirement for managing iOS devices is that you have an Apple ID for your company.  There is no cost to this.  This contrasts with the €75/year cost of signing up for a Windows Phone developer account for managing Windows Phone 8.

Now open the Windows Intune admin console and browse to Administration > Mobile Device Management > iOS > Upload an APNs Certificate.  Confusion point: there is more to this than a simple upload.  Here’s how.  Click Download The APNs Certificate Request.  This downloads a .CSR file certificate request.

Now you browse to the Apple Push Certificates Portal.  Here is where you upload the .CSR file that you just downloaded from Windows Intune.  If, like me, you’re using IE, you will likely be prompted about a .JSON file.  Ignore that.  Refresh the page (I muddled about here trying to figure out the JSON thing) and you should end up with something like the below:

image

Click Download to get a file called MDM_ Microsoft Corporation_Certificate.PEM; this is the certificate that you will be uploading to Windows Intune.  It will uniquely identify your organisation to managed iOS devices (or something like that). 

Return to Windows Intune where you downloaded the .CSR file, and click Upload The APNs Certificate.  Browse in the dialog and select the .PEM file you just got from Apple.  You also need to supply the Apple ID name that was used in the Apple Push Certificates Portal to create the PEM file.

image

That all sounds messy.  I agree.  But you only have to do it once in your portal … every year.  Check the previous Apple screenshot and look at the expiry date for the APNs certificate.  It only lasts for 1 year.  Set a recurring reminder in your (and your colleagues’) calendars to repeat this process in advance of the expiration (you don’t want to be digging up email addresses and passwords).  And document which accounts/passwords are being used.  Please use a strong passphrase for your Apple ID.
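You could even script the reminder arithmetic so nobody has to remember it.  A rough Python sketch (the expiry date below is just an example, not from any real certificate):

```python
from datetime import date, timedelta

def renewal_reminder(expiry: date, lead_days: int = 30) -> date:
    """Return the date on which to start renewing the APNs certificate:
    a safety margin (default 30 days) before it expires, so you are not
    scrambling for Apple ID credentials at the last minute."""
    return expiry - timedelta(days=lead_days)

# Example: a certificate expiring on 7 February 2014
print(renewal_reminder(date(2014, 2, 7)))  # 2014-01-08
```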

Create User Accounts

You create user accounts in the Windows Intune Accounts site.  You can set up AD synchronisation instead of manually creating your users.  A warning: management of the devices will not work unless you add the users to the Windows Intune user group in the Accounts site.  Open the user, click Group, and check the Windows Intune box:

image

Enroll the Device

This is a crude mechanism.  You need to supply the iOS device user (probably via email) with the following information:

At this point there’s a whole bunch of crap that happens from the Apple side.  You have to OK lots of things to enable the device to volunteer to be managed: Install, Install, Install Now, Install, and then Done.  A Company Portal “app” (it’s actually a web shortcut that opens the mobile site in Safari) is installed on the iOS device.  Now the user can open the Company Portal, log in using their Intune account, and install company supplied apps.  Here’s a screenshot of a user browsing a serious business app on an iPad Mini in the Windows Intune catalog.

image

You can add apps from the Apple App Store (just links which open the App Store and allow the user to install apps as always) or you can develop in-house apps and side-load them directly from Windows Intune, bypassing the app store completely.  Good news: you use the exact same tool for managing apps on all types of devices, including PCs.  And it’s pretty simple to use too.

The Management Profile

Part of the configuration on the device is setting up the Management Profile.  You can find this under Settings > General > Profile – Management Profile.

image

You can expand More Details to see more information (might be useful for troubleshooting certificates).  You can remove management of the device by Intune (“returning” the device to the user) by clicking Remove.  It takes a few seconds to remove the profile.  Management Profile should disappear from Profile after this, and Windows Intune then has nothing more to do with the device.

Device Not Appearing In the Console

The “documentation” says:

To enable iOS devices to receive notifications using a wireless connection, make sure that port 5223 is open.

There is no mention of whether this is an inbound or outbound requirement, or whether it is TCP (probably) and/or UDP.  You could also read it as a firewall requirement on the actual iOS device itself (which it isn’t).  I had the devices on the lab network at work and, while I could browse the Company Portal and pull down apps from it, the devices refused to appear in the console.
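To take some of the guesswork out of the port 5223 question, a quick outbound TCP connectivity test from the network in question will tell you whether your firewall is the culprit.  A Python sketch (my own test, assuming the requirement is outbound TCP, which is how it behaved for me; the hostname in the comment is illustrative):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt an outbound TCP connection to host:port.
    Returns True only if the TCP handshake completes within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. run can_connect("<an-apple-push-host>", 5223) from the Wi-Fi network
# the iOS devices sit on, and compare with the result over 3G.
```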

Want to check if it’s working OK?  Log into the Company Portal on the device in question, and browse to Support.  If the name of the device appears there then comms seem to be OK and the device is registered … at least in my experience – I have no idea if that’s a valid indicator but it works for me … so far.

image

On the Wi-Fi in the company lab, the devices refused to register.  I put them onto 3G and they registered pretty quickly, and you can see lots of information for each device.

Reinstalling The Management Profile

I decided to remove the management profile and try to re-add the iOS device to Windows Intune.  I could not get the device to re-register to Windows Intune using the above process.  I believe the correct procedure is to log into the Company Portal, hit Support, click Change, and click Add Another Device.  This has worked for me a couple of times.

Policy

You can create Mobile Device Security Policy objects in the admin console.  There are some generic and some iOS specific settings:

image

image

image

Summary

The certificate stuff is a bit fiddly but you’ll only have to do that once per company, per year.  I can’t be sure, but I guess that is an Apple restriction on the validity of the APN certificate.  After that, it’s a pretty simple process.

Enrolment of these consumer style devices will always (with any product) be user driven.  You can’t push management onto a consumer (or BYOD) device.  If necessary, you could do the sneaker-net thing.  I can envision helpdesks doing a lot of that for BYOD management.

Some of the Apple folks in the office were very impressed with this solution.  Centralised management of mobile devices (particularly iOS) is a hot topic right now.  Windows Intune does a nice job.  Does it have all the bells and whistles of a Zenprise?  No, but Intune has a nice price at around €4.89/user/month (with 5 devices/user).  Throw in Software Assurance (€8.98/user/month) and those Windows PCs get SA rights too, including upgrade rights to Windows 8 Enterprise.

Thumbs up!

System Center Global Service Monitor Availability

Global Service Monitor for System Center 2012 SP1 Operations Manager is now available.  However, it’s not quite as simple as your normal feature in OpsMgr, because there is a cloud service involved.

Version 1.0.1800.0 of the System Center Global Service Monitor Management Packs can be downloaded and installed freely.  Then you are going to need an account for Global Service Monitor.  On this, Microsoft says:

You can sign up for a free trial account and use Global Service Monitor for free for up to 90 days. Beyond the 90-day free trial period, System Center Global Service Monitor is only available to customers with active Microsoft Software Assurance coverage for their System Center 2012 server management licenses.

This Software Assurance benefit will be available in March 2013 in supporting countries.  At the moment, these are Australia, Austria, Brazil, Canada, France, Germany, Ireland, Italy, Japan, Mexico, Netherlands, Singapore, Spain, Switzerland, United Kingdom, and the United States.

So, if you want to use GSM long term, you will need to be (a) in one of the participating countries, and (b) have current Software Assurance on your System Center licensing.  Beyond that, there is no additional cost that I can see.

Monitor Web Site Health From Around The World Using System Center 2012 SP1

When I worked in the VM hosting business, we offered monitoring via System Center Operations Manager as a part of the service.  It was great for us as a service provider because we were aware of everything that was happening.  One of the things I tried to do for customers was website monitoring, using an agent to fire client perspective tests at the customers’ website(s) to see if they were responsive.  On more than one occasion, a customer would upload new code, assume it was OK, and OpsMgr would see the code failure in the form of an offline website.  The customer (and us) got the alerts and they could quickly undo the change.

When you work in hosting, you learn what a mess the Internet is.  Consider this example.  I worked for a hosting company in Dublin (that’s on the east coast of Ireland).  Our helpdesk got a bunch of calls from customers saying that the services we were providing to them were “offline”.  That sent the networking engineers into a bit of a tizzy – oh, did I mention this was happening as 99% of the staff were leaving for our Christmas party?  Nice timing!  The strange thing was that not all customers were having a problem.  That suggested a routing issue and the networking folks started making calls.  In the end it turned out that only customers of a certain ISP were affected.  Their route sent packets to a router in Dublin, possibly only a kilometre away from our data centre (almost all of the major data centres, including the Dublin “Azure” one, are on one glow-in-the-dark road in south-west Dublin).  From there, packets were routed to Germany.  They bounced around there and, normally, came back to Dublin to our data centre.  Something went wrong in Germany and packets went in a loop before timing out.  From the customers’ perspective, we were offline.  A simple traceroute test would have highlighted the issue but most (not all) hosting customers are … hmm … how do I put this? … special.

image

Hosting (or as it’s called now, the public cloud) customers typically sell services globally.  They need their product available everywhere.  That means you have routes all over the globe to contend with.  Take the above example, and turn it into a rat’s nest of ISPs and peering all over the world.  Those globally available web services are typically not just simple websites placed in a single site, either.  Any service needing a responsive user experience must use content distribution.  That throws another variable into the mix.  Testing the availability of the website from a single location will not do.  You need to test globally.

Using an older style tool, including client perspective website monitoring in OpsMgr 2007, you could do this by renting VMs in globally located data centres and installing agents on them.  The problems with this are:

  • Increased complexity.
  • A reliance on those global data centres – would you rely on the Virginia Amazon data centre that’s made lots of headlines in recent months?  What about Honest Jose’s Hosting in Argentina?
  • Renting VMs adds a cost to the hosting company that must be passed on to the customer, and every cent added to the per-month charges makes the cloud service less competitive.

System Center 2012 SP1 Operations Manager includes a new feature called Global Service Monitor (GSM).  It’s an Azure-based service that will perform the synthetic web transactions of client perspective monitoring for you, from locations around the world.  This is an invaluable feature for any public facing service, such as a public cloud (IaaS, web, or SaaS).  The hosting/service provider can see how available (uptime and performance) their service is to customers worldwide, whether the problem is internal infrastructure or an ISP routing related issue.
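A synthetic web transaction is, at its simplest, a timed HTTP request with a pass/fail verdict.  GSM does far more than this, but a stripped-down Python sketch illustrates the principle of a client-perspective probe:

```python
import time
import urllib.request

def probe(url: str, timeout: float = 10.0):
    """Fetch a URL and return (ok, status, elapsed_seconds).
    'ok' is True for a successful (2xx/3xx) response; a timeout or
    connection failure counts as an availability failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception:
        return (False, None, time.monotonic() - start)
    return (200 <= status < 400, status, time.monotonic() - start)
```

Run that from agents in several geographies and compare the elapsed times, and you have the crude essence of what GSM delivers as a managed Azure service.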

The most difficult helpdesk ticket is the “slow” website.  Using traditional tools you can do only so much.  The warehouse in OpsMgr can rule out disk, memory, and CPU bottlenecks, but that doesn’t satisfy the customer.  I haven’t tried this yet, but apparently GSM adds 360-degree dashboards, offering you availability and performance information using internal (from the data centre) and external (from GSM) metrics.  That would be very useful when troubleshooting performance issues; you can see where the slowness begins if it happens externally, and you can redirect the customer to their local ISP if the fault lies there.

If I were still in the hosting business, GSM is one of the features that would have driven me to upgrade OpsMgr to 2012 SP1.

Strike Up Another Reason For Using System Center Configuration Manager In Your Cloud

It is rare that Microsoft releases a bad update through Windows Updates, but one appeared this week, as Hans Vredevoort posted.  How do you avoid the problem of automatically pushing out “bad” updates straight after they are released?

Well, here’s the “solution” I often encounter when I talk to consultants and administrators:

We approve patches manually

Ah!  My response to this usually goes along the lines of:

  1. I grimace
  2. and respond with:

When you approve patches manually then you don’t patch at all!

One such company hadn’t deployed a Windows update since Windows XP SP2 – and I suspect that the media they used came with SP2 slipstreamed.  No doubt Conficker ate them up.  And it’s no surprise that Conficker is still in the top 10 of malware on domain-joined (i.e. administrator controlled) PCs.  Meanwhile, PCs that are managed by users (workgroup members) are not seeing Conficker in the top 10.  By the way, Microsoft released a hotfix (MS08-067) to prevent Conficker a month before the malware was first detected.

The fact is that manual patch testing and approval do not happen.  There might be a process, but that doesn’t mean that it’s used.  I bet if you surveyed 1,000 companies with this process then you’d find the majority of them don’t do it, and are probably woefully unprotected.  Cue the moronic comments that’ll try to excuse this behaviour … I know they’re coming and they only show guilt.

What you need is automation.  But doesn’t automated patch approval mean that patches are approved and deployed immediately, bugs and all?  Not necessarily.

When I started working with ConfigMgr 2012, I read the guides by Irish (in Sweden) MVP, Niall Brady.  I liked his approach to dealing with updates:

  1. Check for new catalog updates every hour (my preference)
  2. Allow already approved updates to be superseded automatically
  3. Delay approval of updates by 7-14 days
  4. Set a deadline of 7 days

With this approach, updates are approved automatically, but they aren’t made available for 7-14 days.  And updates won’t be mandatory for another 7 days beyond that.  That means updates don’t get forced onto machines for 14-21 days.
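The arithmetic of that schedule is worth making explicit.  A small Python sketch of the timeline (the day counts are Niall’s recommendations; the helper itself is mine, not a ConfigMgr API):

```python
from datetime import date, timedelta

def update_timeline(released: date, approval_delay: int = 7,
                    deadline: int = 7):
    """Given an update's release date, return the date it becomes
    available to clients and the date installation becomes mandatory."""
    available = released + timedelta(days=approval_delay)
    mandatory = available + timedelta(days=deadline)
    return available, mandatory

# An update released on 1 March with a 7-day approval delay and a
# 7-day deadline: available 8 March, mandatory 15 March -- giving
# everyone else two weeks to find the bugs for you.
```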

For server updates, I’d set a maintenance window on the collection(s) of servers, so that updates can only happen during those time windows (and not impact SLA).

With this approach, you get the best of both worlds:

  • You delay the updates, giving other people the “opportunity” to test the updates for you, and you deploy the 2nd release of “bad” updates (bad updates are superseded by new versions)
  • The process is automated, so your updates are pushed out without any human intervention.  You can always disable the automatic approval rule if the brown smelly stuff looks like it wants to hit the fan.

Remember, you can deploy updates from anywhere using ConfigMgr (see System Center Updates Publisher).  And this is just one of many reasons why I like ConfigMgr in the cloud.

Update Rollup 1 for System Center 2012 Service Pack 1

Microsoft has released UR1 for SysCtr 2012 SP1.  Here is my advice: do not deploy any agents, do not take control of any fabrics or storage, until you have deployed UR1.  UR1 fixes a number of issues (details are on the site) and the update process requires already deployed agents to be updated from their management consoles.

Don’t Install WMF 3.0 On VMM Managed W2008 R2 Hyper-V Hosts

Microsoft has published a KB article (KB2795043) that explains the following scenarios:

On System Center Virtual Machine Manager, you may experience one of the following symptoms:

  • A Windows Server 2008 R2 SP1 Hyper-V host has a status of Needs Attention in the VMM Console, or
  • Adding a Hyper-V host or cluster fails.

The fix is … to uninstall the WMF 3.0 update (KB2506143).  There’s a bit more to it than that. You also need to reboot the host and then run:

winrm qc

… and then do another reboot of the host.

I know; it’s far from an ideal situation.  But there’s the workaround for you.

System Center 2012 Service Pack 1 Is On The Volume Licensing Service Center – And Ready For Production

Fellow MVP, Johan Arwidmark (@jarwidmark), just tweeted that he saw SysCtr 2012 SP1 on the VLSC site.  I just checked.  He’s right:

image

TechNet is for evaluation and MSDN is for test/development/demo.  What you download from the VLSC site is for production usage … and for managing Windows 8 and Windows Server 2012 (including Hyper-V).  This is also the release to integrate with Wave D of Windows Intune.

This is not an R2 release, it’s a service pack.  So if you bought System Center 2012 then you’re entitled to this update.  Please don’t assume anything about “upgrades”.  Some features of System Center can be upgraded (Operations Manager – see Kevin Greene’s series of posts).  Some cannot be directly upgraded (see VMM).