Installed IE9 or Firefox 4? Hold on to your Britches …

Just as you’ve gotten Internet Explorer 9 and/or Firefox 4 installed, news has hit the wires of their successors.

Mozilla have confirmed that you can expect Firefox 5 in the summer and Firefox 6 just a few months after that.

Paul Thurrott just tweeted (and there have been other news sources too) that a technical preview release (link is dead at the moment) of IE10 would be shown at the MIX (developers) conference today.

I’ve encountered sites and applications that don’t work with one or both of these browsers.  Normally I’d think “the app vendor will catch up soon enough”.  But now?  Why would they bother?  With major new releases constantly introducing new features (AKA “standards”) that break existing apps, a web app publisher is probably going to sit on one browser version for ages before certifying their application against a later one.  We might end up in Java hell – has anyone else been in that painful place where you had to maintain 4 or 5 versions of Java in your desktop “standard”?  It was hell for the helpdesk/desktop management teams.

In an era of cloud computing, where the browser becomes the most important tool, I can understand frequent releases.  But surely these should be minor releases, fixing bugs, improving performance, and so on?  Worried about application compatibility on Windows?  In a world where the likes of Google and Microsoft want you consuming their SaaS apps via the browser, you might be in for a world of hurt pretty soon if every SaaS publisher certifies against a different browser version.

It remains to be seen what Mozilla and Microsoft have in store for us.  My bet is that, based on this news, a lot of organisations will choose to skip IE9 and Firefox 4, waiting for the next versions to come along.

Public Cloud Computing and Stickiness

Earlier today, I read a blog post (that I recommend) on TechCentral.ie about customer lock-in and cloud computing.  The author asked if a consumer of a cloud computing service should be concerned with what marketing people call “stickiness” and what we might call an exit strategy.  In other words, if I consume some cloud application or build an application on some platform/infrastructure, how easily can I get it out of that service and move to another one?

No matter what cloud solution (or even basic web hosting to be honest) you choose for your online presence, there will always be some customisation required.  However, some require more than others.  An infrastructure-as-a-service (IaaS) solution using virtual machines with traditional operating systems will give the developers a pretty common experience across different vendors.  There might be different machine names, different patch levels, and so on, but in the end they’ll develop for IIS, .NET, SQL, Apache, or MySQL the same with company X as they do with company Y.  That’s because there is a common denominator across the IaaS providers which those providers cannot customise.

Some platform-as-a-service (PaaS) solutions offer what developers might consider some very useful features.  However, the vendor behind them will have customised the platform quite a bit to provide those features.  An application developed on this PaaS won’t be directly portable to another service without extensive re-engineering.

The same applies whether you are developing some great big online application or storing business data in some online SRM software-as-a-service (SaaS).  Sun Tzu wrote the following:

“To always have an exit strategy from vulnerable positions in which the army will find itself”.

He didn’t know it at the time, but he put it perfectly.  If I’m deploying into a public cloud service, then I want to know how I can get my application/data out of that service and into a competitor (or back in-house) down the road.  Things might be just peachy right now; the SLA looks good, the price is right, regulations aren’t a problem, and going cloud aligns with company strategy.  But what if the battlefield shifts?  What if that public cloud service provider increases prices?  What if they don’t live up to their SLA and you lose business as a result?  What if state/industry regulations change and you need to relocate your data?  What if you need to change how business applications/data interact with each other?  How hard, and how expensive, will it be to move from A to B?

Will you have to hire consultants?  Will there be a third party solution?  Will there be a lot of manual work?  Just how long will it take to migrate?  How exactly will you exit from the public cloud service provider without wasting huge amounts of time and money?

The business people who promote cloud computing love this aspect of the service.  It’s referred to as “stickiness”.  For example, if you put all your customer data and workflows into some online CRM, using all of its features, and suddenly the provider jacks up the price, what are you going to do?  If it were a phone service, you’d browse the alternatives and move once your contract was up.  Your phone number can usually transfer with you.  Downtime?  Usually nil.  Cost?  Usually nil.  What about moving that CRM?  The truth is that you will have to get consultants in to assess the situation, and then weigh the cost of paying for their time to migrate you from A to B.  One could say the same is true of internally or on-site deployed applications.  True; but we tend to consider that factor up front, and there are typically existing ways to manipulate the easily accessible data that resides in the data centre or computer room that you own.  It’s a whole other ball game when the data sits in some multi-tenant database that you have no direct access to, and you’re going to be working with nasty dump files – kept deliberately that way to deter you from moving.  In the end, you’ll compare the cost of leaving that CRM SaaS with the cost savings of another provider, and stay where you are.  Stickiness; there you have it!

As the author of the blog post I referenced earlier said, I recommend that you investigate an exit strategy for any public cloud deployment that you consider.  Remember that any USP (unique selling point) that a public cloud service has equates to complexity in your exit strategy.

Interesting Survey Results on Behalf of Veeam

I’ve just read these stats on TechCentral.ie in an article called “IT departments lack visibility in virtualisation”.  They are from a survey “carried out by Vanson Bourne on behalf of VMware management solutions provider Veeam Software”.

  1. Nearly half (49%) of firms that use virtualisation say they have delays in resolving IT problems because of a lack of visibility into their whole IT infrastructure
  2. Forty five per cent of respondents also said that the lack of visibility is slowing down their organisation’s adoption of virtualisation
  3. Eighty per cent of respondents who currently use specialist tools would prefer to use traditional enterprise-wide management tools
  4. Seventy-one per cent said they had difficulty managing the VMware vSphere and Microsoft Hyper-V hypervisors from a single console …
  5. … while 68% wanted a single dashboard for managing them both

The stats quoted are from TechCentral.ie, so please check out their site for IT news.

Let’s quickly deal with the stats one by one:

  1. Want visibility into your infrastructure?  I’m curious to see how VMware will accomplish that.  They make great virtualisation software but that’s where they stop.  On the other hand, Microsoft System Center will audit and report on your infrastructure hardware and software (Configuration Manager) and monitor your hardware and applications (Operations Manager).
  2. Use the Microsoft Assessment and Planning Toolkit, ideally combined with System Center, and you have (a) the tools to figure out what you have, (b) the means to design what your virtualisation infrastructure will be, and (c) the ability to perform the conversion process.
  3. See System Center.  vSphere will give you great VMware virtualisation management.  But as anyone who really knows what private cloud computing is will tell you, the business doesn’t care about the infrastructure – they care about the business application that lives on top of it.  You need complete end-to-end and top-to-bottom management, including deployment, configuration, auditing, policy management, virtualisation, monitoring (traditional and from the client perspective), backup/recovery, and maybe even more, covering everything from the network/hardware to the web app/database running on top of it all.
  4. Understandable.  VMware are adding Hyper-V support.  VMM 2008 R2 manages vSphere (but not all features).  VMM 2012 will add more vSphere support in addition to Xen.  But vSphere 5 isn’t far away.  I’ll be honest, I don’t think any management solution will have 100% feature management completeness of all virtualisation platforms, but maybe we can get close to it.
  5. See #4

First Impressions: Free Microsoft iSCSI Target for W2008 R2

Today I downloaded and installed the free iSCSI target for Windows Server 2008 R2 that was just released.  I needed something free and lightweight for the lab at work.  We’re using a pair of HP DL165 G7s as clustered hosts and a DL180 G6 with “cheap” SATA disk as the “SAN”.  I was planning on using Windows Storage Server 2008 R2, but then I saw the tweet by Microsoft’s Jose Barreto announcing the release.  Perfect – that was one less ISO I would have to download.

I deployed W2008 R2 from the WDS VM in the lab and downloaded the compressed setup file.  After it was extracted, I installed the target.  That gives you a simple enough management tool to use.

The service creates targets.  Each target is a collection of disks (fixed-size VHDs that are stored on the iSCSI target server), and you permission the target using IQN, MAC address, IP address … and I can’t remember if DNS name was one of the options or not.

I needed two targets.  One would be for the VMM library.  For my lab, VMM would be running as a VM on a standalone host (another DL165 G7).  I set up a target with a disk and permitted the iSCSI addresses of the standalone host to connect.

On the standalone host I added the MPIO feature, enabled iSCSI support in MPIO, and added the iSCSI devices.  In the iSCSI initiator, I added the target portal IP address, enabled multipath, and connected to the target so the new volume appeared.  All I had to do then was format it in Disk Management.
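
If you prefer to script those host-side steps, here is a rough sketch of the command-line equivalent on W2008 R2.  It is illustrative only: the portal address is a made-up example, and the per-NIC multipath sessions are still easier to configure in the initiator GUI.

```powershell
# Run in an elevated PowerShell prompt on the W2008 R2 host.
# The 10.0.1.10 portal address is a placeholder - use your target server's iSCSI IP.

Import-Module ServerManager

# Add the MPIO feature
Add-WindowsFeature Multipath-IO

# Claim iSCSI devices for MPIO (standard iSCSI bus type ID); -r reboots if required
mpclaim.exe -r -i -d "MSFT2005iSCSIBusType_0x9"

# Make sure the Microsoft iSCSI Initiator service is running and starts automatically
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Point the initiator at the target portal, list what it finds, then log in
iscsicli.exe QAddTargetPortal 10.0.1.10
iscsicli.exe ListTargets
# iscsicli.exe QLoginTarget <IQN reported by ListTargets>
```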

For my Hyper-V cluster (all the networking was already set up), I created a second target and permitted the four iSCSI NIC IP addresses of the two hosts to connect.  The first disk I created was a 1 GB VHD.  This would be the cluster witness.

Back on each clustered host, I added the Hyper-V role and the MPIO and Failover Clustering features.  Once again, I enabled iSCSI support in MPIO and added the iSCSI devices.  On each host, I connected to the target portal IP address and enabled multipath.  It found the second (cluster storage) target but not the first (VMM storage) target.  That’s because the VMM storage target did not permit the IP addresses of the clustered hosts’ iSCSI NICs to connect.  The witness disk was now visible to both hosts.
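
The role/feature installation on each clustered host can also be scripted; a minimal sketch using the ServerManager module (the feature names are the standard W2008 R2 identifiers):

```powershell
Import-Module ServerManager

# Hyper-V role plus MPIO and Failover Clustering features; -Restart reboots if required
Add-WindowsFeature Hyper-V, Multipath-IO, Failover-Clustering -Restart
```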

Now I set up the cluster.  The witness disk was added and I renamed it to “Witness Disk” in Failover Clustering.

Now I needed some storage for VMs.  In the iSCSI target admin console on the “SAN” server, I created another disk of the required size.  It was associated with the second (cluster storage) target, so the clustered hosts could now see it in Disk Management.  I formatted the volume, labelling it as “CSV1”, and added it into Failover Clustering, renaming it to “CSV1” in there too.  CSV was enabled in Failover Clustering, and the CSV1 disk was added as CSV storage.
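
For anyone scripting the cluster side, a minimal PowerShell sketch of the cluster creation, CSV enablement, and disk addition might look like this – the cluster name, node names, IP address, and disk resource name are all placeholders, not the names from my lab:

```powershell
Import-Module FailoverClusters

# Create the two-node cluster (names and address are examples)
New-Cluster -Name HVCluster1 -Node HV-Host1, HV-Host2 -StaticAddress 10.0.0.50

# Enable Cluster Shared Volumes on the cluster
(Get-Cluster -Name HVCluster1).EnableSharedVolumes = "Enabled"

# See which clustered disks exist, then add the disk formatted as CSV1 to CSV storage
Get-ClusterResource | Where-Object { $_.ResourceType.Name -eq "Physical Disk" }
Add-ClusterSharedVolume -Name "Cluster Disk 2"   # the resource name will vary
```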

I repeated that process to create CSV2.

A couple of VMs later and I had a fully functioning Hyper-V cluster working with a free Microsoft iSCSI target, running on relatively economical storage.

I found the iSCSI target to be really easy to set up and use.  You just need to get used to the idea that you are presenting VHDs instead of LUNs to your iSCSI clients.  The performance is OK – it’s never going to match a dedicated appliance like a Compellent, P4000, or CLARiiON.  But it sure does beat them on price and on how quickly you can get it up and running.  I had no complaints, but I intend this lab to be a lab, not a production private cloud with hundreds of VMs.

I was asked if I would run performance benchmarks.  I thought this would be pointless – you cannot compare something that is intended to run on a huge variety of budget platforms (I’m using a non-dedicated HP 1 Gbps switch in the lab, along with slow SATA disk on a budget storage server) with a pre-set collection of gear like you get with an HP P4000 bundle.  Everyone’s performance experience of this solution will vary wildly.

This sort of solution is going to be of use in two scenarios:

  1. Demonstrations and training labs:  If you need to try something out quickly or show clustering in action, you can’t beat something that you can run even on a laptop and is free to download and use.
  2. Low-end, budget production clusters: No, it cannot match a storage appliance or even other paid-for iSCSI software solutions for features or performance, but I bet that many owners of low-end, 2- or 3-node clusters would prefer economy over features.  Not everyone needs snapshots replicating to a remote site, you know!

Give it a look-see and find out for yourself what it can do.  You might have an EVA 8000 series or some monster Hitachi SAN for production – but maybe something like this could be useful in a test lab?

Dynamics CRM 2011 and Hyper-V

Strangely, the official text for Dynamics CRM support on virtualisation was difficult to find.  I kept finding an article about Dynamics CRM 4.0 and how it supported Virtual Server.  Eeek!

The advice from the Dynamics group is as follows:

Virtualisation Products

Any virtualisation platform on the SVVP (Server Virtualization Validation Program) is supported.

Storage

It starts out with what I expected: they advise the use of fixed-size VHDs, much like the advice from the SQL Server, SharePoint, and Exchange product groups.

The Dynamics CRM group goes on to show that they need a little education about how we have been doing things in the real world.  They say that each VHD should be on a separate disk in the host computer.  (a) We tend to use SANs in the mid-size and larger environments where Dynamics CRM will be found, and (b) a well-sized CSV or LUN on a modern SAN spans every disk in the disk group, so it has the potential to use all of their IOPS.  We tend not to do one VHD per LUN any more.

And that’s where the virtualisation-specific guidance ends.  There is some general talk about allowing for required resources, running configuration testing tools in the machines (just as they advise for physical deployments), and so on.  There’s no mention of Live Migration/vMotion, Dynamic Memory, etc.  In the end, this is another SQL Server-based product, so I guess the SQL Server guidance is what you should use to drive your design.  And I’d also take a little from the SharePoint 2010 guidance: allow headroom for growth, do a pilot and user acceptance testing, monitor, and resize accordingly.

SharePoint and Dynamic Memory

So far today, I’ve covered Exchange, SQL, and Lync support statements for Dynamic Memory.  This post is going to focus on SharePoint.  What is the news?

I have searched high and low using Google and Bing.  I have checked the guidance, including the TechNet material on designing SharePoint 2010 for virtualisation, and I have not found any mention of Dynamic Memory.  For now, let’s assume that SharePoint does support Dynamic Memory – unless you have an abundance of support calls with CSS or Premier Support and can get a definitive answer (if you do, please share!).

Two things stand out:

SQL

The key to SharePoint performance appears, to me, to be SQL Server.  We already know the story for SQL Server and Dynamic Memory.

Sizing

The sizing guidance for SP 2010 is quite realistic.  There’s a lot of “it depends” and talk of user acceptance testing.  In Ireland we call it “suck it and see”.  In other words, you won’t know what’s the right sizing for your environment until you try it.  Memory guidance uses the word “estimated” quite a bit.  Based on my previous experience with SharePoint (which is limited, I admit), MS sizing tends to be for huge user bases and not those that most of us deal with.  I remember a “small” SP 2003 farm from an MS Press book being 10,000 users.  I was sizing for 800 at the time, and MS Ireland considered us to be an enterprise customer! 

You will need flexibility.  That leaves me thinking that SharePoint is a perfect candidate for Dynamic Memory.  You will have to estimate the maximum memory, and the hypervisor will take care of assigning only what is required.  Later on (after monitoring), you can decide to reduce or increase the maximum memory setting.
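
As an illustration only (this is not official SharePoint guidance), the Dynamic Memory settings described above might be applied like the sketch below.  Note that Set-VMMemory is part of the full Hyper-V PowerShell module, which is not included in-box with W2008 R2 SP1 – there you would set the same Startup RAM, Maximum RAM, and buffer values in the VM’s settings in Hyper-V Manager or VMM.  The VM name and memory figures are made up.

```powershell
# Illustrative sketch: enable Dynamic Memory on a (hypothetical) SharePoint web front end VM.
# On W2008 R2 SP1, set these values in the VM's memory settings GUI instead.
Set-VMMemory -VMName "SP2010-Web1" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 2GB `
    -MinimumBytes 2GB `
    -MaximumBytes 8GB `
    -Buffer 20
```

The idea is exactly what the paragraph above describes: set a generous maximum, let the hypervisor assign only what the workload demands, and revisit the maximum after monitoring.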

I will update this post if I hear anything definitive.

Lync Server 2010 and Dynamic Memory

Following the Exchange Server and SQL Server posts, I thought I’d look at Lync Server 2010 next.  What’s the story with it and Dynamic Memory?  After much Googling, and eventually Binging, I found a document on Microsoft’s site called Server Virtualization in Microsoft Lync Server 2010.  There is one brief section on Dynamic Memory:

“Dynamic memory has not been validated with Lync Server 2010 workloads, and specific guidance cannot be provided”.

In other words, they haven’t tested it.  That’s the latest I could find.  I’m asking some contacts in MS to see if they can find anything that might have been published since by the Lync group.  Untested can be interpreted as unsupported.  But the Lync group didn’t care to be clear on that in the above document!  I’ll update/edit this post if I am corrected.

EDIT:

^$%*(£!  I had a gut feeling I should do a bit more digging in that Lync virtualisation doc and found:

“Quick Migration and Live Migration with Lync Server 2010 workloads have not been validated by the product group at this point”.

Again … I’m interpreting that as a lack of support for making a Lync VM highly available.  That’s based on my experience of dealing with MS CSS.  If something goes wrong, they’ll find that statement and present it to you, forcing you to change your configuration before they will progress the call.  Once again, I’ll be happy to be corrected with more recent information if it exists.  And before you VMware-heads start jumping for joy, Live Migration = vMotion from MS’s point of view.

Exchange and Dynamic Memory

Earlier today I talked about SQL Server and Dynamic Memory. What about Exchange? Not surprisingly, the Exchange group do not recommend enabling Dynamic Memory for virtual machines that are running Exchange. You can get the official text on TechNet under the heading of Dynamic Memory Allocation Considerations. They say: “… for virtual machines that are running Exchange in a production environment, it is best to turn off memory oversubscription or dynamic memory allocation. Instead, configure a static memory size …”.

I’ve heard several times, but not seen any official text, that the Mailbox role does not support Dynamic Memory.  I’m not an Exchange person (I’ve had the “luck” of usually working in sites that use Lotus Domino/Notes), but I believe that Dynamic Memory would cause problems for the Mailbox role.  I have read that it only checks for available memory at startup, grabs what it can, and that’s it.  Adding memory afterwards to deal with memory pressure would be pointless.  Anyway, the Exchange group don’t recommend enabling DM on your Exchange VMs.
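
For completeness, the static memory configuration that the Exchange group recommends would look something like this sketch.  Same caveat as the earlier SharePoint example: Set-VMMemory belongs to the full Hyper-V PowerShell module rather than W2008 R2 SP1 in-box tooling (use the VM’s memory settings page there), and the VM name and 16 GB figure are placeholders, not Exchange sizing guidance.

```powershell
# Illustrative only: disable Dynamic Memory and assign a fixed amount of RAM to the Exchange VM.
Set-VMMemory -VMName "EX2010-MBX1" -DynamicMemoryEnabled $false -StartupBytes 16GB
```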