Visio 2010 Add-Ins – Pay Attention System Center People!

You may have wondered how to create pretty pictures to share on a big screen that depict some health information about stuff that you manage using System Center.  Here’s how …

I was mucking around with the Visio plug-ins for Operations Manager for the first time today, adding monitored objects from SCE 2010 (plus their health status) into Visio.  The cool thing with this is that it refreshes the objects’ health in Visio!  And then you can save your diagram into SharePoint 2010 with live health refreshing.  In other words, you can create nice and friendly views of the services that IT provides and share them with service owners and/or users via diagrams on SharePoint sites.

[Image: Visio OpsMgr add-in example]

But it doesn’t stop there.

There are a lot of these plug-ins.  Why I’ve not heard of or paid attention to most of these before, I have no idea.  There’s one for Exchange, giving you a friendly view of your Exchange Server 2007 environment.  There is a cool one that drags in alerts from OpsMgr and update status from ConfigMgr if you are running a dynamic datacenter. 


Seriously, take a look at this stuff if you are running System Center, or if you’re a systems integrator looking for cool new upsell services.

Mastering Hyper-V Deployment Book is Available Now

Amazon has started shipping the book that I wrote, with the help of Patrick Lownds MVP, Mastering Hyper-V Deployment.

Contrary to popular belief, an author of a technical book is not given a truckload of copies of the book when it is done.  The contract actually says we get one copy.  And here is my copy of Mastering Hyper-V Deployment, which UPS just delivered to me from Sybex:

[Image: the delivered book]

Amazon are now shipping the book.  I have been told by a few of you that deliveries in the USA should start on Tuesday.  It’s been a long road to get here.  Thanks to all who were involved.

Mastering Hyper-V Deployment Excerpts

Sybex, the publisher of Mastering Hyper-V Deployment, have posted some excerpts from the book.  One of them is from Chapter 1, written by the excellent Patrick Lownds (Virtual Machine MVP from the UK).  As you’ll see from the table of contents, this book is laid out kind of like a Hyper-V project plan, going from the proposal (Chapter 1), all the way through steps like assessment, Hyper-V deployment, System Center deployment, and so on:

Part I: Overview.

  • Chapter 1: Proposing Virtualization: How to propose Hyper-V and virtualisation to your boss or customer.
  • Chapter 2: The Architecture of Hyper-V: Understand how Hyper-V works, including Dynamic Memory (SP1 beta).

Part II: Planning.

  • Chapter 3: The Project Plan: This is a project with lots of change and it needs a plan.
  • Chapter 4: Assessing the Existing Infrastructure: You need to understand what you are converting into virtual machines.
  • Chapter 5: Planning the Hardware Deployment: Size the infrastructure, license it, and purchase it.

Part III: Deploying Core Virtualization Technologies.

  • Chapter 6: Deploying Hyper-V: Install Hyper-V.
  • Chapter 7: Virtual Machine Manager 2008 R2: Get VMM running, stock your library, enable self-service provisioning.  Manage VMware and Virtual Server 2005 R2 SP1.
  • Chapter 8: Virtualization Scenarios: How to design virtual machines for various roles and scales in a supported manner.

Part IV: Advanced Management.

  • Chapter 9: Operations Manager 2007 R2: Get PRO configured, make use of it, alerting and reporting.
  • Chapter 10: Data Protection Manager 2010: Back up your infrastructure in new, exciting ways.
  • Chapter 11: System Center Essentials 2010: More than just SCE: Hyper-V, SBS 2008 and SCE 2010 for small and medium businesses.

Part V: Additional Operations.

  • Chapter 12: Security: Patching, antivirus, and where to put your Hyper-V hosts on the network.
  • Chapter 13: Business Continuity: A perk of virtualisation – replicate virtual machines instead of data for more reliable DR.

Oracle On Their Internal Systems Management

I just read a story about how Oracle consolidated their internal systems management.  They decided to invest in a legacy-style solution based on SNMP and ping.  One of the things I noticed was that Oracle wanted to do lots of customization and be able to get at the underlying data so they could manipulate it, integrate it, etc.

This is how not to do monitoring in a modern IT infrastructure.

In 1st year of college, we were taught about different ways you could buy software:

  1. Write it yourself: Takes lots of time/skills and has hidden long-term costs.
  2. Buy or download something cheap off the shelf that does 80% of what you need.  You spend a very long time trying to get the other 20%.  It ends up not working quite right and it costs you a fortune, especially when it fails and you have to replace it – of course, the more common approach is to live with the failure and pitch a story that it is fantastic.  I call this the “I’m in government” approach.
  3. Spend a little bit more money up front, buy a solution that does what you need, is easily customizable, and will work.

In Ireland, approach number 2 is the most commonly taken road.  Ping/SNMP cheapware is what most organizations waste their money and time on.  A server responding to ping is not necessarily healthy.  A green icon backed by a few SNMP rules that took you an age to assemble does not mean the server is healthy.

Instead, what is needed is a monitoring solution that has in-depth expertise in the network … all of it … from the hardware up through to the applications, adds a client perspective, and can assemble all of that into the (ITIL) service point of view.  Such a solution may cost a little bit more but:

  • It works out of the box, requiring just minor (non-engineering) changes along the way.
  • The monitoring expertise is usually provided by the original vendor or an expert third party.
  • The solution will be cheaper in the long term.

No guesses required to tell which solution I recommend, based on experience.  I’ve tried the rest: I was certified in CA’s Unicenter (patch-tastic!), I got a brief intro to BMC Patrol, I’ve seen teams of Tivoli consultants fail to accomplish anything after 6 months of effort, and I’ve seen plenty of non-functional cheapware along the way.  One solution always worked, out of the box, and gave me results within a few hours of effort.  System Center Operations Manager just works.  There are lots of sceptics and haters but, in my experience, they usually have an agenda, e.g. they were responsible for buying the incumbent solution that isn’t quite working.  There is also the cousin of OpsMgr, SCE 2010, for SMEs.

Writing of Mastering Hyper-V Deployment Nearing Completion

I’ve just submitted the last of my content to Sybex for Mastering Hyper-V Deployment.  It’s been a long and tough road.  Early work started on the project in February.  I’ve been doing my normal day job and trying to squeeze in chapters in a rush schedule.  I’ve been working during the morning commute, at lunchtime, the evening commute, into the night, and at weekends.  My co-author is close to finishing his chapters on schedule.  I’ve been doing the first of the reviews as we’ve moved through the project.  I’m probably already a third of the way through the copy edits (2nd set of reviews).  After that comes the final set (I hope) of layout edits.  And then off it goes to the printers for release in November.  I can’t wait!

Microsoft Ireland – Best of #MMS2010

I arrived about an hour late for this event because I had to present at a cloud computing breakfast event in the city.  Writing until midnight, working until 1am, and getting up at 05:30 has left me a bit numb, so my notes today could be a mess.

The ash cloud has caused last minute havoc with the speakers but the MS Ireland guys have done a good job adjusting to it.

System Center v.Next

I arrived in time for Jeff Wettlaufer’s session.

The VMM v.Next console is open with an overview of a “datacenter”, giving a glimpse of what is going on.  We see the library and shares, which are much better laid out.  The library includes Server App-V packages, templates, virtual hard disks, MSDeploy packages (IIS applications), SQL DAC packages, PowerShell scripts, ISO files and answer files.

VMM v.Next

The VMM model is shown next.  We can create a template for a service.  This includes templates for the virtual machines: database, application, web, etc.  The web VM is shown.  We can see that the MSDeploy package from the library is contained within the template for this VM.  The web tier in the model can be scaled out automatically using a control in the model.  The initial, maximum and minimum instance counts can be set.  The bindings to network cards can be set too.

An instance of this model is deployed: lots of VMs are included in the model.  One deployment = lots of new VMs.  We now see the software update mechanism.  The compliant and non-compliant running VHDs are identified.  Normally we’d do maintenance windows, patching and reboots.  With this approach we can remediate the running VMs’ VHDs.  Because these are virtualised services, they can be migrated onto up-to-date VHDs and the old VHDs are remediated.  The service stays running and there are no reboots or maintenance windows.

This makes private cloud computing even better.  We already can have very high uptimes with current technology.  The only blips are usually in upgrades.  This eliminates that.  The model approach also optimises the

Operations Manager 2007 R2 Azure Management Pack

You can use an onsite installation of OpsMgr to manage Azure hosted applications.  This is apparently out at the end of 2010.  We get a demo starting with a model, including web/database services, synthetic transactions and the Azure management pack containing Azure objects (a web front end that fronts the on-premises databases).  We see the usual alert and troubleshooting stuff from OpsMgr.  Now we see that tasks for Azure are integrated.  This includes the addition of a new web role instance on Azure.  In theory this could be automated as a response to underperforming services (use synthetic transactions) but it would need to be tested and monitored to avoid crazy responses that would cost a fortune.
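
To illustrate the kind of guarded automation I mean, here’s my own rough sketch in Python – the names, thresholds and instance cap are all made up, not anything from the Azure management pack:

```python
# A made-up sketch of auto-scaling driven by synthetic transactions,
# with a hard instance cap so a flapping service can't cost a fortune.

MAX_INSTANCES = 8          # guardrail: never scale past this
SLOW_THRESHOLD_MS = 2000   # synthetic transaction response time limit

def decide_instance_count(response_times_ms, current_count):
    """Return the desired number of web role instances."""
    avg = sum(response_times_ms) / len(response_times_ms)
    if avg > SLOW_THRESHOLD_MS and current_count < MAX_INSTANCES:
        return current_count + 1   # scale out one instance at a time
    return current_count           # healthy, or already at the cap

# Slow synthetic transactions trigger one extra instance
print(decide_instance_count([2500, 3100, 2800], current_count=2))  # 3
```

The cap is the important part: without it, an automated response to a bad measurement could happily spin up billable instances all night.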

Almost everything in the System Center world has a new release or refresh in 2011.  It will be a BIG year.  I suspect MMS 2011 will be nuts.

It looks like I missed 4 of the demos :-(  That’s work for ya!

Configuration Manager v.Next – Jeff Wettlaufer

Woohoo!  I didn’t miss it.

The focus on this release is user centric client management.  The typical user profile has changed.  Kids are entering the workplace who are IT savvy.  The current generation knows what they want (a lot of the time).  MS wants to empower them.  Users should self-provision, connect from anywhere, access devices and services from anywhere. 

There should be a unified systems management solution.  Do you really want point solutions for software, auditing, patching, anti-malware, etc.?

Control is always important, whether it is compliance for licensing, auditing, policy enforcement, etc.  Business assets must be available, reliable and secure.  Automation must be employed and expanded upon to remove the human element – it is more efficient, allows better use of time to focus on projects, and is less mistake prone.

ConfigMgr 2007 does a lot of this.  However, it didn’t do the last step: remediating non-compliance with policy (software, security, etc).

Notes: 75% of American and 80% of Japanese workers will be mobile in 2011.  The IT Pro needs to change: be more generalized and have a variety of skills, capable of changing quickly.  IT in the business has “consumerized”: users are dictating what they want or need rather than IT doing that.  I think many admins in small/medium organizations, or those dealing with executives, will say that there has always been some aspect of that.  The new profile of user will cause this to grow.

System Center ConfigMgr is moving towards answering these questions.  The end user will be empowered to be able to self-provision.  Right now, the 2007 release translates a user to a device, and s/w distribution is a glorified script.  It is also very fire and forget, e.g. an uninstalled application won’t be automatically reinstalled so there isn’t a policy approach.

The v.Next method changes this.  It will understand the difference between different types of device the user may have.  It is more flexible.  It is a policy management solution, e.g. an uninstalled application will be automatically reinstalled because it is policy defined/remediated.

Software distribution in v.Next: relationships will be maintained between the user and devices.  User assigned software will be installed only if the user is the primary user of the device – save on licensing and bandwidth.  S/W can be pre-deployed to the primary devices via WOL, off-peak hours, etc.

Application management is changing too.  Administrators will manage applications, not scripts.  The deployments are state based, i.e. ConfigMgr knows if the application is present or not and can re-install it.  Requirements for an application can be assessed at installation time to see if the application should even be installed at all.  Dependencies with other applications can be assessed automatically too.  All of this will simplify the application management process (collections) and troubleshooting of failed installations.
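
If you want to picture the state-based idea, here’s a rough Python sketch of the sort of evaluate-and-remediate loop being described – entirely hypothetical names, not ConfigMgr code:

```python
# A toy sketch of state-based application management: compare desired
# state to actual state, check requirements first, and remediate drift.

def remediate(app, device, installed, install):
    """Evaluate one application against one device; return the action.

    app: dict with 'name' and 'requires' (a predicate over the device)
    installed: set of application names currently on the device
    install: callback that performs the installation
    """
    if not app["requires"](device):
        return "not applicable"     # requirements rule it out entirely
    if app["name"] in installed:
        return "compliant"          # desired state already met
    install(app["name"])            # policy remediation: (re)install
    return "remediated"

# Example: a 64-bit-only app on a 64-bit device that lost its install
actions = []
result = remediate(
    {"name": "Reader", "requires": lambda d: d["arch"] == "x64"},
    {"arch": "x64"},
    installed=set(),
    install=actions.append,
)
print(result, actions)  # remediated ['Reader']
```

The contrast with a glorified script is the loop: run this again tomorrow and an uninstalled application comes back, because the policy says it should be there.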

For the end user, there is a web based application catalog.  A user can easily find and install applications.  A workflow for installation/license approval can back this up.  S/W will install immediately after selection/approval – this uses Silverlight to trigger the agent.  A user can define their business hours in the client to control installations or pre-deployments.  They can also manage things like automated reboots – no one likes a mandated reboot (after 5 minutes) while doing something important, e.g. a live meeting, demo, presentation, etc.  Coming in beta 2: there will be a pre-flight check feature where you can see what would happen if you were to target an application at a collection.  You can then do some pre-emptive work to avoid any failures.  I LIKE that!

We now see a demo of a software package/deployment.  An installer package for Adobe Reader is imported.  This isn’t alien to what we know now.  There is a tagging mechanism for searches.  We can define the intent: install for user or install for system.  You can add deployment types to an existing application.  We see how an App-V manifest is added to an existing application which previously contained just an MSI package.  Now you can do an install or an App-V deployment (stream and/or complete deployment) with the one application in ConfigMgr.  So we now have 2 deployment types (packages) in a single application.  This makes management much easier. 

We see that the deployment of the application can be assigned to a user and will only be installed to their primary device.  System requirements for the application can be included in the package.

A deployment (what used to be called an advertisement) is started and targeted at a collection.  The distribution points are selected.  Now you can specify an intent, e.g. make the application available to the user or push it.  The usual stuff like scheduling and OpsMgr integration is all present.

SQL is being leveraged more and more.  A lot of the file system and copy operations are going away and being replaced with SQL object replication.  It also sounds like the ConfigMgr server components might be 64-bit only.

The MMC GUI is being dropped.  The new UI is more intuitive, better laid out and faster.  It will filter content based on role/permissions  in ConfigMgr.  This will make usage of the console easier.  Wunderbars finally make an appearance in ConfigMgr to allow different views to be presented: Administration, Software Library, Assets and Compliance, and Monitoring.

Role Based Administration: The MMC did cause havoc with this.  A security role can be configured.  This moves in the same direction as VMM and OpsMgr.  13 roles are built into the beta 1 build, bounding rights and access in ConfigMgr, e.g. application administrator, asset analyst, mobile device analyst, read-only roles, etc.  We are warned that this might change before RTM.  Custom roles can be created.  Someone logging into the console with a role will see only what is relevant (permitted).  Current ConfigMgr sites achieve this by tweaking files on site servers, which is totally unsupported and has caused lots of PSS tickets.

Primary sites are needed only for scale out.  The current architecture can be very complex in a large network.  Content distribution can be done with secondary sites, DPs (throttling/scheduling), BranchCache and Branch Distribution Points.  Client agent settings are configurable in a collection rather than in a primary site.

Note: we see zero hands go up when we are asked if anyone is using BranchCache.  That’s not surprising because of the licensing requirements, the lack of upload efficiencies (compared to network appliance solutions) and the limited number of supported solutions.

Jeff says that client traffic to cross-wan ConfigMgr servers dropped by 92% when BranchCache was employed – the distribution point can be BITS (HTTPS) enabled.

Distribution point management has been simplified with groups.  Content can be added based on group membership.  Content can be staged to DPs, as well as scheduled and throttled.

SQL investments mean that the inbox is gone in v.Next.  Support issue #1 was the inbox.  There are SQL methods for inter-site communications.  SQL Reporting Services is going to be used.  SQL skills will be required.  MS needs to invest in training people on this.

ConfigMgr client health features have been expanded.  There is configurable monitoring/remediation for client prerequisites, client reinstallation, windows services dependencies, WMI, etc.  There are in-console alerts when certain numbers of unhealthy clients are detected – configurable threshold.
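
The configurable threshold idea is simple enough to sketch – a hypothetical Python 3 illustration, nothing more:

```python
# A made-up sketch of the configurable-threshold idea: raise an
# in-console alert once the share of unhealthy clients crosses a limit.

def should_alert(unhealthy, total, threshold_pct):
    """True when unhealthy clients make up threshold_pct% or more."""
    if total == 0:
        return False               # nothing managed, nothing to flag
    return (unhealthy / total) * 100 >= threshold_pct

print(should_alert(6, 100, threshold_pct=5))   # True
print(should_alert(4, 100, threshold_pct=5))   # False
```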

There is a common administration experience for mobile device management – CAB files can be added to ConfigMgr applications (not just App-V and MSI/installer).  Cross-platform device support (Nokia Symbian) is being added.  User centric application and configuration management will be in it.  You can monitor and remediate out of date devices.

Software Updates introduces a group which contains collections.  You can target updates at the group, which in turn targets the contained collections.  Auto-deployment rules are being introduced.  Some want to do Patch Tuesday updates automatically.  You DEFINITELY need to auto-approve anti-virus/malware updates (Microsoft Forefront updates flow through Windows Updates).  Auto-approved updates will automatically flow out to managed clients.  This has a new interface but it’s a similar idea to what you get with WSUS. 
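
An auto-deployment rule boils down to a filter plus an approval action.  A toy Python sketch of the idea – the classification names are my own, not ConfigMgr’s:

```python
# A made-up sketch of an auto-deployment rule: definition updates are
# approved automatically; everything else waits for manual review.

AUTO_APPROVE = {"Definition Updates"}

def auto_approve(updates):
    """Split a batch of updates into auto-approved and pending lists."""
    approved = [u for u in updates if u["classification"] in AUTO_APPROVE]
    pending = [u for u in updates if u["classification"] not in AUTO_APPROVE]
    return approved, pending

batch = [
    {"title": "Forefront definition 1.83", "classification": "Definition Updates"},
    {"title": "KB123456", "classification": "Security Updates"},
]
approved, pending = auto_approve(batch)
print([u["title"] for u in approved])  # ['Forefront definition 1.83']
```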

Operating System Deployment is a BIG feature for MS in this product.  We now get offline servicing of images.  It supports component based servicing and uses the approved updates.  This means that newly deployed PCs will be up to date.  There is now hierarchy-wide boot media (we don’t need one per site, saving the time to create and manage it).  Unattended boot media mode will not need you to press <Next>.  We can use PXE hooks to automatically select a task sequence so we don’t need to select one from a list.  USMT 4.0 will have UI integration and support hard-link, offline and shadow copy features.  In 2007 SP2, these features are supported but hidden behind the GUI.

Remote Control is back.  Someone wants it.  I don’t see why – the feature is built into Windows and can be controlled by GPO.

Settings Management (aka Desired Configuration Management) is where you can define a policy for settings and identify non-compliance.  V.Next introduces automated remediation of this via the GUI.  This is an option so it is not required: monitor versus enforce.  Audit tracking (who changed what) is added.

Readiness Tips: Get to 64-bit OS’s ASAP.  Start using BranchCache.  Plan on flattening the hierarchy.  Use W2008 64-bit or later.  Start learning SQL replication.  Use AD sites for site boundaries and UNC paths for content.

A 500-day time-bombed VHD will be made available by MS in a few weeks.  Some hands-on labs will be made available soon after on TechNet Online. 

Can you see why I reckon ConfigMgr is the biggest and most complex of the MS products?

Operations Manager

Irish OpsMgr MVP Paul Keely did this session.  I missed the first half hour because I was talking to Jeff Wettlaufer and Ryan O’Hara from Redmond.  When I came back I saw that Paul was talking about the updates that have been made available for OpsMgr 2007 R2.  The demo being shown was the SLA Dashboard for OpsMgr.

Management pack authoring: “you need to have a PhD to author a management pack”.  This is still so true.

Using a Visio/OpsMgr connector you can load a distributed application into Visio.  You can then export this into SharePoint where the DA can be viewed on a site.

KB979490 Cumulative Update 2 includes support for SLES 11 32-bit and 64-bit and zones for all versions of Solaris.

V.Next: MS have licensed “EMC Smarts” for network monitoring.  An agent can figure out what switch it is on and then figure out the network. This means OpsMgr can figure out the entire network infrastructure and detect when a component fails. 

Management packs are changing.  A new delay and correlation process will alert you about the root cause of an issue rather than alert you about every component that has failed because of the root cause.  This makes for a better informed and clearer issue notification.
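
Root-cause correlation of this kind can be sketched with a simple dependency map – this is just my own Python illustration of the concept, not how OpsMgr implements it:

```python
# A made-up sketch of root-cause correlation: suppress alerts for
# components whose failure is explained by a failed dependency.

def root_causes(failed, depends_on):
    """Return only the failures not caused by another failed component.

    failed: set of failed component names
    depends_on: dict mapping a component to the components it needs
    """
    return {
        c for c in failed
        if not any(dep in failed for dep in depends_on.get(c, []))
    }

# web needs db, db needs disk; when all three fail, only the disk
# failure is worth alerting on.
deps = {"web": ["db"], "db": ["disk"], "disk": []}
print(root_causes({"web", "db", "disk"}, deps))  # {'disk'}
```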

Opalis

This is a recent System Center acquisition for automated work flows.  The speaker was to fly in this morning but the ash cloud caused airports to close.  MS Ireland have attempted to set up a Live Meeting where the speaker can present to us from the UK.

The speaker is Greg Charman, present in a tiny window in the top left of the projector screen.

We have a number of IT silos: SQL, virtualisation, servers, etc.  Applications or processes tend to cross those silos, e.g. SQL is used by System Center.  Server management relies on virtualization.  Server management and virtualization both use System Center.

Opalis provides automation, orchestration and integration between the System Center products.  Because it was recently acquired, it also plugs into 3rd party products; it may or may not continue to support those in future releases.

Opalis provides runbook/process automation.  You remove human action from the process to improve the speed and reliability.  It also allows processes to cross the IT silos.

In the architecture, there is an Integrated Data Bus.  Anything that can connect to this can interact with other services (in theory).  Lots of things are shown: Microsoft, BMC, HP, CA, IBM, EMC, and Custom Applications. 

A typical process today: OpsMgr raises an alert.  Manually investigate if it is valid.  Update a service desk ticket.  Figure out what broke and test solutions.  Maybe include a 3rd party service provider.  All of these tasks take time and the issue goes on and on.

Opalis: sees the alert and verifies the fault.  It updates the issue.  It does some diagnostics.  It passes the results back to the service desk.  It might fix the problem and close the ticket.  At the least it could provide lots of information for a manual remediation.
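
A runbook like that is really just an ordered chain of steps that stops once one of them resolves the incident.  A toy Python sketch – all step names invented, nothing to do with the actual Opalis product:

```python
# A made-up sketch of a runbook: steps run in order, each enriching the
# ticket, and the chain stops early once a step resolves the incident.

def run_runbook(steps, ticket):
    """Execute steps in order until one reports the ticket resolved."""
    for step in steps:
        ticket = step(ticket)
        if ticket.get("resolved"):
            break                  # remediation worked; skip escalation
    return ticket

def verify_alert(t): t["valid"] = True; return t
def diagnose(t):     t["cause"] = "service stopped"; return t
def remediate(t):    t["resolved"] = True; return t
def escalate(t):     t["escalated"] = True; return t   # last resort

ticket = run_runbook([verify_alert, diagnose, remediate, escalate],
                     {"alert": "web down"})
print(ticket["resolved"], "escalated" in ticket)  # True False
```

Even when the remediation step can’t fix the fault, everything gathered along the way lands on the ticket for whoever picks it up manually.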

Opalis is used for:

  • Incident management: orchestrate the troubleshooting.  Maybe identify the cause and remediate the issue.
  • Virtual machine life cycle management: Automate provisioning and resource allocation.  Extend virtual machine management to the cloud.  Control VM sprawl.
  • Change and control management: This integrates ConfigMgr and VMM.

The integration for some products will be released later in 2010.  The VMM and ConfigMgr integrations are in the roadmap, along with a bunch of other MS ones.

System Center Essentials 2010

This is presented by Wilbour Craddock.  As most companies in Ireland are small/medium, SCE 2010 should be a natural fit for a lot of them.  Remember that it is a little crippled compared to the full individual products.  It can manage up to 50 servers (physical or virtual) and up to 500 clients.

  • Monitor server infrastructure using the OpsMgr components.
  • Manage virtual machines using the VMM 2008 R2 components.  This includes P2V and PRO tips.
  • Manage s/w and updates using the ConfigMgr components.

The “SCE 2010 Plus” SKU adds DPM 2010 to the solution so you can back up your systems.

Inventorying: Runs every 22 hours and includes 60+ h/w and s/w attributes.  Visibility is through reports.  180 reports available.  New in 2010: Virtualization candidates.

Monitoring includes network management with SNMP v1 and SNMP v2.  It uses the same management packs as OpsMgr.  Third party and custom ones can be added.  The product will let you know when there is a new MP in the MS catalog.

Only the evaluation is available as an RTM right now.  The full RTM and pricing for it will be available in June.

Patching is done with WSUS and this is integrated with the solution.  Auto-approval deadlines are available.  It can synch with the Windows catalogue multiple times in a day.  There is a simple view for needed updates.

SCE can deploy software but it cannot deploy operating systems.  You can use the free WDS or MDT to do this.  Note that a new version of MDT seems to be on the way.  The software deployment process is much simpler than what you get with ConfigMgr, thanks to the reduced size of the network that it supports.  It assumes a much simpler network.

At first glimpse of the feature list, it appears to include most of the VMM features, but it may not be as good as VMM 2008 R2.  It cannot manage a VMware infrastructure but it can do V2V.  Host configuration might be better than in VMM.  P2V is different from VMM’s.  The Hyper-V console is still going to be regularly used, e.g. you can’t manage Hyper-V networking in SCE 2010.  Enabling a physical machine to run Hyper-V is as simple as clicking “Designate as a host”.  PowerShell scripts are not revealed in the GUI like in VMM, but you can still use PowerShell scripts.

Software deployment now includes filtering, e.g. CPU type X and Operating System Y.  You can modify the properties of existing packages.

The setup is simple: 10 screens.  Configuration is driven by a wizard.

Requirements: W2K8 or W2K8 R2, 64-bit only.  2.8GHz, 4GB RAM and 150GB disk recommended.  It can manage XP, W2003, and later.

The server with DPM will be around €800.  Each managed device (desktop or server) will require a management license.  You can purchase management licenses with or without DPM support.  This means you can back up your servers and maybe a few PCs, and choose the cheaper management licenses for the rest of the PCs.

Intune

Will talks about this.  Dublin/Ireland will be included in phase II of the beta.  It provides malware protection and asset assessment from the cloud.  It is aimed at organizations that are too small for SCE 2010. 

That was the end of the event.  It was an enjoyable day and a good taster of what happened at MMS.

SCE 2010 and DPM 2010 RTM

Data Protection Manager 2010 and System Center Essentials 2010 were both announced as being released to manufacturing today.

DPM is MS’s backup solution and is the one that has the ability to back up a Hyper-V CSV.  The catch is that it puts the CSV into redirected IO mode.  Thus the preference is to use storage with a supported VSS provider.  That allows you to safely back up running VMs and maintain database consistency when they are recovered – VSS runs all the way through the stack.  You can even recover single files!

SCE 2010 is the all-in-one package that has the best of ConfigMgr, OpsMgr and now VMM, so you can manage W2008 R2 Hyper-V.  This makes it the ideal systems management solution for small-medium companies.

So … What Exactly Am I Writing?

You can tell I’m pretty busy because my usual high rate of blogging has dropped significantly in the last month.  Apologies for that.  The blogging has become writing.  I am involved in 2 book projects.  I’ve just seen on Twitter that details on one of them have gone public.  I actually saw the tweet seconds after I sent off a chapter I had just finished.

Earlier this year I proposed an idea for a Windows Server 2008 R2 virtualization book to Wiley Publishing/Sybex.  It took quite a bit of work to tune the proposal.  It requires an understanding of the subject matter, the audience, and ideas on how it can be marketed.  You might think that a brief overview of the subject matter would be enough.  But no, the publisher needs much more detail.  You pretty much have to provide a detailed project plan for every heading (3 levels deep), page estimates and time estimates.  The proposal evolved over the weeks and eventually went through a couple of reviews.  I then got the news: an ISBN was assigned and contracts were on the way – I was going to be a lead author on my own book for the very first time!!!!  I did get drunk that night – I think.

The deadlines are very tight.  I was considering seeking help.  My contact in Sybex advised that I outsource some of the chapters to a co-author.  I knew the person I wanted to bring in.  Wilbour Craddock is a technical specialist in the partner team with Microsoft Ireland.  Will (Irish folks will know him as the crazy Canadian who is always wearing shorts) is also a former SBS MVP.  His job has him spending a lot of time working with Hyper-V and Microsoft System Center, making him a perfect co-author to work with on this project.  Thankfully, Will agreed to hop on board the crazy train of book writing.

Another MVP (I won’t say who yet because I don’t have permission to name him) is the technical editor under the employment of Sybex.  He’s an ace at this stuff and will make sure everything we do is up to scratch.

The book is called Mastering Hyper-V Deployment.  I won’t go into the details of it yet.  But you can bet that it is based on our collective experience and knowledge of the product set involved in a Hyper-V deployment.  I saw a gap in the market and figured I could probably write the book (or a good chunk of it) to fill it.  The estimated release is November 19th of this year.  That means we need to finish writing in July.  It has started to appear on some sites for pre-order.

I’m two chapters in at the moment.  I’m really pushing my hardware at home to its limits and am “this close” to buying more.  Will is ahead of schedule and has one chapter nearly done.

I am also working on another book project as a co-author for a friend’s book.  It’s another on-subject book that is turning out to be a good experience.  I’ve one chapter done on that and am 50% through the other.  I’ll talk more about that when the time is right.

As you may have read in my previous posts about my chapters in Mastering Windows Server 2008 R2, the original draft edit is just the very start of the process.  There are numerous technical, language, layout and copy edits for each and every chapter.  It’s a lot of work but it’s a great experience.  And I can’t wait for the buzz of seeing my name as the lead author of a book in a book shop.  It was a real buzz when I saw Mastering Windows Server 2008 R2 in Barnes & Noble over in Bellevue, WA back in February.

System Center Essentials 2010

SCE is possibly the least known member of the Microsoft System Center family.  The existing 2007 version is a merger of the core components of Operations Manager 2007 and Configuration Manager 2007.  It is a subset, and it supports fewer servers and desktops.  That’s because it is aimed at small to medium companies.  For example, SCE 2007 manages up to 30 servers.

Microsoft is updating the product.  OpsMgr has seen changes with 2007 R2 and Configuration Manager is undergoing development for an R3 release for this year.  It doesn’t end there.

Microsoft knows that SMEs are quite likely to deploy Hyper-V for virtualisation.  The number of hosts might grow.  I know one small software company that runs two hosts with dozens of VMs.  Developers want new VMs for test and development on a frequent basis.  That sounds like VMM would be handy.  And so SCE 2010 will include functionality from VMM to manage Hyper-V.  Virtualisation typically means there will be more servers.  Therefore SCE 2010 will manage up to 50 servers.

A release candidate (test) version of SCE 2010 is available:

  • Delivers single console monitoring and management with summary information, common tasks, alerts and reports, allowing you to quickly see and manage your IT environment.
  • Provides rapid provisioning, importation, management and live migration of virtual servers.
  • Simplifies complex management tasks like packaging and deploying software, and configuring Microsoft and third-party updates.
  • Helps quickly solve problems using integrated alerting, expert knowledge and troubleshooting for servers, PCs and IT services running in your IT environment.