Before You Install System Center … Clean Up Those Computer Accounts

First, I hope you’ve done some planning/architecture/proof of concept.  Next, clean up the environment.  Products that deploy agents, such as System Center Essentials (SCE), Configuration Manager (SCCM/ConfigMgr), and Operations Manager (SCOM/OpsMgr), will allow you to track the success of agent deployment.  And if your network is like most others I’ve encountered over the years, nobody has bothered to clean up the inactive/obsolete computer accounts.  The discovery process will most likely be based on computer accounts found in Active Directory.  It may find computer accounts that have been there since 2000 and are no longer valid.  It may find 50% more computer accounts than actually exist.

Before you deploy agents you need to do some spring cleaning.

Computer Accounts

My favourite tool for this in the past was oldcmp.  Its page doesn’t list Windows Server 2008 or 2008 R2, but I last used it with Windows Server 2008 in a lab and it worked fine.  It allows you to work with user and computer accounts:

  • Report only
  • Disable
  • Move and disable (to a “disabled” OU)
  • Delete

The last time I was an admin of a large environment I was very fussy about inactive accounts.  We used to run oldcmp as a scheduled task on a monthly basis.

If you want something that is supported, then try this.  Identify and disable computer accounts that have been inactive for the last 4 weeks:

dsquery computer -inactive 4 | dsmod computer -disabled yes

Then you can identify and delete computer accounts that have been inactive for the last 8 weeks:

dsquery computer -inactive 8 | dsrm

Put that in a script and run it every month and you’ll automate the cleanup nicely.  Machines that have been inactive for the last 4 weeks will be disabled, and you can re-enable them if a user complains.  After 8 weeks, they get removed completely.  If you have people away for longer periods then you can extend this, e.g. disable after 26 weeks and delete after 52 weeks.  Or you might combine that caution about deleting with a secure mindset, e.g. disable after 4 weeks but don’t delete until 52 weeks.

Note: dsquery, dsmod, and dsrm can easily be used for lots more, e.g. user accounts.  Check the help (at a command prompt) and test-test-test before putting them into use.  You can probably do all of this with PowerShell and the useful -WhatIf flag.
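If you want to try the PowerShell route, here’s a minimal sketch using the Active Directory module (included with Windows Server 2008 R2 and the RSAT tools).  The 4 and 8 week thresholds are just the ones from above, and the -WhatIf switches mean nothing is actually changed until you remove them:

# Assumes the Active Directory PowerShell module is available
Import-Module ActiveDirectory

# Disable computer accounts that have been inactive for 4 weeks
Search-ADAccount -AccountInactive -TimeSpan 28.00:00:00 -ComputersOnly |
    Disable-ADAccount -WhatIf

# Delete computer accounts that have been inactive for 8 weeks
Search-ADAccount -AccountInactive -TimeSpan 56.00:00:00 -ComputersOnly |
    ForEach-Object { Remove-ADComputer -Identity $_.DistinguishedName -WhatIf }

Once you’re happy with what it reports, drop the -WhatIf switches and it becomes an easy candidate for a monthly scheduled task.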

DNS Records

I hate stale DNS records because they can lead to all sorts of false positives when there is IP address re-use, especially when trying to remotely manage/connect to PCs in a DHCP environment.  You can configure DNS scavenging of stale records on the DNS server (for all zones) or on a per-zone basis.


Be careful with this one.  I’ve been especially careful with the intervals since the Windows Server 2003 days, when I had a Premier support call open.  Scavenging didn’t like me using smaller intervals, even when they were correctly configured.
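If you prefer to script this, here’s a minimal sketch using the DnsServer PowerShell module (note that this module only shipped with Windows Server 2012 and later RSAT, so it postdates the era I’m describing; on older servers use the DNS console or dnscmd.exe).  The zone name and the 7-day intervals are only examples:

# Assumes the DnsServer module is available (Windows Server 2012+ / newer RSAT)
Import-Module DnsServer

# Server-wide defaults: scavenge weekly, with 7-day no-refresh and refresh windows
Set-DnsServerScavenging -ScavengingState $true -ScavengingInterval 7.00:00:00 `
    -NoRefreshInterval 7.00:00:00 -RefreshInterval 7.00:00:00 -ApplyOnAllZones

# Or enable aging on a single zone (the zone name is just an example)
Set-DnsServerZoneAging -Name "contoso.com" -Aging $true `
    -NoRefreshInterval 7.00:00:00 -RefreshInterval 7.00:00:00

As that Premier call taught me, err on the side of longer intervals rather than shorter ones.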

Once you have the environment cleaned up, you can start deploying agents.  Now when you see a “failed” message, you know you can take it seriously and schedule a human visit.

Note: I don’t think I’ve ever used ConfigMgr to build collections of users.  Users roam and I don’t want to install software needlessly.  But ConfigMgr 2012 will have a more reliable user-centric approach that detects a user’s primary PC.  Therefore, you’ll want to do a user cleanup before deploying it … and that should be standard security practice anyway.

Sample Chapter: Mastering Windows 7 Deployment

Last year was pretty busy.  Not only did I write Mastering Hyper-V Deployment (with MVP Patrick Lownds helping), but that project was sandwiched by me writing a number of chapters for Mastering Windows 7 Deployment.  That Windows 7 book is due out sometime this month.

If you browse to the Sybex website you can get a sneak peek at what the book is like.  There is a sample excerpt from the book, along with the TOC.

The book aims to cover all the essential steps in a Windows 7 deployment … from the assessment, to solving application compatibility issues, understanding the WAIK (and digging deeper), learning about WDS for the first time (and digging deeper), more of the same for MDT, and even doing zero touch deployments using Configuration Manager 2007.  A good team of people from all over the place contributed to the book … and the tech reviewers were some of the biggest names around (I wet myself with fear when I saw who they were).

Give it a look, and don’t be shy of placing an order if you like what you see 🙂

Mastering Hyper-V Deployment Excerpts

Sybex, the publisher of Mastering Hyper-V Deployment, have posted some excerpts from the book.  One of them is from Chapter 1, written by the excellent Patrick Lownds (Virtual Machine MVP from the UK).  As you’ll see from the table of contents, this book is laid out kind of like a Hyper-V project plan, going from the proposal (Chapter 1), all the way through steps like assessment, Hyper-V deployment, System Center deployment, and so on:

Part I: Overview.

  • Chapter 1: Proposing Virtualization: How to propose Hyper-V and virtualisation to your boss or customer.
  • Chapter 2: The Architecture of Hyper-V: Understand how Hyper-V works, including Dynamic Memory (SP1 beta).

Part II: Planning.

  • Chapter 3: The Project Plan: This is a project with lots of change and it needs a plan.
  • Chapter 4: Assessing the Existing Infrastructure: You need to understand what you are converting into virtual machines.
  • Chapter 5: Planning the Hardware Deployment: Size the infrastructure, license it, and purchase it.

Part III: Deploying Core Virtualization Technologies.

  • Chapter 6: Deploying Hyper-V: Install Hyper-V.
  • Chapter 7: Virtual Machine Manager 2008 R2: Get VMM running, stock your library, enable self-service provisioning.  Manage VMware and Virtual Server 2005 R2 SP1.
  • Chapter 8: Virtualization Scenarios: How to design virtual machines for various roles and scales in a supported manner.

Part IV: Advanced Management.

  • Chapter 9: Operations Manager 2007 R2: Get PRO configured, make use of it, alerting and reporting.
  • Chapter 10: Data Protection Manager 2010: Back up your infrastructure in exciting new ways.
  • Chapter 11: System Center Essentials 2010: More than just SCE: Hyper-V, SBS 2008 and SCE 2010 for small and medium businesses.

Part V: Additional Operations.

  • Chapter 12: Security: Patching, antivirus and where to put your Hyper-V hosts on the network.
  • Chapter 13: Business Continuity: A perk of virtualisation – replicate virtual machines instead of data for more reliable DR.

Passed the 70-401 Exam

I passed the 70-401 (System Center Configuration Manager 2007, Configuring) exam this morning.  I found most of the questions to be pretty simple.  My advice: know the logs on the client, know the site roles (points), pay attention to software update deployment, and you are sorted.  Also pay attention to the various pieces of work you do to prep an environment for installing ConfigMgr.  I was surprised to see how the OS deployment questions seemed to only look at PXE and DHCP.  There were a couple of questions that I marked for review.  It was merely a matter of working out which answers were clearly wrong, leaving the right answers to tick.

That’s 3 exams in just over a week.  I’ll probably have a lash at 70-659 next, before focusing on upgrading my MCSE to an MCITP.

Passed 70-635 Exam

I sat and passed the 70-635 (MDT 2008) exam today.  I know it’s old, but it’s required for some MS partner stuff and a more modern replacement hasn’t been announced.  The exam was particularly easy considering that I had done work with Vista, WAIK (Vista and Windows 7), WinPE, MDT 2010, WDS (2003 SP2, 2008, 2008 R2), and ConfigMgr 2007.  It also goes into some Office 2007 deployment stuff, which is easy enough, and some SMS 2003 stuff.  The answers to the SMS questions centred around SP3 and the OSD feature pack, with everything else being similar to ConfigMgr.

What I did not like was how some of the questions were written as trick questions rather than as tests of knowledge or experience.  That’s quite unfair.  I didn’t bother commenting on the questions; I have my doubts about the comments being used and I had places to be and things to do.

Next up (once the Prometric site lets me book an exam from my voucher) is 70-401: System Center Configuration Manager, Configuring.

So … What Exactly Am I Writing?

You can tell I’m pretty busy because my usual high rate of blogging has dropped significantly in the last month.  Apologies for that.  The blogging has become writing.  I am involved in 2 book projects.  I’ve just seen on Twitter that details of one of those have gone public.  I actually saw the tweet seconds after I sent off a chapter I had just finished.

Earlier this year I proposed an idea for a Windows Server 2008 R2 virtualization book to Wiley Publishing/Sybex.  It took quite a bit of work to tune the proposal.  It requires an understanding of the subject matter, the audience, and ideas on how it can be marketed.  You might think that a brief overview of the subject matter would be enough.  But no, the publisher needs much more detail.  You pretty much have to provide a detailed project plan for every heading (3 levels deep), with page estimates and time estimates.  The proposal evolved over the weeks and eventually went through a couple of reviews.  I then got the news: an ISBN was assigned and contracts were on the way – I was going to be a lead author on my own book for the very first time!!!!  I did get drunk that night – I think.

The deadlines are very tight.  I was considering seeking help.  My contact in Sybex advised that I outsource some of the chapters to a co-author.  I knew the person I wanted to bring in.  Wilbour Craddock is a technical specialist in the partner team with Microsoft Ireland.  Will (Irish folks will know him as the crazy Canadian who is always wearing shorts) is also a former SBS MVP.  His job has him spending a lot of time working with Hyper-V and Microsoft System Center, making him a perfect co-author to work with on this project.  Thankfully, Will agreed to hop on board the crazy train of book writing.

Another MVP (I won’t say who yet because I don’t have permission to name him) is the technical editor under the employment of Sybex.  He’s an ace at this stuff and will make sure everything we do is up to scratch.

The book is called Mastering Hyper-V Deployment.  I won’t go into the details of it yet.  But you can bet that it is based on our collective experience and knowledge of the product set involved in a Hyper-V deployment.  I saw a gap in the market and figured I could probably write the book (or a good chunk of it) to fill it.  The estimated release is November 19th of this year.  That means we need to finish writing in July.  It has started to appear on some sites for pre-order.

I’m two chapters in at the moment.  I’m really pushing my hardware at home to its limits and am “this close” to buying more.  Will is ahead of schedule and has one chapter nearly done.

I am also working on another book project as a co-author for a friend’s book.  It’s another on-subject book that is turning out to be a good experience.  I’ve one chapter done on that and am 50% through the other.  I’ll talk more about that when the time is right.

As you may have read in my previous posts about my chapters in Mastering Windows Server 2008 R2, the original draft is just the very start of the process.  There are numerous technical, language, layout and copy edits for each and every chapter.  It’s a lot of work but it’s a great experience.  And I can’t wait for the buzz of seeing my name as the lead author of a book in a book shop.  I had to really try to contain myself when I saw Mastering Windows Server 2008 R2 in Barnes & Noble over in Bellevue, WA back in February.

Managing SharePoint 2010 using System Center

I’ve tuned into a webcast aimed at the System Center Influencers and I’m going to try to blog from it live.  Microsoft’s line is that System Center is the way to manage SharePoint because Microsoft understands the requirements.

SharePoint often starts as some ad-hoc solution but grows from there to become mission critical and to contain urgent business data.  Administration is complex, involving users, file server admins, web admins, database admins and web developers.

System Center Improves Availability:

  • DPM backs it up the way it should be.
  • Operations Manager monitors health and performance.
  • Virtualisation (VMM managed) can allow for rapid deployment with minimal footprint.

Administration

  • Configuration Manager automates management
  • Service Desk will add more benefits

Centralised Management

This is the norm for System Center.  Centralised management with delegation is how System Center works.  For example, a SharePoint administrator could deploy a front end server in minutes using the VMM 2008 R2 self service portal.  A quota will control sprawl, and the network administrators don’t need to be as involved.

OpsMgr Management Pack

  • There is a new monitoring architecture.  There are physical and logical components where the physical entity rolls up to a logical entity.
  • Monitoring is integrated into SharePoint so the SharePoint admins can see the health in SharePoint
  • There will be a unified management pack instead of the current 2007 split management packs.  The discovery process will identify the roles installed on an agent machine and only utilise the required components.

We’re shown an OpsMgr diagram that shows the architecture of a SharePoint deployment.  If you haven’t seen these, they are hierarchical diagrams that give you a visualisation of some system, e.g. HP Blade farm, Hyper-V cluster, SharePoint farm.

The 2010 management pack allows you to monitor a particular web application in SharePoint 2010.  The management pack is more aware of what components are deployed where and the interdependencies – sorry I’m not a SharePoint guru so I’m missing some of the terminology here.

Rules administration has been simplified.  There is a view in the Monitoring pane to view the health of all rules for the SharePoint 2010 management pack.  I like this.  I’ve not seen it in any other management pack.  The SQL guys should have coffee with the SharePoint folks 🙂

There are 300% more discoveries, 1293% more classes, and 300% more monitors than in 2007.  That is a huge increase in automated knowledge being built into OpsMgr to look after SharePoint 2010.  There are 45% fewer rules.  This is a good thing because duplicated effort with the IIS and SQL management packs is being reduced, which cuts down on noise.  Microsoft assumes you’ll install those other management packs.  Approximately 150 TechNet articles are linked in the pack to guide you to fixing certain detected issues.

Data Protection Manager 2010

DPM 2010 is due out around April 2010.  It is important to Hyper-V admins because it adds support for CSV.  DPM allows you to back up to disk and then optionally stream to tape.  You can also replicate one DPM server to another for disaster recovery.

SharePoint 2003 and WSS 2.0 are backed up basically as SQL databases.  You need the native SharePoint tools to complete the backup.

SharePoint 2007 and WSS 3.0 are backed up using a SharePoint VSS writer.  Every server (web/content/config/index) gets an agent.  DPM reaches out to “the farm” and can back up everything required.

DPM is designed to know what to back up; 3rd party solutions are generic and don’t have that awareness.  For example, a new server in the farm will be detected, and the DPM administrator just needs to authorise the addition.

DPM 2010 does something similar with SharePoint 2010.  However, it is completely automated, allowing your delegated VMM administrators or Configuration Manager administrators (SharePoint administrators) to deploy VMs or physical machines.

One of the cool things about DPM is that it doesn’t have specialised agents.  It’s using VSS writers.  That means there is 1 agent for all types of protected servers.

We get a demo now and we see that the DPM administrator can just select “the farm” and back it up.  There’s no selecting of components or roles; the speaker only sets up his destination and retention policies.

DPM 2007 is noisy, e.g. data consistency checks; I’ve seen this when I did some lab work.  The job wizard allows you to perform a heal/check when a problem is found, on a scheduled basis, or not at all.  This is a self-healing feature.

Recoveries can be done at the farm level or for an individual content (SQL) database.  SharePoint 2007 can restore a site collection, a site or a document, but this requires a recovery farm, i.e. a server, consuming resources and increasing costs.  SharePoint 2010 with DPM 2010 does not require a recovery farm.  You can recover an item directly into the production farm.  Trust me, that’s huge.

The release candidate for DPM 2010 comes out next week.

Virtualisation

  • Web role, Render Content: Virtualisation ideal
  • Query role, Process Search Queries: Virtualisation ideal
  • Application role, Excel Forms Services: Virtualisation ideal
  • Index role, Crawl Index: Consider virtualisation – a small amount of crawling, and drive space is used to store the index (VHD = maximum 2 TB, although you can go to pass-through disks for more).
  • Database role: Consider virtualisation – OK for smaller farms.

My Take

My advice on top of this: monitor everything using VMM and Operations Manager.  You’ll soon see whether something is a candidate for virtualisation or whether a VM needs to be migrated back to physical.

If you run everything on a Hyper-V 2008 R2 cluster then enable PRO in VMM.  Any performance issue can then trigger an automatic Live Migration (if you allow it) to avoid bottlenecks.

If you are going physical for the production environment then consider virtual for the DR site if reduced capacity is OK.  For example, your production site is backed up with DPM.  You keep a Hyper-V farm in the DR site.  Your DPM server replicates to a DR site DPM server.  During a DR you can do a restoration.  Will it work?  Who knows :)  It’s something you can test pretty cheaply with Hyper-V Server 2008 R2.  Money is tight everywhere and this might be an option.

ACT for Configuration Manager

Those Configuration Manager teams in Redmond must be incredibly busy and well managed.  They have two product developments going on (ConfigMgr 2007 R3 and ConfigMgr v.Next) as well as producing add-ons for existing products.

The latest is the Application Compatibility Toolkit for Configuration Manager as blogged about by Jeff Wettlaufer.  The concept is simple enough; using ConfigMgr you can audit your existing desktops to see which applications you have.  You can use this information to assess Windows 7 compatibility.  It will also do the same for device drivers.  This reads like MAP for Windows 7 taking on the power and scalability of ConfigMgr.  MAP would be fine in a single office.  ConfigMgr takes this to the WAN.

That’s another string to the bow for Windows 7 deployment in the enterprise.

Microsoft Assessment and Planning 5.0 CTP

A community technical preview (i.e. a pre-beta, probably buggy) release of MAP version 5.0 has been released on Connect by Microsoft.  MAP is a free set of tools and guidance on how to prepare for a set of technologies, e.g. Windows 7, Windows Server 2008 R2, Hyper-V, etc.  Version 5 adds:

  • Heterogeneous Server Environment Inventory for Technologies including Windows Server, Linux, UNIX and VMware
  • Ability to determine usage of deployed System Center Configuration Manager, a member of the Core Client Access License (Core CAL) Suite.
  • Office 2010 Readiness Assessment.