It is rare that Microsoft releases a bad update through Windows Update, but one appeared this week, as Hans Vredevoort posted. How do you avoid the problem of automatically pushing out “bad” updates straight after they are released?
Well, here’s the “solution” I often encounter when I talk to consultants and administrators:
We approve patches manually
Ah! My response to this usually goes along the lines of:
- I grimace
- and respond with:
When you approve patches manually, you don’t patch at all!
One such company hadn’t deployed a Windows update since Windows XP SP2 – and I suspect that the media they used came with SP2 slipstreamed. It was no surprise that Conficker ate them up. And it’s no surprise that Conficker is still in the top 10 of malware found on domain-joined (i.e. administrator-controlled) PCs, while PCs that are managed by their users (workgroup members) don’t have Conficker in their top 10. By the way, Microsoft released the fix for the vulnerability that Conficker exploits (MS08-067) a month before the malware was first detected, back in late 2008, a year before Windows 7’s GA launch.
The fact is that manual patch testing and approval do not happen. There might be a process, but that doesn’t mean it’s used. I bet that if you surveyed 1,000 companies with this process, you’d find that the majority of them don’t follow it and are probably woefully unprotected. Cue the moronic comments that’ll try to excuse the behaviour … I know they’re coming and they only show guilt.
What you need is automation. But doesn’t automated patch approval mean that patches are approved and deployed immediately, bugs and all? Not necessarily.
When I started working with ConfigMgr 2012, I read the guides by Irish (in Sweden) MVP, Niall Brady. I liked his approach to dealing with updates:
- Check for new catalog updates every hour (my preference)
- Allow already approved updates to be superseded automatically
- Delay approval of updates by 7-14 days
- Set a deadline of 7 days
With this approach, updates are approved automatically, but they aren’t made available for 7-14 days. And they won’t become mandatory for another 7 days beyond that. That means updates don’t get forced onto machines for 14-21 days after release, as the sketch below illustrates.
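To make the arithmetic concrete, here’s a minimal Python sketch of that timeline, using the values from the list above (a 14-day approval delay plus a 7-day deadline). The function name, constants and the example date are just for illustration – this isn’t anything ConfigMgr itself exposes:

```python
from datetime import date, timedelta

# Values taken from the rule described above: updates stay hidden for
# 14 days after release, then clients get another 7 days before the
# installation becomes mandatory. Both numbers are adjustable.
APPROVAL_DELAY_DAYS = 14
DEADLINE_DAYS = 7

def patch_timeline(release_date: date) -> dict:
    """Work out when an update becomes available and when it is enforced."""
    available = release_date + timedelta(days=APPROVAL_DELAY_DAYS)
    deadline = available + timedelta(days=DEADLINE_DAYS)
    return {"released": release_date, "available": available, "deadline": deadline}

# Example: an update released on Patch Tuesday, 12 June 2012.
for stage, day in patch_timeline(date(2012, 6, 12)).items():
    print(f"{stage:>9}: {day}")
```

Run against that example, an update released on 12 June becomes available on 26 June and mandatory on 3 July – 21 days end to end.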
For server updates, I’d set a maintenance window on the collection(s) of servers, so that updates can only be installed during those time windows (and not impact SLAs).
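As a rough illustration of what a maintenance window does, this sketch only lets installation proceed when the current time falls inside an agreed weekly window. The Saturday 02:00-06:00 window is an assumed example, not how ConfigMgr stores or evaluates its windows:

```python
from datetime import datetime, time

# An assumed example window: Saturdays, 02:00-06:00. Real maintenance
# windows are configured per collection in ConfigMgr; this just shows
# the gatekeeping idea.
WINDOW_DAY = 5            # Monday = 0 ... Saturday = 5
WINDOW_START = time(2, 0)
WINDOW_END = time(6, 0)

def in_maintenance_window(now: datetime) -> bool:
    """Return True if update installation may proceed at this moment."""
    return (now.weekday() == WINDOW_DAY
            and WINDOW_START <= now.time() < WINDOW_END)

if in_maintenance_window(datetime.now()):
    print("Inside the window: install updates that are past their deadline.")
else:
    print("Outside the window: defer installation until the next window.")
```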
With this approach, you get the best of both worlds:
- You delay the updates, giving other people the “opportunity” to test them for you, and you end up deploying the re-released versions of any “bad” updates (bad updates are superseded by fixed releases)
- The process is automated, so your updates are pushed out without any human intervention. You can always disable the automatic approval rule if the brown smelly stuff looks like it wants to hit the fan.
Remember, you can deploy updates from more than just Microsoft using ConfigMgr (see System Center Updates Publisher). And this is just one of many reasons why I like ConfigMgr in the cloud.
The article has a point, but according to Microsoft you’re toast if you don’t have your critical patches deployed within 3 days. That leaves few options other than automatic deployment of patches. Better to suffer downtime than be owned?
Every MSFT person has a different story. Any organisation that has implemented careful and well-applied change control processes will take its time. I remember one MSFT “expert” who told people to push out updates ASAP and to use virtualisation snapshots as a rollback mechanism … and this was in a packed theatre at TechEd Europe. Of course, he was 100% wrong to say that.
You can be aggressive, but you need at least some sort of minimal test plan. It should also be as automated as possible, so that you’re more likely to actually do it.
This is what we do.
http://www.blkmtn.org/Server_Patch_Schedule