Reality check: Software Update Process

Taylor_Armstron
Valued Contributor

Curious what others think - we're about to move to "Production" with our JSS implementation, and would appreciate input on the approach we're taking.

We're fairly "patch-happy," in the sense that we patch regularly and have tight requirements/deadlines for addressing vulnerabilities.

We have separate "Test" and "Production" cycles.
The "Test" cycle is fairly automated on the back-end. AutoPKGr checks for updates, packages them, updates the policies as needed, and then feeds them via Self Service. So far, working fairly smoothly. The test machines are scoped via a static group which I manage. Test users (and IT staff) know to check SS and install/test, and I have dashboards to monitor and make sure that the packages are actually installed. Once tested, a change control board votes to move to production.

Production workflow:
First, a smart group for production workstations, where the criteria are basically: Computer Group not a member of "Testing" AND Computer Group not a member of "Servers".
Next, I have a smart group for each managed application, e.g. "Adobe Flash-Production-Smart", where the criteria are basically: Application Title is "Flash" AND Computer Group is a member of "Production Workstation".

This means that only machines WITH the application will get the update.

Next, a computer policy for each application, which caches the package to local workstations. We have users connecting over VPN, some on slow links, so I've opted to cache first, to hopefully speed up the actual installs once initiated.

Next, an EA (extension attribute) which checks to see if there are any cached packages in the Waiting Room directory. A smart group is then populated with machines that have pending cached packages.
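The EA itself is only a few lines, something like this (assuming Jamf's default Waiting Room location; adjust the path if your environment differs):

```bash
#!/bin/bash
# Extension attribute: report whether any cached packages are waiting.
# Assumes Jamf's default Waiting Room path.

waitingRoom="/Library/Application Support/JAMF/Waiting Room"

# Count cached installers (.pkg/.dmg), ignoring the .cache.xml metadata
# files Jamf stores alongside them.
count=$(find "$waitingRoom" -maxdepth 1 \( -name "*.pkg" -o -name "*.dmg" \) 2>/dev/null | wc -l | tr -d ' ')

if [ "${count:-0}" -gt 0 ]; then
    echo "<result>Pending</result>"
else
    echo "<result>None</result>"
fi
```

The "pending cached packages" smart group then just matches on that EA value.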

LAST... a policy scoped to that smart group, which pops up a prompt for the user, giving them a deferral option up to our deadline, then installs all cached packages and runs software update.
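The prompt could be handled several ways; here's a rough sketch using Jamf's bundled jamfHelper binary (the wording and the retry-on-defer approach are illustrative, not exactly what we run):

```bash
#!/bin/bash
# Sketch of an "install cached packages" policy script. jamfHelper's
# path and buttons are standard; the deferral handling is illustrative.

helper="/Library/Application Support/JAMF/bin/jamfHelper.app/Contents/MacOS/jamfHelper"

choice=$("$helper" -windowType utility \
    -title "Software Updates" \
    -description "Updates have been downloaded and are ready to install." \
    -button1 "Install" -button2 "Defer" -defaultButton 1)

# jamfHelper returns 0 for button 1, 2 for button 2.
if [ "$choice" = "0" ]; then
    jamf installAllCached   # install everything in the Waiting Room
    softwareupdate -ia      # then run Apple software updates
else
    exit 1   # deferred: fail the policy so it retries at next check-in
fi
```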

Writing it all down, it sounds a bit more convoluted than it "feels" in my head. In limited testing, so far it works well. It saves me from having to update deadlines on a dozen or more policies on a regular basis, as I only have to change the deadline on the final "install cached packages" policy. My weekly workflow will basically consist of updating the package versions in the production policies and flushing the logs. Users with no pending updates are not interrupted, users only get updates if they have the application to begin with, and once a user chooses to start the update process, it SHOULD complete more quickly, as it will be installing from local storage instead of pulling over the network - which could mean GigE, or could mean a T1.
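(The log flushing can be scripted too. The Classic API has a logflush resource; I'm writing the path and interval string below from memory, so verify them against your own JSS's API docs before relying on this:)

```bash
#!/bin/bash
# Sketch: flush a policy's logs via the Classic API so it re-runs for
# everyone in scope. Server URL, credentials, and policy ID are
# placeholders; the endpoint/interval syntax should be double-checked.

jss="https://jss.example.com:8443"   # placeholder server
policyID=42                          # placeholder policy ID

curl -sku "apiuser:apipass" -X DELETE \
    "${jss}/JSSResource/logflush/policy/id/${policyID}/interval/Zero+Days"
```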

Thoughts? Open to feedback of any sort, just trying to figure out which "Gotchas" I haven't spotted yet.

1 ACCEPTED SOLUTION

Taylor_Armstron
Valued Contributor

Just a followup to my own post in case anyone decides to take a similar approach.

Part of our reason for taking this path was that we have remote offices which are often on slow WAN links (T1, etc.).

Overall, this approach worked well in our initial test, but the one issue I'm seeing: starting both sets of policies (caching + a policy with a deadline to install cached packages) means that the "install" policy, in some cases, ran after the first packages were cached but before the rest were. Thus, some updates were installed, but not all. Flushing the "install cached" policy resulted in the remaining (by then cached) packages being installed.
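One possible guard against that race (just a sketch I'm considering, untested): have the install policy's script bail out if anything in the Waiting Room looks like it's still mid-download. My assumption here is that the .cache.xml metadata file Jamf writes alongside each package only appears once that package has finished caching:

```bash
#!/bin/bash
# Sketch: skip the install pass while caching is still in progress, so
# we never install a partial set. The .cache.xml-as-completion-marker
# heuristic is an assumption, not documented behavior.

waitingRoom="/Library/Application Support/JAMF/Waiting Room"

for pkg in "$waitingRoom"/*.pkg "$waitingRoom"/*.dmg; do
    [ -e "$pkg" ] || continue
    if [ ! -e "${pkg}.cache.xml" ]; then
        echo "Still caching ${pkg}; deferring install."
        exit 1   # non-zero so the policy retries at next check-in
    fi
done

jamf installAllCached
```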

I expect that for now, I'll simply change my own workflow - initiate the cache policies on a Friday, and not flush/set the deadline on the "install" policy until Monday morning. This still won't resolve the issue for machines turned off over the weekend, or laptops taken home, but it should at least cut down on the number of machines that install only the smaller cached packages that downloaded quickly, with larger items like MS Office 14.6 not getting installed until I flush the policy logs and re-run.
