Best Practice for re-testing when updating an existing Policy

cory_barnett
New Contributor

I am just wondering what everyone's best practice is for rolling out updates to an existing policy. For example, we have Privilege Management, whose packages need to be updated every time a new version comes out. We want to roll each one out without having to create a new policy each time, but I don't see any other option. I have a sandbox, but it is only tied to a couple of test machines. It would be nice if you could push a policy from the sandbox to the production site somehow, then just update the old policy in production when ready.

How does everyone handle things like that? I could just scope the policy back to specific computers, but then any new computer wouldn't automatically get the policy.

Any suggestions would be much appreciated.


frootion
New Contributor III

If you change anything inside the policy, the logs stay the same, so the changes will not be pushed to machines that have already run it.

In these (rare) cases we use the "Flush All" button inside the policy logs.

Would this be an option for you?

tlarkin
Honored Contributor

In my opinion there is no such thing as a "best practice," as the term implies there are only one or a very few ways to do something the best possible way. I prefer looking at it from a perspective of "better practices" instead. Every Org will have different needs and requirements, so there is really no one-size-fits-all best practice. Ok, I am done being pedantic.

So, if your goal is to automate policy changes, I think there are some key high-level takeaways that help you accomplish this goal:

  • Collect data so you can audit and improve things moving forward
  • Design systems and workflows that are modular and easy to maintain
  • Use scope to test and roll out
  • Everything should be in some form of version control, or at least as much as you can put into version control
  • Create automation pipelines that can support your workflows

If you cannot automate it, then you should look at redesigning it. This may not be possible in some cases; for example, a piece of software you need to deploy may be behind an auth wall with MFA or a captcha. Automating that may be difficult since those systems require human interaction, but you should still try to automate as much as possible.

When it comes to system states or software states, I like to break things down into a set of static policies that handle scoping, reporting, etc., and just call a main policy to do the work. This way you can always use the APIs to POST/PUT to the main policies and change the payload, while your static code/policies stay in place and do the logic.
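For example, here is a minimal sketch of pointing a main policy at a new package through the Classic API. The URL, credentials, policy ID, and package details are all placeholder values, and the XML is trimmed to just the package payload:

#!/bin/zsh

# sketch: swap the package on an existing "main" policy via the Classic API
# the JSS URL, credentials, policy ID, and package id/name are placeholders
jss_url="https://jss.example.com"
policy_id="123"

policy_xml='<policy>
  <package_configuration>
    <packages>
      <package>
        <id>456</id>
        <name>PrivilegeManagement-2.3.pkg</name>
        <action>Install</action>
      </package>
    </packages>
  </package_configuration>
</policy>'

# PUT the new package configuration onto the existing policy
curl -su "apiuser:apipass" \
  -H "Content-Type: text/xml" \
  -X PUT "${jss_url}/JSSResource/policies/id/${policy_id}" \
  -d "${policy_xml}"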

In some cases I don't even use jamf for logic; I just use a dash of code, the local state, and a script that I execute on all endpoints. Since jamf is based solely on server-side system state data, it is often subject to data drift. If you inventory once a day, that means a client can go through 23+ hours of data drift before the next inventory. So, I have used both zsh with the builtin is-at-least and Python's LooseVersion to do simple things like check whether the local version of an app is the desired target version and, if not, install it via the main policy. This way I can run a simple script against my entire fleet and not have to worry about the tech debt of smart groups or the margin of error of manual labor.

example:

#!/bin/zsh

# uncomment set -x for debug output
#set -x

# script parameters passed in from the jamf policy
app_path="${4}"
app_vers="${5}"
policy="${6}"

# test if the app exists; if not, install now and exit
if [[ ! -e "${app_path}" ]]; then
    echo "app does not exist, installing now..."
    /usr/local/bin/jamf policy -event "${policy}"
    exit 0
fi

# get the app's current version on the client via Spotlight
get_app_vers=$(mdls "${app_path}" -name kMDItemVersion -raw)

# use is-at-least to validate we are running the minimum version
autoload is-at-least
if is-at-least "${app_vers}" "${get_app_vers}"; then
    echo "required version detected, exiting..."
    exit 0
else
    echo "client is running an older version"
    /usr/local/bin/jamf policy -event "${policy}"
fi

This is a method where you create a handler script to just do the thing and not fuss with jamf server-side controls. Here is a Python example:

#!/opt/bin/python3

import subprocess
import sys
from distutils.version import LooseVersion

app_path = sys.argv[1]
deploy_vers = sys.argv[2]

def get_local_app_version():
    # read the app's version from Spotlight metadata
    cmd = ['mdls', '-raw', '-name', 'kMDItemVersion', app_path]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        print("error occurred")
        sys.exit(1)
    # mdls returns bytes, so decode and strip the trailing newline
    return out.decode('utf-8').strip()

def compare_versions(current_vers):
    # True when the client is already at (or past) the deployed version
    return LooseVersion(current_vers) >= LooseVersion(deploy_vers)


current_vers = get_local_app_version()

if compare_versions(current_vers):
    print("found desired or newer version")
This is not 100% functional code, as I removed some of my Org-specific stuff, but these are high-level generic examples one could bend to their own uses. The idea is to create a script that runs daily, or on demand (as in flush the logs and run once), to do the specific thing you want. The code is then executed against the entire fleet and does not rely on smart groups or jamf inventory data, since you are now using local state data from the device itself.
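For the on-demand case, flushing the policy logs can also be scripted. Here is a sketch against the Classic API's logflush endpoint; the URL, credentials, and policy ID are placeholders:

#!/bin/zsh

# sketch: flush the logs for one policy so scoped machines run it again
# jss_url, credentials, and the policy ID are placeholders
jss_url="https://jss.example.com"
policy_id="123"

curl -su "apiuser:apipass" \
  -X DELETE "${jss_url}/JSSResource/logflush/policy/id/${policy_id}/interval/Zero+Days"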

For deployment methodology we have 3 git branches on most things. We have the following:

  • dev branch (local dev version, check out a branch, do the work, submit PR)
  • stage branch (same as dev but only accepted from approved PR, and gets deployed to our test/stage group of devices/users)
  • prod branch (goes to all devices across the fleet)

Also, note that no one can commit/push to the stage or prod branches; we can only merge from the other branch via branch promotion. This seems like a lot of work, and up front it sorta is, as you would need to adopt version control as a part of your culture and workflows. But even if you are copy/pasting these into jamf, that is a really good start on rolling out changes and mitigating the x factor of failures you could not duplicate or catch in your stage environment.

This may or may not work for every Org; like I posted previously, a lot of it just depends on whether it fits for you. This is also why I do not like the term "best practice," because even our practices are probably not "best" for other Orgs.

The reason I chose to share software version checks specifically is that jamf pro doesn't do a great job of version string comparison. Out of the box it can only check whether a version string is, is not, or is like a specific value. This means you can run into scenarios where, say, an end user auto-updates Chrome or Firefox, but your smart group has not been updated yet to reflect that new version. Your automation would then downgrade the user to the version the smart group wants, and once your automation kicks in to install the new version, it would just reinstall the version the end user had already updated to on their own. So, in this specific example, this is why I chose to use local state data and not jamf data. There are of course many ways to accomplish this and roll out your changes to a test or stage group before shipping to all devices, and you can have these scripts execute on a regular basis since they will just exit if the criteria is met.
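To make the comparison pitfall concrete, here is a tiny illustration with made-up version numbers: a plain string comparison mishandles multi-digit components, while is-at-least parses them numerically.

#!/bin/zsh

# naive string comparison thinks "10.9" is newer than "10.10"
if [[ "10.9" > "10.10" ]]; then
    echo "string compare: 10.9 looks newer (wrong)"
fi

# is-at-least compares version components numerically
autoload is-at-least
if is-at-least "10.9" "10.10"; then
    echo "is-at-least: 10.10 meets the 10.9 minimum (correct)"
fi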

sdagley
Esteemed Contributor II

@tlarkin A quibble with your complaint that Smart Groups only handle like or is not for version testing. You can also use a regex, and the scripts at https://github.com/moorereason/make_ge_version_regex/ or Match Version Number or Higher.bash make it easy to create a regex that matches a specific version number or higher. That makes checking for a minimum version via Smart Group as easy as your zsh and Python scripts.

tlarkin
Honored Contributor

@sdagley Sure, but regex is really not a good solution, as it is fragile. While I totally think regex has a place, there are already existing tools for version string comparison that are not as fragile. This also does not solve my server-side dependency and its data drift.
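To illustrate the fragility point, here is a made-up, hand-rolled regex (not output from the generator scripts mentioned above) that is meant to match version 5.3 or higher. It quietly stops matching outside the range it was written for:

#!/bin/zsh

# hypothetical hand-rolled "5.3 or higher" regex: covers 5.3-5.9 and 5.10-5.19
version_regex='^5\.(1[0-9]|[3-9])(\..*)?$'

for v in 5.2 5.3 5.10 5.20 6.0; do
    if [[ "${v}" =~ ${version_regex} ]]; then
        echo "${v}: match"
    else
        echo "${v}: no match"   # 5.20 and 6.0 fail even though they are higher
    fi
done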

This is why I do not like the term "best practice": it may be a "better practice" for some Orgs to use regex in smart groups, but for myself and my Org we will probably never use regex for anything in smart groups. We have plenty of reasons not to, and it doesn't really address the issue of actual version string parsing. Also, zsh has a builtin that just works (is-at-least) and Python has LooseVersion, so instead of maintaining hundreds of lines of regex you can just use a built-in feature.

My code is simple and pretty straightforward. To me this is a "better practice" because I can shift this stuff left and it can be one of the things the admins own, so engineering doesn't have to maintain it. However, each Org will have their own ways that are better for them, and those won't always transfer from Org to Org. So, maybe what I am doing is not great for others.

sdagley
Esteemed Contributor II

@tlarkin The quibble isn't with what's best practice, but with the claim that a Smart Group isn't capable of doing the same kind of version comparison as zsh's is-at-least or Python's LooseVersion. Utilizing a regex definitely makes that possible, and using a regex generated by the aforementioned scripts makes it easy and not fragile.

tlarkin
Honored Contributor

@sdagley Sorry, I wasn't saying you can't do regex; to me it just doesn't solve the problem of version string parsing. This is my opinion. Things like regex are hard to maintain without unit tests and proper testing whenever you make a regex change, and jamf is also not great at that, unless you want to clone objects in prod.

Then again, like I mentioned before, it is totally okay to do what you think is best for your Org. Your Org hired you to be the expert; I am simply stating my opinions, which may not be the best practices for other Orgs.

gabester
Contributor III

@sdagley It sure would be nice if Jamf improved the version testing piece so that the comparison happened without needing to input a complex regex, and if every vendor adhered to something like https://semver.org. I gotta agree with Tom on regex being fragile and not easy for a human to parse when you need to perform an update in a hurry (e.g. when you need to patch a Chrome zero day). Even ITIL went from calling things "best practice" to "good practice" a while back to acknowledge the variability of requirements between orgs.

@tlarkin I personally like the idea of getting my org on board with version control, but are you saying you have automation in place to promote from dev->stage->prod from your version control to your JSS? I presume that's leveraging the API... I'd love to know more about how that works, and how you enforce the process so that someone doesn't go rogue, or so you don't end up in a situation where you need to do something out of band and don't have the luxury of following the rigor of your process. Props to you; it speaks to organizational maturity!

Another practice for things you change semi-regularly would be to wrap them in start/end availability and repeat scheduling: if you update your privilege management monthly, you could set the policy to run once a month on each client, have it start on the 1st and end on the 30th, and just iterate the month every time you replace the package; the policy should then redeploy to the clients.
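A sketch of that last idea via the Classic API, bumping an existing policy's availability window forward for the next cycle; the URL, credentials, policy ID, and dates are placeholders:

#!/bin/zsh

# sketch: move a policy's start/end availability window to the next month
jss_url="https://jss.example.com"
policy_id="123"

policy_xml='<policy>
  <general>
    <date_time_limitations>
      <activation_date>2023-08-01 00:00:00</activation_date>
      <expiration_date>2023-08-30 23:59:59</expiration_date>
    </date_time_limitations>
  </general>
</policy>'

curl -su "apiuser:apipass" \
  -H "Content-Type: text/xml" \
  -X PUT "${jss_url}/JSSResource/policies/id/${policy_id}" \
  -d "${policy_xml}"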

tlarkin
Honored Contributor

@gabester It is 3 separate branches (dev, stage, and prod), and we use GitHub Actions with remote git runners on hosts in AWS. The workflow is somewhat like this (we are still changing things where we need to, so we can streamline):

1 - engineer checks out a branch off the dev branch

2 - makes their changes and tests locally

3 - submits a PR when they want it tested

4 - another engineer swaps to the branch and validates the changes

5 - if all changes test good and the checks and balances work out, we merge it into the main dev branch and it gets a final smoke test

6 - then we use branch-to-branch promotion and the dev branch gets merged to stage

7 - stage is tied to our pilot testing group; we let it bake for a few days and collect data

8 - if no problems are found, we merge to the prod branch

 

Things to note: no one can branch off stage or prod, and no one can commit directly to those branches; they only accept branch promotion. This is still a work in progress, so we may change it. We also are not doing this for jamf just yet; we are doing it for Munki/gorilla and AutoPkg first. Eventually we will get it going with jamf for EAs and scripts.
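For anyone unfamiliar with the pattern, the promotion steps above roughly translate to plain git commands like these (branch names match the ones described; the protection rules themselves live in the repo host's settings, not in git):

#!/bin/zsh

# engineer work happens on a feature branch cut from dev
git checkout dev
git checkout -b my-feature
# ...make changes, commit, push, open a PR against dev...

# after review and a final smoke test, dev is promoted to stage
git checkout stage
git merge --no-ff dev
git push origin stage

# after the pilot group bakes with no problems, stage is promoted to prod
git checkout prod
git merge --no-ff stage
git push origin prod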

awoodbury
Contributor

Hi Cory,

Just saw this post. I made a Package Test Policy that is scoped only to my test machine(s). I swap out the last tested package for the new package I'm testing, then flush the log and run sudo jamf policy in Terminal on the test machine to rerun the Test Policy with the new package.

 

Once I verify that the new package works correctly from the Test Policy, I swap the old package out for the new one in the Production Policy, then Flush All Logs. Rinse and repeat.
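On the test machine that rerun can be a single command. If the test policy uses a custom trigger, something like this works ("package-test" is a placeholder trigger name):

#!/bin/zsh

# re-run the test policy immediately instead of waiting for the next check-in
sudo /usr/local/bin/jamf policy -event "package-test"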

 

Aaron