Extension to record execution time of Recon

Jason
Contributor II

I'm not sure how easy/possible this one is, but I'm curious to know how long inventory updates are taking on each machine. It'd be nice to have an extension attribute that could put in a line like "Inventory Run Time: 76 seconds" for each system. Why? The benefit is that I could then make a smart group with an alert set, to see if any systems start taking longer than normal. I'm not seeing any issues today, but there were a couple of times in the past when my inventory took 10+ minutes to run for some reason (never figured out why, but it's working again).

1 REPLY

mm2270
Legendary Contributor III

Well, this is an odd request. There may be a way, but honestly, there are some inherent pitfalls with almost any approach I can think of.
The primary pitfall is that, since a recon captures Extension Attributes at the beginning of the process, if you had something placing the recon time into a local file, that value would not get picked up by any Extension Attribute until the *next* recon, at which point it could be 24+ hours old. I don't know how useful that would be.
Also, while I can think of some ways to do this for the normal built-in inventory check-in in the JSS, making it also happen with any policies that collect inventory would be pretty hard. Not impossible, but it would be a lot of work.

But to make my post more useful, consider the following command, which you can actually copy/paste into Terminal and run:

date "+%s" | sudo tee /private/var/recontime > /dev/null; sudo jamf recon; date "+%s" | sudo tee -a /private/var/recontime > /dev/null; echo "Recon time: $(expr $(tail -1 /private/var/recontime) - $(head -1 /private/var/recontime)) seconds"

Let's break that down:
1 - First, write the date in seconds (unix time) into a local file at /private/var/recontime. The file gets created if it's not already there. (Note the use of sudo tee here rather than a plain sudo echo with a redirect - a redirect would run in your own shell, not as root, and would fail to write into /private/var.)
2 - Perform a full recon as usual.
3 - Write a new date-in-seconds value into the "recontime" file, but this time append it rather than overwriting the file.
4 - Finally, grab the last line (end of recon) and first line (start of recon) from the "recontime" file, subtract one from the other to get the difference in seconds, and echo it back.
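The same four steps can also be wrapped into a small function, which makes it easy to dry-run with a harmless command before pointing it at a real recon. The function name and the optional second argument are my additions, not part of the one-liner above:

```shell
#!/bin/sh
# Sketch of the four steps above as a reusable function.
# time_recon CMD [TIMEFILE] runs CMD and prints how long it took.
time_recon() {
    timefile="${2:-/private/var/recontime}"  # timestamp file (step 1's target)
    date "+%s" > "$timefile"                 # step 1: start time, overwrite
    $1                                       # step 2: the recon itself
    date "+%s" >> "$timefile"                # step 3: end time, append
    # step 4: last line minus first line = elapsed seconds
    echo "Recon time: $(( $(tail -1 "$timefile") - $(head -1 "$timefile") )) seconds"
}

# Real use (as root): time_recon "jamf recon"
# Dry run:            time_recon "sleep 2" /tmp/recontime
```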

The result could look something like this:

Retrieving inventory preferences from https://yourjss.company.com:8443/...
Finding extension attributes...
Locating hard drive information...
Locating hardware information (Mac OS X 10.8.5)...
Locating accounts...
Locating applications...
Locating package receipts...
Gathering application usage information...
Locating printers...
Submitting data to https://yourjss.company.com:8443/...
<computer_id>100</computer_id>
**Recon time: 59 seconds**

I bolded the last line, which is what the commands above add. The time printed would of course be approximate, give or take a second, but it should be close.

But while this works for seeing the time live in Terminal, you'd still need to get that value back into an EA. That's not hard, but the first issue I pointed out still stands: since EAs get captured first, you'd be writing values into a file that won't get picked up until the next round.
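For completeness, the EA side of that would be simple: just read back the two timestamps left behind by the previous wrapped recon. A minimal sketch - the file path matches the command above, the function name and "Not Available" fallback are my assumptions:

```shell
#!/bin/sh
# Extension Attribute sketch: reports the duration of the *previous*
# recon from the timestamps in /private/var/recontime. Written as a
# function for clarity; an actual EA script would just run the body.
recon_time_ea() {
    timefile="${1:-/private/var/recontime}"
    if [ -f "$timefile" ]; then
        # last line = end time, first line = start time
        echo "<result>$(( $(tail -1 "$timefile") - $(head -1 "$timefile") )) seconds</result>"
    else
        echo "<result>Not Available</result>"
    fi
}
```

Remember the caveat above: whatever this reports is always one recon behind.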

The ONLY way I can think of to get around that would be to use the Casper Suite API. For example: have a Before script run that echoes an initial time value into a local file, overwriting the file to remove any previous data.
Then have an After script that appends a new time value into the same file, calculates the diff between them, places the value into a new xml file, and uploads that xml with an API PUT command to the computer record, updating its record with the new EA value.
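A rough sketch of that After script follows. The JSS URL, the API credentials, and the EA name "Inventory Run Time" are all placeholders you'd swap for your own; the xml shape is the classic JSSResource computer-record format:

```shell
#!/bin/sh
# Sketch of the "After" script idea above. Assumes the Before script
# already wrote a start timestamp into the recontime file.
# JSS URL, apiuser:apipass, and the EA name are placeholders.
TIMEFILE="${TIMEFILE:-/private/var/recontime}"

# Build the xml body the API PUT needs, for a given duration in seconds
build_ea_xml() {
    printf '<computer><extension_attributes><extension_attribute><name>Inventory Run Time</name><value>%s seconds</value></extension_attribute></extension_attributes></computer>\n' "$1"
}

upload_recon_time() {
    date "+%s" >> "$TIMEFILE"                                    # append end time
    diff=$(( $(tail -1 "$TIMEFILE") - $(head -1 "$TIMEFILE") ))  # elapsed seconds
    build_ea_xml "$diff" > /tmp/recon_ea.xml
    # PUT the xml to this Mac's record by serial number (classic JSSResource API)
    serial=$(system_profiler SPHardwareDataType | awk '/Serial Number/{print $NF}')
    curl -sku "apiuser:apipass" -H "Content-Type: text/xml" \
        -T /tmp/recon_ea.xml -X PUT \
        "https://yourjss.company.com:8443/JSSResource/computers/serialnumber/$serial"
}
```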

That could work, but you may end up with some strange results. For example, a Mac may initially report a low value that keeps it out of (or puts it into) a Smart Group; then, a few seconds later, the value could change because of the API upload and suddenly move the Mac into a different Smart Group. If you have email notifications set up for those Smart Groups, you could see emails come in about a Mac falling into and then out of a group in close succession.

Anyway, hopefully my rambling makes some sense. Let me know if you need some pointers on the API/xml process.