Jumping on this thread...
Random endpoints managed by our Jamf Pro instance are not checking in or performing an inventory update, sometimes for more than 10 days.
Our ad-hoc solution is to ask the user to run a policy from the Self Service app that forces an inventory update (see attached screenshot: Policies -> Maintenance -> Update Inventory).
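For what it's worth, the same fix can be scripted so you don't have to wait on users opening Self Service. A minimal sketch: the `is_stale` helper and the 10-day threshold are my own, and the only Jamf command assumed is the standard `jamf recon`.

```shell
#!/bin/sh
# Sketch: force an inventory update only when the last one is overdue.
# is_stale and the 10-day threshold are illustrative, not a Jamf convention.

is_stale() {
  # $1 = epoch seconds of last recon, $2 = epoch seconds now
  days=$(( ($2 - $1) / 86400 ))
  [ "$days" -ge 10 ]
}

# On a managed Mac (requires the jamf binary and root), you would then run:
#   if is_stale "$LAST_RECON" "$(date +%s)"; then
#     sudo jamf recon
#   fi
```

You could run something like this from a launch daemon or an extension attribute script rather than relying on the user at all.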
Does anyone know what causes this and how we can eliminate the issue long term?
Following this topic as we're seeing this in our environment as well. Where Macs do not seem to be submitting inventory, it appears they are running a policy that hasn't completed for some reason. Initially it appeared to be a policy of ours that executed `softwareupdate` to look for new applicable updates, but we've seen other instances where `softwareupdate` wasn't running. We've been recommending and initiating reboots to clear the issue, but it seems to come back at some point.
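One quick way to confirm a stuck policy on a given Mac is to look at the tail of /var/log/jamf.log: if the newest entry is an "Executing Policy ..." line that is hours old, the policy likely never finished. A small helper for that check — the function name is mine, and treat the exact "Executing Policy" log phrasing as an assumption:

```shell
#!/bin/sh
# Sketch: print the most recent "Executing Policy" line from jamf.log
# text supplied on stdin. If this is also the final entry in the log and
# it is old, the policy (often one running softwareupdate) is likely hung.

last_policy_line() {
  grep 'Executing Policy' | tail -n 1
}

# On a managed Mac:
#   tail -n 200 /var/log/jamf.log | last_policy_line
```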
Yep, that's it.
Even if we force all users to run our Quick Fix & Inventory Update policy manually from the Pendo Self Service, the issue just reappears randomly, without any common traits I can point to such as macOS version, model, installed software, or location.
Did anybody have any success fixing that issue?
Are you guys seeing something similar to this?
I've had no end of problems since the summer.
Services going 'outta whack' and inventory updates failing from then onwards.
Produced a custom search to try and get a handle on how many devices were affected.
Just spot checking a few of our Macs with this issue and they typically are running between 300-400 services. My own Mac that doesn't have this issue has 406 services listed in Jamf Pro. But I will keep an eye on this particular possibility as we look at more of these impacted Macs. I'm not sure how long this issue has persisted for us in general.
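If you want to compare the services count in the Jamf record against what the Mac itself reports, counting loaded launchd jobs locally is a rough proxy — my assumption is that Jamf's Services inventory is derived from launchd, and the helper name is mine:

```shell
#!/bin/sh
# Sketch: count loaded launchd services from `launchctl list`-style
# output on stdin, skipping the header row. Comparing this number to the
# Services count in the Jamf record may show whether inventory is stale.

count_services() {
  tail -n +2 | wc -l | tr -d ' '
}

# On a Mac:
#   launchctl list | count_services
```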
We became aware of the issue this year as we began migrating from Symantec Endpoint Protection to Microsoft Defender and started gauging compliance. I don't think Defender itself is the issue for us; it's just that we're driving compliance remediation for Defender via Jamf Pro, where we did not do this for SEP. So the inventory issue causes our Defender remediation policies to wait in line for the current policy to terminate.
Thanks for the smart group criteria - saved me a bit of time trying to think of how to identify these!
However, I'm not seeing that correlation with the number of services inventoried into Jamf. Some of ours seem to be time-based. For instance, we have ~20 Macs that share a similar last inventory update date (10/08/21, within a few minutes of each other). Like something stopped working with the binary after that date? There are other chunks with other dates, but that one is the most obvious.
I will have to join the earlier response in this thread: I also cannot find a correlation between the number of services listed in the Jamf record and an endpoint's refusal to natively check in or perform an inventory update.
I've had no meaningful response from Jamf Support on this issue. It always just gets corrected on an individual endpoint and swept under the rug because we can't consistently find devices that it happens to.
Sorry guys. I’m finished for the weekend.
We seem to have gotten on top of it.
I thought I’d fed back some solutions in another thread but I’ll come back next week and update this one.
Just to clarify, you're seeing the 15,000+ services in JAMFcloud, and further recons failing with an 'unknown error'?
I have seen softwareupdate kill everything on the machine on Big Sur 11.6. However, we have machines on Monterey that say they are checking in but are not doing an inventory update, which I feel implies they are not really checking in... Is anyone running Netskope or CrowdStrike on their machines seeing this?