Machines not checking in?

m_higgins
Contributor

I was curious whether anyone else has been seeing this issue. We have 114 machines out of our 600 that have simply stopped checking in, each on a seemingly random date and with no obvious cause. They are powered on and connected to the network.

You can run sudo jamf recon on the machine and it appears to complete, but the JSS never receives the updated inventory.

The only workaround I have found is to un-enrol the machine, delete its record and re-enrol it.
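
For reference, this is roughly what I'm doing on each affected machine (the re-enrol step will depend on whether you use a QuickAdd package or user-initiated enrolment, so treat this as a sketch rather than the exact process):

sudo jamf removeFramework    # un-enrol / strip the management framework
# then delete the computer record in the JSS web interface
sudo jamf enroll -prompt     # re-enrol (or install your QuickAdd package instead)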

Any ideas?

9 REPLIES

davidacland
Honored Contributor II

It sounds like something date / time / certificate related to me.

If you run sudo jamf recon, are there any errors in the output, specifically around the part where it's submitting to the JSS?

Also worth double checking what JSS they are submitting to and what the DNS resolves to, in case there is something funky there.
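
Something like this on an affected client should show what it is pointing at and whether it can actually reach the server (the plist path and key are from a standard client install, so adjust if yours differs):

sudo defaults read /Library/Preferences/com.jamfsoftware.jamf jss_url    # which JSS the client thinks it should use
nslookup your.jss.hostname                                               # swap in the hostname from the line above
sudo jamf checkJSSConnection                                             # confirm the client can actually reach the JSS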

m_higgins
Contributor

There don't appear to be any errors at all. Is there a log file I can look at?

We have a single JSS that all machines submit to, and DNS appears to be working fine.

davidacland
Honored Contributor II

The client-side log is /var/log/jamf.log. On the server, the logs for any policy that has Update Inventory ticked will work. Excluding any sensitive server names, could you post the output from sudo jamf recon?
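
It can also help to watch the client log live while recon runs, e.g. in two Terminal windows:

tail -f /var/log/jamf.log    # window 1: watch the log
sudo jamf recon -verbose     # window 2: run recon and note any errors around the submit step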

gskibum
Contributor III

What output does
sudo jamf recon -verbose
provide?

dmw3
Contributor III

We had a similar issue recently. JAMF support suggested increasing max_allowed_packet in my.cnf to 64M, which seems to have resolved our issue.
From JAMF Support:
"Let's set this to 64MB as a baseline - any chunks larger than 1MB in size will not complete and this can bring on a host of issues, and inventory being one. Setting it too high can be a problem as well, so for our environment, 64MB is an appropriate size.

This can be set by typing the following: sudo nano /etc/mysql/my.cnf
Just change max_allowed_packet from 1M to 64M under the [mysqld] section and save.

Once adjusted, restart MySQL and Tomcat."
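
For reference, on a typical Linux install the change looks something like the below. The restart commands are just examples; the exact service names depend on your platform and how the JSS was installed, so check your own setup:

[mysqld]
max_allowed_packet=64M

sudo service mysqld restart          # MySQL service name varies by platform
sudo service jamf.tomcat7 restart    # Tomcat service name varies by JSS install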

m_higgins
Contributor

@dmw3

Any idea how to do this on a Windows deployment of the JSS?

dmw3
Contributor III

@m_higgins Not sure, as we don't run a Windows JSS; we're running RHEL 6.7 for ours.

You might want to look for an ini file associated with MySQL and see if the value is set in there.
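
I believe on Windows MySQL normally reads its settings from a my.ini file (often somewhere like C:\ProgramData\MySQL\MySQL Server 5.x\my.ini, but check your install), and you can confirm the value actually in effect from a MySQL prompt with:

SHOW VARIABLES LIKE 'max_allowed_packet';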

mike_paul
Contributor III

Just a quick note. max_allowed_packet is not something that should normally need to be modified to correct client communication.

This would apply to issues submitting recon, not to clients checking in, so compare the last check-in with the last inventory submission to see whether it's an inventory or a check-in problem.
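
A quick way to tell the two apart on a client is to trigger each manually and see which one errors out (check-in is essentially a policy run, recon is the inventory submission):

sudo jamf policy -verbose    # roughly what the scheduled check-in does
sudo jamf recon -verbose     # inventory submission only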

The only reason why max_allowed_packet would need to be adjusted for inventory submission is if your clients are submitting a larger amount of data than they should be.

Examples of where this might happen: a client is offline for a long time and builds up a large application usage log or a backlog of offline policy logs, and when it finally connects to the JSS it attempts to send a packet larger than the default allowed value (4MB-16MB depending on the MySQL version). I have also seen it with clock skew: the client wouldn't submit because we verify clock skew, the logs built up, and then the JSS wouldn't accept the submission because it was too large.

But if that is the case, you will see in your JAMFSoftwareServer.log that something is trying to submit a packet larger than the server is willing to accept.
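
For example, something along these lines on the server should surface it (the log path shown is the usual location on a Linux install, and the exact wording of the MySQL error can vary by version):

grep -i "too large" /usr/local/jss/logs/JAMFSoftwareServer.log    # look for "Packet for query is too large" style errors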

Yes, you can increase max_allowed_packet to get those clients to submit their inventory, but afterwards you should figure out why they have that much data to submit and correct it. Simply increasing max_allowed_packet can create other problems in your JSS (database bloat and the risk of large, unneeded data being submitted) and shouldn't be necessary for anything other than uploading large printer PPDs in Casper Admin.
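
If you want to see where that data is coming from, looking at the largest tables in the database will usually point at the culprit, e.g. from a MySQL prompt (assuming the default jamfsoftware schema name):

SELECT table_name, ROUND((data_length + index_length)/1024/1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'jamfsoftware'
ORDER BY (data_length + index_length) DESC
LIMIT 10;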

The methods that @davidacland mentions are the troubleshooting steps I would recommend, along with contacting your support rep to help dig deeper.

systems_support
New Contributor

I have this same issue occasionally with remote users, and I found that restarting Tomcat on my DMZ host fixed it. https://jamfnation.jamfsoftware.com/article.html?id=117
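
On our Linux DMZ box that's just a service restart, something along the lines of the below (the exact service name depends on how Tomcat was installed, so check what yours is called first):

sudo service tomcat7 restart    # or jamf.tomcat7, tomcat8, etc. depending on the install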