JSS Web Interface Freezing....

mallen13
New Contributor III

This is only affecting a handful of our machine records, and I'm wondering if anyone else is seeing similar issues.

Scenario:

John's Mac in the JSS will show all history EXCEPT for policy history.
When you try to open the policy history, you just get the 'busy' hourglass/wormhole, and the policy history never opens.
The JSS web interface then stops responding completely.

Restarting Tomcat / the JSS resolves the hung JSS.
Deleting / re-enrolling the machine resolves the policy log issue.

Sally's Mac has no issues opening policy logs whatsoever.

Log flushing is set up to clear anything older than 90 days.
Tomcat is not running out of RAM when this condition happens (raising the memory allocation did not help).

Suggestions?

JSS 9.32 / Linux host


were_wulff
Valued Contributor II

@mallen13

Usually, when we see this happen, it has to do with excessively large tables, usually the usage_logs table.

Occasionally, it's due to something being a bit off in an individual inventory record (most commonly a huge number of policy history logs on that particular machine, which makes Tomcat appear to hang while MySQL scans millions of rows looking for the requested items). A JSS Summary can help point us in the right direction, as can a copy of your JAMFSoftwareServer.log.

This is one of those cases where you'll want to get in touch with your Technical Account Manager and send them a new copy of a FULL JSS Summary (all boxes checked), along with a copy of your JAMFSoftwareServer.log (if it's over 10MB, we'll need it compressed first, or our Exchange server will reject it as too large). Without those, it's just a bit of stabbing in the dark.

If you happen to know which machine records it happens on, include that in the case e-mail as well; it helps narrow down where we need to look in the database itself.

Edit to add: I took a look at the most recent summary we have for you, which is from April, and even then the database was abnormally large for the number of devices (nearly 2GB for 204 devices; for that many devices, we expect the database to be under 500MB total). Based on the behavior you're describing, I would guess it's even larger now and is the root cause of what we're seeing here. Either I or your TAM can verify that once an updated JSS Summary shows up on the account.

The summary we currently have cuts off before it gets to log flushing & table sizes, so I can't say for certain which table(s) are bigger than they should be. Based on the described behavior, I'd guess usage_logs, applications, and application_usage_logs are larger than normal; a new full summary should show everything, including the sizes of all the tables.
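
In the meantime, if you want to check table sizes yourself, a standard MySQL query against information_schema will list the largest tables. This is generic MySQL rather than anything JSS-specific, and it assumes the database is using the default name jamfsoftware:

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'jamfsoftware'
ORDER BY (data_length + index_length) DESC
LIMIT 10;

The tables at the top of that list are the ones worth a closer look.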

Thanks!
Amanda Wulff
JAMF Software Support

mallen13
New Contributor III

New complete summary has been submitted. I'll dig into the logs and post findings this afternoon.

Thanks Amanda

were_wulff
Valued Contributor II

@mallen13

Found it! It looks like the summary came in attached to an orphan contact on our generic JAMF Nation account, so it didn't end up on your actual company account. I also no longer see your name on the company's contact list, so that's probably something we want to get straightened out as well. The most common reason this happens is an e-mail address change that doesn't get updated properly.

If you've got time, it's something we'd probably want to get straightened out over the phone or by e-mailing support@jamfsoftware.com as we don't want you posting your personal or company info on public boards.

Good news: the database size is WAY down from where it was in April; it's around 611MB now, which is much more in line with what we'd expect. The largest table is package_contents, which is what it is and doesn't really affect anything but database size.
No tables look abnormally large any longer, which points back toward a specific issue with a specific machine (or two or three).

If you know of a few specific computers that are having the issue, first take a look at their general inventory records and see if anything looks off (massive numbers in any field, like, say, 5 million certificates, 80 hard drives, 4000 users, etc.). Then try to bring up the policy history again, let it hang, do NOT restart Tomcat, open a command prompt on the server, and log in to MySQL:

use jamfsoftware;

show full processlist;

That will (hopefully) show us what MySQL thinks is going on vs. what we're seeing in the webapp and narrow it down further.
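
For reference, getting to the mysql> prompt from a shell on the server looks roughly like this (the root login is just an example; use whichever MySQL account you normally administer with):

# log in to MySQL from a shell on the JSS server (credentials are an example)
mysql -u root -p

Once the two commands above have run, a query that has been hung for a while will show a large value in the Time column, and the Info column shows the exact statement MySQL is chewing on, which tells us which table or record it's stuck on.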

It's possible that turning on full debug logging would help too, in terms of information gathering, but starting with the processlist while it appears to be hung is a lot easier than picking through a debug log. Sometimes it really is just a messed-up inventory record that's doing it, especially if it's only on a few computers vs. environment wide.

The only other things I noticed that may be cause for concern:
38 policies are updating inventory; this just creates redundant inventory records and adds to the size of the tables. The JSS appends a full inventory record with every submission, so it's a bit like stacking fresh copies of the same paperwork over and over.

Generally, unless something is mission critical and needs an immediate inventory update that cannot wait for the device's normal check-in (which in your environment looks like it's every 15 minutes anyway), we'd recommend unchecking the "Update Inventory" box in those policies. It just helps keep things cleaner long term.

It also looks like we're on MySQL 5.1; we may want to plan an update to at least 5.5 in the near future, as we're slowly pulling away from supporting the older versions of MySQL. Versions older than 5.5 occasionally don't perform the way we'd like with some of the newer JSS features. Not super critical, but something I'd recommend planning for soon.
5.6 is also acceptable, but it does need a few extra steps, most notably disabling strict mode, which is on by default in 5.6.
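
If you do go to 5.6, strict mode is controlled by the sql_mode setting in /etc/my.cnf under the [mysqld] section; something along these lines (the exact value below is just one common non-strict example, not a required setting):

[mysqld]
# 5.6 turns strict mode (STRICT_TRANS_TABLES) on by default;
# leaving it out of sql_mode turns strict mode off
sql_mode=NO_ENGINE_SUBSTITUTION

MySQL needs a restart after that change, same as with the packet size change below.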

MySQL's max_allowed_packet is currently at 1MB; we usually want that at 512MB minimum. That can be changed by editing /etc/my.cnf: change the variable from 1M to 512M or 1024M and save. MySQL needs to be restarted after that change is made.

If there ISN’T an /etc/my.cnf, we’ll need to copy one of the default ones from /usr/bin/mysql/support-files. I usually copy my-huge.cnf to /etc/my.cnf and edit that one.
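
Roughly, those steps look like this; the support-files path follows the one above, though depending on how MySQL was installed that directory may live somewhere like /usr/share/mysql instead, and the service name varies by distro:

# copy a default config into place only if /etc/my.cnf doesn't already exist
sudo cp /usr/bin/mysql/support-files/my-huge.cnf /etc/my.cnf

# edit /etc/my.cnf and, under [mysqld], set:
#   max_allowed_packet = 512M

# restart MySQL so the new value is picked up (service name may be mysqld on some distros)
sudo service mysql restart

You can confirm it took effect afterward with: mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"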

We'll also want to get the contact records on our end straightened out so we can get a case going in the correct place rather than on the generic JAMF Nation account.

Thanks!
Amanda Wulff
JAMF Software Support