Is there any way to see what process is causing our JSS to run at close to 100% CPU utilization? We can restart the Tomcat process, but utilization quickly jumps back up. Last year we moved the SQL process to a separate server, and that server is generally not running high — but the Java Tomcat process is.
Our JSS server is running in a CentOS VM with 12 GB RAM and 4 processors allocated. We manage about 3000 MacBooks and 1200 iPads.
Any help would be appreciated.
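To answer the "which process/thread" part directly: one common recipe is to match top's per-thread view against a jstack thread dump. The PID and TID below are placeholders — substitute the real values from your server.

```shell
PID=1234   # placeholder: your Tomcat PID, e.g. from `pgrep -f org.apache.catalina`
TID=4567   # placeholder: the decimal TID of the hot thread shown by `top -H -p $PID`

# 1. Watch per-thread CPU usage for the JVM:
#    top -H -p "$PID"

# 2. jstack labels threads with a hex "nid", so convert the decimal TID:
NID=$(printf '0x%x' "$TID")
echo "$NID"    # -> 0x11d7

# 3. Pull that thread's stack out of a full thread dump:
#    jstack "$PID" | grep -A 20 "nid=$NID"
```

The stack trace usually makes it obvious whether the hot thread is garbage collection, a stuck HTTP worker, or a background task.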
Things to check:
1. Check your database size. Has it grown? As certain tables grow, so does back-end pressure on the MySQL server. You can tell by keeping your backups and seeing whether they are getting bigger, and running a JSS report periodically will help you monitor the situation. Large tables manifest as slow responses when you look up certain information in the web GUI.
2. Get JMX running on your server so you can monitor the Java heap memory. Java memory has to be set up properly: too much, and memory recovery (garbage collection) takes a long time; too little, and the server starves and you will most likely have to restart the Tomcat service. You could just set the Java memory higher, but without some way to monitor it you could hit the same situation again. https://docs.oracle.com/javase/7/docs/technotes/guides/management/agent.html
3. Make sure any AV software is not scanning your Jamf directories. You can use an EICAR file and place one in each directory; if your AV software cleans it up, then it is scanning your Jamf directories. (Kudos to Don Montalvo for this one!)
4. MySQL configuration. Too much to unpack here, but at least look at your key buffer size if using MyISAM, or the InnoDB buffer pool size if using InnoDB. You can install MySQL Workbench on another machine (do not install it on your MySQL server!) and monitor your server from there. It does not show a ton of information, but it is a quick way to check on your server.
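For the table-size check in item 1, you do not have to rely on backup sizes — information_schema can report the largest tables directly. The hostname and schema name below are assumptions; adjust them to your environment, and run this from a workstation with the mysql client rather than on the DB server itself.

```shell
# Report the ten largest tables in the JSS schema, with approximate row counts.
# 'your-db-host' and 'jamfsoftware' are placeholders for your server and schema.
mysql -h your-db-host -u root -p -e "
  SELECT table_name,
         ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
         table_rows
  FROM   information_schema.tables
  WHERE  table_schema = 'jamfsoftware'
  ORDER  BY (data_length + index_length) DESC
  LIMIT  10;"
```

Log tables are the usual suspects when something in that list has ballooned.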
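For item 2, Tomcat picks up the standard JMX flags from its startup environment. A minimal sketch, assuming a `setenv.sh` next to your Tomcat startup scripts (the path, port, and hostname are placeholders, and auth/SSL are disabled here only for brevity — lock them down on a real server):

```shell
# Append to Tomcat's setenv.sh so jconsole/VisualVM can attach remotely.
export CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=8686 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=jss.example.com"
```

Restart Tomcat afterward, then point jconsole at jss.example.com:8686.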
Thanks for the JMX idea. Now that we have it going we just need to figure out what to be looking at for optimizing.
The heap memory usage seems to hover just under 2 GB, though we have 6 GB allocated.
The live thread count has been steadily increasing over the 20 minutes the server has been up; it is now around 900 threads.
To give a very rough description, Java heap memory works like a chain of four buckets. As one bucket fills, its contents get dumped into the next. If the last bucket in the chain gets full, it has to be emptied, which is an expensive endeavor. Most call it the "stop the world" event, as Java has to clean out all that memory, which can stall any activity on the Jamf web server. That is why a large Java memory setting can be an issue in and of itself: the more Java memory you give it, the longer it takes to empty that bucket, which means the JSS web app waits. But you indicate your memory is staying below 2 GB (and probably goes up and down all day), so the Java heap memory is most likely not your issue.
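You can watch that fill-and-dump cycle live with jstat, which ships with the JDK (the PID below is a placeholder for your Tomcat PID):

```shell
# Print GC utilization every 5 seconds for JVM pid 1234 (placeholder).
# E/O/M are eden, old generation, and metaspace occupancy as a percentage;
# FGC/FGCT count full "stop the world" collections and their cumulative time.
jstat -gcutil 1234 5s
```

If FGCT is climbing quickly, the JVM is spending its time in those stop-the-world pauses.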
You will have to monitor other activity on your server, such as OS and MySQL parameters. You could also consider adding web servers, keeping in mind that adding more servers means you have to decide which one will be the master server (which should not have clients pointed at it, to help with the memory issue described) and which will be client-facing. The master becomes a special server that you use for administration, and it must be the first server brought up and the last server brought down.
I would review your MySQL and Tomcat configuration settings. Check things like the max threads for Tomcat and the key buffer size, and make sure you have enough of each to satisfy your environment.
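Quick ways to eyeball both settings (the Tomcat path and DB hostname are assumptions — substitute your own):

```shell
# Tomcat's request-thread ceiling lives on the <Connector> in server.xml:
grep -o 'maxThreads="[0-9]*"' /usr/local/jss/tomcat/conf/server.xml

# MySQL buffer and connection settings, from any host with the mysql client:
mysql -h your-db-host -u root -p -e "
  SHOW VARIABLES WHERE Variable_name IN
    ('key_buffer_size', 'innodb_buffer_pool_size', 'max_connections');"
```

If grep prints nothing, the Connector is running at Tomcat's built-in default maxThreads.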
After another 10 minutes the memory started climbing until it maxed out the 6GB and the CPU utilization spiked. So it might be memory after all. The SQL server (completely separate VM) doesn't seem taxed, even when the JSS process has the CPU pegged.
We also have a support ticket open with Jamf to get a little more guidance on fine-tuning, but this is definitely pointing us in the right direction.
I am glad you opened a case; I am sure Jamf can help you. If this is the Java heap memory, you have two choices: scale up by adding more memory, or scale out by adding more servers. But keep in mind, the more Java memory you add, the longer it may take to recover from the stop-the-world events. Also, if your environment was working and only recently started having problems, you need to look at back-pressure issues: database growth, additional features you turned on, higher logging settings, other Java applications on the server (hopefully none!), etc. Throwing memory at the problem may solve it now, but it may crop up again in the future. Scaling out means a more complex environment to support, but may serve you better in the long run.