Tomcat has to be restarted often

LyndhurstSchool
New Contributor

Hi all,

We are suddenly seeing very weird behavior on our Jamf server. We have to constantly restart Tomcat because it becomes completely unresponsive; this happens multiple times a day. We even went as far as setting up a new JSS, and the behavior persists. Any thoughts would be great.

27 REPLIES

Chris
Valued Contributor

Hi,

how's the memory situation on the box hosting the JSS?

mintzd01
New Contributor III

We have been having the exact same issue. Our Tomcat keeps needing a restart. We have 7GB of RAM in our Xserve, and I increased Tomcat's RAM to 2GB.

Any thoughts would be brilliant.

Dan

LyndhurstSchool
New Contributor

We are on 10.7.4 with 16GB RAM and a 256GB solid-state drive. This issue started recently; we thought it was memory, so we gave Tomcat more (4GB), but it was no help.

JPDyson
Valued Contributor

How big is your database? What's your log dump policy? I used to have this problem in a previous environment, but implementing a 6 month log flush interval all but eliminated it.

NowAllTheTime
Contributor III

We had the same thing going on, even with a pretty small DB. We went from 1 year log flushes to a mix of 6 and 3 months and then everything was OK.

LyndhurstSchool
New Contributor

OK, we are probably on the default settings — we had it set to "never" :-p. I will change those settings now. Thanks for the help! I will post back if it does not correct the issue.

mrshultz
New Contributor

I have been having the exact same issue ever since moving to Lion and above. It seems to be a memory leak in Tomcat, but unfortunately I can't seem to find a fix. If anyone else comes across one, PLEASE SHARE!

kanderson
New Contributor

We are having the same issue today. We have been on the phone with support all day, and no one mentioned that others are having the same issue.

spowell01
Contributor

I'm going to jump in here... I've spent the past week with Jamf support regarding our Tomcat instance going unresponsive anywhere from 2 to 15 times a day, with no rhyme or reason. This is becoming a major issue for us since we rely on Self Service so much. They are at a total loss as to what is happening here. HELP!

Our JSS is running on 2008 R2 with 12GB total, 6GB assigned to Tomcat, but it never even uses 2GB.
This just started happening this week.
Our logs are set to flush after 1 month.

spowell01
Contributor

any news on this?

donmontalvo
Esteemed Contributor III

@spowell01 Just curious, is day to day responsibility of the Windows Servers (virtual or physical?) hosting JSS on someone else's plate? If so, is there a chance they made a change without telling you (like updating Java)? :)

--
https://donmontalvo.com

spowell01
Contributor

Hey Don, I am the server specialist for our district, so that's all under my umbrella. The Windows servers are all virtualized, and I haven't made any specific changes to the JSS that would be related to this. On another note, yesterday was the first complete day in over a week that the JSS hasn't crashed on us... so far so good this morning as well.

sprattp
New Contributor II

Also having the same issue on our live server, the only way to stop it seems to be stopping the backups.

JPDyson
Valued Contributor

If stopping backups fixes the problem for you, perhaps your backups are set to the same time as your log flush? If those are competing, failures are likely, and Tomcat may not be able to handle it.

spowell01
Contributor

Still an ongoing issue for us, and still working with Jamf support to no avail. They really aren't any closer to figuring out what's happening in our situation, but haven't given up. We did try reinstalling the JSS/Tomcat, as well as removing all Java components and reinstalling the original version; Tomcat didn't even stay up for 5 minutes. We also received an updated C3P0 file for Tomcat from Jamf, and with that the JSS wouldn't even initialize the database on startup.

kanderson
New Contributor

JSS Support modified some priority pool settings, non-priority pool settings, and maxthread settings in our 'my.ini' file and some XML files. There is some confusion about which my.ini file we are using; I suspect this to be a cause of many implementation errors, so I would run that by them. They also changed the number of connections, deleted a bunch of records from the mobile_device_app_deployment_queue table, and deleted some reports from iphone_applications.
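For anyone wondering what kinds of my.ini settings are involved here: this is a sketch only — the values below are illustrative placeholders, not the ones support applied.

```ini
; my.ini -- illustrative values only; tune for your own server
[mysqld]
; Upper bound on simultaneous client connections (JSS connection pool draws from this)
max_connections = 300
```

To resolve the "which my.ini are we actually using" confusion, `mysqld --verbose --help` prints the list of option files MySQL reads, in order, near the top of its output ("Default options are read from the following files...").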

We are still working with them for other significant issues but had to resolve the server crashing first.

We are Windows, with a dedicated server (managed by me), quad core,16GB mem - 12 dedicated to JSS.
We did not re-install or change our java.

gregp
Contributor

We had the same problem that started last October. We were (and still are) on 8.6.

The interesting thing is that it happened at exactly the same time, to the second. Not every day — anywhere from several times a week to once every week or three.

JAMF support had us increase the number of connections in Tomcat & MySQL; that helped, and it hasn't become unresponsive like it used to. I was never able to track down what caused it to get hung up in the middle of the night. Lots of logging was enabled, but we never found the cause. One theory was a massive number of check-ins all at once — but all at exactly the same time over weeks? I have no other theory, though.

I also wrote a script that monitors :9006 and, if that becomes unresponsive, restarts Tomcat.

The monitoring script remains in place today. It restarts Tomcat every month or two when it becomes unresponsive, as Tomcat's memory usage grows until it consumes the vast majority of RAM (well beyond the max set in the Tomcat settings) and stops running well.

We're planning on migrating from our current OS X 10.6.5 machine to Linux this year, so not worried about its current behavior.

sprattp
New Contributor II

I have a 6-hour gap between the logs being flushed and the backup, and this didn't help. What did seem to fix the problem was reducing the various log retention periods, which cut the backup file in half.

Chris_Hafner
Valued Contributor II

Well damnit all, now I'm joining the ranks of this thread. I'm losing Tomcat a few times a day after upgrading to 8.7 on a 10.7.5 Mac Pro. I suppose I should call JAMF and see what's up.

curullij
Contributor

Does anyone have any progress on this issue?

tkimpton
Valued Contributor II

I would contact your account manager. This is getting ridiculous now! They need to test more.

tkimpton
Valued Contributor II

@ gregp

Please can you share your script and launch daemon? This is getting quite annoying now, and tbh I'm not prepared to spend another week on the phone with my account manager trying to fix it.

Chris
Valued Contributor

For what it's worth, some time ago we also had that issue — not very often, so I didn't really bother.
However, we were seeing a different problem, with clients not properly updating their inventory during recon.
To resolve this, I increased the max packet size for MySQL.
That change also resolved the Tomcat issue for us; it hasn't crashed once since then.
You can check your JAMFSoftwareServer.log for messages like:

[ERROR] [JAMFPreparedStatement ] - SQLException from execute: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (2054502 > 1048576). You can change this value on the server by setting the 'max_allowed_packet' variable.
[WARN ] [NewProxyPreparedStatement] - Exception on close of inner statement. com.mysql.jdbc.PacketTooBigException: Packet for query is too large (2054507 > 1048576).
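For reference, raising that limit is a my.cnf change. A minimal sketch — the 16M value is just an example; pick something larger than your biggest packet (the errors above show the current limit of 1048576 bytes, i.e. 1M):

```ini
; /etc/my.cnf (file location varies by platform) -- example value, not a recommendation
[mysqld]
max_allowed_packet = 16M
```

Restart MySQL afterward, then verify with `SHOW VARIABLES LIKE 'max_allowed_packet';` in the MySQL client.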

Chris_Hafner
Valued Contributor II

For us, things seem to have gotten a bit better in 9.2. Right now I have to be a little careful with my MySQL queries, as I've locked things up that way on occasion, but I'm not sure that's a JAMF issue.

gregp
Contributor

@tkimpton
Here's the plist for the Tomcat launch daemon:
[code]<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Disabled</key>
    <false/>
    <key>Label</key>
    <string>com.jamfsoftware.tomcat</string>
    <key>OnDemand</key>
    <false/>
    <key>Program</key>
    <string>/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/bin/java</string>
    <key>ProgramArguments</key>
    <array>
        <string>/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/bin/java</string>
        <string>-Xms256m</string>
        <string>-Xmx5120m</string>
        <string>-XX:PermSize=64m</string>
        <string>-XX:MaxPermSize=128m</string>
        <string>-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager</string>
        <string>-Djava.util.logging.config.file=/Library/JSS/Tomcat/conf/logging.properties</string>
        <string>-Djava.awt.headless=true</string>
        <string>-Djava.endorsed.dirs=/Library/JSS/Tomcat/common/endorsed</string>
        <string>-classpath</string>
        <string>/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/lib/tools.jar:/Library/JSS/Tomcat/bin/bootstrap.jar:/Library/JSS/Tomcat/bin/tomcat-juli.jar</string>
        <string>-Dcatalina.base=/Library/JSS/Tomcat</string>
        <string>-Dcatalina.home=/Library/JSS/Tomcat</string>
        <string>-Djava.io.tmpdir=/Library/JSS/Tomcat/temp</string>
        <string>org.apache.catalina.startup.Bootstrap</string>
        <string>start</string>
    </array>
    <key>ServiceIPC</key>
    <false/>
    <key>UserName</key>
    <string>_appserver</string>
</dict>
</plist>[/code]

Monitor & restart JSS. It runs from launchd every 60 seconds.
[code]#!/bin/bash
#
# Monitor JSS:9006 and restart Tomcat if it is down.
#

JSSURL="http://jss.company.com:9006"
restartlog="/Users/sysadmin/Logs/JSS_restarted.log"

# Runs once per monitoring interval
rc=$(/usr/bin/curl -s -I --max-time 2 "$JSSURL" | grep "HTTP/1.1" | awk '{print $2}')

if [ "$rc" == "200" ]; then
    echo "$(/bin/date) : $rc" > /tmp/JSS_Up
    if [ -f /tmp/JSS_Down ]; then
        /bin/rm /tmp/JSS_Down
    fi
elif [ ! -f /tmp/JSS_Down ]; then
    # Give it one more chance before declaring it down
    sleep 10
    rc=$(/usr/bin/curl -s -I --max-time 2 "$JSSURL" | grep "HTTP/1.1" | awk '{print $2}')
    if [ "$rc" == "200" ]; then
        echo "$(/bin/date) : $rc" > /tmp/JSS_Up
    else
        echo "JSS down at: $(/bin/date)" > /tmp/JSS_Down
        /bin/launchctl unload /Library/LaunchDaemons/com.jamfsoftware.tomcat.plist
        # If the last restart was under 15 minutes ago, wait longer before reloading
        lastrestart=$(tail -1 "$restartlog" | awk '{print $3}')
        now=$(/bin/date +%s)
        diffrestart=$((now - lastrestart))
        if [ "$diffrestart" -lt 900 ]; then
            sleep 600
        else
            sleep 10
        fi
        /bin/launchctl load /Library/LaunchDaemons/com.jamfsoftware.tomcat.plist
        echo "JSS restarted $(/bin/date +%s) $(/bin/date)" >> "$restartlog"
        /bin/date | /usr/bin/mailx -s "JSS restart" sysadmins@company.com
    fi
fi[/code]
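The post says the monitor runs from launchd every 60 seconds but doesn't include the monitor's own LaunchDaemon. A minimal sketch of what that could look like — the label and script path here are hypothetical, so adjust them to your environment:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label and script path; change both to match your setup -->
    <key>Label</key>
    <string>com.company.jssmonitor</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/sysadmin/Scripts/jss_monitor.sh</string>
    </array>
    <!-- Run the check every 60 seconds -->
    <key>StartInterval</key>
    <integer>60</integer>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

Load it once with `launchctl load /Library/LaunchDaemons/com.company.jssmonitor.plist` and launchd will invoke the script on the interval from then on.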

pat_best
Contributor III

This is a real shot in the dark, but have you looked at your unmanaged clients recently? This may be unique to our deployment, but I ran across about 800 clients that were previously managed and had flipped themselves to unmanaged (probably a result of my JSS upgrade). I was maxing out 500 database connections and crashing my JSS in about 20 minutes.

I searched for unmanaged clients and re-entered the management account name and password for the whole list; within 15 minutes or so my average database connection count went from 500 to about 20, and my clients fell back into management.

I am running Mac OS X 10.8.5 with the Server app on an Xserve, MySQL 5.5, and JSS 9.21. I called JAMF support to ask whether bad management account info prevents the client from dropping its database connection, and am waiting for a response.