Hi,
how's the memory situation on the box hosting the JSS?
We have been having the exact same issue. Our Tomcat keeps needing a restart. We have 7GB of RAM in our Xserve, and I increased Tomcat's RAM to 2GB.
Any thoughts would be brilliant.
Dan
We have 10.7.4 and 16GB of RAM with a 256GB solid state drive. This issue started recently and we thought it was the memory, so we gave Tomcat more (4GB), but it was no help.
How big is your database? What's your log dump policy? I used to have this problem in a previous environment, but implementing a 6 month log flush interval all but eliminated it.
We had the same thing going on, even with a pretty small DB. We went from 1 year log flushes to a mix of 6 and 3 months and then everything was OK.
OK, we are probably on the default settings; we had it set to "never". I will change those settings now. Thanks for the help! I will post back if it does not correct it.
I have been having the exact same issue ever since moving to Lion and above. It seems to have to do with a memory leak in Tomcat, but unfortunately I can't seem to find a fix. If anyone else comes across one, PLEASE SHARE!
We are having the same issue today. We have been on the phone with support all day and no one mentioned that others are having the same issue?
I'm gonna jump in here. I've spent the past week with JAMF support regarding our Tomcat instance going unresponsive anywhere from 2-15 times a day, with no rhyme or reason. This is becoming a major issue for us since we rely on Self Service so much. They are at a total loss as to what is happening here. HELP!
Our JSS is running on Windows Server 2008 R2 with 12GB total, 6GB assigned to Tomcat, but it never even uses 2GB.
This just started happening this week.
Our logs are set to flush after 1 month.
@spowell01 Just curious, is day-to-day responsibility for the Windows servers (virtual or physical?) hosting the JSS on someone else's plate? If so, is there a chance they made a change without telling you (like updating Java)? :)
Hey Don, I am the server specialist with our district, so that's all under my umbrella. The Windows servers are all virtualized, and I haven't made any specific changes to the JSS that would be related to this. On another note, yesterday was the first complete day in over a week that the JSS hasn't crashed on us, and so far so good this morning as well.
Also having the same issue on our live server; the only way to stop it seems to be to stop the backups.
If stopping backups fixes the problem for you, perhaps your backups are set to the same time as your log flush? If those are competing, failures are likely, and Tomcat may not be able to handle it.
Still an ongoing issue for us, and still working with JAMF support to no avail. They really aren't any closer to figuring out what's happening in our situation, but haven't given up. We did try reinstalling the JSS/Tomcat as well as removing all Java components and reinstalling the original version; Tomcat didn't even stay up for 5 minutes. We also received an updated c3p0 file for Tomcat from JAMF, and with that the JSS wouldn't even initialize the database on startup.
JSS support modified some priority pool settings, non-priority pool settings, and maxThreads settings in our my.ini file and some XML files.
There is some confusion about which my.ini file we are using. I suspect this to be a cause of many errors in implementation, so I would run that by them.
They also made changes to the number of connections, deleted a bunch of records within mobile_device_app_deployment_queue, and deleted some reports from iphone_applications.
We are still working with them for other significant issues but had to resolve the server crashing first.
We are on Windows, with a dedicated server (managed by me), quad core, 16GB of memory, with 12GB dedicated to the JSS.
We did not reinstall or change our Java.
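On the my.ini confusion: if you want to confirm which config file mysqld is actually reading, the standard MySQL help output lists the option files in the order they are searched (nothing JSS-specific here; on Windows you'd run mysqld.exe from the MySQL bin directory and look for the same "Default options" lines):

# Print the option files mysqld reads, in order (later files override earlier ones)
mysqld --verbose --help | grep -A 1 "Default options"

That at least tells you which my.ini the pool and connection settings should actually go into.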
We had the same problem that started last October. We were (and still are) on 8.6.
The interesting thing is that it happened at exactly the same time, to the second. Not every day, but anywhere from several times a week to once every week or three.
JAMF support had us increase the number of connections in Tomcat and MySQL; that helped, and it hasn't become unresponsive like it used to. I was never able to track down what caused it to hang in the middle of the night. Lots of logging was enabled, but the cause never turned up. One theory was a massive number of check-ins all at once, but at exactly the same time, week after week? I have no other theory.
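If you want to check whether you're actually bumping into the connection ceiling before raising it, something like this shows the configured maximum next to the high-water mark since the last MySQL restart (standard MySQL variables, nothing JSS-specific, and your values will obviously differ):

# Compare the configured connection limit with the most connections ever used at once
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Max_used_connections';"

If Max_used_connections is sitting right at max_connections, raising the limit is probably doing some good.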
I also wrote a script that monitors :9006 and, if that becomes unresponsive, restarts Tomcat.
The monitoring script remains in place today, and it restarts Tomcat every month or two when it becomes unresponsive, as Tomcat's memory usage keeps growing until it consumes the vast majority of the machine's RAM (well beyond the max set in the Tomcat settings) and no longer runs well.
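For anyone curious, it's nothing fancy; roughly along these lines (a simplified sketch rather than the exact script, and the port, log path, and restart commands are examples you would adjust for your own install):

#!/bin/bash
# Watchdog sketch: poll the JSS web interface and restart Tomcat if it stops answering.
# Run it periodically from a LaunchDaemon or cron job.

URL="http://localhost:9006/"   # example URL; use whatever port your JSS answers on

# Give the JSS up to 30 seconds to respond before calling it unresponsive
if ! curl --silent --max-time 30 --output /dev/null "$URL"; then
    echo "$(date) JSS not responding, restarting Tomcat" >> /var/log/jss_watchdog.log
    # Example restart for a self-hosted Tomcat; swap in your platform's service commands
    /Library/JSS/Tomcat/bin/shutdown.sh
    sleep 15
    /Library/JSS/Tomcat/bin/startup.sh
fi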
We're planning on migrating from our current OS X 10.6.5 machine to Linux this year, so we're not worried about its current behavior.
I have a 6-hour gap between the log flush and the backup, and this didn't help. What did seem to fix the problem was reducing the various log retention periods, which cut the backup file size in half.
Well damnit all, now I'm joining the ranks of this thread. I'm losing Tomcat a few times a day after upgrading to 8.7 on a 10.7.5 Mac Pro. I suppose I should call JAMF and see what's up.
Has anyone made any progress on this issue?
I would contact your account manager. This is getting ridiculous now! They need to test more.
@gregp
Please can you share your script and launch daemon? This is getting quite annoying now and, to be honest, I'm not prepared to spend another week on the phone with my account manager trying to fix it.
For what it's worth, some time ago we also had that issue, though not very often, so I didn't really bother with it.
However, we were seeing a different problem with clients not properly updating their inventory during recon.
To resolve this, I increased the max packet size (max_allowed_packet) for MySQL.
That change also resolved the Tomcat issue for us, it hasn't crashed once since then.
You could check your JAMFSoftwareServer.log for messages like:
[error] [JAMFPreparedStatement ] - SQLException from execute: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (2054502 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
[WARN ] [NewProxyPreparedStatement] - Exception on close of inner statement.
com.mysql.jdbc.PacketTooBigException: Packet for query is too large (2054507 > 1048576).
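If those show up, bumping max_allowed_packet is the fix. For example (16MB here is only an illustration; to make it permanent, set it under [mysqld] in my.cnf/my.ini so it survives a restart):

# Check the current value; the old 1MB default matches the 1048576 in the error above
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"

# Raise it on the running server; only new connections pick it up,
# so restart Tomcat afterwards so the JSS connection pool reconnects
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 16*1024*1024;"

After that, the PacketTooBigException errors should stop.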
For us, things seem to have gotten a bit better in 9.2. Right now, I have to be a little careful with my MySQL queries, as I've locked things up on occasion, but I'm not sure that's a JAMF issue.