Posted on 11-01-2012 09:42 AM
I'm trying to figure out why Tomcat goes on the fritz every day shortly before 10:00 AM. Backups run 12 hours beforehand, and there is no purging of logs right now due to compliance requirements. I'm looking into moving the logs off the server, but hopefully these two issues aren't tied together.
Looking at the logs when it freaks out, Tomcat is complaining about:
10/31/12 9:54:34 AM com.jamfsoftware.tomcat[61322] SEVERE: The web application [] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation.
Tomcat is then using just shy of 7 GB of RAM on a machine that has 8 GB. Restarting Tomcat brings everything back to normal.
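For reference, here's roughly how I've been checking on it and bouncing it. The plist path is what our install uses (adjust for yours), jstack needs the JDK tools, and <pid> is a placeholder for the Tomcat PID:

    # See how much resident memory the Tomcat JVM is actually using
    ps axo pid,rss,command | grep -i "[t]omcat"

    # Grab a thread dump before restarting, to see which request is stuck
    # (<pid> is the Tomcat PID from above; requires the JDK tools)
    jstack <pid> > /tmp/tomcat-threads.txt

    # Restart Tomcat (plist path may differ on your install)
    sudo launchctl unload /Library/LaunchDaemons/com.jamfsoftware.tomcat.plist
    sudo launchctl load /Library/LaunchDaemons/com.jamfsoftware.tomcat.plist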
Anyone else seeing similar behavior?
Posted on 11-01-2012 10:16 AM
What time is your log cleanup set to? That, or it could be an automated backup that's running.
That's been my experience.
Posted on 11-01-2012 11:18 AM
Log cleanup is set to "do not delete," but the time is set to midnight. That's within two hours of the scheduled backup, which I read in another thread can cause issues. But since none of the logs are set up to be purged, should I move that time anyway, to say 1 or 2 AM? Or maybe set it to kick off beforehand?
Posted on 11-01-2012 12:55 PM
We're seeing this on 8.6 on OS X (10.6.5).
Had to restart it three times in the past week or so, and last week was the first time it happened.
Have a ticket open with JAMF and currently have some debugging enabled.
Our log deletion occurs at midnight and backups at 2 AM.
For quite a while, we've seen Tomcat bloat its memory. The JSS reports normal usage, but looking at the OS, it's a really fat Tomcat. We have it set to 5 GB on a 12 GB server, but it doesn't pay much attention to that limit. Once it gets too fat, it slows down and the machine starts paging. Restarting Tomcat will clear that out for a while (depending on activity).
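Worth noting: the 5 GB we set is just the JVM heap ceiling (-Xmx), and that doesn't bound the whole process. Permgen, thread stacks, and native allocations all live outside the heap, which is presumably how the process grows past the limit. This is roughly how ours is set, assuming a stock Tomcat layout where bin/setenv.sh is sourced at startup (some installs put the options in the launchd plist instead):

    # tomcat/bin/setenv.sh, read by catalina.sh at launch
    # -Xmx caps only the Java heap; total resident size can still exceed it
    export CATALINA_OPTS="-Xms1024m -Xmx5120m -XX:MaxPermSize=512m"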
However, this Tomcat freezing(?) is totally different from what we were seeing previously.
When it was non-responsive, memory usage was very low, there were a bunch of old connections (>17,000 sec) in MySQL, and Tomcat was still running. There was no web GUI access and nothing available in Self Service.
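If anyone wants to check for the same thing, the stale connections show up in a plain processlist (credentials are whatever your JSS database uses):

    # List current MySQL connections; the Time column shows idle seconds
    mysql -u root -p -e "SHOW FULL PROCESSLIST;"

    # The server drops idle connections after wait_timeout (28800 sec by default)
    mysql -u root -p -e "SHOW VARIABLES LIKE 'wait_timeout';"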
In the previous problem, where Tomcat got super fat, it was still running but slowed down over time (eventually it would get too fat to do anything useful).
The error you posted is what I see as well when Tomcat is unloaded with launchctl.
Something is awry.
Posted on 11-02-2012 05:59 AM
Freeze is probably not the right term; gregp has basically the same symptoms we are seeing. JSS 8.6 here as well, on a 10.6.8 server. It slows down, and without a restart it eventually won't serve any pages at all and the connections time out. CPU usage is reasonable, usually under 50%.
One more bit of info: it seems to happen when the JSS hasn't been accessed for an extended period of time. My routine when I come into work is to log into the JSS via a web browser and just check on some logs and smart groups. Judging from the logs, that's about when it's crashing.
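To test that theory, I may set up something like this to poke the JSS every few minutes so it never sits completely idle, and to log exactly when it stops answering (the hostname is a placeholder for ours; -k is there because of our self-signed cert):

    # Hit the JSS every 5 minutes; log timestamp, HTTP status, and response time
    while true; do
        echo "$(date '+%F %T') $(curl -sk -o /dev/null -w '%{http_code} %{time_total}' https://jss.example.com:8443/)" >> /tmp/jss-poke.log
        sleep 300
    done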
Posted on 11-02-2012 07:20 AM
It happened to us again this morning around 03:15 CDT. Backup starts at 02:00 and finishes about 02:16.
Looking at the debug logs, I can see that our Tomcat is not idle in the middle of the night. The machines are still checking in, plus we have users in India. Not a flood of users, but they trickle in, keeping Tomcat active.
Up until 03:15:00, I can see the connections come and go, limited in number.
At 03:15:25 (the next monitoring point after 03:15:00), the number of connections from our machines on 8443 skyrockets, and the number of connections to MySQL goes way up as well. After a while, the clients drop their connections on 8443, but the connections to MySQL persist.
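For anyone curious, the monitoring is nothing fancy, just periodic connection counts along these lines (assuming the default ports, 8443 for Tomcat and 3306 for MySQL):

    # Established client connections on the Tomcat SSL port
    netstat -an | grep "\.8443 " | grep -c ESTABLISHED

    # Established connections to MySQL
    netstat -an | grep "\.3306 " | grep -c ESTABLISHED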
There is nothing of relevance in system.log.
We're checking whether this is related to log purging and database backups.
We're also going to move around the times of the log purging and DB backups to see if that has any impact on this.
Posted on 11-27-2012 06:31 AM
Bumping this one up. Our JSS seems to just stop working for no reason. 10.8.2 Server with 32 GB of RAM.
Posted on 02-21-2013 01:43 PM
Same type of issue here. The JSS stops responding; restart Tomcat and it works for a while, sometimes days, sometimes hours. Currently have a case open; we'll see where it goes.
Posted on 02-21-2013 01:47 PM
Bumping the RAM up to 32 GB, giving Tomcat 16 GB, substantially cutting back on the size of our MySQL database, and moving toward failover/load balancing helped solve this.
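For anyone wanting to do the same database trim, a query along these lines shows which tables are eating the space (assuming the default JSS database name, jamfsoftware; adjust if yours differs):

    # Largest tables in the JSS database, biggest first
    mysql -u root -p -e "SELECT table_name, \
      ROUND((data_length + index_length) / 1024 / 1024) AS size_mb \
      FROM information_schema.tables \
      WHERE table_schema = 'jamfsoftware' \
      ORDER BY size_mb DESC LIMIT 20;"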