Posted on 05-20-2013 10:35 AM
Hi All,
I've got 3 processes that are WAY past their 300-second timeout. I was wondering how I can see what they actually are and troubleshoot from there, rather than just restarting Tomcat. This is the output I get from show processlist; run on the jamfsoftware DB (server names changed to protect the innocent):
+--------+--------------+-------------------------+--------------+---------+-------+-------+------+
| Id     | User         | Host                    | db           | Command | Time  | State | Info |
+--------+--------------+-------------------------+--------------+---------+-------+-------+------+
| 118967 | jamfsoftware | server.domain.com:53046 | jamfsoftware | Sleep   | 27081 |       | NULL |
| 118974 | jamfsoftware | server.domain.com:53052 | jamfsoftware | Sleep   | 27060 |       | NULL |
| 118980 | jamfsoftware | server.domain.com:53058 | jamfsoftware | Sleep   | 27060 |       | NULL |
+--------+--------------+-------------------------+--------------+---------+-------+-------+------+
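For reference, this is roughly how I've been narrowing the list down to just the long-idle connections instead of scrolling the full processlist. It's only a sketch, and it assumes MySQL 5.1 or later, where INFORMATION_SCHEMA.PROCESSLIST is available; the 300 is just the timeout I mentioned above.

-- List only connections that have been idle longer than 300 seconds.
SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Sleep'
  AND TIME > 300
ORDER BY TIME DESC;

-- MySQL itself only drops a sleeping connection once TIME passes wait_timeout
-- (28800 seconds by default), which is why these can sit around for hours.
SHOW VARIABLES LIKE 'wait_timeout';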
Posted on 05-21-2013 04:55 AM
I'm seeing the same thing on my end. I'm not 100% sure, but I think those are just part of a connection pool being kept alive for efficiency reasons. This saves the JSS from having to open a new connection and authenticate every time it needs to do something.
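One way to sanity-check the pool theory (just a sketch, using the jamfsoftware user from the output above): count the sleeping connections a few times over an hour. A connection pool holds a small, stable number; a leak keeps climbing.

-- Count idle connections owned by the JSS database user.
-- A pool shows a small, stable count; a leak grows over time.
SELECT COUNT(*) AS idle_connections
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE USER = 'jamfsoftware'
  AND COMMAND = 'Sleep';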
Posted on 05-21-2013 05:40 AM
The reason I ask is that Jamf support seems to consider these "hung" connections a liability, and I rarely see them outside of times when the server stops responding.
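If support wants them gone without bouncing Tomcat, you can kill an individual connection by the Id from the processlist. Sketch only: if the JSS's pool still holds that connection, the next request that borrows it may throw an error until the pool validates or replaces it.

-- Kill one idle connection by its processlist Id (118967 is from the output above).
KILL 118967;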