Posted on 12-21-2016 12:59 PM
One of our environments is set up with two Tomcat instances, not load balanced: one Tomcat for clients and a second for administration (set to Master).
We have selected the following Email Alert for our accounts on JSS:
"An instance of the JSS web application in a clustered environment fails"
We received this email warning from the Master tomcat instance:
An exception has caused clustering to fail on the server "server.domain.com". Information on this node will be out of date.
Error: Could not read resultset: unexpected end of stream, read 0 bytes from 4
Query is: SELECT * FROM jss_cluster_updates
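For what it's worth, one way to sanity-check the table itself (a rough sketch, assuming shell access to the MySQL host and that the JSS database uses the default jamfsoftware name) is to run the same query straight from the mysql client:
mysql -u root -p
USE jamfsoftware;
SELECT * FROM jss_cluster_updates;
If that returns cleanly, the problem is more likely in the connection between Tomcat and MySQL than in the table itself.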
A Jamf site search comes up dry and Google comes up dry... any idea what this might mean before we engage Jamf?
Thanks,
Don
Posted on 12-21-2016 01:20 PM
No answers, but I'm curious how this pans out for you... We just migrated to a clustered solution for hosting our JSS involving three load-balanced member servers, a fourth dedicated admin box, and a fifth specialty box for Cisco ISE integration. Please feel free to post anything you learn from anyone regarding troubleshooting JSS clustering errors in the logs.
Posted on 01-05-2017 04:24 PM
Update: While clustering errors had more or less diminished in our environment, they began again (heavily) this week. I was seeing clustering errors beginning Monday morning, averaging one every 52 minutes for 61 hours.
Last night I did some digging into MySQL reference materials and forums and found something I've yet to come across on JAMF Nation or in previous Google searches, and that my TAM hasn't mentioned either. The exception was pointing to MySQL being out of resources. In conversations with JAMF support I've always been told to reduce MySQL connections to a max of 151, but that has never resolved the issue. MySQL documentation and other resources indicate that 151 is simply the default max_connections value, not a hard MySQL limit. The sizing practice I've been given (with JAMF) is 90 connections x number of Tomcat servers + 1 (i.e. 5 Tomcats x 90 + 1 = 451 max connections). This is supported in CJA training materials and MySQL documentation.
https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
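As a rough sketch of what that looks like in practice (the 451 below assumes the five-Tomcat example above; substitute your own count), you can check the current limit and peak usage from the mysql client and raise the limit at runtime:
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SET GLOBAL max_connections = 451;
To make it stick across restarts, the equivalent line goes in my.cnf under [mysqld]:
max_connections = 451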
The piece I've missed in the past is adjusting the open-files-limit.
https://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_open-files-limit
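A quick way to see whether the server is actually pressing up against that limit (just a sketch; Open_files only counts the files MySQL has open at that moment) is to compare the status counter with the configured ceiling:
SHOW GLOBAL STATUS LIKE 'Open_files';
SHOW VARIABLES LIKE 'open_files_limit';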
Since making the change to the open-files-limit we have been "clustering error..." free for the past 16 hours. I know it's still early, but at the rate we were seeing this week (one every 52 minutes), a 16-hour window would have meant roughly 18 clustering errors.
Here's what I did:
Stopped all tomcat instances and backed up MySQL.
Logged into the MySQL server
mysql -u root -p
Checked current open-files-limit ... our limit was set to 1024.
SHOW VARIABLES LIKE 'open%';
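In our case the output looked something like this (the 1024 is what we had; yours may differ):
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 1024  |
+------------------+-------+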
Exited MySQL
On the server hosting MySQL I edited the my.cnf file
sudo nano /etc/mysql/my.cnf
https://secure.hens-teeth.net/orders/knowledgebase/109/MYSQL-Out-of-resources-when-opening-file-Errcode-24.html
Added the line open_files_limit = 5000 to the my.cnf file as shown below
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
open_files_limit = 5000
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
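One caveat, depending on your distribution: on systemd-based systems the service unit's LimitNOFILE setting can cap whatever my.cnf asks for. If that applies to you, something along these lines (assuming the unit is named mysql) raises that cap too: run sudo systemctl edit mysql, add
[Service]
LimitNOFILE=5000
in the override that opens, then
sudo systemctl daemon-reload
sudo systemctl restart mysql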
Saved my changes and restarted MySQL. Confirmed the changes in MySQL by again running
SHOW VARIABLES LIKE 'open%';
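If the change took effect you should now see the new value, i.e. something like:
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 5000  |
+------------------+-------+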
Restarted all tomcat instances. So far so good. If the errors return I will update this post.
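In case it helps anyone watching for a recurrence, a couple of counters worth keeping an eye on from the mysql client (just a suggestion on my part, not anything Jamf prescribes):
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Open_files';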