Prior to our last upgrade we discovered that a table in our MySQL database had ballooned to a rather large size. The rest of the database was reasonable, but the "downloadable_file_chunk_data" table was enormous. The solution was to run the following in MySQL:
use jamfsoftware;
then
truncate downloadable_file_chunk_data;
(restart Tomcat, then)
/usr/local/mysql/bin/mysqlcheck -u root --optimize jamfsoftware
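(Side note in case it helps anyone following the same steps: a query along these lines against information_schema should show per-table sizes from inside MySQL, which makes it easy to confirm the truncate/optimize actually reclaimed the space. The GB conversion is my own math, so double-check it.)
-- per-table data size and reclaimable free space for the JSS database, largest first
SELECT table_name,
       ROUND(data_length / 1024 / 1024 / 1024, 2) AS data_gb,
       ROUND(data_free   / 1024 / 1024 / 1024, 2) AS free_gb
FROM information_schema.tables
WHERE table_schema = 'jamfsoftware'
ORDER BY data_length DESC
LIMIT 10;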
This resolved our issue, the backup became manageable again, and the upgrade to the most current version went fine. When we checked our backups afterwards, though, we saw this:
2.7G -rw-rw----. 1 mysql mysql 2.7G Nov 18 13:10 applications.MYD
11G -rw-rw----. 1 mysql mysql 11G Nov 17 14:08 downloadable_file_chunk_data.MYD
6.2G -rw-rw----. 1 mysql mysql 6.2G Nov 18 13:10 downloadable_file_chunk_data.TMD
So whatever caused this in the first place is clearly still there. Although it's easily fixed, I would love it if someone who's been using Casper for a while and has a better understanding of MySQL could shed some light on what's actually causing it. I prefer to be proactive rather than reactive. I found some info on Jamf Nation suggesting that packet size could play into it, but not being a MySQL guru I can't say for sure.
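In case packet size does turn out to be part of it, I'm assuming the setting in question is max_allowed_packet. Checking and raising it is easy enough, though treat the value below as a placeholder rather than a recommendation:
-- show the current value, in bytes
SHOW VARIABLES LIKE 'max_allowed_packet';
-- raise it for the running server only (does not survive a restart);
-- 512M here is just an example value
SET GLOBAL max_allowed_packet = 536870912;
-- to make it permanent, set max_allowed_packet under [mysqld] in my.cnf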
If it's relevant, we're hosting our JSS/JDS on RHEL on a physical machine that's currently overpowered for the size of our environment (64 GB RAM / 5 TB storage). Many thanks to anyone who has seen this and can point me in the right direction.