Database unresponsive due to JDS replication

jyergatian
Contributor

While I wait for a response from JAMF Support, I'm in need of help and a sanity check surrounding the database and the JDS.

Ultimately, yesterday I uploaded a 7.14 GB DMG... twice. The first time I used Casper Admin - it took a while, but when it was done, all was good - until I relaunched Casper Admin and found the file in red. I checked my JDS via the CLI and, sure enough, the file was nowhere to be found. I decided to re-upload it using the JSS interface. That took some time as well, and again it did not appear in the CasperShare folder in the CLI.

After some research, I found that the JDS needs a /tmp folder larger than what we typically allocate - fine, issue resolved using a symlink to the /data partition.
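For anyone hitting the same wall, the symlink fix can be sketched roughly like this. The paths here are illustrative placeholders, not the actual JDS tmp location - on a real JDS, OLD_TMP would be the tmp directory the JDS writes uploads into and NEW_TMP a directory on the larger /data partition:

```shell
#!/bin/sh
# Sketch: relocate a too-small tmp directory onto a larger partition and
# leave a symlink behind. OLD_TMP / NEW_TMP are hypothetical defaults;
# adjust them to your environment before running anything like this.
OLD_TMP="${OLD_TMP:-/tmp/jds-tmp-demo}"
NEW_TMP="${NEW_TMP:-/tmp/data-partition-demo/jds-tmp}"

# Make sure the roomier destination exists.
mkdir -p "$NEW_TMP"

# If the old location is a real directory (not already a symlink),
# carry its contents over, then remove it so the link can take its place.
if [ -d "$OLD_TMP" ] && [ ! -L "$OLD_TMP" ]; then
    cp -a "$OLD_TMP/." "$NEW_TMP/"
    rm -rf "$OLD_TMP"
fi

# Point the old path at the new home on the big partition.
ln -sfn "$NEW_TMP" "$OLD_TMP"
```

Stop the JDS services before moving the directory so nothing is writing into it mid-move.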

Now, however, I've noticed my database is unresponsive - I can't stop MySQL or log in to it. Each JSS server is stuck at 'Initializing Database' after a quick restart. Browsing mysql/data/jamfsoftware in the CLI, I've found some large files - about the size of my DMG. (Picture attached.)


The first question that comes to mind is what on earth those are doing there, but putting two and two together, I imagine uploaded files traverse the DB before hitting the JDS. Because my JDS couldn't replicate, they were stuck there; because of their size, our /data partition is now full; and because the /data partition is full, MySQL has become unresponsive...

help...

2 ACCEPTED SOLUTIONS

chriscollins
Valued Contributor

@jyergatian If it doesn't clear up that table after a while, here are my notes from what JAMF had me do in the past (though I obviously recommend you run it by them first):

TO REPAIR downloadable_file_chunk_data table issue:

  1. mysql -u root -p
  2. use jamfsoftware;
  3. show tables;
  4. TRUNCATE TABLE downloadable_file_chunk_data;
  5. Restart MySQL and Tomcat

That should truncate the table and clear out the stuck data.
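Before truncating anything, it may be worth confirming that this table really is what's eating the disk. One illustrative way to check from the mysql prompt, using the standard information_schema views (table names per the steps above; nothing here is JDS-specific):

```sql
-- Illustrative check: on-disk size of the downloadable_file tables.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
WHERE table_schema = 'jamfsoftware'
  AND table_name LIKE 'downloadable_file%';
```

If size_gb for downloadable_file_chunk_data roughly matches the DMG you uploaded, that lines up with the stuck-replication theory.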


jyergatian
Contributor

JAMF support came back with the following:

truncate table downloadable_file_chunk_data;
truncate table downloadable_file_chunk_metadata;
truncate table downloadable_files;

Then repair and optimize the database and restart Tomcat.

As for repair and optimize, I ran:

mysqlcheck -u root -p --auto-repair --all-databases
mysqlcheck -u root -p --optimize --all-databases

The issue appears resolved. I'm re-uploading the file now - with a larger /tmp folder on the JDS, there should no longer be a bottleneck...


9 REPLIES

chriscollins
Valued Contributor

Not going to help you right now, but yeah, that is how uploading files to the JDS works. The files go into a temporary location on the hard drive in the Tomcat directory, then the file gets pulled into the database in the downloadable_file_chunk_data table, and then it gets pulled down to the main JDS and replicated from there. It's... not the most efficient thing.

chriscollins
Valued Contributor

Unfortunately, it's why we were forced to give up on the JDS for our environment very quickly. That MySQL table wouldn't get flushed properly, which would then cause our backups to fail, etc. The only thing we could do was go into MySQL and manually clear the table.

jyergatian
Contributor

JDS seems so promising on paper - but I guess that's why the paper surrounding it is so limited.

Can anyone confirm whether removing the two astronomically sized files via the CLI is a good idea? Maybe follow that up with killing the running mysqld process and then launching MySQL again?

chriscollins
Valued Contributor

Definitely DO NOT do that. You never want to manipulate the raw database files like that. You will corrupt your database.

ernstcs
Contributor III

I can tell you that JAMF is acutely aware of the shortcomings of the current JDS.

I also had my JSS filling up - I didn't know it was going to do that, but I eventually figured out that it was handling replication via the MySQL database. There is a process for manually replicating; not sure if you've seen it.

https://jamfnation.jamfsoftware.com/article.html?id=351

jyergatian
Contributor

Thank you both for your input. A somewhat convoluted solution was to increase the size of the /data partition and restart the process - at least the DB is back online. As for these files - I hope that now that the JDS /tmp folder is more accommodating of their size, they'll move out of my DB and into the JDS. Time will tell...

johnpowell
New Contributor II

Just a minor update (might only be necessary for the path change in 10.11):

TO REPAIR downloadable_file_chunk_data table issue:

  1. /usr/local/mysql/bin/mysql -u root -p
  2. use jamfsoftware;
  3. show tables;
  4. TRUNCATE TABLE downloadable_file_chunk_data;
  5. Restart MySQL and Tomcat

That should truncate the table and clear out the stuck data.