Path of a package \ JDS

ShaunRMiller83
Contributor III

It took a while to find this information, and I ultimately had to ask our JAMF Software Account Manager for it while troubleshooting some JDS issues.

I figured I would share this since the official JDS documentation doesn't give this specific workflow (at least not anywhere I could find).

1.) The package is first uploaded to the master JSS's Tomcat instance.
2.) After the file has completely uploaded to the Tomcat instance, it is written into the database in the downloadable_file_chunk_data table.
3.) The downloadable_files table is where the root JDS looks for new packages to download when it checks in for policies. An entry is made in downloadable_files after the upload in step 2 completes, recording the package ID and name and marking the package as ready for download.
4.) After the package is loaded into the database, the file in JSS/Tomcat/temp/ is removed.
5.) When the root JDS checks in for policies (every 5 minutes), it looks at downloadable_files and begins downloading.
6.) The file is downloaded to tmp and then copied to /JDS/shares/Caspershare/ after the download completes.
7.) The data should be flushed from the downloadable_file_chunk_data table after the transfer to the JDS is complete.
8.) The package will appear under the Distribution tab of the JDS after an inventory update is submitted, which should happen automatically as part of the download process.
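If you want to watch this happen during a large upload, a rough sketch is below. It assumes shell access to the JSS/database server, a MySQL login, and the default jamfsoftware database name; the paths and credentials are placeholders you would adjust for your own environment.

# Watch the staging tables fill and drain while a package replicates (steps 2, 3, and 7 above)
mysql -u root -p jamfsoftware -e "SELECT COUNT(*) AS files_ready FROM downloadable_files; SELECT COUNT(*) AS chunks_staged FROM downloadable_file_chunk_data;"

# Watch the Tomcat staging file appear and then get removed (steps 1 and 4 above)
du -sh /path/to/JSS/Tomcat/temp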
19 REPLIES

jrippy
Contributor II

You said you are troubleshooting this? Please elaborate. My database is growing to 10+ gigs. Say I upload the CS6 Master Collection installer. Casper Admin churns on it for a while, then completes. It first appears ok, but if I close and reopen Casper Admin, it will show as missing. That is when the database grows like crazy and it never seems to come back.

ShaunRMiller83
Contributor III

@jrippy

I am by no means a JDS expert. We just started exploring them over the last few weeks.

That said, assuming your package is known to be good, does your JDS system's Jamf.log indicate anything? Or the logs from the machine you are running Casper Admin on?

What we have seen is that even after Casper Admin has finished its replication, Casper is still processing the packages. As the workflow above describes, the package goes through Tomcat, enters the database (which may be why your DB is blowing up), and then ultimately lands on the JDS share.

Additionally, for Adobe packages we have seen checksum mismatches, after which Casper deletes the JDS's local copy of the package. That may also be what you are seeing.

Hopefully this helps a little

Keep me posted
Shaun

kilodelta
New Contributor III

@ShaunM9483

Super helpful, thank you! The JDS documentation is rather lacking, to be blunt, and knowing things like this can be really useful for diagnosing things - for example, why ~/JSS/Tomcat/temp kept bloating to huge sizes when I uploaded big packages.

jrippy
Contributor II

@ShaunM9483

Yes, thank you. Like you, we just started our JDS implementation a few weeks ago. I had a single file distribution point that I've replaced with 4 JDS instances. Everything had been going fairly smoothly until the past week or two with these issues.
It was interesting that you mentioned Adobe, as the major issue right now is CS6 Master Collection.
I've put in a support ticket for this, but I will definitely check some of the logs you pointed out to see if there's any more info I can glean.

Thanks again!

ShaunRMiller83
Contributor III

@jrippy

Yeah, we are only seeing issues with Adobe packages currently. That said, they are our largest files. A few things we have noticed or tried:

1) Certain files are getting stuck in the Tomcat temp folder (under the JSS install directory, in Tomcat\Temp\JSS), so we increased the Tomcat max memory slightly to see if that would help clear those files.

2) We also tried to create a symlink to another share. The JSS didn't like that at all, so we had to roll back and normalize.

3) We have also noticed the downloadable_file_chunk_data table has grown to 40 GB and doesn't seem to be releasing the packages within it, which sounds like we may all be having the same or similar issues. (A quick way to check the on-disk size of those tables is sketched below.)
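For anyone who wants to confirm whether these tables are what's eating the disk, a rough check is the query below. It assumes local MySQL access and the default jamfsoftware schema name; adjust both for your environment.

# Report the on-disk size of the JDS staging tables, in GB
mysql -u root -p -e "SELECT table_name, ROUND((data_length + index_length)/1024/1024/1024, 1) AS size_gb FROM information_schema.tables WHERE table_schema = 'jamfsoftware' AND table_name LIKE 'downloadable_file%';"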

I am working with Jamf Support now so will keep this thread updated.

ShaunRMiller83
Contributor III

Just as a follow-up, our JAMF SE had us do the following on our DB server:

From the MySQL command line, run the following (press Enter after each command).

These first 4 steps show how much the JDS has put in your DB:

1) use jamfsoftware;
2) select count(*) from downloadable_files;
3) select count(*) from downloadable_file_chunk_data;
4) select count(*) from downloadable_file_chunk_metadata;

This part clears those tables:

5) truncate downloadable_files;
6) truncate downloadable_file_chunk_data;
7) truncate downloadable_file_chunk_metadata;

We then manually copied the packages into our JDS share, and once the copy was done we ran sudo jamfds inventory.
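For reference, that last step looked roughly like the following. The source path is a placeholder for wherever your known-good copies of the packages live; the destination is the JDS share path from the workflow above.

# Copy the packages straight onto the JDS share, then force an inventory so the JSS learns about them
rsync -av /path/to/known-good/packages/ /JDS/shares/Caspershare/
sudo jamfds inventory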

This seems to have set us straight and we haven't seen any other issues since, but your mileage may vary.

Shaun

cstout
Contributor III

I've been working with JAMF on upload-to-JDS issues for weeks now. There appears to be a defect (I don't have the defect number) where files larger than ~1GB never make it to the JDS. They upload seemingly successfully; you can name the file, categorize it, and save in Casper Admin. When you quit and re-open, the file is now marked as missing. This is the defect, and it appears to be directly related to Casper Admin.app. The workaround we ended up finding (it only works if a JDS is your master distribution point) is to upload your 1GB+ package via the JSS web interface.

cstout
Contributor III

@ShaunM9483, thank you for writing that out. It wasn't until just a couple of days ago that support explained the details behind what's actually happening when you upload via Casper Admin to a JDS. There are a lot of pieces to this puzzle.

ShaunRMiller83
Contributor III

@cstout

We did see that, and what we found in our case was that Tomcat was being crippled. We had to increase the Tomcat max memory size slightly in the JSS Database Utility. Just for perspective, we never saw the issue on files smaller than 6 GB. Since we adjusted Tomcat, the issue hasn't returned. Granted, everyone's setup will vary, but that's what we saw and what worked for us.
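For anyone on a platform where the JSS Database Utility isn't handling this, the equivalent knob is the JVM heap handed to Tomcat. A minimal sketch, assuming a stock Tomcat layout with a bin/setenv.sh; the file location and the heap value are placeholders, and you'd restart Tomcat however your install manages it:

# Hypothetical example: raise Tomcat's max heap so it can stage a large package upload
echo 'export CATALINA_OPTS="$CATALINA_OPTS -Xmx8g"' >> /path/to/tomcat/bin/setenv.sh
# then restart the Tomcat service for the change to take effect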

cstout
Contributor III

@ShaunM9483 Adjusting Tomcat max memory resolved your large-file upload issue with Casper Admin and JDS?

ShaunRMiller83
Contributor III

@cstout

Correct. Prior to adjusting the setting, packages larger than the Tomcat max memory size would sit in the Tomcat temp folder and never go anywhere.

Since we adjusted the setting, we haven't seen the issue return.

cstout
Contributor III

@ShaunM9483 If that solves the issue on my end, you've not only made my month, but I know the guys from JAMF working with me on this issue will be overjoyed. Thank you for sharing that!

ShaunRMiller83
Contributor III

@cstout

I really do hope this addresses your issue. Granted, each setup will vary and present its own challenges, so I've just described what has worked for us.

Assuming this fixes things, a few overall observations:

1) Your server doesn't just need memory and storage for your database to handle your largest package; it also needs enough headroom to allocate that much memory to Tomcat for your largest package (a quick way to find that number is sketched after this list).

2) This still seems like a bug, or at the very least an inefficient way to handle the process, because you then have to blow up your memory requirements.
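If you want a number to size the Tomcat allocation against, one quick way is below; the share path is a placeholder for your own master distribution point.

# List the largest packages on the distribution point to size the Tomcat max memory against
du -sh /path/to/CasperShare/Packages/* | sort -rh | head -5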

Please let me know how your testing goes.

cstout
Contributor III

@ShaunM9483 I don't want to get too excited, but I just uploaded a 4GB dmg and waited about five minutes or so for it to transfer from Tomcat to MySQL and then another five minutes for the JDS to check in and it started copying the file over. That's huge progress. I just hope it will keep working. I'll be sure to update if I run into the issue again but for now I'm cautiously optimistic that this fixed it.

cotten
New Contributor

I believe I'm struggling with this same issue. I've got a 9GB Creative Cloud package that isn't making it to the JDS. Did you do a 1:1 allocation, meaning I'd have to give Tomcat 9GB?

ShaunRMiller83
Contributor III

@cotten

Based on our use of the JDS (and we saw this behavior on JDS 9.31)

Yes, that is basically what we observed. Tomcat needs its max memory allocation set to the size of your largest package (and maybe even slightly larger than that). That somewhat makes sense, since your JDS is an HTTP(S) share and the package is first brought into the DB to be processed and then checked in.

cotten
New Contributor

Do you know if we can allocate a swap file to cover this somehow? It seems a little ridiculous to allocate so much RAM to a system that really doesn't need it for anything other than a very rare import situation.

ShaunRMiller83
Contributor III

I would think you could do that, but we never tested or did that, so I am not sure. You may end up with some fragmentation as well, which may not be a huge deal, but I figured I would call it out anyway.

I do agree with you, though; the need to have this much RAM just sitting there unused 90% of the time is a huge waste of resources and money. You may want to reach out to your JAMF buddy and see what they say about this as well.

jrippy
Contributor II

@cotten @ShaunM9483
We did up the RAM allocated to our server (a RHEL6 VM), but not to more than our largest package. BTW, our largest package is 37GB worth of Sibelius 7 content.
I think we upped our RAM to 8GB, but our underlying issue turned out to be hard disk space. Our network admin configured the / partition for RHEL to be only 40GB, with a separate "services" partition of, I think, 200GB; this is where the JSS was installed. Tomcat was installed on the services partition as well, but by default the Tomcat temp directory was redirected (linked) to a directory on the primary root (/) partition. Since that was discovered and corrected, we haven't run into this problem again.
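In case it helps anyone chasing the same thing, a quick way to see where Tomcat's temp directory actually lives and whether that filesystem has room is below; the install path is a placeholder for your own JSS/Tomcat location.

# Check whether the temp directory is a link and which partition it really sits on
ls -ld /path/to/Tomcat/temp
df -h "$(readlink -f /path/to/Tomcat/temp)"

# Tomcat's own CATALINA_TMPDIR variable (e.g. set in bin/setenv.sh) is one way to relocate it without symlinks
export CATALINA_TMPDIR=/services/tomcat-temp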