JDS replication: why does it go through the JSS?

calum_carey
Contributor

I have a distribution point defined in the JSS; in my example it is a Linux VM with Netatalk providing AFP shares and Apache allowing HTTP downloads.
If I want to replicate this to a JDS instance using Casper Admin, Casper Admin mounts the JDS repository but then seems to upload the files from the distribution point to the JSS server's C: drive, and only then down to the JDS.
This kind of sucks. Why eat up all the storage space on the JSS when you could replicate directly from the distribution point to the JDS? It is also a problem if you have a very large distribution point and a not-so-large C: drive on your JSS, because replication will fill the JSS C: drive without any warning; Casper Admin just sticks at a point saying that it is replicating a particular package or DMG.

Once it has replicated from the distribution point to the JDS, is it safe to remove the files from the JSS C: drive?
It seems to store them in the MySQL tables:
downloadable_file_chunk_data.MYD downloadable_file_chunk_data.TMD

22 REPLIES

mrowell
Contributor

This is certainly something to be aware of.

Our JDS replication is not working, and this caused the downloadable_file_chunk_data table to grow to 40 GB. This can quickly cause problems with the JSS server and its backups.

bajones
Contributor II

Same issue here. If I add a JDS as the root, it will sync maybe one or two files and then stop. If I manually replicate via Casper Admin, the files seem to go straight into the database, with no network traffic to the actual JDS.

calum_carey
Contributor

Thanks for the confirmation, guys. Glad it's not just me missing something.

bajones
Contributor II

I turned off server backups and spent several days troubleshooting this issue. It appears to be expected behavior. See my posts at https://jamfnation.jamfsoftware.com/discussion.html?id=8620 for more info.

EDIT: Optimizing your tables after replication is complete (verified by checking the status of your JDS) should recover the disk space taken up by the downloadable_file_chunk_data table. This doesn't resolve your issue with running out of space during replication because the table will grow again once you start adding new packages.
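For anyone landing here from search, a minimal sketch of that optimize step, assuming the default "jamfsoftware" database name and a MySQL account with the privileges to rebuild tables (both assumptions; adjust for your install):

```shell
# Run only after the JDS status shows replication is complete.
# OPTIMIZE TABLE rebuilds the table and reclaims the space freed
# by the purged chunk data; on a 40 GB table this can take a while.
mysql -u root -p jamfsoftware <<'SQL'
OPTIMIZE TABLE downloadable_file_chunk_data;
SQL
```

The .MYD files mentioned above suggest MyISAM, where OPTIMIZE reclaims space in place; on InnoDB it is mapped to a full table rebuild, so expect it to temporarily need roughly as much free disk as the table itself.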

egjerde
New Contributor III

This is pretty frustrating behavior... it took me a while to figure out why our JSS server bombed this morning.

So, this link might be useful for those searching through these posts. It's a JAMF article on how to manually move your packages to your root JDS, avoiding the database hit that replication with Casper Admin causes (it's also a lot faster).

https://jamfnation.jamfsoftware.com/article.html?id=351

I'm still waiting for the tables to empty out in my database; I'm not sure what actually prompts the server to cleanse that download stash in MySQL. Optimizing tables has yet to make it happen.

ernstcs
Contributor III

I'm going to find the product manager for the JDS and have a few words with them, based on my initial experience...

Thanks for your posts here. Just STOOPID behavior using the database to migrate data. W T F

lynnp
New Contributor

@ernstcs I completely agree. It baffles me why data needs to hit 3 locations before landing in its final resting place (tomcat tmp -> mysql db -> JDS tmp -> JDS storage dir)

If your package is too large for any of these locations, things will break, and if your /tmp happens to be partitioned on its own, this will be difficult to diagnose. Make sure there is sufficient space in each of the locations listed above!
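As a quick sanity check, a small shell loop can report free space at each hop. The paths below are hypothetical defaults (they vary by platform and install); substitute your own Tomcat temp, MySQL data directory, and JDS locations:

```shell
# A package must fit in every one of these locations during replication.
# Paths are illustrative; adjust them for your install.
for p in /usr/local/jss/tomcat/temp /var/lib/mysql /tmp /usr/local/jds/shares; do
    if [ -d "$p" ]; then
        # df -Pk gives POSIX-format output in 1 KB blocks; column 4 is free space.
        df -Pk "$p" | awk -v p="$p" 'NR==2 {printf "%s: %d MB free\n", p, $4/1024}'
    else
        echo "$p: not present on this host"
    fi
done
```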

dderusha
Contributor

I've spent the last few weeks battling JDSs on a corporate network. I had to step away from it and am redesigning with AFP and SMB shares. It would get close to replicating but would always fail, and it never got to the bigger Adobe and OS packages we have; it would always hang. We tried manual copies from our old Mac server to the JDS. Failed. The JDS runs inventory every 15 minutes, so I would have had to get our entire 200 GB of packages uploaded before the inventory ran. That never happened, and if a package wasn't replicated to the database on the C: drive as explained above, the JSS would tell the JDS the package didn't exist and the JDS would remove the manually copied package.

Not Fun

Dan

Kedgar
Contributor

I understand that this thread is a bit old, but I just got back into re-architecting and upgrading our JSS environment after a couple of years of it not being maintained by the proper teams. I ran into this issue and opened a case with JAMF support. They said this was a design decision they were forced into in order to enable web upload of packages.

This is a big problem in our environment, as we don't like to allocate a lot of extra disk to our servers (virtual, RHEL), not to mention allocating that "wasted" space on our separate database server as well. I'm still looking to use the JDS because I want a fully Linux environment.

chriscollins
Valued Contributor

@Kedgar Why not just rsync your packages to your Linux boxes and just enable HTTP based distribution points?

bentoms
Release Candidate Programs Tester

@Kedgar, we do what @chriscollins mentioned: one JSS & 10 additional DPs.

On OS X, mind you, but it will obviously work on Linux too.

davechristensen
New Contributor II

Bump from the dead...

We have this exact issue. As the JDS seems to run its policy every 5 minutes, large packages (e.g. Adobe) get deleted before they can be manually copied; they fail the hash check and the JDS deletes them halfway into the transfer. Super awesome logic.

Has anyone found a fix for this problem, or a way to modify the policy check-in interval?

sprattp
New Contributor II

I also had similar problems with the JDS and the subsequent large database, but as others have stated, you can build your Linux DP with SMB and HTTP (or HTTPS). I have had no problems with these.

I hate to say it, but while the JDS was a nice idea as a replacement for Xserves, I don't see the point in using it. Sorry, Jamf.

bmccune
Release Candidate Programs Tester

Just encountered this issue for the second time in the past 3 months and have to do a SQL repair on the database once again. Hopefully it works... last time I was screwed and had to revert to an older DB (DB backups weren't happening because of the massive downloadable_file_chunk_data table). I'm going to toss the JDS in the trash once I get things running again and go with an SMB/HTTP share on Linux. Not sure why the JDS is even an option for paying customers, considering it's so buggy.

bpavlov
Honored Contributor

My understanding is that the JDS was created for a different use case/purpose, and it wasn't anticipated that admins would use it in the manner they are. This would explain why they are rebuilding it from the ground up, so they can take care of the many problems people have with it. At least that's what I've been told.

analog_kid
Contributor

If that's true, @bpavlov, I'm curious what use case JAMF designed the JDS for. In what manner are admins using it that JAMF didn't anticipate? I'm not sure what they've told you, but my skepticism meter just got pegged.

At the risk of venting my frustration in this post, I can't see how we're using it in any way other than JAMF designed and intended: as self-replicating distribution points. I view the issues with replicating large packages as a design flaw, not user error, unless JAMF never intends for us to push out an Adobe CC/CS package or image a computer via a JDS, which are precisely the use cases Casper is for.

It's true nobody made us convert to JDSs, and I've been pretty patient working through all the bugs/issues, but at this point I wish they were completely sorted. It's welcome news that they're redesigning how the JDS works, because the number of hops a package makes to land on a JDS just screams multiple points of failure.

bentoms
Release Candidate Programs Tester

@analog_kid I have heard a few references to a "JDS 2.0", so if that's a thing, maybe some of the frustrations will be alleviated.

When the JDS was released back with v9, I'm sure that JAMF advised about this DB behavior. It's a shame that the pass-through gives no real value to us admins.

If during the passing through the DB there was some package analysis that could aid with automation, that would be cool.

Maybe with patch management. Idk

wmateo
Contributor

@bentoms

I am late to the party, but do you have a sample rsync script? I am looking to accomplish the same, but I'm not sure what parameters I need for proper mirroring, incremental sync, etc. Basically, I need to mirror all our remote DP sites from the master.

bentoms
Release Candidate Programs Tester

@wmateo Sure, I blogged about my setup & the RSYNC section of the post is here.

calumhunter
Valued Contributor

@loceee also has a DP sync script which is pretty cool and uses the JSS API to get the DPs:
https://github.com/loceee/OSXCasperScripts/blob/master/synccdps/synccdps.sh

donmontalvo
Esteemed Contributor III

@bentoms this is awesome. Rsync using RSA keys, and having each replica check in to master for changes every minute (or whatever) would mean never ever having to sync again using Casper Admin. This came up today, and of course your site was the first that came up in our Google search. :)

--
https://donmontalvo.com

bentoms
Release Candidate Programs Tester

@donmontalvo glad you like it!

It's largely what we had with the old, old JSS utility back in the pre-9 days.