Posted on 12-28-2016 07:44 AM
We are moving our distribution point from a Mac server with a Promise Array. Our JSS lives on a RHEL VM right now. Our agency wants us to go the Linux VM route now that the Apple HDD is end of life. Anyone got any horror stories about JDS? It seems like the logical next step, but some of the things I've read worry me. Casper Imaging seems to have been a problem in the past. Is it still an issue on 9.96? Should I have them create a whole other VM just for the JDS, or should I just add it to our existing JSS and have them add storage space for the CasperShare? Any tips or best practices? Thanks in advance.
Posted on 12-28-2016 10:12 AM
I had major performance problems with the Casper database when I used a JDS in our initial setup almost two years ago. A couple of months in, I switched to a plain file share distribution point and haven't had problems since. Your FSDP can be any SMB (or AFP) share on Linux, Windows, or macOS Server.
Our JSS lives in a CentOS VM.
Posted on 12-28-2016 11:44 AM
I'm totally confused by the hate for JDSes. We have three right now, might add a fourth soon, and everything has been humming along perfectly: no database issues, no Casper Imaging issues, no performance issues, nothing.
Our JSS is on 2012 R2, and the JDSes are a mix of 10.10.x/Server 4.x and RHEL 6.6.
Posted on 12-28-2016 12:51 PM
The file share DPs are far less complicated to maintain than the JDS environment. If you do decide to deploy the JDS solution, pay careful attention to the OS versioning requirements: JDS has not been updated to provide supported binaries for recent, or even older, Ubuntu LTS releases, for example. I think most people have simply concluded that secured SMB shares are easier to stand up in a virtualized environment and easier to maintain on a regular basis.
Posted on 12-28-2016 01:19 PM
I'm not saying this just to be contrary, but how are SMB shares simpler? I either rack a Mac mini or have our server team roll up a RHEL VM, open a .pkg or .run installer as appropriate, and enter the JSS info. Then I'm done, and can start configuring network segments to use that JDS after a night of automatic syncing.
I'm all for using a simpler workflow, so if file shares are that, then I'd like to learn more about how they improve the process.
Posted on 12-28-2016 06:19 PM
The one thing to keep in mind with a JDS is that when you initially add files to your primary JDS instance, the package first gets chunked into the JSS database and then chunked down to the primary JDS. Once the file has gone through the database and finally resides on the primary JDS, it's then replicated to all the other JDS instances. That final replication is done directly from one JDS to another, so the file only has to go through your database once. But just know that you will need at least as much free space as your largest package: I had a 50+ GB Adobe Creative Suite package that caused my JSS to run out of free space and caused all sorts of problems.
this article is worth a read:
Manually Replicating Packages from a File Share Distribution Point to a Root JDS Instance
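Purely as a sketch of the free-space point above (the paths are placeholders, not anything from Jamf's docs), a pre-upload sanity check might look like:

#!/bin/bash
# Rough pre-upload check: is there at least as much free space on the
# JSS/database volume as the package we are about to upload?
PKG="/path/to/AdobeCC.pkg"   # placeholder package path
VOLUME="/var/lib/mysql"      # placeholder volume holding the database
pkg_kb=$(du -sk "$PKG" | awk '{print $1}')
free_kb=$(df -Pk "$VOLUME" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt "$pkg_kb" ]; then
    echo "Not enough free space on $VOLUME for $PKG" >&2
    exit 1
fi
echo "OK: ${free_kb} KB free, package is ${pkg_kb} KB"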
Posted on 12-28-2016 06:24 PM
The thought of JDS injecting many multi-GB size PKGs into the Jamf software database during replication is enough to make me cringe and cower. #Horror
I would much rather see sound replication processes documented and supported: rsync for Red Hat distribution points, and robocopy (or rsync) for Windows Server.
Scripts that enable full/diff replication would turn the horror story into a very happy Lego Jamf experience. :)
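As a sketch of what such a script might look like for a Red Hat FSDP (the host and paths are made up for illustration):

#!/bin/bash
# Minimal full/diff replication sketch for a Linux file share DP.
# SRC and DEST are placeholders for your master share and replica.
SRC="/srv/CasperShare/"
DEST="replica.example.com:/srv/CasperShare/"

# Diff run: copy only new/changed files and prune deletions.
rsync -av --delete "$SRC" "$DEST"

# Full run (say, weekly): also verify every file by checksum
# rather than trusting size/mtime:
# rsync -av --delete --checksum "$SRC" "$DEST"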
Posted on 12-28-2016 11:40 PM
We use lftp to replicate packages to all our Mac servers. You need Homebrew, Xcode, ssh-copy-id, and lftp, of course. I just run this on the remote servers.
lftp -u username,pass -e "set sftp:connect-program 'ssh -a -x -T'; mirror -v -c --delete --loop --use-pget-n=10 -P 2 /Volumes/ServerHD2/CasperAFPShare/Packages /Volumes/ServerHD/CasperAFPShare/Packages ; quit" sftp://casper.share.com
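For anyone decoding that one-liner, the key mirror options are roughly (my annotations; verify against your lftp version's man page):
# -c               continue interrupted transfers instead of restarting them
# --delete         remove files on the target that no longer exist on the source
# --loop           repeat the mirror until a pass finds no changes
# --use-pget-n=10  download each file in 10 parallel segments
# -P 2             mirror 2 files at a time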
I like the segmented / parallel download, continue, and loop features of lftp dattebayo!
Posted on 12-29-2016 06:01 AM
I see you @Eigger. Your chakra is strong.
Posted on 12-29-2016 06:45 AM
To add on to @jchurch's comment, the packages are also first stored temporarily on the JSS instance you are working with before being sent over to the MySQL instance, and then on to the JDS instance. You need to keep this in mind when sizing all of your VMs, or you will encounter problems with the disk filling up and all the fun that entails: OS crashes and database corruption. I am not sure why they have not included these details in the setup documentation, as this is a very critical thing to understand about the environment.
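Not from any Jamf doc, just a crude watchdog sketch for catching this before it bites (the path, threshold, and mail setup are all assumptions for illustration):

#!/bin/bash
# Alert when the volume holding MySQL (adjust the path for your layout)
# passes 85% used, so a large package upload doesn't fill the disk.
used=$(df -P /var/lib/mysql | awk 'NR==2 {gsub(/%/,""); print $5}')
if [ "$used" -gt 85 ]; then
    echo "JSS host disk at ${used}% - check before uploading packages" \
        | mail -s "JSS disk space warning" admin@example.com
fi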
Also, regarding using rsync for manual replication: be sure to run it with the checksum flag. There can be cases where files appear to have been moved over properly, but the JDS binary freaks out on discrepant checksums and kills the file.
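For example (a sketch; the destination path is a placeholder, not the actual JDS share layout):

rsync -av --checksum /srv/CasperShare/Packages/ jds.example.com:/path/to/jds/CasperShare/Packages/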
Posted on 12-29-2016 08:05 AM
Aside from the way files get to the JDS (through the JSS), I have no complaints about our two JDS instances. I have been running a JDS since they first came out, and as long as you remember how files get there when uploading via Casper Admin, it is not a huge hassle.
As far as Casper Imaging goes, the only issue using a JDS is with laying down an OS via block copy (IIRC), and that's easy enough to get around by simply adding an AFP share to the JDS. Then, when using Casper Imaging for an OS replacement (nuke & pave), you just choose the AFP share on the JDS. Instructions for adding the share on Linux are here.
Posted on 12-29-2016 01:53 PM
JDS as it is now is a good solution if you have the right hardware and network. If you can separate the JSS, database, and master distribution point, all the better. Do some tweaking on your SQL server as well so it can handle multiple connections. Upgrading to a 10 Gb network between your servers and sites would be awesome too. Unfortunately for us, although we have powerful servers, it's our network that's limiting us: we have satellite links between our 7 remote sites (we are in Barrow, Alaska), and the latency is just horrible! Hopefully we get fiber next year! So for now, although it's not automated like JDS, we still stick to a plain AFP file share distribution point. We are working on automating the lftp-no-jutsu, as rsync alone is just not enough.
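Not official tuning guidance, but as an example of the kind of SQL tweak meant here (the value 600 is an arbitrary illustration; size it for your own load):

# Check the current connection limit, then raise it at runtime;
# persist it in my.cnf under [mysqld] as max_connections = 600.
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -u root -p -e "SET GLOBAL max_connections = 600;"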
Speaking of next year, we "probably" are in for a treat: Jamf Pro is coming out. I haven't seen any mention of improvements to how files are replicated to the JDS yet. I've heard of JDS 2.0, but I don't know how good it's supposed to be. New Self Service, ServiceNow, a lot of "hopefully" good stuff.
I recommend file share distribution points unless you have the right ingredients for JDS; otherwise, wait for the release of Jamf Pro v10 and JDS 2.0.
Posted on 12-30-2016 06:35 AM
I have 5 instances of the JSS running with 4 JDS repositories (1 to the outside world) on Windows Server 2012 in our virtual environment. Works great.
Posted on 12-30-2016 07:39 AM
@stevewood: We nuke & pave all of our machines upon receipt and during reimaging (per security requirements), and even with an all-JDS setup the block copy works just fine. Maybe these were old versions of CI or the JDS that had an issue?
Posted on 12-30-2016 07:43 AM
@bvrooman I don't believe they have made any changes to CI or JDS that would resolve the block copy issue, and it may only apply to compiled configurations. I just remember that in order for block copy to be available, you need an AFP file share distribution point. I'll have to dig through Jamf Nation to find the articles that discussed this (beyond the link I shared above).
Posted on 12-30-2016 08:14 AM
Maybe the compiled configuration is it, then. I haven't used those in a few months, but I just checked the last batch of Macs that we set up and they all block-copied the base image.
Posted on 12-30-2016 08:17 AM
It's for any imaging. You'll need AFP/SMB, so the JDS needs AFP added as per this article.
That was a fun issue to discover during my CCE. :(
Posted on 01-03-2017 07:09 AM
It looks like that article is only applicable to 9.0 through 9.63... since we are on 9.96 currently, that may be why we haven't needed to add anything additional for Casper Imaging to work just fine.
Posted on 01-03-2017 09:05 AM
Hey all,
Just a word of caution when using manual methods (either our KB, Manually Replicating Packages from a File Share Distribution Point to a Root JDS Instance, or the other methods mentioned in this thread) to copy files to the JDS in order to avoid it chunking through the database:
We have a known issue (PI-000883) with this in which, when you upgrade the JDS, it will sometimes delete all of the files on the root JDS context if files have been moved there using the manual replication method mentioned above.
This PI has been listed as 'accepted/closed' and the solution given to us was, "The JDS installer now displays a warning message when a user attempts to install a non-root JDS instance."
The message displayed, when starting the JDS installer, is, "There are files currently in your CasperShare. If there is a root JDS configured in your JSS, these files may be deleted. Do you want to continue? (y/n): "
It does not always happen, but we have seen numerous cases in support where, after a JDS upgrade on the non-root JDS instance, the customer's JDS has been wiped clean of all files due to having moved files manually to the JDS in the past.
Amanda Wulff
Jamf Support
Posted on 01-10-2017 07:31 AM
Ah yes, @amanda.wulff's comments are on the money. I forgot we encountered this once: it blew everything away and replaced the packages. Hashing ~1 TB of packages and replicating them through the infrastructure was not fun at all and took a few days. Buyer beware on the manual replications.
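If you do replicate manually, a cheap insurance policy before any JDS upgrade is to copy the share aside first (both paths here are placeholders):

# Snapshot the JDS share before running the installer, so a wipe is recoverable.
rsync -a /srv/CasperShare/ /backups/CasperShare-preupgrade-$(date +%Y%m%d)/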
Posted on 01-30-2017 08:35 AM
I guess we should expect that, since JDS is aware of what packages are in each instance.
As opposed to normal distribution points, where Casper Admin only knows/cares what is on the Master.
We are on RHEL and have rsync set up on each replica distribution point, using RSA keys, running every 15 minutes, with logic to prevent concurrent runs and logging of actions/errors. So far it's been very reliable and fast.
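For anyone wanting to copy that setup, a minimal sketch (the user, host, paths, and schedule are assumptions, not our exact config):

# /etc/cron.d/caspershare-sync on each replica; flock -n skips the run
# if the previous one is still going, and everything is logged.
*/15 * * * * syncuser /usr/bin/flock -n /var/run/caspershare-sync.lock /usr/local/bin/caspershare-sync.sh >> /var/log/caspershare-sync.log 2>&1

# /usr/local/bin/caspershare-sync.sh:
#!/bin/bash
# Pull from the master DP over SSH with key auth, deleting files
# that were removed on the master.
rsync -av --delete -e "ssh -i /home/syncuser/.ssh/id_rsa" \
    master.example.com:/srv/CasperShare/ /srv/CasperShare/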