Posted on 01-20-2019 07:36 AM
I currently support Macs and Jamf Pro for a cruise line. Bandwidth is at a premium for us, so we look to reduce usage as much as possible. On each ship we have a Mac mini running as a File Distribution Point, and each network segment points to the distribution point on the ship. That works great.
The problem is the File Distribution Points themselves. We have to update macOS, AV, and the Jamf Pro toolset occasionally, but the Mac won't mount its own local file share. I can use a failover server which is shoreside, but that hurts our bandwidth. I tried a hostname redirect, which didn't work. I know I can just run the packages manually, but I would like to automate the deployment of the packages without having to resort to shoreside.
Any ideas?
Posted on 01-20-2019 05:17 PM
@ladygreyjedi just spit ballin' here, but you may be able to use a script that is run from a policy to copy the items to the "Waiting Room" folder: /Library/Application Support/JAMF/Waiting Room
Placing files in there manually and then running a recon, the jamf binary should pick those up as "cached" files and install them. Would be worth a test.
Policy 1 - script that copies the file(s) in question from the distribution point folder on the Mac Mini to the "Waiting Room" folder and then runs a recon.
Policy 2 - Has the files that you copied in policy 1 as "Install Cached" and scoped to that Mac Mini.
Not sure if this will work, but I'm 99% sure that if the files are sitting in that folder, the jamf binary will consider them cached.
Ok, yeah, just tested my theory. Copied a file into the "Waiting Room" folder, ran a recon, then created a policy to install that package from cached. Worked like a charm.
To make it generic, I would write the script to take a list of package names as Parameter 4, that way you could just update one policy whenever you needed to copy files into that "Waiting Room" folder.
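Something like this is what I had in mind for the Policy 1 script, just as a sketch. The source path (`SRC_DIR`) and the comma-separated parameter format are assumptions; adjust both to match how your DP share is laid out on the Mac mini.

```shell
#!/bin/sh
# Sketch: copy packages from the local DP share into the Jamf "Waiting Room"
# so the binary reports them as cached on the next recon.
# SRC_DIR is an assumption -- point it at the DP share path on the Mac mini.
SRC_DIR="${SRC_DIR:-/Volumes/CasperShare/Packages}"
WAITING_ROOM="${WAITING_ROOM:-/Library/Application Support/JAMF/Waiting Room}"

copy_to_waiting_room() {
  # $1 = comma-separated list of package names (Jamf passes this in as $4)
  mkdir -p "$WAITING_ROOM"
  for pkg in $(printf '%s' "$1" | tr ',' ' '); do
    if [ -f "$SRC_DIR/$pkg" ]; then
      cp "$SRC_DIR/$pkg" "$WAITING_ROOM/"
    else
      echo "not found: $SRC_DIR/$pkg" >&2
    fi
  done
}

# In the policy, package names arrive as Parameter 4, then update inventory:
#   copy_to_waiting_room "$4"
#   /usr/local/bin/jamf recon
```

That way updating the packages to deploy is just a matter of editing Parameter 4 on the one policy.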
That's at least one idea. I'm sure someone else might have another idea for solving this.
Hope that helps!
Posted on 01-20-2019 07:47 PM
You'll find switching from SMB/AFP to HTTP will make things go a lot smoother, thanks to resumable downloads (very forgiving).
For replication, rsync is just as forgiving, tolerating network anomalies and bottlenecks. Once you upload a package to the Master DP you can walk away, and rsync will make sure it gets to all the Replica DPs.
If your Master DP is on shore and your Replica DPs are on the ships, the combination of rsync (for replication) and HTTP (for client-side resumable downloads) would be a very good bet. :)