Posted on 11-08-2016 02:35 PM
Is there any way to stage a large package rollout, for example to 5 computers at a time in a smart group?
Example:
Each of our labs has approximately 30 computers. When we try to roll out a large package to a lab, it doesn't have time to finish even after running all night, because every single computer in the smart group tries to grab the large package at the same time.
Can we allow only 5 computers in the smart group to download the package at a time and have the other computers in the group wait until a spot opens up?
Posted on 11-08-2016 03:02 PM
I had to put 100GB of resource material in 2 labs. I used Composer to create 2 separate .dmg files (when I created a .pkg the results were not consistent). I created a policy that started at 1:00 AM and completed at 4:00 AM across about 55 computers.
Posted on 11-08-2016 03:05 PM
We have a 25GB package for each computer times 30 computers.
Now you see our problem.
Posted on 11-08-2016 03:10 PM
We have a 10Gb fiber connection to the switch in the closet, then 1Gb to the lab machines. We recently updated our GarageBand package with all of the instruments; not sure how big it was, but it was big. I pushed it through Casper Remote during the day with no problem. Maybe you have something else going on that is degrading network performance?
Posted on 11-08-2016 03:13 PM
Nothing is degrading network performance. We just don't want to have to create a local distribution point in 14 different locations for this one big push that we will only need once.
If we go outside the building (which we are), we only have a 200Mb connection for each school.
Posted on 11-08-2016 04:48 PM
Maybe I missed something, but can you set the package to "cache", then just let it go ahead of time, give yourself a few days for it to complete on all computers in all the labs, and then when you're ready to roll it out, enable a second policy set to "install cached" so the install occurs from a local package rather than over the network?
That would allow you to install it simultaneously on all computers without impacting the network at all (at the time of install).
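If it helps, here is a rough sketch of what the client side could look like: a script that checks whether the package has finished caching and then kicks off a second "Install Cached" policy via a custom trigger. This is only an illustration, not anything built in; the package name, the "installCachedBigPkg" trigger, and the jamf binary path are assumptions you would swap for your own.

#!/usr/bin/python
# Rough sketch: confirm the cached package actually landed before firing the
# "install cached" policy. The package name and the "installCachedBigPkg"
# custom trigger are placeholders; the Waiting Room path is where the jamf
# binary drops cached packages by default.
import os
import subprocess

WAITING_ROOM = "/Library/Application Support/JAMF/Waiting Room"
PKG_NAME = "BigLabPackage.pkg"  # placeholder package name

cached_pkg = os.path.join(WAITING_ROOM, PKG_NAME)

if os.path.exists(cached_pkg):
    # Kick off the second policy (set to Install Cached) via its custom trigger.
    subprocess.call(["/usr/local/bin/jamf", "policy", "-event", "installCachedBigPkg"])
else:
    print("%s is not cached yet; skipping the install trigger." % PKG_NAME)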
Posted on 11-08-2016 05:39 PM
I would also set your distribution point to serve over http://. That way the download will be resumable if something interrupts things.
Posted on 11-08-2016 05:46 PM
Also, if you only have a 200Mb link between buildings, you really should be setting up either a file share distribution point or a JDS in each building. We just have a dedicated Mac mini in each building, but I installed a JDS on one of the lab machines once when a drive failed in one of my minis.
We are also running a caching server on each mini to locally cache App Store traffic.
Posted on 11-08-2016 07:48 PM
When we try to roll out a large package to a lab, it doesn't have time to finish even after running all night, because every single computer in the smart group tries to grab the large package at the same time.
This actually should not happen, because of the staggering, otherwise known as the random delay, that the jamf binary applies before running the check-in trigger for policies (I assume you're using that trigger?). Depending on how you've set it up, there should be a random amount of time added to each computer's execution of the check-in trigger. So, for example, even if multiple Macs all checked in at the exact same time, they won't all execute the policy at the same moment, since one Mac may only add a few seconds of delay and another may add up to a few minutes.
That being said, you may want to substantially increase the random delay amount in your JSS. It's possible it's only set to something like 5 minutes max. You could bump that up, say to 15 or even 30 minutes, to really spread those executions out.
Outside of that, the only other technique I can personally recommend would be to create Smart Groups that use JSS ID ranges to grab a certain maximum number of machines. You could create a Smart Group that has all the same criteria you would normally choose, but add the JSS Computer ID criteria and use something like "more than 1 and less than 11" (which should only grab a max of 10 machines). Your next group would, for example, use "more than 10 and less than 21", etc. The only issue with this approach is that it would require some babysitting. You really can't do set-and-forget with it, because you'd need to keep editing the policy to add each subsequent Smart Group into the scope as time goes by. That may be a deal killer for you, since you mentioned looking to run at night, when I presume you've left for the day and are expecting the Casper Suite to do the work for you.
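If the babysitting ends up being the deal killer, the editing could probably be scripted against the Classic API. Below is a rough sketch (Python with the requests module), assuming you scope the install policy to a static group and top that group up with the next few IDs as machines finish. The URL, credentials, group ID, and ID list are all placeholders, so test against a throwaway group first.

#!/usr/bin/python
# Rough sketch: keep a static "rollout batch" computer group topped up with the
# next few IDs so only a handful of machines pull the big package at once.
# JSS_URL, the credentials, GROUP_ID, and the ID list are all placeholders.
# Requires the requests module (pip install requests).
import requests

JSS_URL = "https://yourjss.example.com:8443"
API_USER = "apiuser"
API_PASS = "apipass"
GROUP_ID = 123          # ID of the static group the install policy is scoped to
BATCH_SIZE = 5

# Computer IDs still waiting for the package, e.g. from an Advanced Search export.
pending_ids = [101, 102, 103, 104, 105, 106, 107, 108]

next_batch = pending_ids[:BATCH_SIZE]
additions = "".join("<computer><id>%d</id></computer>" % cid for cid in next_batch)
xml = ("<computer_group><computer_additions>%s</computer_additions></computer_group>"
       % additions)

resp = requests.put(
    "%s/JSSResource/computergroups/id/%d" % (JSS_URL, GROUP_ID),
    auth=(API_USER, API_PASS),
    headers={"Content-Type": "text/xml"},
    data=xml,
)
print(resp.status_code)

You would still need something like a launchd job to re-run it and pull finished machines back out with computer_deletions, so it is less babysitting rather than none.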
Posted on 11-09-2016 08:34 AM
@RobertBasil Caching should still help in getting the package there. Even with a slow pipe you can cache in the days leading up to the install. Then on the night of the install none of the machines will need to download. I use this method to deploy Adobe CC to our 120 or so devices when upgrading from one version to the next.
You can even combine it with @mm2270's recommendation of breaking up the scope.
Posted on 11-09-2016 09:22 AM
@RobertBasil Yeah caching should solve your bandwidth issues. You start caching days ahead of time, so it doesn't matter how long it takes.
Posted on 11-09-2016 09:30 AM
No, it won't, because due to bandwidth limitations we can only push the 25GB-per-computer package at night; if we do it during the day it overloads the network for the entire school. When we try to push 25GB per computer to a lab with 30 computers over a 200Mb connection, even after running all night most of the computers never have a chance to finish the download, because they are all trying to download the 25GB package at the same time. So the next night the computers that didn't finish start all over again (yes, even using HTTP).
Downloading and caching for later install or downloading and installing right away does not change the bandwidth issues.
Hence I was trying to find a way to limit the number of computers in each lab that can download simultaneously (say, 5 at a time), and then once one of those computers finishes, another computer is added to the group of 5 automatically.
But it looks like I am going to have to use the JSS Computer ID criteria and update some smart groups by hand. Not the ideal way to do it, but it will work.
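In case it saves anyone some clicking, here is a throwaway sketch of how those criteria boundaries could be worked out from a list of computer IDs. The ID list is just a placeholder for whatever your Advanced Search export gives you.

#!/usr/bin/python
# Throwaway sketch: split a list of JSS computer IDs into batches and print the
# "more than X and less than Y" boundaries to use for each Smart Group.
# The ID list below is a placeholder.
lab_ids = sorted([101, 102, 103, 104, 105, 106, 107, 108, 109, 110,
                  111, 112, 113, 114, 115, 116, 117, 118, 119, 120,
                  121, 122, 123, 124, 125, 126, 127, 128, 129, 130])

BATCH_SIZE = 5
for i in range(0, len(lab_ids), BATCH_SIZE):
    batch = lab_ids[i:i + BATCH_SIZE]
    print("Group %d: JSS Computer ID more than %d and less than %d"
          % (i // BATCH_SIZE + 1, batch[0] - 1, batch[-1] + 1))

Each printed line maps straight onto the "JSS Computer ID" criteria for one Smart Group.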
Posted on 11-09-2016 09:44 AM
Could you put a QoS (quality of service) rule on either end of the WAN link to limit the bandwidth usage during daytime hours just to the JSS distribution point IP address?
It would be highly beneficial to use an HTTPS distribution point as previously stated.
dan
Posted on 11-09-2016 09:49 AM
We thought about that, as we currently apply QoS for other traffic, but the issue we are seeing in the JSS logs is that even with a wide-open pipe, the downloads for each of the 30 computers in a lab are so large that they are timing out and failing, so adding a QoS rule to limit the traffic even further would just cause more failures.
Sorry, I typoed in my last message; we are using HTTPS on our distribution point (JAMF Cloud).
Posted on 11-09-2016 12:29 PM
@RobertBasil Given your explanation that this is a one-time event, and I know this won't be what you want to do, but you could consider building a DP on a laptop and then taking that laptop to each location for the day. Again, not ideal, but you could set that laptop on each network for a day or two, get your install completed, and then move on to the next. You could even just ship it to the location with instructions to get it running, and then have them ship it to the next.
It's unfortunate that there are not more intelligent methods built into the Jamf Pro server to handle this. I know it is something that has been discussed on here for many years.
Posted on 11-09-2016 01:09 PM
I like your solution, @stevewood. You could even turn one of the workstations that is already there into a temporary DP and then turn it off when you are done! The bottom line is that since there is no built-in feature for queuing or scheduling, you will have to use your imagination and do a little extra work. For Creative Cloud I just create smaller push groups that I can reuse for similar projects.
Posted on 11-15-2016 11:53 AM
As is pointed out in these two feature requests, "Perform package verification AFTER downloading" and "Verify pkg AFTER copy", and this post, "Package integrity verification is done on CasperShare over the network, and not on the client?", if you have Settings > Computer Management > Security > Package Validation set to "When Checksum is Present" (on by default), it actually doubles your network traffic for normal installs, since the checksum has to be verified before the install (which essentially requires downloading the package), and once verified, the package is downloaded again for the install.
To avoid this you can turn off package validation, which I would NOT recommend due to security concerns, or you could go the route of caching the package first, since the checksum isn't verified prior to caching, and then doing a cached install, as is explained in this post, "Use cached package installation to reduce network traffic". And generally, caching large packages first is just a good approach regardless.
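If turning validation off makes you nervous, here is a rough middle-ground sketch (just an illustration, not a built-in feature): once the package is sitting in the Waiting Room, hash it locally and compare against the checksum you recorded when you built it, and only fire the install-cached trigger if it matches. The package name, expected checksum, and trigger name are placeholders.

#!/usr/bin/python
# Rough sketch: hash a cached package locally and only fire the "install cached"
# trigger if it matches the checksum you recorded when building the package.
# PKG_NAME, EXPECTED_MD5, and the custom trigger name are placeholders.
import hashlib
import os
import subprocess

WAITING_ROOM = "/Library/Application Support/JAMF/Waiting Room"
PKG_NAME = "BigLabPackage.pkg"
EXPECTED_MD5 = "replace-with-the-checksum-you-recorded"

pkg_path = os.path.join(WAITING_ROOM, PKG_NAME)

md5 = hashlib.md5()
with open(pkg_path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        md5.update(chunk)

if md5.hexdigest() == EXPECTED_MD5:
    subprocess.call(["/usr/local/bin/jamf", "policy", "-event", "installCachedBigPkg"])
else:
    print("Checksum mismatch for %s; not installing." % PKG_NAME)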
But I do also dig the idea of making temporary distribution points at locations, especially since you can control the sync of the packages to the DP with the awesome options in rsync/robocopy.