Stage a large package rollout?

RobertBasil
Contributor

Is there any way to stage a large package rollout, for example to 5 computers at a time in a smart group?

Example:

Each of our labs has approximately 30 computers. When we try to roll out a large package to a lab, it doesn't have time to finish even after running all night, because every single computer in the smart group tries to grab the large package at the same time.

Can we specify that only 5 computers in the smart group may download the package at a time, and have the other computers in the group wait until a spot opens up?

1 ACCEPTED SOLUTION

stevewood
Honored Contributor II

@RobertBasil Given your explanation that this is a one-time event, and I know this won't be what you want to do, but you could consider building a DP on a laptop and then taking that laptop to each location for the day. Again, not ideal, but you could set that laptop on each network for a day or two, get your install completed, and then move on to the next. You could even just ship it to a location with instructions to get it running, and then have them ship it to the next.

It's unfortunate that there are not more intelligent methods built into the Jamf Pro server to handle this. I know it is something that has been discussed on here for many years.


18 REPLIES

CapU
Contributor III

I had to put 100GB of resource material in 2 labs. I used Composer to create 2 separate .dmg packages (when I created a .pkg the results were not consistent). I created a policy that started at 1:00am and completed at 4:00am across about 55 computers.

RobertBasil
Contributor

We have a 25GB package for each computer, times 30 computers.

Now you see our problem.

CapU
Contributor III

We have a 10Gb fiber connection to the switch in the closet, then 1Gb to the lab machines. We recently updated our GarageBand package with all of the instruments; not sure how big it was, but it was big. I pushed it through Casper Remote during the day with no problem. Maybe you have something else going on that is degrading network performance?

RobertBasil
Contributor

Nothing is degrading network performance. We just don't want to have to create a local DP in 14 different locations for this one big push that we will only need once.

If we go outside the building (which we are), we only have a 200Mb connection for each school.

znilsson
Contributor II

Maybe I missed something, but can you set the package to "cache" and just let it go ahead of time? Give yourself a few days for it to complete on all computers in all the labs, and then when you're ready to roll it out, enable a second policy set to "install cached" so the install occurs from a local package rather than over the network.

That would allow you to install it simultaneously on all computers without impacting the network at all (at the time of install).
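
If it helps, you can also sanity-check that the cache actually landed before scoping a machine to the install-cached policy. Cached packages sit in Jamf's local Waiting Room folder on the client; the package name below is just a placeholder for whatever yours is called:

# Verify the cached package arrived on the client before the
# "install cached" policy fires. Adjust the name to your package.
PKG="/Library/Application Support/JAMF/Waiting Room/BigPackage.pkg"
if [ -e "$PKG" ]; then
    echo "Package cached - safe to run the install-cached policy"
else
    echo "Still waiting on the download"
fi

That same check works nicely as an extension attribute if you want a smart group of machines that already have the cache.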

jchurch
Contributor II

I would also set your distribution point to serve over HTTP; that way the download will be resumable if something interrupts it.
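
The jamf binary handles the resume on its own when the DP is HTTP, but you can see the same behavior with curl if you want to test from a lab machine; the URL below is a placeholder for your DP:

# -C - tells curl to resume a partial download instead of
# starting the 25GB transfer over from byte zero.
curl -C - -O https://dp.example.com/CasperShare/Packages/BigPackage.pkg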

jchurch
Contributor II

Also, if you only have a 200Mb link between buildings, you really should be setting up either a file share distribution point or a JDS in each building. We just have a dedicated Mac mini in each building, but I installed a JDS on one of the lab machines once when a drive failed in one of my minis.

We are also running a caching server on each mini to locally cache App Store traffic.

mm2270
Legendary Contributor III

"When we try to roll out a large package to a lab, it doesn't have time to finish even after running all night, because every single computer in the smart group tries to grab the large package at the same time."

This actually should not happen, because of the staggering, otherwise known as the random delay, that the jamf binary applies before running the check-in trigger for policies (I assume you're using that trigger?). Depending on how you've set it up, there should be a random amount of time added to each computer's execution of the check-in trigger. So for example, even if multiple Macs all checked in at the exact same time, they won't all execute the policy at the same time, since one Mac may add only a few seconds of delay and another may add up to a few minutes.

That being said, you may want to substantially increase the random delay amount in your JSS. It's possible it's only set to something like 5 minutes max. You can increase that to 15, or even 30, minutes to really spread those executions on the trigger out.
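
If you want to see the mechanism by hand, the recurring check-in LaunchDaemon just calls the jamf binary with a random-delay flag (at least on the versions I've looked at), so you can simulate a wider spread yourself; the 1800 seconds here is only an example matching a 30-minute window:

# Run the check-in policy trigger with up to a 30-minute random
# delay before execution, mimicking a wider check-in stagger.
sudo jamf policy -randomDelaySeconds 1800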

Outside of that, the only other technique I can personally recommend would be to create Smart Groups that use JSS ID ranges to grab a certain maximum number of machines. You could create a Smart Group that has all the same criteria you would normally choose, but add the JSS Computer ID criteria and use something like "Greater than 0 & Less than 11" (which should only grab a max of 10 machines). Your next group would, for example, use "Greater than 10 & Less than 21", and so on. The only issue with this approach is that it would require some babysitting. You really can't do a set-and-forget here, because you'd need to keep editing the policy to add each subsequent Smart Group into the scope as time goes by. That may be a deal killer for you, since you mentioned looking to run at night, when I presume you've left for the day and are expecting the Casper Suite to do the work for you.
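
If the babysitting gets old, you could in theory script the rotation instead: scope the policy to a static group, and have a script push the next batch of computer IDs into that group through the Classic API. This is only a sketch; the URL, credentials, group ID, and computer IDs are all placeholders for your environment:

# Rotate the next batch of machines into the static group the
# policy is scoped to. Everything here is a placeholder.
JSS="https://your.jss.example.com:8443"
GROUP_ID="42"
BATCH="<computer><id>101</id></computer><computer><id>102</id></computer>"

curl -sku apiuser:apipass \
  -H "Content-Type: text/xml" \
  -X PUT \
  -d "<computer_group><computer_additions>$BATCH</computer_additions></computer_group>" \
  "$JSS/JSSResource/computergroups/id/$GROUP_ID"

Run that on a schedule (cron, a LaunchDaemon, whatever), feeding it the IDs of machines that haven't finished yet, and the rolling window mostly takes care of itself.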

RobertBasil
Contributor

@znilsson

Caching won't solve our issue, as it's the limited bandwidth while downloading the packages that's the problem, not the install.

@mm2270

I was not aware of the JSS Computer ID criteria. That won't solve the problem on its own, but it will help for sure. Thanks!

stevewood
Honored Contributor II

@RobertBasil Caching should still help in getting the package there. Even with a slow pipe you can cache in the days leading up to the install. Then on the night of the install none of the machines will need to download. I use this method to deploy Adobe CC to our 120 or so devices when upgrading from one version to the next.

You can even combine it with @mm2270's recommendation of breaking up the scope.

znilsson
Contributor II

@RobertBasil Yeah caching should solve your bandwidth issues. You start caching days ahead of time, so it doesn't matter how long it takes.

RobertBasil
Contributor

@znilsson

No, it won't, because due to bandwidth limitations we can only push the 25GB-per-computer package at night; if we do it during the day it overloads the network for the entire school. When we try to push 25GB per computer to a lab with 30 computers over a 200Mb connection, even after running all night most of the computers never have a chance to finish the download, because they are all trying to download the 25GB package at the same time. So the next night the computers that didn't finish start all over again (yes, even using http).

Downloading and caching for a later install, or downloading and installing right away, doesn't change the bandwidth issue.

Hence I was trying to find a way to limit the number of computers in each lab that can download simultaneously (say 5 at a time), with another computer added to the group of 5 automatically as soon as one finishes.

But it looks like I am going to have to use the JSS Computer ID criteria and update some smart groups by hand. Not the ideal way to do it, but it will work.

dukedan
New Contributor II

Could you put a QoS (quality of service) rule on either end of the WAN link to limit the bandwidth usage during daytime hours just to the JSS distribution point IP address? It would be highly beneficial to use an HTTPS distribution point as previously stated.
dan

RobertBasil
Contributor

@dukedan

We thought about that, as we currently apply QoS for other traffic. But the issue we are seeing in the JSS logs is that even with a wide-open pipe, the downloads for each of the 30 computers in the individual labs are timing out and failing, so adding a QoS rule to limit the traffic even further would just cause more failures.

Sorry, I typoed in my last message; we are using HTTPS on our DP (JAMF Cloud).


djdavetrouble
Contributor III

I like your solution @stevewood. You could even turn one of the workstations that is already there into a temporary DP and then turn it off when you are done! The bottom line is that since there is no built-in feature for queuing or scheduling, you will have to use your imagination and do a little extra work. For Creative Cloud I just create smaller push groups that I can reuse for similar projects.

RobertBasil
Contributor

@stevewood

What a great idea! I'll start digging into possibly doing this tomorrow.

Thanks!

mike_paul
Contributor III

As is pointed out in two feature requests, Perform package verification AFTER downloading and Verify pkg AFTER copy, and in the post Package integrity verification is done on CasperShare over the network, and not on the client?, if you have Settings > Computer Management > Security > Package Validation set to "When Checksum is Present" (on by default), it actually doubles your network traffic for normal installs: the checksum has to be verified prior to downloading (which essentially requires downloading the package), and once verified, the package is downloaded again for the install.

To avoid this you can turn off package validation, which I would NOT recommend due to security concerns, or you can go the route of caching the package first, since the checksum isn't verified prior to caching, and then doing a cached install, as is explained in the post Use cached package installation to reduce network traffic. Generally, caching large packages first is just a good method regardless.
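
One way to wire up that two-stage flow is a pair of policies on custom triggers, one with the package action set to Cache and one set to Install Cached; the trigger names here are made up:

# Days ahead of time: fire the policy whose package action is "Cache".
sudo jamf policy -event cacheBigPkg

# On install night: fire the policy whose package action is
# "Install Cached", so nothing crosses the WAN link.
sudo jamf policy -event installBigPkg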

But I do also dig the idea of making temporary distribution points at locations, especially since you can control the sync of the packages to the DP with the awesome options in rsync/robocopy.
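
For what it's worth, seeding a temporary DP that way can be a single rsync from your master share; the paths and hostname below are placeholders:

# One-way sync of just the package you need onto the laptop DP.
# --partial keeps an interrupted transfer resumable over a slow link.
rsync -av --progress --partial \
  "/Volumes/CasperShare/Packages/BigPackage.pkg" \
  admin@laptop-dp.local:/Volumes/CasperShare/Packages/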