Posted on 11-26-2013 07:35 AM
We are on the cusp of rolling out Mavericks to our users, and our Network Admin is getting concerned about the strain of sending 1100 5.5 GB packages (almost 6 TB!) OTA. I did a little searching, but was unable to ascertain how or if Casper manages bandwidth usage.
Is this something that is natively managed by Casper, or do I need to stagger my deployment to meet our throughput limitations? He can also throttle the output from the distribution point, but I am concerned this might result in some very long download times if Casper pushes them all out at the same time through a small pipe.
Any ideas, comments, or insights would be appreciated.
Posted on 11-26-2013 11:14 AM
How are your distribution points set up? Do you only have one, or will the clients be copying the package from different locations? And what do you mean by OTA? Are you delivering the package over wireless connections?
I would personally deploy the installer on demand, or do staggered caching. If you really are doing this over wireless, that will be challenging.
Posted on 12-02-2013 10:57 AM
@alexjdale - Sorry for the lack of detail.
This is going out to a High School from a single distribution point located at said High School. All of the students have MacBook Airs, so wi-fi is our connection of choice. The only alternative we have to wi-fi is to collect all of the machines and copy the installer from a thunderbolt drive. This is logistically challenging as the students are required to have their computers for most of their classes.
We have an enterprise-caliber wireless network with a controller, and the facility is saturated with access points; however, we also have 1200 machines using this wi-fi network, so we need to keep it functional during this deployment. I was really just trying to ascertain whether Casper had any built-in logic to regulate the load it imposes on the network. If so, I can dump them all at the same time and let Casper handle the details; if not, then I have to manually manage the outgoing policies in an attempt to limit the network load.
I am following the workflow outlined in the Technical Paper entitled "Deploying OS X v 10.7 or Later with the Casper Suite." It entails caching the OS on the client machines, then creating a Self Service policy to install that locally cached OS. That allows the users to determine when the upgrade is performed.
Staggering the caching is only partially effective, as some students may go several days without booting their machine or use it only for short periods on campus; if I deployed in groups of 100, I would anticipate having a backlog of 100 in short order. It will nevertheless be my ad hoc method of managing bandwidth usage if Casper is not able to.
Posted on 12-02-2013 11:17 AM
As far as I know, the Casper Suite doesn't have a built-in function to throttle the actual download speed of a package once it begins downloading. I think it will simply copy the file down as quickly as the network allows.
However, policy runs are automatically staggered, meaning that even in the unlikely case of 100 or so Macs all checking in to the JSS at the exact same time, each one gets a randomized delay of anywhere from 1 to 300 seconds (up to 5 minutes). So in essence, even with 100 Macs all starting a policy at 12 noon, they will not all begin downloading the package at the same time.
But given this is a pretty large package, you'll still run into cases where the download will be ongoing on some machines as others kick in.
I would say staggering in small groups may be your best bet. When I had to do this once in the past, I used Smart Groups based on the JSS ID. So group 1 was JSS ID 1-100, group 2 was 101-200, group 3 was 201-300, and so on.
As you said, given that some of the Macs in each group won't be checking in until some later date, you'll end up with a backlog eventually. The other issue is that you need to manually add each Smart Group periodically, since there's no easy way to automate that.
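For what it's worth, the bucketing logic behind those Smart Groups is easy to sketch. This is just a hypothetical helper (the `bucket_ids` name and the hard-coded ID range are my own, not anything built into Casper); in practice you'd pull the actual computer IDs from the JSS rather than generating them:

```python
# Hypothetical sketch: partition JSS computer IDs into staggered
# deployment buckets of 100, mirroring the "JSS ID 1-100, 101-200, ..."
# Smart Groups described above. The ID range is hard-coded here for
# illustration; a real workflow would pull IDs from the JSS.

def bucket_ids(ids, size=100):
    """Return a list of buckets, each holding up to `size` computer IDs."""
    ordered = sorted(ids)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

# Example with 1100 machines, as in this deployment:
buckets = bucket_ids(range(1, 1101))
print(len(buckets))                   # 11 groups
print(buckets[0][0], buckets[0][-1])  # 1 100
```

Scoping each bucket to the caching policy on its own schedule gives you the manual stagger, at the cost of touching the policy scope for every group.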
Posted on 12-03-2013 03:15 PM
You can also increase that randomization. For instance, I have an every-30-minutes trigger that is randomized by 60 minutes, which means a policy runs every 30 to 90 minutes.
So in the worst case, a group has 30 minutes to download before another group starts, and in the best case, 90 minutes.
Your best bet is to manage the bandwidth at the access point, though. Maybe with a combination of @mm2270's suggestion, increased randomization times, and a bit of throttling, you can have a successful deployment.
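The timing arithmetic above is worth making explicit. A minimal sketch, assuming a base interval of 30 minutes plus up to 60 minutes of randomization (the function name is mine, for illustration only):

```python
import random

# Sketch of the timing described above: a recurring trigger every
# `base` minutes, with up to `jitter` minutes of added randomization.
# With base=30 and jitter=60, any given run starts 30-90 minutes
# after the previous one.

def next_run_offset(base=30, jitter=60, rng=random.random):
    """Minutes until the next policy run for one client."""
    return base + rng() * jitter

offset = next_run_offset()
print(30 <= offset < 90)  # True
```

Widening the jitter spreads client downloads over a longer window, which trades total rollout time for a lower peak load on the wireless network.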
Posted on 12-05-2013 05:54 AM
I would say staggering in small groups may be your best bet. When I had to do this one time in the past, I used Smart Groups using the JSS ID. So group 1 was JSS ID 1-100. Group 2, 101-200, Group 3 201-300, etc, etc,
Wow, what a great idea. For our Mountain Lion deployment I did it by the last letter of the computer name, which was generated by an extension attribute. I like your idea a lot better.
Posted on 12-09-2013 10:06 AM
I ended up using some existing groupings based on computer name. These resulted in buckets of about 200 machines apiece. I also set the check-in time to 1 hour. We kept a close eye on bandwidth, and although usage was heavier than normal, it was manageable. We have been running for 3.5 days (on campus only) and just passed the 2.5 TB point this morning. Thanks for all the ideas.
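As a back-of-envelope check of those figures (assuming roughly 5.5 GB per machine, as in the original post, and 1 TB = 1000 GB for simplicity):

```python
# Back-of-envelope check of the deployment figures reported above.
pkg_gb = 5.5
machines = 1100
total_tb = pkg_gb * machines / 1000
print(total_tb)  # 6.05 TB total, matching the "almost 6 TB" estimate

# 2.5 TB delivered in 3.5 days implies roughly:
delivered_tb, days = 2.5, 3.5
rate = delivered_tb / days                       # ~0.71 TB/day
remaining_days = (total_tb - delivered_tb) / rate
print(round(remaining_days, 1))                  # ~5.0 more days at the same pace
```

So at the observed rate, the full caching phase would take roughly 8-9 days on campus, which lines up with a deployment that stays within the network's comfort zone.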