Caching pausing, failing without reporting failure, and generally not being too useful

nigelg
Contributor

Hi all, I hope someone can tell me if caching works brilliantly for them, because right now it's more trouble than it's worth with my setup!

I am running 9.32 (we plan to upgrade but I want to get my software out everywhere before I start upgrading the backend).

I have a 50GB file that I tried to cache to machines over the weekend. They all processed the policy, but only a single machine completed it. I used SSH to look at the other machines, and on each one nettop showed the download had apparently only just started - almost as if it only resumed once I had remoted in. Not all of them: some showed around 500MB downloaded, but the download had started 4 hours earlier, so they should have had far more than that by then. I have caffeinate running on each machine during the policy, started via launchctl - I can see the process running, so the machines shouldn't be sleeping.
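For anyone who wants to reproduce that check, below is a rough Python sketch of the kind of thing I mean. It assumes the default "/Library/Application Support/JAMF/Waiting Room" cache location and that caffeinate can be found with pgrep - adjust for your own setup.

#!/usr/bin/env python
# Rough sketch of the kind of per-machine check I mean, run over SSH.
# Assumptions: packages are cached to the default Casper "Waiting Room"
# folder and caffeinate is findable with pgrep - adjust for your setup.
import os
import subprocess

WAITING_ROOM = "/Library/Application Support/JAMF/Waiting Room"

def caffeinate_running():
    # pgrep exits 0 if at least one matching process exists
    return subprocess.call(["pgrep", "-x", "caffeinate"],
                           stdout=subprocess.PIPE) == 0

def cached_bytes():
    # Total bytes currently sitting in the Waiting Room
    total = 0
    for root, _dirs, files in os.walk(WAITING_ROOM):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

if __name__ == "__main__":
    print("caffeinate running: %s" % caffeinate_running())
    print("cached so far: %.1f MB" % (cached_bytes() / (1024.0 * 1024.0)))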

I have also tried this with smaller packages (around 10GB) and had the same problem - out of 12 machines, only 6 got the package downloaded. The rest weren't asleep, but when I connected with SSH they magically started caching again. Some had a small part of the package, some had none of it.

It also appears that the policy to download a cached copy can show as completed when the files aren't fully downloaded: I have machines with only 400MB of the 50GB download where the policy shows as completed, when it obviously isn't. So I can't trust the caching that has happened anyway.
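If it helps anyone compare, here is a minimal sketch of the kind of size check I mean for spotting those machines - the Waiting Room path and the expected size are assumptions based on my setup.

#!/usr/bin/env python
# Minimal sketch: flag cached packages that are smaller than expected.
# The Waiting Room path and EXPECTED_BYTES are assumptions for my setup.
import os

WAITING_ROOM = "/Library/Application Support/JAMF/Waiting Room"
EXPECTED_BYTES = 50 * 1024 ** 3  # my ~50GB package

for name in sorted(os.listdir(WAITING_ROOM)):
    path = os.path.join(WAITING_ROOM, name)
    if not os.path.isfile(path):
        continue
    size = os.path.getsize(path)
    if size >= EXPECTED_BYTES:
        status = "looks complete"
    else:
        status = "INCOMPLETE (%.1f%% of expected)" % (100.0 * size / EXPECTED_BYTES)
    print("%s: %d bytes - %s" % (name, size, status))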

Has anyone else had issues (or not had issues), been able to transfer large files, or got any tips for getting this working properly? Any suggestions would be great, because with the overhead of getting this working at the moment, I would be better off making my machines wake at 12am and just doing a straight install over the network.

4 REPLIES

jhalvorson
Valued Contributor

How are you serving up the files? JDS, Cloud Distribution Point, or File Share Distribution Point (SMB, AFP, HTTP, or HTTPS)?

nigelg
Contributor

File share distribution points over SMB, one in each of two sites.
Clients are running 10.9.4, all connected by Ethernet cable.

nigelg
Contributor

Should I assume that no one is using caching successfully then?

jhalvorson
Valued Contributor

I don't have experience using an SMB-based file share with the Casper Suite. I am able to cache files over HTTP via a Casper Share hosted on Mac OS X 10.9.

Are the files .zip, .dmg, or another type?

As a test, you could use Disk Utility to create a .dmg of any size and see if the file size matters.
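If you'd rather script that test than click through Disk Utility, something along these lines should work - a rough sketch using hdiutil instead of Disk Utility, with the sizes and output paths just as examples:

#!/usr/bin/env python
# Rough sketch: build a few throwaway .dmg files of increasing size with
# hdiutil, to test whether the caching problem is tied to file size.
# The sizes and output paths here are just examples.
import subprocess

for size in ("1g", "5g", "10g"):
    dmg = "/tmp/cache-test-%s.dmg" % size
    subprocess.check_call([
        "hdiutil", "create",
        "-size", size,           # image size, e.g. "10g"
        "-fs", "HFS+",           # filesystem inside the image
        "-volname", "CacheTest",
        dmg,
    ])
    print("created %s" % dmg)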

Are there logs on the server side that show the clients disconnecting?