I've run into a chicken-and-egg problem with Casper Imaging and the Jamf Cloud Distribution Point: in order to add a disk image file (.dmg) to an imaging Configuration, it has to be in my main repository. But when I try to upload it to my Jamf Cloud Distribution Point, it fails. Update: I was wrong in my original post to attribute the problem to the current 5 GB file limit from Amazon S3, specifically:
With a single PUT operation you can upload objects up to 5 GB in size.
But in order to use that .dmg file in Casper Imaging (for example, with Target Mode imaging), it needs to be available in a Configuration (which you create with Casper Admin) as well as in your local repository.
Here is how I work around the 5GB limitation with an empty placeholder file. Your workflow might vary from mine, but I hope this gives you a start.
I've been told that I should be able to upload via Jamf Pro instead of Casper Admin, but I tried and it failed for me, at least with hosted Jamf Pro. If it works for you, then you don't need my workaround.
(you can replace /jamf with /casper or whatever you like, but be consistent)
sudo mkdir /jamf
sudo chown ladmin /jamf
(of course, replace ladmin with the name of your local administrator account) Open Casper Admin. Open a new Finder window and choose Go > Computer. Double-click your boot volume so you can see the /jamf folder (it won't be displayed with the leading slash; I just refer to it that way in written instructions). Drag your /jamf folder to the sidebar in Casper Admin.
mkdir ~/Desktop/Zero-Byte-File
mkdir ~/Desktop/Real-DMG-File
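The post doesn't show the empty placeholder itself being created. Assuming Casper Admin only needs a file whose name matches the real image exactly, one way to make it (using the example filename from the cp commands below) is:

```shell
# The placeholder's filename must match the real .dmg exactly.
mkdir -p ~/Desktop/Zero-Byte-File
touch ~/Desktop/Zero-Byte-File/MyUnbootedImage.hfs.dmg

# confirm it really is empty before uploading it with Casper Admin
wc -c ~/Desktop/Zero-Byte-File/MyUnbootedImage.hfs.dmg
```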
cp ~/Desktop/Zero-Byte-File/MyUnbootedImage.hfs.dmg /jamf/Packages/MyUnbootedImage.hfs.dmg
cp ~/Desktop/Real-DMG-File/MyUnbootedImage.hfs.dmg /jamf/Packages/MyUnbootedImage.hfs.dmg
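If you'd like to see the placeholder-then-overwrite sequence work before touching your real repository, you can rehearse it in a throwaway directory (no sudo needed; the 1 MB dummy image here stands in for your real .dmg):

```shell
# Rehearse the placeholder/overwrite flow outside the real /jamf repo.
repo=$(mktemp -d)/Packages
mkdir -p "$repo"

# the zero-byte placeholder stands in for the real image
touch "$repo/MyUnbootedImage.hfs.dmg"
echo "placeholder: $(( $(wc -c < "$repo/MyUnbootedImage.hfs.dmg") )) bytes"

# the real image (simulated here with 1 MB of zeros) overwrites it in place
dd if=/dev/zero of="$repo/../real.dmg" bs=1024 count=1024 2>/dev/null
cp "$repo/../real.dmg" "$repo/MyUnbootedImage.hfs.dmg"
echo "after copy: $(( $(wc -c < "$repo/MyUnbootedImage.hfs.dmg") )) bytes"
```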
Don't worry, you do not need to copy the empty .dmg back to your local repository when you replicate again!
@arekdreyer Awesome stuff!
I'm one of the engineers who built JCDS, and I'd like to provide some clarification here:
But when I try to upload it to my Jamf Cloud Distribution Point, it fails because of the current 5GB file limit from Amazon S3
All JCDS->S3 uploads use multipart upload, which has a 5 TB limit, so that's the theoretical limit for JCDS. In addition, all JSS->JCDS communication sends files in much smaller chunks to be reassembled later.
Unfortunately, two product issues (PI-002171 and PI-003331) are making things difficult when it comes to large files. Casper Admin uploads files to the JSS in a single request, so if anything happens to the connection on either the server or Casper Admin, the whole upload fails (PI-002171). Uploading through the Jamf Pro web app sends small chunks directly to JCDS, bypassing the JSS entirely and retrying any chunks that fail. That was built to be the most reliable way to upload files to JCDS, but currently your session can time out before the upload completes (PI-003331).
Jamf is working to address these issues, and as we plan any upcoming work on distribution, I'll be sure to link back to this post so that we can make sure this kind of workaround isn't necessary in the future.
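As a rough local illustration of the chunk-and-reassemble idea described above (this is just split and cat on a sample file, not Jamf's actual upload protocol):

```shell
# Split a sample file into 5 MB chunks, reassemble, and verify nothing
# was lost -- a sketch of chunked transfer, not Jamf's real protocol.
workdir=$(mktemp -d)
dd if=/dev/zero of="$workdir/big.dmg" bs=1024 count=12288 2>/dev/null   # 12 MB sample file

split -b 5242880 "$workdir/big.dmg" "$workdir/chunk."   # 5 MB chunks; each could be retried on its own
cat "$workdir"/chunk.* > "$workdir/reassembled.dmg"     # receiver reassembles the chunks in order

cksum "$workdir/big.dmg" "$workdir/reassembled.dmg"     # matching checksums => lossless reassembly
```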