I've been trying to find a way to package Logic and GarageBand with all of the loops installed and indexed for each user. The apps install fine; nothing weird or crazy to deal with. I download all of the loops and extra content through both programs, then make a package from the installed files: I grab /Library/Application Support/GarageBand, /Library/Application Support/Logic, and /Library/Audio (another 46GB of data). After that I run Composer during the indexing and capture a bunch of database files in the user directory.
After making a configuration that installs the apps, then the content, then the index, I'm able to log in as users without GB or Logic running the index (which is actually what I was trying to do in the first place). However, when I open GarageBand it tells me that a bunch of extras are missing and have to be re-downloaded. I ran through the 6-10GB download process (with Composer open) and let it install, but Composer only showed a few files being touched.
Just an edit: it looks like the loops are all installed and indexed; it's the instruments that seem to be missing (lots of drums).
I'm trying to mess around with the order of application installation, but I don't know why that would have anything to do with it.
Anyone have any insight on this?
I know it sucks, but in my opinion and experience, repackaging the GarageBand and/or Logic loops always ends badly, pretty much as you describe above.
Personally, I capture the loop installers and deploy them as postponed installs. Users do see the 'indexing loops' screen, but it's only once per user per device, and (especially with SSDs) it doesn't take too long.
The most important bit I've found is the order in which the loops are installed. Using the method I blogged about, you'll get your loops in 2-4 batches, and these batches of loops have to be installed in order, e.g.:
Batch 1: 10 loops
Batch 2: 20 loops
Batch 3: 5 loops.
All of the 'Batch 1' loops must be installed before Batches 2 and 3; all of the 'Batch 2' loops must be installed after 'Batch 1' but before 'Batch 3'.
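To make the ordering concrete, here's a minimal sketch of walking batch directories strictly in sequence. The batch directory names and base path are assumptions for illustration; the sketch just emits the `installer` command that would actually be run on macOS, since `installer` only exists there.

```shell
#!/bin/sh
# Sketch: walk batch directories strictly in order and emit the install
# command for each PKG found. Batch names and layout are hypothetical.
install_batches() {
    base="$1"                               # directory holding batch subfolders
    for batch in batch1 batch2 batch3; do   # order matters: 1, then 2, then 3
        for pkg in "$base/$batch"/*.pkg; do
            [ -e "$pkg" ] || continue       # skip empty or missing batch dirs
            # On macOS you would run this line instead of echoing it
            echo "installer -pkg \"$pkg\" -target /"
        done
    done
}

# Example: install_batches "/Users/Shared/LoopBatches"
```

Running the batches sequentially like this (rather than letting separate policies race) is what guarantees the 'Batch 1 before Batch 2 before Batch 3' constraint.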
Hope that helps!
Thanks for the reply, @daz_wallace!
I've used the script to download the packs in the past, and while it worked for getting the loops, the indexing still occurs as you mentioned. Indexing on our systems seems to take anywhere from 10-15 minutes at a time (even on SSDs); not sure why, but it does that consistently in testing too. That's a lot of time to waste in a class period. Our machines also purge users nightly (this stops students from filling up the HD), so this would happen every day.
As mentioned, I was able to figure out how to stop the indexing by grabbing the appropriate files, but I'm just stumped as to why GarageBand always says it's missing loops.
I'm wondering if GarageBand has receipts or something that tell the system certain things were installed... but I don't see those files when I run Composer.
No worries @jmahlman!
That is a long time for indexing; a little unusual in my experience. Have you tried deploying the packages 'normally' (not the repackaged versions) to a clean system to see if the per-user indexing speed is better?
The only reasons I can think of for GB thinking loops are missing are:
- Some are missing (Sorry but stating the obvious!)
- The indexes being injected are incorrect.
The OS uses package receipts to know what's installed, and you can check this using `pkgutil`.
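On macOS, the receipt database can be queried with `pkgutil`; here's a small sketch (the "MAContent" grep pattern is illustrative of what Apple's loop/sound content identifiers typically contain, and the function degrades gracefully on systems without `pkgutil`):

```shell
#!/bin/sh
# Print package receipts matching a pattern. Uses pkgutil where available
# (macOS); otherwise reports that receipts can't be queried on this system.
list_content_receipts() {
    pattern="$1"
    if command -v pkgutil >/dev/null 2>&1; then
        # --pkgs lists every receipt identifier the OS knows about
        pkgutil --pkgs | grep -i "$pattern" || echo "no receipts match: $pattern"
    else
        echo "pkgutil not found (macOS only)"
    fi
}

# Example: Apple loop/sound content identifiers typically contain "MAContent"
list_content_receipts MAContent
```

`pkgutil --pkg-info <identifier>` would then show the install time and volume for any single receipt.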
However, I don't think you can trick this, and since it's a database, I really don't think it'd be a good idea to copy one from another Mac and deploy it.
I think an important part was skipped over: the packages need to be installed in a specific order. If they aren't, you will run into problems. There was a thread about this for GarageBand; do a search on the forums here.
The indexing takes the same amount of time when installing normally.
I have two machines side by side comparing the installed packages (one of them is the machine I used to make the packs), and every folder is identical. I'm beginning to wonder if the indexes contain machine-specific info that doesn't translate over properly. I'm currently trying another method that adds the index packages to user templates AFTER Logic and GB have been launched on the machine for the first time, and checking whether all packs are indeed installed.
@bpavlov I installed all of the packs using the respective programs and then made a DMG from those files; I didn't install them this time around, Logic and GB did. We had that issue in the past and found the posts here very helpful with that.
Note: As Jamf has deprecated installing PKGs using the jamf binary, you MUST pass parameter 6 as YES (uppercase, without quotes). This makes the script use the installer binary to install the PKGs.
Got it thanks.
I guess it's a weird question, but does anyone know how to test and see if this actually installed? The filesystem on Macs is so confusing to me...
I provided options 6, 7, 9, and 4. When I run the script, it says it's installing everything to /. But if I browse to /, there's nothing with the PKG file names there... Do I have to tell the Jamf profile to install it somewhere else, like /Users/Shared/Logic/Packages/ explicitly, or does it figure that out? There is no content in that directory after the script runs. Is that where it installs the files, or elsewhere?
I can fire up Logic Pro and it shows that things are installed, but I'm not sure if they were installed before or not; I don't have a virgin test client, which is part of the problem. I wiped that "shared Logic packages" directory, and some library items have the little download icon while others don't, but the /Users/Shared/Logic/Packages/ directory is blank, which is what's confusing me.
So on macOS and Linux, / is the root of the currently booted filesystem; think of it like %SYSTEMDRIVE% in the Windows command prompt, or $env:SystemDrive in PowerShell.
In Jamf Pro, when a package in a policy is set to cache, the policy only downloads the file. It caches it into a directory, much the same way SCCM downloads a package; on Windows that cache path is C:\Windows\ccmcache. On macOS, Jamf downloads/caches packages into /Library/Application Support/Jamf/Downloads or /Library/Application Support/Jamf/Waiting Room/
As in Windows, when something is mounted it's given a path. In Windows that's usually a drive letter; on macOS and Linux it's mounted to a directory such as /Volumes/VolumeNameOfDMGFile
Note: The volume name isn't always the name of the DMG filename.
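Because the volume name can differ from the DMG filename, it's safer to parse the actual mount point from `hdiutil`'s output than to guess it. A sketch (macOS only, so the call is guarded; the Waiting Room path in the example is where Jamf caches files, and the DMG filename is hypothetical):

```shell
#!/bin/sh
# Attach a DMG and print its real mount point under /Volumes, rather than
# assuming the volume name matches the DMG filename. hdiutil exists only
# on macOS, so the sketch degrades gracefully elsewhere.
mount_cached_dmg() {
    dmg="$1"
    if command -v hdiutil >/dev/null 2>&1; then
        # hdiutil attach prints tab-separated fields; the mount point is
        # the last field on the line containing /Volumes/
        hdiutil attach -nobrowse "$dmg" | awk -F'\t' '/\/Volumes\// {print $NF}'
    else
        echo "hdiutil not found (macOS only)"
    fi
}

# Example (path is hypothetical):
# mount_cached_dmg "/Library/Application Support/Jamf/Waiting Room/LogicDefaultSounds.dmg"
```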
The script grabs all the PKG filenames in the mounted path above and loops through them, silently installing each to the currently booted system via the / target.
The script outputs the string used to silently install them. It only shows the line below as a single example; the script substitutes the PKG filename on every loop iteration rather than writing each one to the log.
Install String: installer -pkg "/Volumes/LogicDefaultSounds/PKGFilename.pkg" -target / -allowUntrusted
In your example screenshot of the log, you've got 33 pkg installers within the DMG.
Generally, on both Windows and macOS, an install exit code of 0 equals success.
If an exit code isn't zero, the script increments a counter and passes that counter as its own exit code, giving you an idea of how many PKGs failed to install.
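That loop-and-count behaviour can be sketched as below. The volume name is hypothetical, and since `installer` exists only on macOS, the binary is made overridable here purely so the sketch can be exercised elsewhere; the real script would call /usr/sbin/installer directly.

```shell
#!/bin/sh
# Install every PKG found on a mounted volume, counting failures and
# returning the failure count as the exit code.
install_pkgs_from_volume() {
    mount_point="$1"
    failures=0
    for pkg in "$mount_point"/*.pkg; do
        [ -e "$pkg" ] || continue   # the glob matched nothing
        echo "Install String: installer -pkg \"$pkg\" -target / -allowUntrusted"
        if ! "${INSTALLER:-/usr/sbin/installer}" -pkg "$pkg" -target / -allowUntrusted; then
            failures=$((failures + 1))   # count each failed install
        fi
    done
    return "$failures"   # non-zero exit = number of PKGs that failed
}

# Example: install_pkgs_from_volume "/Volumes/LogicDefaultSounds"
```

A non-zero exit therefore tells you at a glance how many of the PKGs on the volume failed, which matches the log behaviour described above.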
Generally, the PKG filenames in your DMG will show up as package receipts on your computer object in Jamf if installation succeeded and an inventory update has run after the policy completed. Navigate to your computer in Jamf; on the Inventory tab, the Package Receipts payload (Installer.app/SWU) will list the successfully installed PKG files.
This method is by far a better option than uploading each PKG to Jamf and setting a policy to install each one. Sound sample libraries for Logic or GarageBand can exceed 1,000 signed PKG installers, and uploading/managing them as individual files would be very heavy work.
Yeah, they're there on the asset inventory tab, and the other things you said make sense for sure. I was just wondering how I could verify from the application's perspective, but I guess I'll have to make do with this, unless I want to build a new Mac from scratch, and we don't have any spares (or spare licenses for Logic) at the moment.
I'll just have to wait until I do a rebuild and assume it's fine for now. Thanks for the thorough writeup!
This set of initial loops is only about 2GB; the rest of the content is 60GB! So I'm really considering whether I want to burden Jamf with that super large file.
Yes, it appears I can upload a 20GB package at most... oh well. It's only 20 devices, and the teacher just wanted it automated; it's not that big of a deal to install those other sounds manually for them. I'll probably leave it at this for now. I guess I could break the 62GB file into four packages that chain-install... probably too much effort in this particular case.
If you're using Jamf Cloud and the Jamf Cloud Distribution Point, you will have this limitation.
That's one of the reasons we still use on-prem distribution points to deliver these larger packages, sometimes 80GB, to our computer labs.
Unlike SCCM, in a policy you can force clients to fetch the data from a specific distribution point, overriding network segments (the equivalent of SCCM boundaries), which is what we do for exactly this.
The best solution to these limitations would be to allow an additional file format, .dmgpart, which is the equivalent of spanning content over multiple files (like a multi-part .zip). I did put in a feature request for this, but it was flagged as not planned, as they appear to be working on another solution for larger file sizes. I don't know of a release date.
If you set up a local DP close to your clients, you can do the same. However, managing packages becomes heavier.
Something to note: while our on-prem distribution point's file size is virtually unlimited, we do notice clients timing out during downloads and policies failing. We suspect labs of 30 to 40 computers pulling 20GB+ at or near the same time tend to hammer the uplinks. A contributing factor is that Jamf tends to do everything in alphabetical order; not saying it's the only factor, just a contributing one.