I need some help. I’ve used DeployStudio to do thick imaging for years, and while it’s been less than ideal, it’s worked. I started using Casper about six months ago and wanted to do thin imaging. Problem is, it’s a mess.
I’m netbooting a 10.11.6 NBI that I built with AutoCasperNBI. My Configuration is about 61 packages (according to Configurations), several of those Adobe, and those plus some more are set to install after imaging. Part of the oddity is the AutoRun data: it lists only 48 packages.
I have 6 labs to image (120 machines), and for the life of me, I can’t get anything resembling consistency. I just imaged a 20-machine lab, and they all ended up in completely different states. Some installed everything, some installed nothing, and many landed somewhere in between. Several of the key packages don't seem to have run at all: my local user creation package, my ARD setup package, etc.
Should I just nuke my configurations and start from scratch? They're just package listings... everything else is a configuration profile (binding to AD, etc).
I guess a big question is, what packages should be installed after imaging? ARD setup? local user creation? resetting jss admin password?
I’m about to go back to thick imaging since it actually “worked”. =/ But I'd really rather not admit defeat... I can't be the only one having these issues. =(
To be on the safe side, I'd recommend just setting all packages to install after imaging. Some may not 'require' that, but some certainly do, and the ones that don't require it will install the same either way.
In general, any package that has any sort of pre-flight or post-flight script must be installed after imaging, and many third party user creation or ARD configuration packages have those kinds of scripts in them. Adobe stuff definitely does.
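One quick way to check whether a given package falls into that category is to expand it and look for install scripts. A minimal sketch, assuming a flat package and the macOS `pkgutil` tool; the package path is a placeholder, and the check is skipped where `pkgutil` isn't available:

```shell
#!/bin/sh
# Sketch: does this flat package contain install scripts (preinstall /
# postinstall)? If so, it should be set to install after imaging.
PKG_PATH="/path/to/Example.pkg"   # placeholder

if command -v pkgutil >/dev/null 2>&1 && [ -f "$PKG_PATH" ]; then
    WORK=$(mktemp -d)
    pkgutil --expand "$PKG_PATH" "$WORK/expanded"
    # Flat packages keep their scripts in a Scripts directory inside
    # each component; look for the standard script names.
    if find "$WORK/expanded" \( -name preinstall -o -name postinstall \) | grep -q .; then
        RESULT="has install scripts: set to install after imaging"
    else
        RESULT="no install scripts found"
    fi
    rm -rf "$WORK"
else
    RESULT="not checked (pkgutil or package unavailable)"
fi
echo "$PKG_PATH: $RESULT"
```

This only inspects the package; whether a scriptless package is safe to install at imaging time still depends on what its payload expects at install time.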
There's a lot to this post. Could you describe your server infrastructure and method of imaging? I can imagine a variety of ways this can happen and have caused one or two myself. We're an edu that moved from DeployStudio to Casper Imaging years ago and have had great luck. Several really knowledgeable folks here still use Deploy Studio and manage with the JSS.
There are a few things I can comment on right now:
"wanted to do thin imaging. Problem is, it’s a mess."
It certainly CAN be. Mind you, it really depends on what your desired result is. In our environment, we still nuke the drive and run modular imaging as our students tend to have wreaked havoc on their systems prior to showing up for orientation. Don't get me wrong, thin imaging is a no-brainer for new-in-box units once you've got your head around it. How are you trying to "thin image" your units?
"several of those Adobe, and those plus some more are set to install after imaging."
Good so far! I find that it's really case by case and will change over time. For example, my MS licensing package has to run after imaging nowadays, but didn't in the past. Like @psliequ states, anything with pre/postflight scripts, or anything that expects user (non-template) libraries to be populated, needs to be run after imaging.
"Should I just nuke my configurations and start from scratch? They're just package listings... everything else is a configuration profile (binding to AD, etc)."
If you're worried about it, I suppose you can delete and rebuild them. However, if things aren't installing consistently, the Configuration list itself is NOT likely the culprit. In my experience, improperly synchronised or unsynchronised distribution points are the most common cause of this sort of issue. I've had it happen due to failing drives and/or weird issues when moving BACK from a compiled image to a modular one. Once a year I have to run a high speed, high volume (about 350 units) imaging process, and I compile the images that I'll be distributing. While that "Compiled" configuration is supposed to become "modular" again when you update the configuration, I've found that behavior to be inconsistent, so I delete the compiled image from all of my DPs after I've used it. Then again, I've learned that relying on AutoRun can be a bit finicky and have generally stopped using AutoRun data for policies or re-imaging.
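One way to sanity-check whether two distribution points have drifted out of sync is to compare checksums of the same package on each share. A rough sketch with placeholder mount paths and package name; it simply reports when the shares aren't mounted:

```shell
#!/bin/sh
# Sketch: compare one package across two mounted DP shares to catch
# unsynchronised distribution points. All paths are placeholders.
DP1="/Volumes/CasperShare"       # placeholder: primary DP mount
DP2="/Volumes/CasperShare-2"     # placeholder: secondary DP mount
PKG="Packages/Example-1.0.pkg"   # placeholder package path

if [ -f "$DP1/$PKG" ] && [ -f "$DP2/$PKG" ]; then
    SUM1=$(shasum -a 256 "$DP1/$PKG" | awk '{print $1}')
    SUM2=$(shasum -a 256 "$DP2/$PKG" | awk '{print $1}')
    if [ "$SUM1" = "$SUM2" ]; then
        STATUS="in sync"
    else
        STATUS="OUT OF SYNC"
    fi
else
    STATUS="not checked (one or both paths unavailable)"
fi
echo "$PKG: $STATUS"
```

In practice you'd loop this over every package in a Configuration rather than a single file.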
"I’m about to go back to thick imaging since it actually “worked”"
There are many in-betweens! So far, my understanding goes something like this:
-Thin Imaging: Not really imaging at all. You've received a brand new computer, left the OS alone, and simply installed whatever packages, scripts, and profiles you need to configure a known-state OS. While this can be accomplished via Casper Imaging, it's my understanding that it usually isn't; most folks who are "thin imaging" do it via automated policies and/or Self Service.
-Modular Imaging: I believe most of us that image macOS devices use this method. It's really the same as the above, except that we've also re-installed the OS.
-Monolithic Imaging (is this your "thick imaging"?): Prepping a computer or VM exactly the way you want it, then cloning that out to your machines.
In any event, the most important question I have for you is: What are you trying to accomplish, specifically? You've got labs. They've been used and you need to refresh? What was their state prior to this?
We were in the same situation as you. We ended up giving up on Casper Imaging and going back to deploystudio. We did move away from monolithic imaging though. We put a base image in our workflow (AutoDMG) and then put a quickadd pkg from casper recon into the workflow. Netboots much faster than autocaspernbi and the imaging takes about 7 minutes per machine. You can then scope your software out via smartgroup. I'd advise to also put a script in before the imaging part of the workflow that removes the machine from the JSS. You can also throw your AD bind into the workflow using deploystudio's payload if you want to stay away from config profiles or policies in the JSS.
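The "remove the machine from the JSS" step can be done against the Classic API's DELETE endpoint for computers by serial number. A hedged sketch: the JSS URL, credentials, and serial are placeholders, and it defaults to a dry run that only prints the request:

```shell
#!/bin/sh
# Sketch: delete a machine from the JSS by serial number before
# re-imaging, via the Classic API. Set DRY_RUN=0 to actually send it.
JSS_URL="https://jss.example.com:8443"   # placeholder
API_USER="apiuser"                        # placeholder
API_PASS="apipass"                        # placeholder
DRY_RUN="${DRY_RUN:-1}"

# On a Mac you'd read this from system_profiler; hard-coded here so
# the sketch stays self-contained.
SERIAL="C02EXAMPLE"

ENDPOINT="$JSS_URL/JSSResource/computers/serialnumber/$SERIAL"
if [ "$DRY_RUN" = "1" ]; then
    echo "would run: curl -sku $API_USER:*** -X DELETE $ENDPOINT"
else
    curl -sku "$API_USER:$API_PASS" -X DELETE "$ENDPOINT"
fi
```

The API account only needs delete rights on computer objects, so it's worth scoping a dedicated account for this rather than reusing a full admin.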
@staze I utilize "No Imaging" (what many call Thin, but since you're not truly imaging, is it really Thin Imaging?) via DEP for any of my new machines. Before that I used a form of Thin Imaging, although all I did was install a single PKG (set to install at first boot) via Casper Imaging. That PKG placed a script into a hidden folder along with a LaunchDaemon that would call the script on startup. CI would install that package and reboot the machine, and then the script would run. That script (an older version can be found here ) takes care of setting some system level preferences (think time zone and authorization DB), and then uses
jamf policy -id <id> to call each of the software package policies I need installed on a machine. A more detailed description of the process can be found on my blog, here. I am beginning to remove some of the system level preference items from the script in favor of Configuration Profiles.
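A first-boot script built around that jamf policy -id pattern might look like the following sketch. The policy IDs and binary path are placeholder assumptions, and the call is guarded so the sketch runs (as a dry run) on a machine without the jamf binary:

```shell
#!/bin/sh
# Sketch of a first-boot script: call each required software policy
# in turn via the jamf binary. IDs below are placeholders.
JAMF="/usr/local/bin/jamf"   # assumed binary path
POLICY_IDS="12 15 23 31"     # placeholder policy IDs

CALLED=0
for id in $POLICY_IDS; do
    if [ -x "$JAMF" ]; then
        "$JAMF" policy -id "$id"
    else
        echo "would run: jamf policy -id $id"
    fi
    CALLED=$((CALLED + 1))
done
echo "processed $CALLED policies"
```

In the real workflow this script would be dropped by the PKG and triggered by the LaunchDaemon described above, then clean itself (and the LaunchDaemon) up when done.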
For machines that are re-deployed (user leaves the company for example), I use CI to drop an OS (created with AutoDMG) and the same first boot script PKG that I mentioned above. This, in my opinion, would truly be considered Thin Imaging since you really are imaging a machine.
To boot the machines in either of the above scenarios (minus DEP), I use an NBI created with AutoCasperNBI. Sometimes, I will use a Thunderbolt SSD drive to boot to and run CI, just because it is quicker than NetBoot, but that's only when I'm in a hurry.
Now, in case you don't know, Apple is changing the game in 2017 when Apple File System gets released. An article on The Register's web site talks about the upcoming change that will prevent admins from erasing the OS off of a system and installing their own. I have confirmation from two different Apple employees that this will be happening and that with an upcoming update to Sierra, we will no longer be able to wipe the OS. The way it was explained to me, there will be a snapshot of the fresh OS maintained somewhere on the drive. When a machine needs to be "imaged", the snapshot is simply pulled up (think VM type restore). This means that we all need to embrace No Imaging/Thin Imaging as quickly as we can.
With regards to your specific questions, I think that installing via a post-imaging script, as I described above, might help solve some of the inconsistencies you're seeing. The ARD setup can be done as part of the script. I'm assuming you're just enabling the admin user for ARD, and that can be done with this one-liner:
/System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -activate -configure -access -on -users <adminuser> -privs -all -restart -agent -menu
Your local user package can be created with CreateUserPKG, which is available in the Mac App Store. Create that PKG, drop it into Casper, and then install it via a policy called from your script.
When it comes time to re-image a lab, you could have one Casper Imaging Configuration that has an OS and the first boot script as I have, then set that as the AutoRun data for each of the machines. You can then use a policy to set the boot drive to your NetBoot server and reboot the machines. Casper Imaging on your NBI should auto start, pick up the Auto Run data, and then lay down the OS and first boot script.
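The "set the boot drive to your NetBoot server" step can be scripted with bless. A sketch with a placeholder server address; it defaults to printing the command rather than actually re-blessing and rebooting the machine:

```shell
#!/bin/sh
# Sketch: point the machine at a NetBoot server and restart, the kind
# of thing a "set NetBoot server" policy does. APPLY=1 to really run it;
# the default is a harmless dry run.
NETBOOT_IP="192.168.1.10"   # placeholder NetBoot server address
APPLY="${APPLY:-0}"

CMD="bless --netboot --server bsdp://$NETBOOT_IP"
if [ "$APPLY" = "1" ] && command -v bless >/dev/null 2>&1; then
    $CMD && shutdown -r now
else
    echo "would run: $CMD && shutdown -r now"
fi
```

Run as root via a policy scoped to the lab, this reboots every machine into your AutoCasperNBI image, where Casper Imaging picks up the AutoRun data.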
Hope that helps a little. If anything is not clear, just ask away. There are plenty of ways to do this, and most of us have our own twists on it, but the beauty of Mac management is that you can pick and choose what works best for your environment. Want to use Deploy Studio and Casper together? Go for it. Want to use Munki and Casper together? Not a problem. Find the tool set that works for you.
@staze, I'm coming from exactly the same place, DeployStudio. Yes, it worked, but it wasn't very easy to update and for Adobe you need a script to license the software after booting the first time. Actually, you probably need that anyway with Casper. You need to break your problems down into smaller pieces and tackle them one at a time. Start with the essentials: an OS. Then add some packages and observe the results. Check your logs. For specific problems, don't hesitate to open a support ticket with JAMF.
I found packaged-based imaging fairly difficult to get a handle on, particularly with certain types of software that don't lend themselves to this approach. As the only administrator of over 150 Desktop Mac systems, I can't afford to visit every single machine. That said, I prioritize. As long as I get the key software packages out there I can clean up a lot of problems with Apple Remote Desktop afterward, or even JAMF policies to install things, which I am only now experimenting with.
AutoDMG is a godsend and its author, MagerValp (Per Olofsson), is a hero. All my OS images are created with AutoDMG. To keep things "clean" I only add Apple software updates to these, so that they are all Apple. Then my other packages all go in after first boot, the key ones being Adobe CC 2017 and Microsoft Office 2016.
Be sure to check out the macadmins site for Office builds. You'll find plenty of references to that site here.
For admin account creation I'm now using a Bash script on first boot.
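For what it's worth, one way such a first-boot admin creation script can look on 10.10 and later is with sysadminctl. This is only a sketch: the account name and password are placeholders (a real deployment shouldn't embed the password in plain text), and it defaults to a dry run:

```shell
#!/bin/sh
# Sketch: create a local admin account at first boot with sysadminctl.
# APPLY=1 plus root privileges are required to actually create the user.
ADMIN_USER="localadmin"   # placeholder account name
ADMIN_PASS="changeme"     # placeholder: inject securely in practice
ADMIN_NAME="Local Admin"
APPLY="${APPLY:-0}"

if [ "$APPLY" = "1" ] && command -v sysadminctl >/dev/null 2>&1 && [ "$(id -u)" = "0" ]; then
    sysadminctl -addUser "$ADMIN_USER" -fullName "$ADMIN_NAME" \
        -password "$ADMIN_PASS" -admin
else
    echo "would run: sysadminctl -addUser $ADMIN_USER -fullName \"$ADMIN_NAME\" -password *** -admin"
fi
```

On older OS releases the same job is usually done with a series of dscl commands instead, or with a CreateUserPKG-built package as mentioned earlier in the thread.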
Cheers from a college in Canada.
Sorry for going AFK... apparently jamfnation doesn't email when someone responds to a post. =(
Anyway, I got it working. I had to completely nuke my Configuration and re-create it. I don't know what was wrong, but whatever it was, it was wrong. =( I created a new standard Configuration. Maybe Smart Configurations are broken. =/ I'll note that when I re-created it, my package count went down to 48, instead of the 61 it was reporting previously... so something was definitely broken there.
I should say we're on 9.96. In order to avoid killing our main JDS, I set up a new one on a Linux VM. That was completely unrelated, but was a task in and of itself. 😃
Anyway, since recreating, I've imaged 100 computers and they've all worked well. My biggest issue since then has been figuring out things like "how do I set up AD group access when the machine doesn't bind to AD until the configuration profile applies?" But that's my problem. 😃 Just simple chicken-and-egg stuff.
Thanks for all the responses. I'm sorry I didn't see them until now. 😃 Happy new year everyone!
btw: we don't have DEP... yet. Typical Higher Ed crap (everyone wants DEP, but no one is willing to take ownership of it that actually has the authority to take ownership of it).
" but no one is willing to take ownership of it that actually has the authority to take ownership of it. "
I put this mostly on Apple. They won't let smaller units/departments enroll themselves. Consequently, higher-ups who've never managed computers, and perhaps oversee a couple of dozen of them, make decisions even for departments like mine with over 400 computers. Backwards. The DEP program should be based on the number of devices directly managed, not the number of people below you in the hierarchy!