(Possible) Misconceptions Before Stating AutoPKG Design Decisions

Banks
Contributor

I thought I'd keep this thread separate, as I know little about the JSS, having mostly interacted with the MDM product. Here goes. To the best of my knowledge:
1. Casper has no concept of tracks 'baked in' to the product, that I am aware of. E.g., there is no testing or 'canary in the coal mine' group designation
2. Software needs special treatment before being made available (I've been noticing confusion regarding HTTP/S availability vs. traditional fileshares), but a guiding principle seems to be that it must at least be a flat package and not blow things up
3. There is no metadata for all the other things Munki provides out-of-the-box, including blocking applications, which must therefore be wrapped with a script
4. There is no concept of managed updates, so you need to create a new smart group each time you'd like to scope/prepare an 'uploaded' patch to clients
5. All software must have a category assigned to it before it can be made available, and categories have no specifically defined utility when it comes to pkgs, being applicable in many contexts/across data types
6. Software installs get put in the root of the JSS distribution point/fileshare and cannot be stored/accessed in folders
7. Installs that would otherwise fail over-the-network need to be specifically designated to be cached in the fugazi room
8. 'configurations' and policies exist separately from groups of clients, which provide something in the way of nesting functionality

My entry point to integrate AutoPKG is to have the pkgs built by AutoPKG automatically
1. create a category in the JSS if it isn't present (working), based on product name
2. copy the pkg to the JSS dist point
3. update the API to make the JSS aware that the pkg is there
And eventually
4. do any heavy lifting to get the pkg live for a 'canary' track, as makes sense within the Casper model
5. get Casper imaging involved
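Steps 2 and 3 above could be sketched as a small Python helper, assuming a mounted file share distribution point and the Classic REST API's /JSSResource/packages endpoint. The URL, share path, and credentials below are placeholders, not a working configuration:

```python
"""Sketch: copy an AutoPkg-built pkg onto a mounted JSS distribution
point, then register it with the JSS via the REST API. All paths and
credentials are hypothetical examples."""
import base64
import os
import shutil
import urllib.request

JSS_URL = "https://jss.example.com:8443"    # placeholder JSS
DP_MOUNT = "/Volumes/CasperShare/Packages"  # placeholder mounted share


def package_xml(pkg_name, category="Testing"):
    """Build a minimal XML body for the packages endpoint."""
    return (
        "<package>"
        f"<name>{pkg_name}</name>"
        f"<filename>{pkg_name}</filename>"
        f"<category>{category}</category>"
        "</package>"
    )


def upload(pkg_path, user, password):
    pkg_name = os.path.basename(pkg_path)
    # Step 2: the API cannot transfer the file itself, so copy the
    # pkg onto the (already mounted) distribution point share.
    shutil.copy(pkg_path, os.path.join(DP_MOUNT, pkg_name))
    # Step 3: tell the JSS the package exists. POSTing to id/0 asks
    # the JSS to create a new record with the next available id.
    req = urllib.request.Request(
        f"{JSS_URL}/JSSResource/packages/id/0",
        data=package_xml(pkg_name).encode("utf-8"),
        headers={
            "Content-Type": "text/xml",
            "Authorization": "Basic "
            + base64.b64encode(f"{user}:{password}".encode()).decode(),
        },
    )
    return urllib.request.urlopen(req)
```

The copy-then-register split mirrors the fact that the API only manages metadata; the file itself still has to land on the share some other way.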

Sorry if I seem glib and clueless (what a combination); I'm really excited to deliver functionality that jibes with the Casper model.
Allister

26 REPLIES

donmontalvo
Esteemed Contributor III

@Banks wrote:

(full original post quoted above)

It's great to see folks working to provide integration with AutoPKG and other open-source tools; leveraging existing solutions is always smart. With that said, there seems to be some ongoing misconception over at the ##osx-server IRC channel regarding how Casper actually works. Hopefully there can be less "I'm holier than thou, my way is best" and more "Let's get our tools to work together" threads over there on IRC. That's always good for everyone.

Casper training: http://www.jamfsoftware.com/training

From the IRC: http://osx.michaellynn.org/freenode-osx-server/

http://donmontalvo.com/jamf/jamfnation/2013-11-27-osx-server_IRC_thread.png


Happy Thanksgiving!

Don

--
https://donmontalvo.com

gregneagle
Valued Contributor

"With that said, there seems to be some ongoing misconception over at the ##osx-server IRC channel, regarding how Casper actually works."

So when you see this, why don't you set the record straight? Saying there are ongoing misconceptions, and not helping to clear them up, isn't terribly helpful.

There are many current and former Casper users on the ##osx-server IRC channel; I'd think if the channel in general has "some ongoing misconception ... regarding how Casper actually works", then the misconceptions must extend outside of ##osx-server.

"Hopefully there can be ... more 'Let's get our tools to work together' threads over there on IRC."

Agreed. But this requires that people from the JAMF community pitch in and actually help build something if they want tools like autopkg to integrate with Casper. No one other than current JAMF admins has the incentive to do so.

Banks
Contributor

I should probably add WHY I'm asking to be corrected about these points: if there are current best practices or analogues to these concepts in use within the Casper toolset, I'd want to understand them. Right now Munki enjoys the quickest circuit between 'downloaded' and 'available to clients', and I just want to know what to target and how many phases I'd need to go through, if that is even endorsed/possible.

donmontalvo
Esteemed Contributor III

@gregneagle wrote:

Saying there are ongoing misconceptions, and not helping to clear them up, isn't terribly helpful.

In light of the animosity towards Casper (and the admins who use it) over at the ##osx-server IRC channel, why not post your Casper related questions here? You've been very helpful to Casper admins on this forum, so why not give the community a chance to help you better understand how it works? :)

Even if Casper doesn't do 100% of what everyone would like it to do, it's a wheel that doesn't need to be reinvented. It's quite a capable product, and it lends itself nicely to these kinds of integration efforts. :)

Happy Thanksgiving!

Don

--
https://donmontalvo.com

gregneagle
Valued Contributor

Don:

I have no animosity towards Casper and the admins who use it. I do have criticisms of the product.

There are Casper admins on ##osx-server. Why can't I ask questions when and where I want? Frankly, after reading lots of threads here, I'm not convinced the people here are any more informed than the people on ##osx-server. Certainly on ##osx-server I can talk with people with a wider range of experiences.

jhbush
Valued Contributor II

Greg, I'll admit I'm a bit lazy about chiming in on the osx-server IRC channel concerning JAMF. It might be my New Year's resolution. What would be nice is some kind of Mac admin aggregation site that collected posts and IRC feeds from around the web.

gregneagle
Valued Contributor

"What would be nice is if there was some kind of Mac admin aggregation site that collected posts and IRC feeds from around the web."

Sounds like a great project for someone. When will you be able to start on it?

donmontalvo
Esteemed Contributor III

@jhbush1973 Oh snap! You walked into that one. LOL

Greg, thank you for being one of the most helpful and genuinely caring IT folks around. I hope to take you to lunch one day when I'm in California. It's the least I can do in return for the stuff I learned from you over the years. :)

To answer your question, post whatever you like, wherever you like, whenever you like...but expect a lot more here, where most Casper admins and JAMF engineers hang out.

It's like you're looking for a 14.1 game but you're looking in a bar instead of a billiard room. Which reminds me, I have a match with a former 14.1 world champ so signing off...gobble, gobble.

Don

--
https://donmontalvo.com

mrowell
Contributor

@Banks Thank you so very much for doing this work on integrating AutoPKG into Casper.

I'll try and answer your questions from my Casper/JSS experience, which is solely with our internal JSS.

1. Casper has no concept of tracks, that I am aware of, 'baked-in' to the product. E.g., there is no testing, 'canary in the coal mine' group designation

Not that I know of.

2. Software needs to be specially treated before being made available, (I've been noticing confusion regarding HTTP/S availability vs. traditional fileshares,) but a guiding principle seems to be it at least must be a flat package and not blow things up

My understanding is also that flat packages for Applications are the way to go and avoid the potential issues we see regarding HTTP/S availability vs. traditional fileshares. DMG style packages are required to make use of Casper's "Fill user templates" and "Fill existing user home directories" which are generally not relevant in installing/updating Applications, especially in the AutoPKG context.

3. There is no metadata for all the other things Munki provides out-of-the-box, including blocking applications, which must therefore be wrapped with a script

Not that I know of either. I work around this either by caching packages to be installed at logout or with a script that moves the existing Application to /tmp (even if it is running) and then installing the package.

I have seen issues installing an App directly over an existing App, where this breaks the App (e.g. Firefox and Skype). I think this was from DMG packages though.
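The move-aside workaround described there can be sketched in Python (the function names and paths are illustrative, not anything Casper ships, and /usr/sbin/installer is macOS-only):

```python
"""Sketch of the 'move the running app aside, then install' trick.
Assumes the holding directory is on the same filesystem as the app."""
import os
import shutil
import subprocess


def move_aside(app_path, holding_dir="/tmp"):
    """Move an existing (possibly running) app bundle out of the way.
    On the same filesystem a running process keeps its open files,
    so the app isn't broken mid-install."""
    dest = os.path.join(holding_dir, os.path.basename(app_path))
    shutil.move(app_path, dest)
    return dest


def install_pkg(pkg_path):
    """Run Apple's installer against the boot volume (macOS only)."""
    subprocess.check_call(
        ["/usr/sbin/installer", "-pkg", pkg_path, "-target", "/"])
```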

4. There is no concept of managed updates, so you need to create a new smart group each time you'd like to scope/prepare an 'uploaded' patch to clients

The way I deal with this is that I have an install policy and an update policy, and for every new version I have to replace the package in both policies manually. I then update a smart group which is the scope of the update policy. Here it would be good to just have the one policy, but I don't, because I want the install policy to run in Self Service and have the App delivered immediately, whereas the update policy is generally installed differently (i.e. cached or with a script handling conflicts).

5. All software must have a category assigned to it before it can be made available, and categories have no specifically defined utility when it comes to pkgs, being applicable in many contexts/across data types

You don't have to define a category (at least via Casper Admin); it just gets the default category of "Unknown".

6. Software installs get put in the root of the JSS distribution point/fileshare and cannot be stored/accessed in folders

They do go into a Packages folder, but there is no hierarchy in that folder.

7. Installs that would otherwise fail over-the-network need to be specifically designated to be cached in the fugazi room

I'm not sure that caching provides any better download of packages, other than they can be downloaded in the background so to speak.

1. create a category in the JSS if it isn't present(working) based on product name

I'm not sure I would do this. I currently apply a category to my packages based on user function (e.g. Internet, Productivity, Help Desk, etc) simply to match the categories I apply to policies when they get exposed in Self Service. I use other categories to group my policies (e.g. imaging, updates, System Wide, etc). I don't use categories to find or organise packages.

I think in an AutoPKG workflow I would just dump them all into an AutoPKG or Packages category.

2. copy the pkg to the JSS dist point

I'm not familiar with the API, but I hope you can use it to upload the package; otherwise that will be fun, especially with the new JDS, which has its packages uploaded via the JSS and the database.

3. update the API to make the JSS aware that the pkg is there

Yippee!

4. do any heavy lifting to get the pkg live for a 'canary' track, as makes sense within the Casper model

I haven't really found a good canary track with Casper. What I do is create the packages and add them via Casper Admin. I then manually install them using Casper Remote to a few test machines. Manual, I know, but easier than making test and production versions of all my policies and smart groups and keeping them updated. And if the updates are cached, which most of mine are, they are installed only as users reboot, so I get a 'free' rolling canary test as users reboot at various times (I have a policy to notify the user to reboot after 7 days).

With the smart groups I use as scopes for update policies I test for future versions as well, so I can do this canary testing without them being downgraded by the update policy.

5. get Casper imaging involved

I'm not sure what you mean here. My images have the absolute minimum, basically install OS, set time and connect to JSS.

It was too much work keeping the configurations up to date and much easier to just let the policies install on first boot.

@Banks I hope that helps. If you have any further questions please ask.

Thanks,

Marcus

mrowell
Contributor

I should also thank @gregneagle for his work on AutoPKG (as well as everything else). Even without it being integrated into Casper, AutoPKG has made my life easier.

Thanks, Marcus

rtrouton
Release Candidate Programs Tester

@Banks,

I think a good starting place here is putting in place an automated upload from AutoPkg to a Casper repo. I think that this is low-hanging fruit as a concept because this works within Casper's existing management model.

The main conceptual difference would be that the software would not need to be manually added by the Casper administrator to the Casper Admin application. Instead, either an API or added functionality in Casper Admin could handle the upload to the Casper repo. All uploaded updates should be initially unassigned to any software category as that works within Casper Admin's current functionality.

In many ways, just getting the software automatically added to the Casper repo would save time for Casper admins. They no longer have to spend time finding the software and they save additional time because the upload was handled for them. They still need to build policies and smart groups for updates and may need to do additional work to prepare to roll out these updates, but in my opinion the time savings alone makes this a worthwhile enterprise to embark upon.

Is that the best way to handle updates? Since you'll get different answers about "what's best" from different people, at this point I'm going to ignore Casper's current capabilities and talk about what I want. For my own point of view, here's what I would like to see for software updates:

  1. Machines need a particular software update
  2. Casper downloads those updates and adds them to the repo, either entirely within the JSS or via added functionality in Casper Admin.
  3. Casper gives a list of available updates to the Casper admin as a checkbox list. All downloaded updates are initially unassigned to any software category.
  4. The Casper admin chooses the updates that they want to apply to the machines that need them and "activates" the updates by assigning them to specific JSS-created categories.
  5. The Casper agent on a particular client communicates with the JSS about what software is installed on that client Mac.
  6. The JSS instructs the Casper agent to download and apply any appropriate updates.

There will be an issue of scope, but I think the list of available updates should include Mac software updates from the following vendors:

  1. Apple
  2. Microsoft
  3. Mozilla
  4. Adobe

This should cover updates used by the majority of Mac enterprise setups. For those vendors that are outside this list, uploads from AutoPkg should help cover the gaps. There would still be cases where Casper admins would need to manually prepare and deploy updates, but in this scenario, this should now be an edge case rather than the norm.

Ideally, this would make update management much easier for Casper admins and make deploying software updates less a question of "How do I prepare my updates?" and more "How should I schedule rolling them out?"

Banks
Contributor

Thanks to those who have responded so far. Follow up question:
- Can you upload a pkg via API? I don't see it as an option
- Are there API docs separate from those on the jss itself?
@mrowell was particularly helpful mentioning that categories are utilized in self service for him.
I'm seeing confirmation of many of my assumptions, so we may as well turn the corner to implementation of a defined 'pipeline'. There are two ends: shoving the pkg in with any applicable processing, and finally green-lighting it on the other side in some capacity.

I'll open a separate thread once I've completed my design document (early draft here: https://gist.github.com/arubdesu/6aadb5f6ac8b5d57b854 ), but at this point it sounds like I'll start by refactoring what I've got to finish the pkg copy and XML generation for the API updates. The workflow I'd target is Self Service at this point, continuing to rely on vendor pre-flight scripts for blocking applications.

As things progress, if we can define a common use case or two to test against, I can probably overwrite the value of a generic key for a product to point to the new pkg name as part of a policy, and even further down the line modify the applicable smart group to target the version of the app in question. To be smarter in more contexts, calls to jamfHelper can be scripted into the actual pkg generation, which would allow more on-demand pushes on the front end of the pipeline. I believe that would completely close the loop and bring about some kind of feature parity.
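The smart-group piece of that plan might look something like this sketch, which rewrites the version criterion in a computer group's XML before it would be PUT back via the API. The group layout and criterion name below are assumptions about the JSS XML, not verified against a live server:

```python
"""Sketch: retarget an update smart group at a new app version by
rewriting its version criterion. The XML shape is an assumption."""
import xml.etree.ElementTree as ET


def bump_group_version(group_xml, new_version):
    """Rewrite the 'Application Version' criterion in a
    <computer_group> XML blob; assumes a single such criterion."""
    root = ET.fromstring(group_xml)
    for crit in root.iter("criterion"):
        if crit.findtext("name") == "Application Version":
            crit.find("value").text = new_version
    return ET.tostring(root, encoding="unicode")


# A hypothetical group as the API might return it.
SAMPLE = (
    "<computer_group><name>Needs VLC update</name><criteria>"
    "<criterion><name>Application Version</name>"
    "<search_type>is not</search_type><value>2.1.0</value>"
    "</criterion></criteria></computer_group>"
)
```

Fetch the group XML with a GET, run it through a function like this, then PUT it back; the surrounding HTTP calls are omitted here.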
More feedback greatly appreciated, Allister

rtrouton
Release Candidate Programs Tester

@Banks,

There are a couple of ways this may work:

  1. Connect to the repo using AFP / SMB / SFTP / whatever. Once connected, upload the installer files directly to the repo.

What would happen: The next time Casper Admin is opened, it would detect the added installer and ask to build a bill of materials for them. After that, the Casper admin would be able to work with the installers and add them to policies.

  2. Connect to the repo using the method that Casper Admin uses and upload the files that way.

The issue there is I don't believe JAMF has a public API for this. If there isn't a documented API available, JAMF would need to either document or build the API.

sam
New Contributor III

The JSS is not able to accept package uploads via the API, and there is nothing outside the /api/ page that will help us out with loading files onto a distribution server.

Is it safe to say that the following would be an acceptable v1 workflow for the Casper integration:

  1. Move the built package to a local or mounted path representing the distribution point file path

  2. Check to see if a default category (e.g. testing) exists in the JSS. Create it if not already there (also allow override of this value in the recipe)

  3. Create the package data in the JSS (priority, reboot, etc.) and assign it to the default category (all options in the recipe)

I don't know. Is that too simple to begin with? @Banks I'm still working through your code at https://gist.github.com/arubdesu/3cf7de46c7d118b8b180. Awesome work, BTW! Does this already achieve this functionality?
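Step 2 of that workflow (ensure the default category exists, creating it if needed) could be sketched like this with the standard library, assuming the Classic API's /JSSResource/categories endpoints; the URL and credentials are placeholders:

```python
"""Sketch: check for a category by name and create it if missing.
Endpoint paths follow the Classic API convention; server details
are placeholders for your environment."""
import base64
import urllib.request
import xml.etree.ElementTree as ET


def existing_categories(xml_text):
    """Parse category names out of a GET /JSSResource/categories reply."""
    root = ET.fromstring(xml_text)
    return {c.findtext("name") for c in root.iter("category")}


def ensure_category(jss_url, user, password, name="testing"):
    """Create `name` in the JSS unless it already exists."""
    auth = "Basic " + base64.b64encode(
        f"{user}:{password}".encode()).decode()
    listing = urllib.request.urlopen(urllib.request.Request(
        f"{jss_url}/JSSResource/categories",
        headers={"Authorization": auth, "Accept": "text/xml"}))
    if name in existing_categories(listing.read()):
        return  # already there, nothing to do
    # POST to id/0 asks the JSS to create a new record.
    urllib.request.urlopen(urllib.request.Request(
        f"{jss_url}/JSSResource/categories/id/0",
        data=f"<category><name>{name}</name></category>".encode(),
        headers={"Authorization": auth, "Content-Type": "text/xml"}))
```

A recipe override could then swap in a different default category name without touching the logic.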

Moving forward, there may be some ways to get the JAMF Distribution Server (JDS) capable of uploading the file, but it uses certificate-based communication from the JSS, so that isn't likely unless we can get to it directly from the API. I'll talk to some people about that ;) That still leaves traditional Distribution Points / Cloud Storage shares out of the party. Also interesting: there is a "switch_with_package" attribute on packages that we should be able to utilize.

It seems like there are a few variables that are in play here as well.

JSS Server URL
API Username and Password
Distribution Point Path
Default Category

The Munki integration only asks for one variable (munki_repo_path), and it is stored in the com.github.autopkg.plist file. I see how @Banks put all of these in the recipe. Also, it looks like you can nest recipes, so it might make more sense to have a JSSConfig recipe and use it as a parent. Can you have multiple parent recipes? Maybe it would be better if it were nested, or built as a process using the plist.
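If those values did move into the shared autopkg preferences instead of each recipe, reading them could look like the sketch below. Note that the JSS key names here are invented for illustration; only the Munki repo key actually exists today:

```python
"""Sketch: read hypothetical JSS settings from the shared autopkg
preferences plist. The JSS_* key names are made up for this example."""
import plistlib
from pathlib import Path

PREFS = Path.home() / "Library/Preferences/com.github.autopkg.plist"

# Hypothetical keys mirroring the variables listed above.
JSS_KEYS = ("JSS_URL", "API_USERNAME", "API_PASSWORD",
            "JSS_REPO_PATH", "JSS_DEFAULT_CATEGORY")


def read_jss_prefs(path=PREFS):
    """Pull JSS settings out of the autopkg plist, returning None
    for anything unset so callers can fall back to recipe input."""
    with open(path, "rb") as f:
        prefs = plistlib.load(f)
    return {key: prefs.get(key) for key in JSS_KEYS}
```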

gregneagle
Valued Contributor

"Also, it looks like you can nest recipes, which might make more sense to have a JSSConfig recipe and use it as a parent."

I thought about suggesting a JSSConfig parent recipe, but:

"Can you have multiple parent recipes?"

You can't currently, which means that you couldn't leverage an existing Foo.pkg.recipe or Foo.download.recipe.

Once there is a working proof-of-concept, we can look at what we need to define for Casper integration and see how we can implement it in autopkg. It might take the form of additional keys in the autopkg preferences, or it might take some other form. I wouldn't worry too much about the specifics yet.

-Greg

Banks
Contributor

I agree with Greg; it's not worth worrying about. Anything I need for now is right there, with examples for me to implement whichever way the design gets influenced on the integration end.
@rtrouton, I don't consider something where you need to either open a GUI app or click through several buttons/page refreshes to be particularly automated... ;)

Mr. Johnson, thanks for chiming in. We can override each recipe with the environment-specific info you listed, and I'm well on my way, with small successes with the API already (the linked code works, it just doesn't copy the downloaded package yet).

I've updated that ambitious design document mentioned earlier, which seems to have crystallized where I'm going to try to take things now: http://arubdesu.roughdraft.io/6aadb5f6ac8b5d57b854

For this conversation, I could use help with the following:
The API user needs access to create, read, and update certain objects.
1. To target machines in inventory with a less-than-current version, would Advanced Computer Searches be necessary for the api user to access? (all signs point to 'no')
2. For the goal of making software live, we probably can't take advantage of/don't need 'Accounts and Groups' access, right? (computer groups seems to be all I need)
3. Maybe 'Configurations' would be of use with imaging down the road? (I don't see it called out in the API, although i don't claim 100% to grasp the concept)
4. Would 'JSS Settings' access, like 'Autorun Imaging' or 'Casper Imaging,' be of use?
Thanks for everyone's input.

sam
New Contributor III

Great draft on the design document! It is very informative and one of the more entertaining design documents that I have ever come across. One is typically not expecting to smile so much while reading those things.

1. To target machines in inventory with a less-than-current version, would Advanced Computer Searches be necessary for the api user to access? (all signs point to 'no')

You're right, no. Advanced Searches are just on-demand inventory searches with pre-defined criteria.

2. For the goal of making software live, we probably can't take advantage of/don't need 'Accounts and Groups' access, right? (computer groups seems to be all I need)

Right, no need for 'Accounts and Groups'. Those are user groups. Computer Groups should be all we need.

3. Maybe 'Configurations' would be of use with imaging down the road? (I don't see it called out in the API, although i don't claim 100% to grasp the concept)

Configurations may be useful, but I think we should be able to accomplish this by substituting package assignments in configurations. I don't think that has the same permission set. It appears to be an element on the package object.

4. Would 'JSS Settings' access, like 'Autorun Imaging' or 'Casper Imaging,' be of use? Thanks for everyone's input.

I don't think so. We should be clear of needing to get in there at all.

So basically, you are right on the mark with every assumption. The only other settings I am setting up outside what you had listed were the File Share Distribution Points and JDS. But those are going to be down the road a bit, so shouldn't be needed at this time.

timsutton
Contributor

Since the first step to any JSS integration is to get the package actually "imported" into the software repo, leaving aside any details about templates for smart groups or policies, @rtrouton's points about the steps required to complete the import of packages are concerning, unless I'm misunderstanding.

Does this mean that the JSS is required to keep a BOM around for any package that's to be deployed to clients, regardless of whether it was copied directly to the repo or via Casper Admin? For every update of Firefox that a Casper admin packages with Composer and imports with Casper Admin, the JSS has a complete BOM for that package that it can use to track and handle installations (or removals, etc.)?

If so, that's an annoyance for trying to automate this, because as Allister points out, needing to manually "check in" with the GUI to finalize any package import isn't ideal. But perhaps someone from JAMF can advise on whether this is really the case and whether there could be some workaround.

The REST API is for server-side configuration (the JSS), so it's understandable that it would not have an API to generate a BOM; I'd guess all of that work happens on the client side in Casper Admin (and I assume JAMF isn't supporting reverse-engineered BOM tools like https://github.com/hogliux/bomutils). I can still only make assumptions because I don't use Casper.

Munki works similarly, where the server can be any platform but actually administering the repo must be done on a Mac client so that packages and disk images can be manipulated.

Banks
Contributor

Hey Tim, I don't believe Casper Admin does anything special whatsoever, nor does the JSS need anything, as evidenced by the XML returned from an API call to fetch the stored metadata about a pkg - I don't see a blob of anything BOM-looking. See https://gist.github.com/arubdesu/3cf7de46c7d118b8b180#file-interimpkgtemplate-xml
Yes, I'm going to have to tackle what looks like a lack of strict checking on the JSS when it comes to versions/checksums of packages.

Banks
Contributor

Strike that; it's likely that an equivalent of a BOM is being built for that exact functionality, as frogor was kind enough to point out to me via this link: http://john.bryntze.net/jbkb-v2/howto-uninstall-casperjamf-package/
Since I can't recall how I built that Firefox pkg, I'll figure out if we can at least deploy an autopkg-built VLC for now; it doesn't seem like it should be a hindrance at this point...

mscottblake
Valued Contributor

I believe that @rtrouton was referring to the package data that @sam referenced when he said

3. Create the package data in the JSS (priority, reboot, etc.) and assign it to the default category (all options in the recipe)

I think it's required, and I think that the only way to add this data currently is through Casper Admin. You can dump any package into the distribution point, but I don't believe the JSS will see it until you launch Casper Admin and it creates the extra package metadata.

mscottblake
Valued Contributor

That being said, Casper Admin can "index" DMG packages, essentially creating a BOM for removal purposes. This is not possible with PKG packages since they can contain scripts to place files.

This is not something that you would need to concern yourself with because you'd be placing PKG packages into it.

sam
New Contributor III

Right, a BOM is used only for indexing a package's contents. It is required for the uninstall function, but should not impede the ability to deploy the package. BOMs were also useful when deploying a bundle-style pkg over HTTP, because all pkgs are cached prior to installation, and downloading a bundle requires a bit of extra information. We should be good.

timsutton
Contributor

I get that BOMs aren't explicitly required in order to do an installation, but @rtrouton said: "After that, the Casper admin would be able to work with the installers and add them to policies."

This implies that without this step performed by Casper Admin to "finalize" the import, the installers aren't available to add to policies. @mscottblake's comment echoes this as well.

In this thread there's a lot of the word "should." Allister's JSSImporter processor claims that adding a new package via /packages/id/:id works, so does this mean none of this BOM generation is necessary? Will Casper Admin still prompt to generate a BOM for this new package it's found, or will it ignore it because it's been added via the API?

sam
New Contributor III

@timsutton, sorry, I think I have confused things more than added clarity.

The BOMs that you are referencing are only used for HTTP distribution of bundled pkgs. Hence, BOM generation is not relevant here. Casper Admin will not prompt to generate a BOM. Not because it was added via the API, but because it isn't a bundled pkg.

The other BOMs that were thrown into the mix are called indexes. Those are only required for the uninstallation of a package. Indexes must be created using the Casper Admin application.

timsutton
Contributor

OK. If by "bundled" you mean a bundle-style package (i.e. not flat), then BOMs aren't absolutely needed for at least basic import functionality, at least as long as a JSS import recipe is using a pkg recipe as its parent. AutoPkg can only build flat packages.