RingCentral For Mac.app - Let your users download the latest version via Self Service

Contributor III

2017-03-16 - There's been a lot of discussion regarding this post pertaining to security - and rightfully so!

That being said, I cannot gauge or assign a value to your own internal risk assessment methodology. In other words, I take no responsibility for when and how you use this script should you decide to do so. This is entirely up to you. For myself and my DevOps and SecOps teams, this workflow warrants "low risk". You and your team may see this differently. YMMV.

Now, onto the post at hand...

Hey everyone! It's me... AGAIN!

One of the things that I love to do is to automate things. I once had a college instructor who said, "If you're working and not automating, then you're not doing your job correctly." I took those words to heart and they have served me well in my career!

On the heels of automation is the idea of DIY - In other words, automation for me equals DIY for my user-base.

For those of us who use JAMF Pro (formerly known as Casper Suite) day in and day out, we are well aware of how powerful Self Service can be in giving us this level of automation.

But at times, even Self Service, when left to its own devices, falls short.

So let's automate the automation!

In my workplace, our user community relies upon RingCentral, a VOIP provider. RingCentral provides various apps for desktop and mobile, etc.

My users who need the RingCentral for Mac desktop app can easily log into the RingCentral Portal, go to the correct location, and download and install the latest version of the app. That's great. But...

What if there was a way to make it even easier for my user community, especially for new hires or for those users in a hurry?

As the admin, we could certainly log into RingCentral ourselves and download the DMG file, unarchive the PKG file and upload said PKG to our JAMF Distribution Server.

As the admin, we would also be held accountable for maintaining that software's upkeep in a timely manner, whenever the vendor releases software updates, etc.

Well my friends... Ain't nobody got time for dat!

Since the vendor is already doing the heavy lifting with regards to updates, why not leverage that to your own benefit!

Many vendors make their software available to download on their web sites or FTP sites, etc. So here is a way to tie that into your existing Self Service infrastructure...

This particular workflow was designed for use with RingCentral's "RingCentral for Mac" application, but it can be tweaked and hacked to meet your needs with other vendors (i.e. Google Chrome comes to mind).

There's also the additional benefit of not hosting the software on your own server and having a slight reduction in bandwidth utilization.

Let's get started...

We are going to need a script that does the bulk of the work for us.

  1. Log into your JSS and navigate to Settings > Computer Management > Scripts
  2. Click the "+" button to create a new script
  3. Under the "General" tab, set these values:
    Display Name: RingCentral.sh
    Category: Software
    (NOTE: If you don't have a Software Category, you can create one in Settings > Global Management > Categories)
  4. Under the "Script" tab, copy and paste this script into the "Script Contents" field:

#!/bin/sh

# RingCentral for Mac.app Installation Script
# Automatically download and install or upgrade the latest version of the
# RingCentral for Mac.app VOIP application
# Created by Caine Hörr on 2017-03-14
# v1.2 - 2017-03-16 - Caine Hörr
# Set all URLs to use https (SSL)
# Added the following line: echo "Downloading from:" ${VendorURL}${VendorRelativeFilenamePath} for logging purposes
# v1.1 - 2017-03-15 - Caine Hörr
# Added -i flag to VendorRelativeFilenamePath="$( curl -sI $VendorDownloadURL | grep -i "location" | awk '{ print $2 }' )"
# Added sudo to sudo rm -rf /Applications/RingCentral for Mac.app
# v1.0 - 2017-03-14 - Caine Hörr
# Initial RingCentral for Mac.app Installation Script


# Vendor URL
VendorURL="https://downloads.ringcentral.com"

# Vendor Download URL
VendorDownloadURL="https://downloads.ringcentral.com/sp/RingCentralForMac"

# Local directory to save to...
LocalSaveDirectory="/tmp/"

# DMG Mount Point
DMGMountPoint="/Volumes/RingCentral for Mac"


# Check to see if RingCentral for Mac already exists in /Applications
CheckForRingCentralApp="$(ls /Applications/ | grep "RingCentral for Mac")"

if [ "$CheckForRingCentralApp" = "RingCentral for Mac.app" ]; then
    echo "RingCentral for Mac.app currently installed..."

    # Check to see if RingCentral for Mac is running
    RingCentralRunning="$(ps aux | grep "/Applications/RingCentral for Mac.app/Contents/MacOS/Softphone" | awk '{ print $11 }' | grep "/Applications/RingCentral")"

    if [ "$RingCentralRunning" = "/Applications/RingCentral" ]; then
        echo "RingCentral for Mac.app is running..."
        echo "Quitting RingCentral for Mac.app..."

        # Gracefully Quit RingCentral for Mac.app
        osascript -e 'quit app "RingCentral for Mac"'
    else
        echo "RingCentral for Mac.app is not running..."
    fi

    echo "Deleting /Applications/RingCentral for Mac.app..."
    sudo rm -rf "/Applications/RingCentral for Mac.app"
else
    echo "RingCentral for Mac.app not currently installed..."
fi

echo "Installing latest version of RingCentral for Mac.app..."
echo ""

# Vendor Relative Path to File
VendorRelativeFilenamePath="$( curl -sI $VendorDownloadURL | grep -i "location" | awk '{ print $2 }' )"
echo "Downloading from: ${VendorURL}${VendorRelativeFilenamePath}"

# Remove the carriage return (CR) at the end ( 0d )
VendorRelativeFilenamePath="$( printf %s "$VendorRelativeFilenamePath" | tr -d '\r' )"

# Vendor Dynamic Filename as Downloaded
VendorDynamicFilename="${VendorRelativeFilenamePath##*/}"

# Vendor Full Download URL
VendorFullDownloadURL="${VendorURL}${VendorRelativeFilenamePath}"

# Download vendor supplied DMG file to local save directory
curl "${VendorFullDownloadURL}" -o "${LocalSaveDirectory}${VendorDynamicFilename}"

# Mount vendor supplied DMG File
echo "Mounting ${LocalSaveDirectory}${VendorDynamicFilename} ..."
hdiutil attach "${LocalSaveDirectory}${VendorDynamicFilename}" -nobrowse

# Copy contents of vendor supplied DMG file to /Applications/
# Preserve all file attributes and ACLs
echo "Copying RingCentral for Mac.app into the /Applications/ folder..."
cp -pPR "/Volumes/RingCentral for Mac/RingCentral for Mac.app" /Applications/

# Identify the exact mount point for the DMG file
ProperDMGMountPoint="$(hdiutil info | grep "$DMGMountPoint" | awk '{ print $1 }')"

# Unmount the vendor supplied DMG file
echo "Unmounting ${ProperDMGMountPoint}..."
hdiutil detach "$ProperDMGMountPoint"

# Remove the downloaded vendor supplied DMG file
echo "Removing ${LocalSaveDirectory}${VendorDynamicFilename}..."
rm -f "${LocalSaveDirectory}${VendorDynamicFilename}"

# Launch RingCentral for Mac.app
open "/Applications/RingCentral for Mac.app"

echo "Installation complete!"
  5. Click the "Save" button

Now that the script building part is done, let's build a policy that interfaces with Self Service.

  1. Navigate to Computers > Policies
  2. Click the "+" button to create a new policy
  3. Under the "Options" tab, set these values:
        Display Name: RingCentral for Mac (Latest Version)
        Enabled: Checked
        Category: Software
        Execution Frequency: Ongoing
        Make Available Offline: Checked
        Priority: Before
  4. Under the "Scope" tab, set these values:
        Target Computers: All Computers
        Target Users: All Users
  5. Under the "Self Service" tab, set these values:
    Make the policy available in Self Service: Checked
    Button Name: Install
    Description: Install the latest version of the RingCentral for Mac softphone application.
    Ensure that users view the description: Checked
    Icon: Upload the RingCentral icon image (right-click the icon image in the original post and save it as "RingCentral_icon_128x128.png")
  6. Click the "Save" button

To quickly test your configuration, run these commands to update Self Service.

sudo jamf manage -verbose

NOTE: Either close Self Service before running the aforementioned command, or, if you leave Self Service open, right-click on Self Service and click "Reload".

You're probably wondering why I have so many "echo" commands in my scripts since the end user will never actually see the output from within Self Service. As it turns out, when you use "echo", the output is reflected within the associated Policy log. Here is a sample output after the script has been run...

[STEP 1 of 4]
Executing Policy RingCentral for Mac (Latest Version)
[STEP 2 of 4]
Running script RingCentral.sh...
Script exit code: 0
Script result: RingCentral for Mac.app currently installed...
RingCentral for Mac.app is running...
Quitting RingCentral for Mac.app...
Deleting /Applications/RingCentral for Mac.app...
Installing latest version of RingCentral for Mac.app...

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
1 89.4M 1 1399k 0 0 2249k 0 0:00:40 --:--:-- 0:00:40 2247k
9 89.4M 9 8555k 0 0 5317k 0 0:00:17 0:00:01 0:00:16 5317k
18 89.4M 18 16.8M 0 0 6611k 0 0:00:13 0:00:02 0:00:11 6609k
27 89.4M 27 24.3M 0 0 6925k 0 0:00:13 0:00:03 0:00:10 6923k
38 89.4M 38 34.4M 0 0 7660k 0 0:00:11 0:00:04 0:00:07 7658k
48 89.4M 48 43.2M 0 0 7891k 0 0:00:11 0:00:05 0:00:06 8595k
58 89.4M 58 52.6M 0 0 8162k 0 0:00:11 0:00:06 0:00:05 9076k
69 89.4M 69 62.2M 0 0 8356k 0 0:00:10 0:00:07 0:00:03 9264k
80 89.4M 80 72.0M 0 0 8573k 0 0:00:10 0:00:08 0:00:02 9762k
90 89.4M 90 81.0M 0 0 8637k 0 0:00:10 0:00:09 0:00:01 9539k
99 89.4M 99 89.0M 0 0 8596k 0 0:00:10 0:00:10 --:--:-- 9386k
100 89.4M 100 89.4M 0 0 8605k 0 0:00:10 0:00:10 --:--:-- 9332k
Mounting /tmp/RingCentralForMac-8.4.5.dmg ...
Checksumming whole disk (Apple_HFS : 0)…
whole disk (Apple_HFS : 0): verified CRC32 $21478FE1
verified CRC32 $2709DA68
/dev/disk3 /Volumes/RingCentral for Mac
Copying RingCentral for Mac.app into the /Applications/ folder...
Unmounting /dev/disk3...
"disk3" unmounted.
"disk3" ejected.
Removing /tmp/RingCentralForMac-8.4.5.dmg...
Installation complete!
[STEP 3 of 4]
[STEP 4 of 4]

Now, isn't that nice?! Great for troubleshooting failed installs!

When the user clicks the "Install" button, this little gem of a script will check to see if the user already has a version of the app installed, gracefully quit the app if it's running, bypass RingCentral's log-in page, find the latest version of the installer, download it, install the new app, and launch it so the user can log into the app and get to work.

I have tested this script fairly thoroughly in my own environment with a user-base of ~500 people and we have yet to experience any adverse side effects.

Keep in mind, your environment and requirements may be different. You should always test out before deploying into production.

If you know of any ways to make this script better, let me know!

I would like to add functionality that adds the application's icon to the Dock as well. I toyed with that a bit but couldn't get it to work to my satisfaction.
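For the Dock piece, one commonly used approach (sketched here with a dry-run guard, and untested against this particular app, so take it as a starting point rather than working code) is to append a tile to the logged-in user's com.apple.dock plist:

```shell
# Hedged sketch of one common Dock-tile approach (not the author's code):
# append an entry to the console user's com.apple.dock preferences, then
# restart the Dock. A likely gotcha when run from a policy: the plist is
# per-user, while the jamf binary runs scripts as root, so the write must
# target the logged-in user's domain.
app_path="/Applications/RingCentral for Mac.app"
dock_entry="<dict><key>tile-data</key><dict><key>file-data</key><dict><key>_CFURLString</key><string>${app_path}</string><key>_CFURLStringType</key><integer>0</integer></dict></dict></dict>"

apply="no"   # set to "yes" on a test Mac to actually modify the Dock

if [ "$apply" = "yes" ]; then
  console_user="$(stat -f%Su /dev/console)"   # macOS-only console user lookup
  sudo -u "$console_user" defaults write com.apple.dock persistent-apps -array-add "$dock_entry"
  killall Dock
else
  echo "Dry run - would add Dock tile for: $app_path"
fi
```

Left as a dry run by default since restarting the Dock mid-session is disruptive.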

I also wanted to use string variables with the cp (copy) command, but cp was giving me grief so I hard coded everything just to make cp happy. Perhaps I'll revisit.
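For what it's worth, the cp grief is very likely just the unquoted spaces in "RingCentral for Mac" - quoting every expansion keeps cp happy. A quick self-contained sketch using throwaway stand-in paths (not the real DMG mount or /Applications):

```shell
# Demonstration that cp works fine with variables, as long as every
# expansion containing spaces is quoted. Throwaway temp dirs stand in
# for the DMG mount point and the /Applications folder.
src_dir="$(mktemp -d)/RingCentral for Mac"   # stand-in for the DMG mount
dest_dir="$(mktemp -d)"                      # stand-in for /Applications
mkdir -p "$src_dir/RingCentral for Mac.app"

# Same flags as the install script - the only change is the quoting
cp -pPR "$src_dir/RingCentral for Mac.app" "$dest_dir/"
ls "$dest_dir"
```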

Anyway, hope you enjoy!

Kind regards,

Caine Hörr

A reboot a day keeps the admin away!


Valued Contributor

Hey Caine,

What's your experience with the stability of the curl command?

I've tried a couple of scripts like this for other products and I've had issues where the curl fails after x%. I've seen up to 10 retries fail using curl's --retry option.

Odd thing is that browsers manage to successfully download the same file(s) on the same Mac(s) on the same network(s) as the curl(s) fail.

Thanks in advance!

Contributor III


I have several scripts that rely on the curl command.

To date, I have not seen any failures directly related to curl.

For example, you may have noticed that I strip off the CR as part of prepping for curl. Sometimes that's necessary.

Checking with a utility like xxd helps identify such things up front.


printf %s "$filename" | xxd

To give a more real world example, using elements from the script from this post...

$ LocalSaveDirectory="/tmp/"
$ VendorURL="https://downloads.ringcentral.com"
$ VendorDownloadURL="https://downloads.ringcentral.com/sp/RingCentralForMac"
$ VendorRelativeFilenamePath="$( curl -sI $VendorDownloadURL | grep Location | awk '{ print $2 }' )"
$ VendorDynamicFilename="${VendorRelativeFilenamePath##*/}"
$ VendorFullDownloadURL=${VendorURL}${VendorRelativeFilenamePath}
$ printf %s $VendorRelativeFilenamePath | xxd
00000000: 2f73 702f 5269 6e67 4365 6e74 7261 6c46  /sp/RingCentralF
00000010: 6f72 4d61 632d 382e 342e 352e 646d 670d  orMac-8.4.5.dmg.

So as you can see, there is the (CR) at the end (0d), represented by a period after dmg.

Continuing with the aforementioned real world example, here is an example of a curl failure we will see if we don't remove the offending character.

$ curl ${VendorFullDownloadURL} -o ${LocalSaveDirectory}${VendorDynamicFilename}
curl: (3) Illegal characters found in URL

If we use the echo command against $VendorFullDownloadURL the result is https://downloads.ringcentral.com/sp/RingCentralForMac-8.4.5.dmg and the hidden character is not visible to the naked eye.

But as we saw earlier from piping the $VendorRelativeFilenamePath into xxd, we see the offending character quite clearly.
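Here's a self-contained way to reproduce and fix that behavior without touching the network at all, assuming the same header format curl -sI returns:

```shell
# Simulate the Location header line from curl -sI, complete with the
# trailing CR ( 0d ) that HTTP headers carry.
header="Location: /sp/RingCentralForMac-8.4.5.dmg$(printf '\r')"

# The original parsing keeps the CR in the captured field...
raw_path="$(printf '%s\n' "$header" | grep -i "location" | awk '{ print $2 }')"

# ...and piping the value through tr -d '\r' strips it off.
clean_path="$(printf '%s' "$raw_path" | tr -d '\r')"

printf '%s' "$raw_path" | od -c    # last byte shown is the \r
printf '%s' "$clean_path" | od -c  # the \r is gone
```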

By adding in a line after the curl call that strips the trailing carriage return - for example:


VendorRelativeFilenamePath="$( printf %s "$VendorRelativeFilenamePath" | tr -d '\r' )"


We strip off the offending character and curl starts to play nicely again.

That being said...

To date, my issues with any failure to download via curl have typically been associated with the end user's system in some fashion, the JSS being offline, the vendor changing something on their end, etc... but rarely (never?) with curl itself.

Hope this helps.

Kind regards,

Caine Hörr

A reboot a day keeps the admin away!

Valued Contributor

@cainehorr I'm doing something similar, downloading the Office 2011 installer, in this example (there's more to this script, but the curl keeps failing more often than it succeeds)


# Define Initial variables
downloadUrl="https://go.microsoft.com/fwlink/?linkid=273962"
downloadDirectory="/Library/myOrg/Packages"

## Populate $installerChoices with the Choice Identifiers to be toggled.
installerChoices="word powerpoint outlook dcc vb automator dock"


finalDownloadUrl=$(curl "$downloadUrl" -s -L -I -o /dev/null -w '%{url_effective}')

# The downloads will continue until errors improve

/usr/bin/curl --retry 10 -C - -L "$finalDownloadUrl" -o "$downloadDirectory"/office2011Installer.dmg --create-dirs -m 60

Output from the curl command is as follows. Here we have one failure and one success, but in extended tests, the success rate is < 50%.

bash-3.2# /usr/bin/curl --retry 10 -C - -L "$finalDownloadUrl" -o /Library/myOrg/Packages/office2011Installer.dmg --create-dirs -m 60
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 78  966M   78  761M    0     0  12.7M      0  0:01:16  0:00:59  0:00:17 12.8M
Warning: Transient problem: timeout Will retry in 1 seconds. 10 retries left.
Throwing away 799011176 bytes
100  966M  100  966M    0     0  33.0M      0  0:00:29  0:00:29 --:--:-- 41.0M

Contributor III


I decided to take a stab at your Microsoft Office 2011 downloader... I see what you are saying - and I'm having as much luck as you are!

When I issue the following command, I see this...

hostname$ curl -sI https://go.microsoft.com/fwlink/?linkid=273962
HTTP/1.1 302 Moved Temporarily
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 0
Expires: -1
Location: http://officecdn.microsoft.com/PR/MacOffice2011/en-us/MicrosoftOffice2011.dmg
Server: Microsoft-IIS/8.5
X-AspNetMvc-Version: 5.2
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Wed, 15 Mar 2017 21:09:41 GMT
Connection: keep-alive

So we know that the package to be downloaded is located at http://officecdn.microsoft.com/PR/MacOffice2011/en-us/MicrosoftOffice2011.dmg

I decided to manually run your command with a slight modification... I removed the -s (silent) flag and added the -v (verbose) flag.

hostname$ curl https://go.microsoft.com/fwlink/?linkid=273962 -v -L -I -o /dev/null -w '%{url_effective}'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying
* Connected to go.microsoft.com ( port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate: go.microsoft.com
* Server certificate: Microsoft IT SSL SHA2
* Server certificate: Baltimore CyberTrust Root
> HEAD /fwlink/?linkid=273962 HTTP/1.1
> Host: go.microsoft.com
> User-Agent: curl/7.51.0
> Accept: */*
< HTTP/1.1 302 Moved Temporarily
< Cache-Control: no-cache
< Pragma: no-cache
< Content-Length: 0
< Expires: -1
< Location: http://officecdn.microsoft.com/PR/MacOffice2011/en-us/MicrosoftOffice2011.dmg
< Server: Microsoft-IIS/8.5
< X-AspNetMvc-Version: 5.2
< X-AspNet-Version: 4.0.30319
< X-Powered-By: ASP.NET
< Date: Wed, 15 Mar 2017 21:12:49 GMT
< Connection: keep-alive
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Curl_http_done: called premature == 0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connection #0 to host go.microsoft.com left intact
* Issue another request to this URL: 'http://officecdn.microsoft.com/PR/MacOffice2011/en-us/MicrosoftOffice2011.dmg'
*   Trying
* Connected to officecdn.microsoft.com ( port 80 (#1)
> HEAD /PR/MacOffice2011/en-us/MicrosoftOffice2011.dmg HTTP/1.1
> Host: officecdn.microsoft.com
> User-Agent: curl/7.51.0
> Accept: */*
< HTTP/1.1 301 Moved Permanently
< Server: AkamaiGHost
< Content-Length: 0
< Location: http://officecdn.microsoft.com.edgesuite.net/PR/MacOffice2011/en-us/MicrosoftOffice2011.dmg
< Date: Wed, 15 Mar 2017 21:12:50 GMT
< Connection: keep-alive
< X-CID: 2
* Curl_http_done: called premature == 0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connection #1 to host officecdn.microsoft.com left intact
* Issue another request to this URL: 'http://officecdn.microsoft.com.edgesuite.net/PR/MacOffice2011/en-us/MicrosoftOffice2011.dmg'
*   Trying
* Connected to officecdn.microsoft.com.edgesuite.net ( port 80 (#2)
> HEAD /PR/MacOffice2011/en-us/MicrosoftOffice2011.dmg HTTP/1.1
> Host: officecdn.microsoft.com.edgesuite.net
> User-Agent: curl/7.51.0
> Accept: */*
< HTTP/1.1 200 OK
< Content-Length: 1013609078
< Content-Type: application/octet-stream
< Last-Modified: Fri, 10 Feb 2017 20:07:48 GMT
< Accept-Ranges: bytes
< ETag: "4ca2e159d983d21:0"
< Server: Microsoft-IIS/7.5
< Content-Disposition: attachment; filename=MicrosoftOffice2011.dmg
< X-Powered-By: ASP.NET
< Cache-Control: public, max-age=258111
< Date: Wed, 15 Mar 2017 21:12:51 GMT
< Connection: keep-alive
< X-CID: 2
  0  966M    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0* Curl_http_done: called premature == 0
  0  966M    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
* Connection #2 to host officecdn.microsoft.com.edgesuite.net left intact

So then I ran everything as you described... with exception of download location...

hostname$ downloadUrl="https://go.microsoft.com/fwlink/?linkid=273962"
hostname$ installerChoices="word powerpoint outlook dcc vb automator dock"
hostname$ downloadDirectory="/tmp"
hostname$ finalDownloadUrl="$(curl "$downloadUrl" -s -L -I -o /dev/null -w '%{url_effective}')"
hostname$ echo $finalDownloadUrl
hostname$ /usr/bin/curl --retry 10 -C - -L "$finalDownloadUrl" -o "$downloadDirectory"/office2011Installer.dmg --create-dirs -m 60

And sure enough... I see the errors you described!

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  1  966M    1 10.5M    0     0   179k      0  1:32:00  0:01:00  1:31:00  182k
Warning: Transient problem: timeout Will retry in 1 seconds. 10 retries left.
Throwing away 11016606 bytes
 12  966M   12  117M    0     0  2003k      0  0:08:13  0:00:59  0:07:14 2022k
Warning: Transient problem: timeout Will retry in 2 seconds. 9 retries left.
Throwing away 123115495 bytes
  2  966M    2 23.3M    0     0   397k      0  0:41:28  0:01:00  0:40:28  172k
Warning: Transient problem: timeout Will retry in 4 seconds. 8 retries left.
Throwing away 24437794 bytes
 11  966M   11  113M    0     0  1933k      0  0:08:31  0:00:59  0:07:32 1755k
Warning: Transient problem: timeout Will retry in 8 seconds. 7 retries left.
Throwing away 118790061 bytes
  9  966M    9 94.9M    0     0  1619k      0  0:10:11  0:01:00  0:09:11 2082k
Warning: Transient problem: timeout Will retry in 16 seconds. 6 retries left.

I completely see the issue you're having!

So now that we've seen it happening on your systems and my systems, we can easily discern a few things...

  1. This issue is not isolated to your infrastructure
  2. The issue could be curl related
  3. The issue could be Microsoft (download repository web site) related

Since curl works quite well for other resources (I've used the same script that I posted for Google Chrome, Slack, and Ring Central), I'm inclined to think that the error has something to do with Microsoft (or possibly Akamai since that seems to be who they are using for hosting services).

But then again, I noticed the server seems to change...

We start with Server: Microsoft-IIS/8.5
Then it switches to Server: AkamaiGHost
Then back to Server: Microsoft-IIS/7.5

We also see the host start with Host: officecdn.microsoft.com
Then it switches to Host: officecdn.microsoft.com.edgesuite.net where it remains...

So perhaps Microsoft is doing some weird load balancing act that is breaking the connection and causing it to resume?

I'm at a loss on this one as I don't know much about what Microsoft is doing on the back-end. But it certainly seems bizarre.
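One more thing that might be worth ruling out on our end, though - this is pure speculation on my part: the -m 60 in the command itself. Per curl's man page, -m / --max-time caps the time for the entire transfer, not just the connection phase - and every failing run above times out right around the one-minute mark, which matches -m 60 suspiciously well (browsers impose no such cap, which could also explain why they succeed). A variant worth testing, built here as a string rather than executed:

```shell
# Same command as above, with the total-transfer cap (-m 60) swapped for a
# connection-phase cap (--connect-timeout 60). Nothing is downloaded here;
# the command is only assembled and printed for inspection.
finalDownloadUrl="https://go.microsoft.com/fwlink/?linkid=273962"
downloadDirectory="/tmp"
curl_args="--retry 10 -C - -L --connect-timeout 60"   # was: --retry 10 -C - -L -m 60
echo "/usr/bin/curl $curl_args \"$finalDownloadUrl\" -o \"$downloadDirectory\"/office2011Installer.dmg --create-dirs"
```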

Kind regards,

Caine Hörr

A reboot a day keeps the admin away!

Valued Contributor

Yep. Your result is the sort of thing I get most often.

Well, at least I don't have a dumb typo and I know it's not me or my environment. Thanks for verifying.

Unfortunately, I see similar behavior on the Office 2016 package downloads, so I can't just call it legacy and walk away, but at least I have some more data to show Microsoft.


Contributor III


I pinged my network engineer to confirm that we are not throttling our downloads... Here is what he said...

"Sometimes MS is slow. Try wired as a comparison but be aware that at a different time you can get different results."

That's a direct quote! LOL!

The only thing I did change in your script...

You had this...

finalDownloadUrl=$(curl "$downloadUrl" -s -L -I -o /dev/null -w '%{url_effective}')

I changed it to this...

finalDownloadUrl="$(curl "$downloadUrl" -s -L -I -o /dev/null -w '%{url_effective}')"

I doubt the extra quotes make any difference...

echo still returns the following either way...

$ echo $finalDownloadUrl

Anyway - Sorry I couldn't help out more. If I think of anything, I'll let you know.

Best of luck!

Kind regards,

Caine Hörr

A reboot a day keeps the admin away!

Esteemed Contributor

I'm not a fan of curling things from the internet & installing without verifying the content, so as an alternative I'm just posting our method: jamJAR

With AutoPKG downloading, code signature verifying then checking items against virus total BEFORE making them available.. I feel better

Esteemed Contributor

(Hit post too soon)..

Then we leverage Munki's logic to see if an update is needed, its checking of hashes to make sure the client has downloaded what we wanted it to & not been MITM'd.. as well as whether anything is running that might block the install.

New Contributor II

While this is an interesting thought process, it is highly dangerous and should not be recommended to use in production environments.

I'm actually surprised Miles has no objection to this.

You have a root daemon curling an endpoint and immediately installing software. If that endpoint was suddenly taken over and serving malware, you wouldn't know about it until it was too late. What if someone was able to MITM this traffic? In some of the examples posted you aren't even using https. That is a recipe for disaster.

If you want some form of automation, use Autopkg, install/test these packages and then deploy them. Yes this isn't "fun", but your company is worth it. If someone is thinking about doing this, don't do it. It's not worth the risk.

Valued Contributor

Yes it's worth noting the Ring Central downloads work over HTTPS, at least switch to those URLs. I also noticed the Ring Central download is a bit of a bear, 94 MB downloading at 2 MB/s. You'll get better speeds serving that from your distribution point, another benefit to using AutoPkg and AutoPkgr which will download it when it's updated, pkg it, and make it available in Jamf Pro. Whether you want to deploy it to a test group first, available for a self upgrade, or just roll it out everywhere.

Contributor III

@adamcodega - Yeah - we're using https - that was a group of typos on my part (my production script was already set for https) - updated the main post to reflect the change though - especially in light of this conversation.

Download speeds are a non-issue as we're all cloud-based anyway - same download speeds are seen for both 3rd parties and our JDS. :-(

Not sure how AutoPkg is any different. Curl downloads from vendor. AutoPkg downloads from vendor. Sounds like the same thing to me. Or am I missing something?

@eng - Not sure how you see a root daemon in action. Curl is just a binary that gets a UID from the executor unless otherwise specified. In this case, the executor is a script being run by the jamf binary.

Again, not really seeing a real need for AutoPkg (yet).

But to be safe, I also ran this by my internal DevOps/SecOps guys... Here's what they had to say...

Quote #1: "It's low risk but if there's reasonable safe guards, those should be used. Increase malware scanning if needed, maybe use non-root accounts if able, yadda."

Well, we do know that the jamf binary does run with escalated privs... Can't really get around that...

Quote #2: "Eh. [If https,] I’d rate it as low risk. Whatever. Nobody can MITM it and I really doubt upstream is going to get taken over"

I'm also not piping curl back to the shell ( ie curl some.domain.com | sh ) and from my own research, this seems to be a potential hazard.

So for the moment, I'm feeling my risk potential is fairly low.

@milesleacy - What are your thoughts?

Kind regards,

Caine Hörr

A reboot a day keeps the admin away!

Contributor III

@bentoms - Interesting workflow. This brings up a question...

  • If you're auto-downloading a file from a vendor via AutoPkg, isn't AutoPkg susceptible to the same potential MITM attack as any other form of download?
  • Furthermore, how are you comparing the MD5 hash sum using Munki?
  • Doesn't Munki need to know what the MD5 hash sum is in the first place?
  • How is this information being obtained?

Unless the vendor is posting the MD5 on their web site, or in some other fashion for Munki to glean, whatever is downloaded can't be compared to what's on file - because there wouldn't be anything on file yet; we don't know about it.

For example - If I download a copy of GoogleChrome.dmg, I have no idea what the MD5 is until I've downloaded it from say, https://dl.google.com/chrome/mac/stable/CHFA/googlechrome.dmg.

$ md5 googlechrome.dmg 
MD5 (googlechrome.dmg) = 33eeba7fb7277b7a35b5df90bb94af68

But now that I've downloaded it, there's no point in using AutoPkg.

But you reference using AutoPkg for keeping it up to date. Again, still missing a vital step of MD5 acquisition here.

If AutoPkg and/or Munki can glean the MD5 from the source, then how does it determine that the source is also legit? Obviously SSL is one way... Any other methods?

I've done some reading on AutoPkg and Munki but still not seeing the connection here.

Kind regards,

Caine Hörr

A reboot a day keeps the admin away!

Esteemed Contributor

@cainehorr It's something like:

The MD5 is generated when importing into Munki & is stored within the Catalog & PkgInfo files for the pkg.

So, AutoPkgr downloads.. verifies code signature (when present)... submits to virus total, then imports to Munki.. Munki generates an MD5.

Then client side, we use jamf to instruct Munki to install "RingCentral".. it searches the Munki Repo.. gets the details & if the client has the current or a newer version.. it does not attempt the install. Then the jamJAR postflight removes the item from the client's manifest.

If the client has an older version, Munki will install the newer.. checking hashes between the catalog & what it's downloaded. Munki will make sure that no blocking apps are running before installing (like the app itself).

Once installed, the jamJAR postflight notifies the user & removes from the local manifest.

If the app is not installed, again Munki checks for blocking apps. If nothing is blocking installation, it proceeds & the jamJAR postflight notifies the user & removes the item from the local manifest.

The munki repo can be secured with Cert based auth as well.
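In principle, the catalog-side hash check boils down to something like this sketch (SHA-256 via openssl here purely for illustration, with a throwaway file standing in for the pkg; Munki's actual implementation differs):

```shell
# Catalog-style hash verification in miniature. The hash is recorded at
# import time (when the admin vets the download), then recomputed on every
# client before install -- so the vendor never needs to publish an MD5.
demo_pkg="$(mktemp)"
printf 'pretend this is RingCentral.dmg' > "$demo_pkg"

# Server side, at import: record the hash in the "catalog"
catalog_hash="$(openssl dgst -sha256 "$demo_pkg" | awk '{ print $NF }')"

# Client side, at install: recompute and compare before trusting the file
client_hash="$(openssl dgst -sha256 "$demo_pkg" | awk '{ print $NF }')"

if [ "$client_hash" = "$catalog_hash" ]; then
  echo "Hash matches catalog - OK to install"
else
  echo "Hash mismatch - refusing to install"
fi
rm -f "$demo_pkg"
```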

We've been pretty happy with it, spoke about this at macad.uk in Feb & I'll post back when the video is up.

Valued Contributor

Aww, @eng . I didn't know you cared. ;)
(That's some levity with the best of intentions. We're all friends here.)

There are a few things that I am taking as read going into this process...

  • The vendor's current public release is the product as it exists in the world and as it will be deployed. This is a statement of ideal, subject to certain provisions in practice.
  • Another ideal is that all software comes from the App Store. I recognize that this is by no means an achievable ideal as of this date, but that is the starting point. This process is to compensate for specific vendors' failure to publish via the App Store, where those vendors otherwise follow good practices such as code and package signing and https distribution of a signed package.
  • If we had any issues with the vendor's current product version, we'd have identified and called them out during the beta period.
  • If the vendor neither fixes nor delays a release with an impactful issue, we can disable the policy.

What I've shared so far is a snippet of a longer script in progress, the purpose of which is to download, verify integrity & authenticity, and install (with the option of using installerchoices) a package from a trusted source.

I have considered things like AutoPkg & munki, however the effort required for me and my team to build this script is less than that required to fully vet and adopt code from an untrusted source, given current resources, priorities, and requirements.

In my world, to properly vet projects like this requires as much expertise, and nearly as much time, as it takes to create the project in the first place. Since we only need a small portion of the capabilities of these projects, the math pretty plainly shows that writing the piece that we need is the less resource-expensive path.

In short, this process is not desirable. Vendors who make it necessary will be leaned upon to follow standards or be replaced. Until they comply or are replaced by a compliant vendor, this process exists to compensate for their shortcomings.

I can't speak to RingCentral, but Microsoft, for the most part, have dramatically improved their adherence to Apple standards in recent years (as far as I can tell, the only Office app that wouldn't pass App Store review is Outlook, but they did manage to get an iOS release published), but they haven't yet released Office for Mac through the App Store (I believe that it takes a minimum of speculation to infer that they're moving in that direction, but I digress). Because of all this, I find it a minor concession of the above ideals to deploy Microsoft's packages as-is.

@cainehorr To my mind, the biggest threat is that someone could hijack the source or your file transfer, causing you to get a package you didn't intend. Verifying some combination (or all) of the vendor's SSL certificate, package integrity/authenticity via checksums, package signatures, and code signatures provides reasonable safeguards.
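On macOS, those checks map onto stock tools. A rough sketch - the app path is illustrative, the check degrades gracefully where the tool isn't present, and the .pkg and Gatekeeper variants are shown as comments:

```shell
# Sketch of post-download verification using macOS's built-in tools.
APP="/Applications/RingCentral for Mac.app"
verify_status="skipped"   # default when codesign isn't available

if command -v codesign >/dev/null 2>&1; then
  # Verify the bundle's code signature, recursively and strictly
  if codesign --verify --deep --strict "$APP" 2>/dev/null; then
    verify_status="signature ok"
  else
    verify_status="signature FAILED"
  fi
fi
echo "code signature check: $verify_status"

# For a .pkg rather than an .app bundle, the analogous check is:
#   pkgutil --check-signature /path/to/vendor.pkg
# And to ask Gatekeeper's opinion of the bundle:
#   spctl --assess --type execute "$APP"
```

Wiring a check like this in between the hdiutil attach and the cp would let the install script bail out before copying anything unsigned into /Applications.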

Contributor III


That's almost verbatim what my DevOps and SecOps guys stated.

I just had a verbal conversation with the head of SecOps. When I described the process and showed him the script, he scoffed; his risk assessment was low.

My script/workflow is still in its infancy, and it's growing up. I enjoy the challenge and the opportunity to write something like this. Sure, it's small potatoes compared to AutoPkg/munki, but it's a start.

I'm grateful for this community and everyone's feedback. It's nice to see such passion in the community, plus the interest in developing sustainable workflows and sharing that knowledge. I have learned much from this group as well as others.

Kind regards,

Caine Hörr

A reboot a day keeps the admin away!

Honored Contributor

I won't tell anyone how to do their job. Your Org hired you to be the resident expert on how to get things done, so I am just going to assume you know a lot more than I do about your Org's business requirements. This is 100% my opinion and does not reflect the opinion of anyone else, including my employers, current or past.

If you want to write a shell script that runs locally and uses things like curl through the jamf framework, there are most definitely risks, including but not limited to:

  • You now have a failure point on every client, and every client is now going out to the Internet to download something. In a large-scale environment this can cause issues.
  • The jamf framework runs this as root, meaning you are connecting to an endpoint you have no control over, with no authentication, and you are doing it blindly on each client. It would be trivial for me to write a simple postinstall script that puts all the user data into encrypted disk images, generates a unique key based on the device's UUID, and then ships that key (the encryption password) to a web service. Essentially, this would be an easy way to get a crypto virus if that vendor endpoint got hacked. And if the site got hacked, what would stop said hacker from putting up a malicious package with something very bad in it? This is a definite risk.
  • Man-in-the-middle attacks are a real thing, and since you are going out to the Internet, that opens up more risks.
  • Since all the logic is put on the client, tracking down issues or doing quality-assurance vetting would be hard. The Jamf Pro server is not a data warehouse, and if you ship extended client logging to it, you are creating tons of rows of data that will ultimately bloat your database.
  • Also, things should be tested, not set to autopilot. What if the newest version of said package breaks something major? How do you ensure quality to your customers with a workflow like this?
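If a client-side curl really is unavoidable, some of these risks can at least be narrowed. A minimal sketch (the function name and flag choices are mine, not from the original script) that refuses plain HTTP outright and forces modern TLS:

```shell
safe_download() {
    # $1 = download URL, $2 = destination path
    case "$1" in
        https://*) ;;                                   # only HTTPS sources allowed
        *) echo "refusing non-HTTPS URL: $1" >&2; return 1 ;;
    esac
    # --proto '=https' also blocks any redirect that downgrades to HTTP;
    # --fail makes curl exit non-zero on HTTP errors instead of saving an error page
    curl --proto '=https' --tlsv1.2 --fail --location \
         --silent --show-error --output "$2" "$1"
}
```

This doesn't address the testing/validation concerns above; it only removes the cheapest attack paths (plain HTTP and downgrade redirects).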

Why projects like AutoPKG and AutoPKGr are a good thing:

  • Hundreds of people vetting and building recipes for it
  • A single place to look for failures: if a package fails to get created, you check one spot to figure that out instead of tracking it down across your entire fleet
  • It will actually verify the code signature of said package
  • The ability to easily use others' repos for collaboration
  • Open source: you can view all the code yourself to ensure it meets your requirements
  • It can easily be automated and integrated with other tools/products. Plenty of people have automated AutoPKG to automatically add new packages to their test branch or test environment with the Jamf Pro Server

Basically, if you allow anything to run as root and grab something from the Internet with zero validation or testing, you are putting your Org at risk. Sure, those risks may never become a reality for you, and you may never have to deal with them. I will just say I'd rather take the safer approach in case they ever did become a reality for me or my employer. Again, your Org hired you to be the expert, so you know better than I do how to meet your Org's requirements. Personally, I would not do this unless it was a necessity, and even then I would make the risks known and let management decide whether that was okay with them.

Valued Contributor

@tlarkin I hope all is well by you too!

The advice you give is good in general.

As it pertains to a trusted and high-volume vendor, with which we have a strong contract and a good process, I believe the risks are largely mitigated.

Re: "you now have a failure point on every client..."

I see it as I'm avoiding a failure point within my org.

A vendor such as Microsoft has entire teams, some of which may be small, whose entire workday focus is providing these downloads to an audience of "every Microsoft consumer and B2B customer." This is clearly a much larger audience than my comparatively small deployment.

I trust not only my strong contractual agreements with the vendor, but the vendor's own self interest in maintaining a stable, secure download source.

I find it much more likely that a home-brewed system, built and maintained by people with many disparate responsibilities, will fail than that the vendor's source will.

I don't believe that I need control over Microsoft's endpoint, or that I am at any greater or lesser risk by downloading a package directly from Microsoft to the client versus downloading it to a staging area first, provided the client makes the same sort of checks the staging system would make. I see the creation and maintenance of the staging system as duplicating effort we've already paid the vendor to do for us. We validated their code during the beta period; this is how quality is assured. So long as we validate that we are receiving that same code (using similar, if not identical, mechanisms to those of your proposed staging system), I see the elimination of staging as a removal of a point of failure, not an introduction of many.

If we see the unusual, but admittedly not unheard of, occurrence of a bug being introduced between beta and shipping, then we are actually in a better place than if we had reengineered the vendor's mechanisms. We are in a better place because, rather than one org with a possibly self-inflicted issue, we are now part of a support case involving the vendor's entire customer base. Which issue will get resolved more quickly?

Yes, open source systems have many more people working on them than just your own team. The problem is that you can't say for certain that any of those people share your goals or recognize your needs. I have seen major issues that one or a few users or orgs have with an open source project get ignored because the user/org in question is either a minority voice or has what the more knowledgeable/active project participants consider an edge case. When you find yourself in this situation, depending on your budget, you either need to devote your team's limited resources to solve the problem, hire an expert to solve it, put up with it, or abandon the open source tool and retool your process.

As to necessity, the app vendor made the exploration of sub-optimal alternatives necessary when they failed to publish via the App Store.

Honored Contributor

@milesleacy Yup, like I posted earlier, your Org hired you to be the expert, so I am assuming you are doing your due diligence and vetting risks with all the stakeholders at your Org. Hope all is well with you too!

I still would never curl down a package from a client script unless it was an absolute necessity, like there being no other way to install a package. I think the risks are real, and when clients globally connect to a vendor URL, there are so many layers of networks and ISPs in between that I can't possibly vet them all, so by default I do not trust them.

I trust not only my strong contractual agreements with the vendor, but the vendor's own self interest in maintaining a stable, secure download source.

This would be perfect if we were supplied some sort of API that could give us hashes of packages, versions, date stamps, certificate validation, auth tokens, or any other layer of added security. Just my difference in opinion is all. Vendors do not, and never will, have any control over an outside entity that wishes to be malicious. They can only provide the tools and frameworks for you to validate that you are using legitimate software.

I don't believe that I need control over Microsoft's endpoint or that I am at any greater or lesser risk by downloading a package directly from Microsoft to the client vs downloading it to a staging area first, providing the same sort of checks are made by the client that the staging system would make. I see the creation and maintenance of the staging system as duplicating effort we've already paid the vendor to do for us.

While I do hope every vendor and open source project does all of their due diligence to fully QA their code, products, apps, etc., I still would not risk installing it on production machines without at least some sort of testing and validation. Vendors and open source developers alike will never be able to fully test their products in every possible environment, which means that no matter what a contract says, there is always a chance of risk. I don't want to split hairs over the odds of, say, a 1% chance of risk versus a 25% chance. My view is simply to reduce risk wherever we can. Having a solid process helps. So many vendors and open source tools provide APIs these days, and those APIs are meant to be used to build tooling around those products. Even the Jamf Pro server has a REST API which can be used to automate workflows from environment to environment.

The way I look at it is that yes, some of this stuff is a lot of work and investment up front. Sure, you'd have to build and design these different environments to roll your solutions through the gauntlet of testing, and sure, you may have to spend a couple of weeks sitting down and learning how AutoPKG works instead of deploying client-side scripts. The difference, to me at least, is the payoff. In the long run, a really solid process that downloads the latest builds from AutoPKG and POSTs/uploads them to your test environment automatically is much more beneficial, and a better long-term solution, than having clients blindly download and install software via an agent that runs as root. Once you build it with automation, it is no longer hard to maintain, and you have a repeatable, controlled process that should always yield the same or similar results.

I wasn't kidding when I posted how trivial it would be to put all the user data into encrypted disk images, generate a unique password, and ship it off. All the binaries to do that are built right into macOS; that is a simple postflight script that could be injected into any PKG. Without validating, vetting, and testing, you are always going to have a risk. How you verify and calculate those risks is totally up to you and your Org's security team. I would just hope these risks are brought up when vetting a solution that has a root agent doing GET requests against a URL and then installing an app as root with no testing or validation done.

Maybe that is cool and vetted and approved in your Org. I would just avoid things like this because the risks are either too many to vet or test against, or there just aren't the right tools around these vendor sites to really mitigate them. Maybe that will change soon, and my opinion will as well. Until there are ways to accomplish testing and validation, I think I will keep my stance of not doing it.

I also get that we at times have to use the duct-tape-and-glue method in IT, where certain requirements or needs cause us to build workflows that are an exception to the norm. So I do get that there will be times, probably at every IT job ever, when you may have to do silly things to meet a need or requirement, but I see those as exceptions that should also have write-offs and approvals before going down that route.


New Contributor II

My stance is this: trusting a vendor is great, but if you curl from root against an HTTP endpoint, it really doesn't matter.

If a machine is on a guest or otherwise insecure wireless/wired connection, an attacker could MITM the connection and serve bad content.

As stated by others in this thread, the autopkg community has done a lot of work to ensure downloads are secure and, when possible, code-signature validated. With regards to Office 2011, we even had a discussion about whether we should use an undocumented HTTPS URL because Microsoft's own XML for updates was HTTP. You can see this discussion and code merge here.

If the entire process is HTTPS and hash/signature validated, then good for you. But what was originally posted was definitely not that, and it was shared with the community. Trusting the vendor is okay; blindly trusting the vendor is not.
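For the "hash/signature validated" part on macOS, one hedged sketch (the helper name and the placeholder Team ID are mine) that verifies an app bundle's code signature and signing team before trusting it:

```shell
verify_app_signature() {
    # $1 = path to .app bundle, $2 = expected Apple Developer Team ID
    if ! command -v codesign >/dev/null 2>&1; then
        echo "codesign unavailable (not macOS?)" >&2
        return 2
    fi
    # --deep --strict walks nested code; any tampering fails verification
    codesign --verify --deep --strict "$1" || return 1
    # Confirm the bundle was signed by the team we expect
    codesign --display --verbose=2 "$1" 2>&1 | grep -q "TeamIdentifier=$2"
}
```

The real Team ID has to come from a trusted source (e.g. a known-good install), and the policy should abort on any non-zero return.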

If the SecOps team doesn't understand that a curl script run from the jamf binary is running as root, I would not trust them to adequately secure my Mac endpoints.

New Contributor III

@cainehorr thanks so much for posting this - this is awesome, as I was dreading rolling out RC installations given RC's inability to publish to the App Store. Having said that, I am considering ideas to work around the security exposure of downloading a file from a URL that can be MITMed and run as root. Time to roll up sleeves... Can we enforce certificate validation with curl to ensure the site really presents RingCentral's SSL certificate? I've tried using an IP address instead of the domain name, but that doesn't work with files hosted on CloudFront, and someone can easily modify the hosts file and get a different file.
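curl can do something close to that via public-key pinning. A sketch, assuming you extract the real pin out-of-band first; the hash below is a placeholder and the hostname in the comment is illustrative:

```shell
# Placeholder pin -- replace with the real value, and rotate it whenever
# the vendor's certificate/key changes.
PINNED_KEY='sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA='

pinned_download() {
    # $1 = HTTPS URL, $2 = destination path
    # curl aborts the transfer if the server's public key doesn't match the pin
    curl --proto '=https' --fail --silent --show-error \
         --pinnedpubkey "$PINNED_KEY" --output "$2" "$1"
}

# Extracting a pin from a live server (illustrative):
#   openssl s_client -connect downloads.example.com:443 < /dev/null 2>/dev/null \
#     | openssl x509 -pubkey -noout \
#     | openssl pkey -pubin -outform der \
#     | openssl dgst -sha256 -binary | base64
```

The trade-off is maintenance: a CDN-fronted vendor may rotate certificates without notice, so pinning adds an upkeep burden alongside the security win.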

New Contributor II

So just an update if anyone else comes across this. As of May 9, 2019.

It seems RingCentral changed their installer name. So instead of RingCentral for Mac as the installer name, you have to change it to RingCentral Phone. Most importantly, change line 109 to the following.

cp -pPR "/Volumes/RingCentral Phone/RingCentral for Mac.app" /Applications/

I'm also not sure if something broke hdiutil, but I can't get it to work with the $ProperDMGMountPoint. So I just commented out line 112 and changed line 116 to the following, and it seemed to work.

hdiutil detach "/Volumes/RingCentral Phone"
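Both fixes hinge on quoting paths that contain spaces; unquoted, the shell splits "/Volumes/RingCentral Phone" into two arguments. A small helper to that effect (the function and parameter names are mine, and the paths are illustrative):

```shell
install_from_volume() {
    # $1 = mounted volume path (may contain spaces)
    # $2 = app bundle name (may contain spaces)
    # $3 = destination directory (defaults to /Applications)
    local volume="$1" app="$2" dest="${3:-/Applications}"
    cp -pPR "$volume/$app" "$dest/" || return 1
    # Detach is best-effort; hdiutil only exists on macOS
    hdiutil detach "$volume" 2>/dev/null || true
}
```

Usage, matching the 2019 installer names: `install_from_volume "/Volumes/RingCentral Phone" "RingCentral for Mac.app"`.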