As of today, anything I want to install from Self Service is eternally stuck on Executing. It seems to work just fine for everyone else in the company. Is there anything I can do to troubleshoot this? How can I tell what it gets stuck on? Any log anywhere?
The local install log and jamf log may be insightful.
/var/log/install.log
/var/log/jamf.log
If you have access to the Jamf server, you can look at the specific policy log.
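If it helps, you can follow both local logs in one Terminal window while you kick off the policy from Self Service (a minimal sketch; nothing Jamf-specific here beyond the paths above):

# Watch both logs live while reproducing the issue
sudo tail -f /var/log/install.log /var/log/jamf.log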
We're seeing this issue as well. Any update or information regarding this, @btomic?
I just started seeing this behavior as well. A policy gets stuck on "executing" with no evidence in jamf.log or install.log that it is actually running. The same policy executed via another trigger (like recurring check-in) works perfectly.
@rice-mccoy Thanks for the reply. What has been your workaround/fix? A complete removeFramework and re-enrollment? I am currently feeding Jamf Support information and logs on this random issue, but I am not seeing a trend or commonality at this point. Some of the interesting things observed in the logs:
system.log, about every 10 seconds:
Jun 19 14:06:20 <MACHINENAME> com.apple.xpc.launchd[1] (com.jamfsoftware.jamf.agent[13270]): Service exited due to signal: Terminated: 15 sent by killall[13273]
Jun 19 14:06:20 <MACHINENAME> com.apple.xpc.launchd[1] (com.jamfsoftware.jamf.agent): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
jamf.log, at about the same frequency:
Tue Jun 19 14:06:42 <MACHINENAME> jamf[12915]: Failed to load jamfAgent for user <mgmtaccount>
Tue Jun 19 14:06:51 <MACHINENAME> jamf[12915]: Informing the JSS about login for user <userid>
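If you want to check whether a machine is stuck in this respawn loop, something like this works (a rough sketch; the grep patterns just match the strings shown above):

# Look for the agent respawn loop and the corresponding jamf.log noise
grep "com.jamfsoftware.jamf.agent" /var/log/system.log | tail -20
grep "Failed to load jamfAgent" /var/log/jamf.log | tail -20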
I was able to confirm that a Mac App Store (VPP) policy worked when invoked from Self Service, but other policies would just sit there at "executing" as well.
@benducklow It has not been widespread.
I think most cases cleared up after force quitting the Jamf processes that were running (jamfAgent, JamfDaemon, jamf); I'm not sure which one finally did it. Like you, VPP policies weren't impacted.
If I see it again I will look more closely at the log for any matches. (We are a small shop.)
@rice-mccoy Thanks. By force quitting, were you just doing a kill command with the PID number from Terminal (ex: $ sudo kill <PID>)? If so, I guess we haven't tried that yet. Anything you were doing to (re)start those processes?
Thanks in advance for sharing your experience!
@benducklow Yes, sudo kill. The JamfDaemon seemed to restart automatically. Self Service opened back up and the policy immediately started (visually: executing > Downloading > Installing), and jamf.log showed the usual activity, which wasn't present at all prior to the sudo kill.
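For anyone else trying this, roughly what I did (a sketch; the pgrep patterns are my best guess at matching the processes named above and may need adjusting):

# Find the Jamf processes that are running
pgrep -l -x jamf
pgrep -l -x jamfAgent
pgrep -l -f JamfDaemon

# Kill the stuck one(s) by PID; launchd respawned the daemon on its own for me
sudo kill <PID>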
I'm having this issue too. Currently working with Jamf support too. Will let you know if we find a resolution.
This issue just popped up for us. Trying to troubleshoot it. Curious if anybody has figured out a solid solution.
@churcht No solid solution, but it has been identified as PI-005944. One thing that has 'kind of' worked (although not 100%, according to feedback I have received) is to run the following against a machine:
#!/bin/bash
# This script must be run as root or with "sudo"

echo "Stopping the jamfAgent and removing it from launchd..."
/bin/launchctl bootout gui/$(/usr/bin/stat -f %u /dev/console)/'com.jamfsoftware.jamf.agent'
sleep 1
/bin/rm /Library/LaunchAgents/com.jamfsoftware.jamf.agent.plist

echo "Running jamf manage to download and restart the jamfAgent..."
/usr/local/jamf/bin/jamf manage
We've been doing this via Jamf Remote on a case-by-case basis, with mixed results.
I'm experiencing the same effects on some, but not all, machines. They're just stuck and repeatedly informing the JSS of a user login. I've tried killing the process and restarting the machine.
Anyone having any luck with this? I'm seeing this with increasing frequency, so it's becoming a real problem.
The logs don't give much - the selfservice_debug.log shows
Binary request: triggerPolicy
but then nothing after that, and the Self Service item will just remain on "executing" forever. On a Mac that works fine, it's followed immediately with
Message received from the jamf daemon : {
...
...
I've tried manually killing the processes as mentioned above, but it makes no difference.
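For comparison, here's roughly how I've been checking a broken Mac against a working one (the selfservice_debug.log path is an assumption based on where my version of Self Service writes it; yours may differ):

# Show what follows the triggerPolicy request, if anything
# (log path is an assumption; check your Self Service version)
grep -A 5 "Binary request: triggerPolicy" ~/Library/Logs/JAMF/selfservice_debug.log | tail -20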
Hi all,
I'm doing some more testing, and I think my issue is related to how I do invitation-based enrolments (I have a script that uses an invitation code that doesn't expire). If I run "jamf removeFramework" and then re-enrol via a user-initiated enrolment (either via MDM profile or QuickAdd), my Self Service works as intended.
I'm unable to have DEP in my current environment, so this is my only other option for now.
I've still got a lot more testing to do, but if someone else who is having this issue could give it a go, it'd be interesting to see if it fixes it for you as well.
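If you want to try the same thing, this is roughly the sequence (the enrolment URL below is just a placeholder for your own Jamf Pro server):

# Tear down the existing framework
sudo jamf removeFramework

# Then re-enrol via user-initiated enrolment in a browser, e.g.
# https://your.jamf.server:8443/enroll (placeholder URL), or with a
# QuickAdd package downloaded from that same page.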
@benducklow I set up your script to run on a custom trigger, so if anyone on our team has issues we can just run:
sudo jamf policy -trigger fixselfservice
It's been working almost every time so far. Thanks so much.
Experiencing this issue as well, more recently, and killing the process doesn't seem to resolve it for us consistently. We are opening a ticket - will update if we get any more info.
I've had this issue for a few weeks, and I asked around in my Jamf 300 class and no one knew anything about it. I've run removeFramework, but same issue.
Hey all, I'm on Jamf 10.11.1 and just saw this bug for the first time. Running the script @benducklow provided fixed the problem on the machine.
This was on a DEP machine as well, so it looks like the type of enrollment doesn't make much difference in whether or not you see this bug. Hope this doesn't become widespread; we have about 100 machines to get done, and it's already taking more time with the "fun" that is DEP and not being able to image!
Update: We've gotten about five or six out, and only one has NOT needed the fix. Terrific, MORE steps for our provisioning process! Also, one laptop didn't want to cooperate; I ran jamf policy in Terminal, and it mentioned it was upgrading the jamf helper. On a new install? But then it worked! Bizarre. Oh well, at least there is a fix. :/
Yeah, this issue seems to have crept up again with the update to v10.11.1, and the 'fix' was not working. It may or may not have been due to a large cache deployment (Mojave installer package, ~6 GB) that was happening at the time, but support saw a rash of these come in in the days following the upgrade.
I had a 10.13 Mac do this today. removeFramework and re-enrolling didn't fix it the first time, so I rebooted after the removeFramework command and then enrolled, and that worked. I have a ticket open on this with Jamf; they wanted a bunch of logs and such, which I gathered before "fixing" it.
I guess we will see what they find. It seems pretty random, and the fix seems equally random. On some machines I've been able to just run the manage command; others have needed a full re-enrollment.
Keep us posted, @dpodgors, as well as others following this thread. I thought we had this tied to things happening in the background (i.e., a software download/cache in progress), but that doesn't seem to be the case 100% of the time...
Yeah, I was wondering if having Self Service deploy at enrollment complete was messing something up, so I turned that off and set it to install at check-in only; no change. Luckily, Ben's script has fixed this in all cases for us, with no restart needed so far. For those of you who've encountered this, does it usually stay fixed, or might it break again? I guess at least the jamf binary seems to be unaffected, so there are ways around it if it does. Still annoying though!
We're also getting reports of users unable to run policies through Self Service, getting stuck at "executing...". removeFramework and re-enrolling did not help.
Just adding a +1 here. I'm seeing it too. Waiting to hear back if the recommended script above worked to remedy it.
Possibly unrelated, but any script or installer that prompts for input (e.g., jamfHelper) can hang the jamf daemon, which also processes Self Service requests. We saw this all the time due to poor admin scripting practices. To combat that and future admin error, I've deployed a monitor that watches jamf daemon sub-processes and, if one has been running longer than X seconds (say, 6 hours), gives it a quick kill.
In a pinch, this allows us to at least keep machines reporting until we identify the policy that caused the issue. Any improvements are welcome, such as identifying jamf daemon children more reliably.
#!/bin/bash
# jamfalive.sh
# Daemonize and call on interval to keep jamf alive
# Example: /Library/LaunchDaemons/com.company.jamfalive.plist
# Time to Live - adjust for your environment
ttl=21600 # 6 Hours
# ttl=14400 # 4 Hours
# ttl=10800 # 3 Hours
# Some general health checks without touching JSS
chmod +rx /usr/local/jamf/bin/jamf
chmod 6755 /usr/local/jamf/bin/jamfAgent
rm /usr/local/bin/jamf; ln -s /usr/local/jamf/bin/jamf /usr/local/bin/jamf
if ! launchctl list|grep -q jamf.daemon; then launchctl load -w /Library/LaunchDaemons/com.jamfsoftware.jamf.daemon.plist; fi
# Exclude launchDaemon monitor PID
JAMF_DAEMON_PID=$(pgrep -f "jamf launchDaemon")
# PIDS to age and kill if hung.
# pgrep -x is for process name exact match
JAMF_BINARY_PIDS=($(pgrep -x "jamf"))
JAMF_HELPER_PIDS=($(pgrep -x "jamfHelper"))
PID_LIST=( "${JAMF_BINARY_PIDS[@]}" "${JAMF_HELPER_PIDS[@]}" )
for P in "${PID_LIST[@]}"
do
# skip the daemon process
if [ "$P" == "$JAMF_DAEMON_PID" ]; then continue; fi
# Check the age of process
now=$(date "+%s") # epoch of right now
max_age=$(( now - ttl ))
start_time_str=$(ps -o lstart= $P|sed -e 's/[[:space:]]*$//') # date is provided in textual format from PS
start_time=$(date -j -f "%c" "$start_time_str" "+%s") # convert date to epoch seconds
# Kill all processes older than ttl seconds: $max_age
if [ $start_time -lt $max_age ]
then
pname=$(basename "$(ps -o comm= -p $P)")
pfull=$(ps -p $P -o command=)
# Log the action - output location specified in launchd definition
echo "$(date "+%a %b %d %T") $(hostname -s) jamfalive - PID: $P ($pname) $start_time_str"
echo "$(date "+%a %b %d %T") $(hostname -s) jamfalive - $pfull"
# Kill the process
kill -9 $P
fi
done
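For completeness, here's roughly what the LaunchDaemon mentioned in the header comment looks like (the label, script path, interval, and log path are all placeholders; adjust for your environment):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Saved as /Library/LaunchDaemons/com.company.jamfalive.plist, owned by root:wheel, mode 644 -->
    <key>Label</key>
    <string>com.company.jamfalive</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/usr/local/bin/jamfalive.sh</string>
    </array>
    <!-- Run every 30 minutes; the script decides whether anything is old enough to kill -->
    <key>StartInterval</key>
    <integer>1800</integer>
    <!-- Where the script's echo output lands -->
    <key>StandardOutPath</key>
    <string>/var/log/jamfalive.log</string>
    <key>StandardErrorPath</key>
    <string>/var/log/jamfalive.log</string>
</dict>
</plist>

Load it once with sudo launchctl load -w /Library/LaunchDaemons/com.company.jamfalive.plist and it will run on that interval from then on.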
I found a solution this morning that worked for me.
Problem: Self Service apps stuck at “Executing”.
Solution: Download a new Self Service app from the JSS Self Service Settings and replace the old versions. Here’s why:
During original testing, I was using a 2013 Mac Pro and a Late 2013 iMac, and Self Service apps all installed properly. It wasn't until I tried Self Service on my MacBook Pro (13-inch, 2016, 4 Thunderbolt 3 Ports) that the problem showed up. I then set up three MacBook Pros (Retina, 13-inch, Early 2015) and had the same problem. I initially thought that it was something different in the newer Macs vs. the 2013 models. But what?
My next thought was that the problem Macs were all running macOS 10.14.5, while the JSS Mac was on 10.13 with JSS 10.11.1. So I updated the Jamf server (Mac Pro 2013) to 10.14.5 and JSS 10.12.0. This did not solve the problem.
Finally, I thought about everything that had been changed and asked, "What hasn't changed?" I immediately thought of the Self Service app itself: the one being used had been downloaded from the JSS before the server updates. I downloaded a new copy, installed it on the problem Macs, and the Self Service apps installed properly.
I'm very happy now.