All, after extensive testing, logging, and analysis, the longstanding, intermittent issues affecting the managed endpoints in my enterprise have been traced to how the policy decision engine in GlobalProtect handles traffic destined for our cloud instance.
To further explain:
For some time, executing jamf commands (primarily jamf policy) in Terminal has produced the following:
“An error occurred. There is no message.”
Subsequent runs would succeed. Nevertheless, log captures of “mdmclient” and “com.apple.ManagedClient” are replete with errors of every description. Correlating these with netstat output shows that jamf policy begins execution over the VPN tunnel (utunX)… and is then quickly denied and redirected to the physical interface (en0). That this redirection, however brief, lands during the initial TLS handshake hasn’t helped matters.
Further attempts at running jamf Terminal commands show that while most traffic appears to head out via the native interface, some is now allowed via the tunnel:
Socket-level inspection shows both GlobalProtect and the Jamf binary maintaining simultaneous HTTPS sessions to Jamf/AWS endpoints, but with different source addresses. GlobalProtect-owned connections appear to originate from the local LAN IP (192.168.x.x), while the Jamf process itself shows at least one active connection from the tunnel IP (10.x.x.x). This indicates that traffic handling is being mediated at the proxy/filter layer and is not reflected purely by route-table state. It also suggests Jamf traffic is not being cleanly or uniformly bypassed in practice, but is rather hitting an arbitrator in a policy-based decision engine.
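The socket-level split described above can be surfaced with a short filter over `netstat -an` output. This is a minimal sketch, not the capture tooling actually used: it assumes tunnel connections source from 10.x and LAN connections from 192.168.x, as observed here, and the function name is illustrative.

```shell
#!/bin/sh
# Sketch: group established HTTPS flows by source address so tunnel-sourced
# and LAN-sourced sessions to the same endpoints stand out side by side.
# Assumes `netstat -an`-style columns: proto recv-q send-q local foreign state
# (macOS appends the port to the address with a dot, e.g. 10.0.0.5.52144).
classify_flows() {
  awk '
    $6 == "ESTABLISHED" && $5 ~ /\.443$/ {
      if ($4 ~ /^10\./)            print "tunnel " $4 " -> " $5
      else if ($4 ~ /^192\.168\./) print "lan " $4 " -> " $5
      else                         print "other " $4 " -> " $5
    }'
}

# Live capture on macOS (not run here):
#   netstat -an -p tcp | classify_flows
```

Seeing the same foreign AWS address reached from both a 10.x and a 192.168.x source at once is the tell that path selection is happening per-flow at the policy layer, not per-destination in the route table.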
Key Findings
- jamf policy frequently fails on the first attempt with:
“An error occurred. There is no message.”
- The same command often succeeds immediately on the second attempt.
- route -n get cityofphoenix.jamfcloud.com has repeatedly reported utun0 as the outbound interface, indicating that Jamf traffic is not cleanly excluded from the tunnel by routing alone.
- Packet capture and socket inspection show Jamf-related traffic appearing on both:
- utun0 using the tunnel IP
- en0 using the native LAN IP
- GlobalProtect NetworkExtension logs show Jamf flows being intercepted and evaluated by the GP extension before final path selection.
- In at least one captured case, a Jamf TCP flow was:
- initially seen on utun0
- then accepted by the provider on en0
- GP logs also show initial flow rejection followed by later acceptance for Jamf-related traffic, which aligns with the observed “first attempt fails, second attempt works” pattern.
- This behavior is consistent with flow-divert / app-proxy / DNS proxy policy arbitration, not simple static route-based split tunneling.
- Production and test GP portals produce materially different route tables, indicating that portal/gateway policy is a major factor in the resulting client behavior.
- Jamf Cloud is AWS-backed and resolves to multiple IPs over time, so IP-based exception handling is inherently more brittle than domain-based handling.
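Given the “first attempt fails, second attempt works” pattern above, a client-side retry wrapper can paper over the symptom while the portal/gateway policy is corrected server-side. This is a generic POSIX sh sketch, not a recommendation specific to Jamf; the `jamf policy` usage at the end is illustrative only and assumes the jamf binary is on PATH.

```shell
#!/bin/sh
# Retry a command up to N times with a short pause between attempts,
# matching the observed first-fail/second-succeed behavior.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Illustrative usage:
#   retry 2 jamf policy
```

Note this only masks the failure mode; the underlying fix is still aligning the GP portal/gateway policy (ideally with domain-based exceptions, per the AWS finding above) so the first TLS handshake is not interrupted.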
