[MDMController ] - Error processing mdm request, returning 400
It just started today when I got in. The log is almost nothing but this error, across dozens of devices. At one point I thought I saw something suggesting it was related to communication with Apple's cloud services, but I'm not sure.
We have seen the same thing for 8 months now; it eventually brings our JSS interface to such a crawl that the web page times out. Tech support has not found or resolved this issue for us either.
Hope they do soon. We are having to restart Tomcat 2-3 times a day to keep it stable.
I am seeing this error as well. We have ~2000 devices and noticed it when it was redlining our MySQL DB's CPU utilization. All the queries were MDM-related, and the server seemed to be stuck in an aggressive loop. Restarting the JSS Tomcat stopped the behavior, but things would only stay normal for 1-6 hours before it spontaneously started spamming 400 errors and redlining the database server again.
Not great, and it looks like this issue has been ongoing since 2014.
Maybe VPP is just screwed up right now. That would explain why my device-based installs aren't working.
IIRC they were going to integrate VPP into ASM this summer, which will likely cause a bump or two along the way.
Same issue here... What I see from my side is that the "Update Inventory" command always gets stuck pending, and every command queued after it stays pending as well unless I cancel the "Update Inventory" command.
Guys, we're on 9.100.0 and have seen the issue since August-ish (when we really started looking, due to performance issues). We rolled out laptops to students in the second week and had DB performance issues, etc., as a result of the error and some network changes. After ironing out several network bugs (new public IPs, etc.), we still noticed a sizable lag, especially when we deployed software packages.
Long story kind of short: we've cleaned out the logs and our DB is back down to a reasonable size, but we've had to work at it. The log file ballooned from ~650 MB to 7.92 GB in about a month...
Good news is the iOS side seems OK, but commands are slow to respond--app rollouts/installs are fine, while inventory updates, passcode clears, etc. take some time. After working to clean out the logs/DB it's much better, but there's still a larger delay than we'd previously seen. We have an active open ticket at the moment to see how we can resolve it.
We are having a similar issue. However, I'm not sure when it began, as our infosec team only recently brought it to our attention. We recently upgraded the JSS to 9.101.0xxxx.
The LTM shows the error as: <142>Sep 29 07:49:58 lv-nap-ltm-1 httpd: <FOO SRC IP ADDRESS 1> <BAR DST IP ADDRESS> [29/Sep/2017:07:49:58 -0700] "PUT /mdm/ServerURL HTTP/1.1" 400 8443 -
<142>Sep 29 07:49:57 lv-nap-ltm-1 httpd: <FOO SRC IP ADDRESS 1> <BAR DST IP ADDRESS> [29/Sep/2017:07:49:57 -0700] "PUT /teesnap/mdm/ServerURL HTTP/1.1" 400 8443
Tomcat logs it as:
[ERROR] [43-exec-218] [MDMController ] - Error processing mdm request, returning 400. Device: 691, CommandUUID: e0b3cf38-xxxxxx
[ERROR] [43-exec-170] [MDMController ] - Error processing mdm request, returning 400. Device: 1140, CommandUUID: f0b322d1
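If anyone wants to see which devices are generating the bulk of these, here is a minimal sketch that tallies the 400 errors per device ID, assuming your log lines match the format in the excerpts above. The function name and the log path in the usage example are my own; the JSS log location varies by platform and install, so adjust it for yours.

```shell
#!/bin/sh
# Tally per-device counts of the "Error processing mdm request, returning 400"
# line from a JSS/Tomcat server log. Assumes lines end in
# "... Device: <id>, CommandUUID: <uuid>" as in the excerpts above.
tally_mdm_400() {
    grep 'Error processing mdm request, returning 400' "$1" \
        | sed -n 's/.*Device: \([^,]*\),.*/\1/p' \
        | sort | uniq -c | sort -rn
}
```

Usage would be something like `tally_mdm_400 /path/to/JAMFSoftwareServer.log` (path is an assumption); the busiest device IDs come out on top, which at least tells you where to start looking.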
You guys are lucky if you get a device ID or any details at all; my logs flood with this crap:
[MDMController ] - Error processing mdm request, returning 400. Device: Null, CommandUUID: mraNull
It only happens when students are on prem, though... I have a case open with Jamf but I'm not getting anywhere...