Odd behavior from Jamf recon on PCs

todd_mcdaniel
New Contributor III

We're seeing some erratic behavior from jamf recon running on Windows PCs, and we're wondering if anyone else has seen it.

Note: We're using the binaries from v9.93 and running on Windows 7.

Oddity #1: When I manually run recon on a PC, the generated report looks fine. But when viewed in the JSS, the hardware panel is essentially garbled; the other panels appear to be populated correctly.


Again, the data is intact when the XML is examined, so it seems like the JSS is mishandling it somewhere.

Oddity #2: After manually running recon in the same scenario as above, the JSS will no longer find the machine in any search. You can manually edit the http address to browse to the machine's specific JSS ID and it will appear, but searching for its computer name, etc., returns nothing.

However, if the machine's computer record is modified through the API (with Tugboat), it will again be visible in searches!
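For anyone who wants to confirm the record still exists even when search can't find it, the Classic API lets you fetch a computer directly by its JSS ID. A minimal sketch, assuming the standard `/JSSResource/computers/id/{id}` endpoint with basic auth; the hostname, credentials, and ID below are placeholders:

```python
import base64
import urllib.request


def computer_by_id_url(jss_base, jss_id):
    """Build the Classic API URL for a computer record by JSS ID."""
    return f"{jss_base}/JSSResource/computers/id/{jss_id}"


def fetch_computer_xml(jss_base, jss_id, user, password):
    """Fetch the computer record XML; returns the raw response body."""
    req = urllib.request.Request(computer_by_id_url(jss_base, jss_id))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Example (placeholder host, ID, and credentials):
# xml = fetch_computer_xml("https://jss.example.com:8443", 123, "apiuser", "secret")
```

If the record comes back here but a name search turns up nothing, the data is present and the search index/lookup is what's broken.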


bpavlov
Honored Contributor

There’s a way to run recon on Windows in verbose mode so that you can get a log and see where it might be failing.
I would also make sure that the recon binary on the Windows client is the same version as the JSS. It wasn’t clear whether 9.93 is also your JSS version.

todd_mcdaniel
New Contributor III

Same version.

Will check the verbose output.

Thanks!

todd_mcdaniel
New Contributor III

We discovered that you can't actually put 41 characters into a field sized for only 40. The optical drive on some of our PCs was reporting "HL-DT-ST BD-RE WH14NS40 SCSI CdRom Device", which, if you look closely, is 41 characters. In the JSS schema, the field is limited to 40 characters. This is what was causing the garbage data we were seeing.

This is what the error looks like: `Data too long for column 'optical_drive' at row 1`. It's visible at the default level of logging.
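The overflow is easy to confirm from the string itself; the model name reported by the drive is exactly one character over the column width:

```python
# Optical drive model string reported by the affected PCs
model = "HL-DT-ST BD-RE WH14NS40 SCSI CdRom Device"
COLUMN_WIDTH = 40  # VARCHAR(40) limit implied by the "Data too long" error

print(len(model))                  # 41 -- one character too long
print(len(model) > COLUMN_WIDTH)   # True
```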

Temporarily fixing the error involves making changes to the MySQL schema itself. I say temporarily because a JSS update will reset the schema. I'm told a PI will be created to address this issue in a future version.
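For anyone attempting the same workaround, the change amounts to widening the VARCHAR column. A sketch, with the caveats that the table name (`computers_denormalized`) and default database name (`jamfsoftware`) are assumptions you should verify against your own schema, and that the next JSS upgrade will undo the change:

```python
import subprocess


def widen_column_sql(table, column, new_width):
    """Build an ALTER statement widening a VARCHAR column."""
    return f"ALTER TABLE {table} MODIFY {column} VARCHAR({new_width});"


def apply(sql, database="jamfsoftware"):
    """Pipe the statement into the mysql CLI (prompts for the password)."""
    subprocess.run(["mysql", "-u", "root", "-p", database],
                   input=sql.encode(), check=True)


# Assumed table/column names -- confirm before running, and back up first.
sql = widen_column_sql("computers_denormalized", "optical_drive", 255)
# apply(sql)  # uncomment to run against a live JSS database
```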