Posted on 10-13-2016 06:20 AM
I have an LDAP server with an OTP back end, and my admin users authenticate via this server as LDAP users. Up to version 9.93 this worked perfectly.
Something changed in 9.96 that broke this. What I've been able to determine via logs and tcpdumps is that the JSS binds to LDAP first using the admin binddn to search the database, no problem so far. Then it binds as the userdn for authentication and gets a success response, still no problem. Then, for some reason, it tries to bind again as the userdn, but since I'm using OTP, that password is no longer valid and authentication fails. I have a couple of dozen other LDAP clients authenticating the same way, including Linux with pam_ldap, vCenter, Brocade switches, etc. None of these have any issues. In every one of those cases, the client stops trying to authenticate after it receives the success response and logs the user in.
I'll point out that if I switch to static LDAP passwords it works, but only because the repeated authentication attempts can all succeed when the password never changes.
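In case it helps anyone see why OTP exposes this where static passwords hide it, here's a toy simulation (my own illustration, nothing to do with JSS internals): an OTP back end invalidates each password after one successful bind, so any client that re-binds with the same credentials fails on the second attempt, while a static password keeps working no matter how many extra binds happen.

```python
class FakeOtpBackend:
    """Each one-time password is valid for exactly one successful bind."""
    def __init__(self, current_otp):
        self._otp = current_otp

    def bind(self, password):
        if password == self._otp:
            self._otp = None  # consumed; the next valid code would differ
            return "success"
        return "invalidCredentials"


class FakeStaticBackend:
    """A conventional directory: the password never changes."""
    def __init__(self, password):
        self._password = password

    def bind(self, password):
        return "success" if password == self._password else "invalidCredentials"


# First bind succeeds, the redundant second bind fails with OTP...
otp = FakeOtpBackend("123456")
print(otp.bind("123456"))  # success
print(otp.bind("123456"))  # invalidCredentials

# ...but a static password masks the extra bind entirely.
static = FakeStaticBackend("hunter2")
print(static.bind("hunter2"))  # success
print(static.bind("hunter2"))  # success
```

Which is exactly the pattern I'm seeing in the tcpdumps: the extra bind is harmless for everyone with static passwords, and fatal for me.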
I've been back and forth with tech support on this. Their position is that they don't officially support OTP, and since the issue only presents itself if I implement an OTP backend, it's therefore my problem.
My position, of course, is that OTP is irrelevant. From the client's perspective it's just plain LDAP. The OTP should be, and is, invisible to every other LDAP client I've set up. The fact that the JSS, for some reason, continues to try to authenticate after receiving a success response is non-standard, or a bug. The fact that it happens to work for most people doesn't make it any less wrong. My use of OTP should be considered a fortunate accident that revealed an issue in their implementation that would otherwise never have been discovered.
I'm mostly venting here. Support has already decided this isn't worth their time. If anyone has any ideas I'd love to hear them. I'm not an LDAP or Tomcat expert. Perhaps there's some obscure setting somewhere that's forcing this behaviour (BTW we already tried disabling connection pooling).
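For completeness, this is roughly what we tried for the pooling test, assuming the JSS's bundled Tomcat picks up CATALINA_OPTS from a setenv script and does its LDAP lookups through JNDI (both assumptions on my part; your install layout may differ). `com.sun.jndi.ldap.connect.pool` is the standard JNDI switch for LDAP connection pooling.

```shell
# Hypothetical addition to Tomcat's bin/setenv.sh; whether the JSS
# honours this depends on how its Tomcat is launched. Setting the JNDI
# pooling property to false disables LDAP connection pooling.
export CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.jndi.ldap.connect.pool=false"
```

It made no difference for us, which makes me suspect the extra bind is in the JSS's own code rather than a side effect of pooled connections being revalidated.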