Weirdness with 10.11.1 Ubuntu update - anybody else?

luisgiraldo
New Contributor II

It seems there may be some issue with the 10.11.1 Linux installer. On two servers where I applied it today, it wiped out the log path settings, and in both cases it couldn't reconnect to the database after the upgrade, requiring me to re-enter the MySQL password on the web app start page. I wonder if the Linux installer build was mixed up somehow...
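For anyone hitting the same thing: the database connection settings live in DataBase.xml inside the webapp, so grabbing a copy before running the installer makes it easy to compare and restore afterwards. The path below is the default Linux install location as far as I know; adjust it if your install differs:

    # back up the DB connection config before running the 10.11.1 installer
    sudo cp /usr/local/jss/tomcat/webapps/ROOT/WEB-INF/xml/DataBase.xml ~/DataBase.xml.pre-10.11.1

    # after the upgrade, see whether the installer wiped the settings
    sudo diff ~/DataBase.xml.pre-10.11.1 /usr/local/jss/tomcat/webapps/ROOT/WEB-INF/xml/DataBase.xml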

In addition to that, I seem to have a weird issue with the jamf-pro CLI tools. The "jamf-pro server stop" and "jamf-pro server start" commands don't actually seem to do anything, although oddly, "jamf-pro server restart" DOES work and bounces Tomcat as expected.
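To rule out the CLI itself, I've been checking the Tomcat service directly through systemd instead of the jamf-pro wrapper. The unit name below is what I see on my Ubuntu boxes (jamf.tomcat8); yours may be named differently, so treat it as an example:

    # see what systemd thinks the service state is
    systemctl status jamf.tomcat8

    # stop/start outside of the jamf-pro CLI for comparison
    sudo systemctl stop jamf.tomcat8
    sudo systemctl start jamf.tomcat8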

Anybody else seeing any weird issues with the 10.11.1 installer?

4 REPLIES

willpolley
New Contributor III

Saw the exact same issue with the install process.

Additionally, I had to reboot the box for jamf pro to be accessible this morning.

Ubuntu 18.04.2 LTS

russeller
Contributor III

I would double-check what your Tomcat memory is set to, just in case: https://www.jamf.com/jamf-nation/articles/139/allocating-additional-memory-to-tomcat
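If it helps, a quick way to see what the running instance is actually using (rather than what the config says) is to pull the -Xms/-Xmx flags off the java process:

    # show the heap flags Tomcat was actually launched with
    ps -o args= -C java | tr ' ' '\n' | grep -E '^-Xm[sx]'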

cstout
Contributor III

@luisgiraldo, I'm experiencing heavy RAM utilization after the 10.11.1 update on Ubuntu 18.04 LTS. Heavy to the point of the server becoming inaccessible. I've had to restart the VM each day since the upgrade. Hoping for a fix soon that doesn't involve allocating more RAM per VM.
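In case anyone else wants to confirm it's the Tomcat JVM eating the memory and not something else on the box, this is what I've been running (standard Linux tools, nothing Jamf-specific):

    # overall memory picture
    free -h

    # top memory consumers by resident set size
    ps -eo pid,rss,comm --sort=-rss | head -n 5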

Also, I've seen and reported to Jamf unusual behavior with stopping/starting Tomcat where the commands will run and report success, but the service never actually restarts. For example, I can stop Tomcat and it will confirm the service is stopped, but the page remains accessible. :-)
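The quickest check I've found for whether Tomcat really stopped is to look at the web port rather than trust the CLI output. 8443 is the default Jamf Pro port; swap it for whatever yours uses:

    # is anything still listening on the Jamf Pro port?
    sudo ss -tlnp | grep 8443

    # does the web app still answer? (-k because of the self-signed cert on my box)
    curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/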

howie_isaacks
Valued Contributor II

I had problems with all but one of my Jamf Pro servers running on Ubuntu when I ran this upgrade. I got them all resolved after working with Jamf Support for a couple of hours. My issue was that the Jamf Pro installer was not moving my config files back where they belong after the upgrade process completed. One server having this problem could be considered a one-off issue, but when I have a problem with three servers, that's a big problem.
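Until there's a fix, I've started snapshotting the config directory before running the installer so I can put things back myself if it happens again. The paths below are the default Linux locations as far as I know; double-check them against your own install before relying on this:

    # archive the webapp config (DataBase.xml, log4j settings, etc.) before upgrading
    sudo tar czf ~/jamfpro-config-pre-upgrade.tar.gz \
        /usr/local/jss/tomcat/webapps/ROOT/WEB-INF/xml \
        /usr/local/jss/tomcat/webapps/ROOT/WEB-INF/classes/log4j.properties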