My JSS server runs on Ubuntu and sits at 100% CPU whenever students are on site. I imagine it's because of the number of Macs on campus. I've been back and forth with JAMF support a few times, but the problem never gets resolved. I'm wondering if it would help to move MySQL to a separate server. The top processes on my JSS server are always a slew of Tomcat processes, but SQL does pop up here and there towards the top.
Has anyone ever done this and noticed a performance improvement? I saw a few years-old threads talking about doing it but nothing about results.
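For what it's worth, here's how I've been checking which service is actually eating the CPU. This is just standard Ubuntu `ps`, a rough sketch, and your process names may differ depending on how Tomcat and MySQL were installed:

```shell
# Snapshot of the top CPU consumers -- look for java (Tomcat) vs mysqld
ps aux --sort=-%cpu | head -n 10

# Summed CPU for all java (Tomcat) processes vs the MySQL daemon
ps -C java -o %cpu= | awk '{s+=$1} END {print "tomcat/java total: " s+0 "%"}'
ps -C mysqld -o %cpu= | awk '{s+=$1} END {print "mysqld total: " s+0 "%"}'
```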
Done correctly it shouldn't make performance any worse, especially if your servers are close together. But if your issue seems to be load during busy times, the best solution may be to add more management points.
How many total clients on both iOS and Macs are you managing?
In short, separating the SQL and Tomcat load is a good thing: taking one of the two off the original box lets the other have the resources to itself...especially if you are in the growing phase of an Apple product rollout.
The JSS actually scales fairly well in that if you are overloading one Tomcat server, it is fairly easy to spin up another one or two and put them behind load balancers. How you host boils down to hardware/VMs available and how you wish to utilize them. I HIGHLY recommend taking the CJA course if at all possible. They teach you scaling and backend very well and they also teach solid load planning.
I'll give you my school district as an example...we manage 4500 iPads and 400 Macs. When I realized we were overloading our single server, we scaled first to a VM and finally to a cluster of servers. Basically I have one VM that hosts MySQL and 4 VMs that host Tomcat (3 behind a load balancer and the 4th a purpose-driven Tomcat instance to integrate with Cisco ISE). Finally, I quietly tucked a 5th Tomcat instance onto the MySQL VM itself (not to respond to any clients, but to give me an admin console to control the cluster with). Other than the VMs used (we already had some spare resources from retiring a few servers), this config did not cost us a single dime and offers a bit more resiliency. I won't say redundancy, as the MySQL install is still a single point of failure, but it's more resilient if one of the VMs goes down. 5 VMs is probably overkill, but we should grow into that nicely and have this last 5 years without having to do a horrible amount of scaling.
Hope this helps,
Hmm, I wasn't aware I could use multiple front ends. I don't have the budget for a load balancer, so I'll have to do some research to figure out my options, but that's a good suggestion, thanks.
Currently we have ~1900 Macs and ~300 iPads, so maybe multiple Tomcat servers is my best option.
I'd love to take the CJA course but I don't have the budget for that either. :)
In CJA class they also teach how to do it using a free, open-source load balancer called Pound. I've heard they're teaching a different one now, but Pound worked fairly well in class.
You have to be careful doing multiple front ends though...the clients still see the original single-server URL (example: jss.yourorg.org), but your front ends might hold addresses such as jssbackend1.yourorg.org, jssbackend2.yourorg.org, etc. You could even get away with not assigning DNS names and just using IP addresses, but that's no fun. The load balancer holds the magic there: basically I assign "jss.yourorg.org" to the load balancer, and it forwards traffic to one of the backend servers depending on how you set that up.
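To make that concrete, here's roughly what a minimal Pound config for that layout looks like. This is a sketch only: the IPs and cert path are made up, and it assumes the Tomcat back ends serve plain HTTP on 8080 with Pound terminating SSL in front of them.

```
# /etc/pound/pound.cfg -- sketch only; IPs and cert path are hypothetical
ListenHTTPS
    Address 0.0.0.0          # jss.yourorg.org points at this box
    Port    8443
    Cert    "/etc/pound/jss.pem"
End

Service
    # Round-robins across the Tomcat front ends
    BackEnd
        Address 10.0.0.11    # jssbackend1.yourorg.org
        Port    8080
    End
    BackEnd
        Address 10.0.0.12    # jssbackend2.yourorg.org
        Port    8080
    End
End
```

Adding capacity is then just another BackEnd stanza and a reload.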
While the class does have a high price tag of $2500, it might be worth investing in if you are having load issues...I would make sure the boss understands that there actually are load issues and discuss something to the effect of, "We both agree we are having load issues; the choices I can see are to send me to class for $2500 plus travel, or hire a consultant at a higher cost."
As I'm writing this, I'm reading another guy's follow-up...you could do round-robin DNS as well; that is one way of spreading the load. You would need to set up clustering on the JSS itself and work with your TAM if that's how you choose to solve this. But quite honestly, figure out how to get a $2500 budget for that class...you learn $5000-$10000 worth of lessons in scaling the JSS professionally. I went into the class not knowing what to do about our load issues; before I left, I had a pretty good plan on how to solve them that only cost us some extra VMs. In the case of the load balancer, I'm using a software one (KEMP) that we already use for our PowerSchool cluster, so it didn't really cost anything extra because we already owned the equipment/software. The only costs we had were the class itself and the travel budget.
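For reference, round-robin DNS is just multiple A records for the same name; most resolvers rotate the order they hand back. A zone-file fragment would look something like this (the IPs are made up):

```
; BIND zone fragment sketch -- hypothetical IPs
jss    IN  A  10.0.0.11
jss    IN  A  10.0.0.12
jss    IN  A  10.0.0.13
```

Keep in mind DNS has no idea whether a back end is actually up, which is one reason a real load balancer is the nicer option.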
In the end, I can share more of the specifics we did if you would like more info. I might have to ask you questions regarding hardware and access to stuff, but our new cluster definitely kicked load issues out of the equation and provides a way to scale if we need to add more capacity.
You can move the DB off the same server that the web app sits on. That's easy to do without clustering. It might be even easier to move the web app to a different server and point the FQDN at it. You will want to reach out to your JAMF Buddy, or whatever they are these days, for help with the DB if you connect from a different IP.
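If you do split them, the gist on the MySQL side is letting the JSS's database user connect from the web app box instead of localhost. A rough sketch, assuming the default `jamfsoftware` database and user names and a made-up Tomcat IP, so check your own setup first:

```shell
# On the MySQL server; 10.0.0.11 is a hypothetical Tomcat host IP.
# Also make sure bind-address in my.cnf isn't limited to 127.0.0.1.
mysql -u root -p <<'SQL'
CREATE USER 'jamfsoftware'@'10.0.0.11' IDENTIFIED BY 'changeme';
GRANT ALL ON jamfsoftware.* TO 'jamfsoftware'@'10.0.0.11';
FLUSH PRIVILEGES;
SQL
```

Then point the JSS web app at the new database host (its DB connection settings live in DataBase.xml under the web app) and restart Tomcat.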
We manage about 9500 iPads and 1200 MacBooks. The FQDN points to an HA pair of F5s that balance between 3-4 web apps on VMs, depending on what's happening, and then to a separate DB server running on its own pizza box full of SSDs. I don't monkey with the web app settings much; I keep them pretty vanilla. We're always able to spin up more if needed. This seems to really work well for us.
Friendly suggestion: check your smart groups on both the mobile devices and computers tabs in the JSS web GUI. I would remove anything that isn't mission critical or actively being used. Convert your smart groups to saved searches when possible.
We had an "iPad jail" configuration profile that would get scoped to students if they had apps that weren't in the app catalog. So, to see if an app wasn't in the catalog, it had to compare against our list of 100+ apps, apparently all the time, maybe not just during inventory updates. Long story short, we moved to app whitelisting and were able to remove that smart group, which probably accounted for a ~30-40% drop in average CPU utilization. Our JSS now averages 3-5% utilization during the day, while all users are on site and using their devices.
There's a white paper out on Jamf Nation someplace that has recommendations and a formula for the infrastructure needed based on client numbers and types. I will tell you from my experience that iPads are far more Tomcat- and SQL-intensive than OS X devices ever could be. We went to a cluster about 3-4 years ago after being a one-box shop for the prior 5-6 years. I still run my Tomcat and SQL boxes on the original Xserves with 32GB RAM, most of that dedicated to Tomcat on the master, and have 3 Mac minis running the web apps behind the balancer, plus a DMZ box that is a Windows 2012 server. We are at about 7500 iPads and 7000 Macs.