Posted on 06-25-2013 05:09 PM
Within the last couple of months, our JSS has slowed down immensely.
Saving in Casper Admin can take up to 10 seconds.
On the web end, Management > Policies lags to load, creating a policy is slow, and saving a policy is very slow. Is there anything in particular that might be causing this?
Posted on 06-26-2013 11:16 AM
Is your MySQL database local to the JSS?
Posted on 06-26-2013 02:38 PM
We have it located on a separate server.
Posted on 07-01-2013 09:26 AM
How big is your database? If you're retaining a lot of logs it can bog things down.
Posted on 07-01-2013 10:43 AM
Set your log clearing to everything older than one week, then purge your existing logs one by one. We found if you try doing that all in one go, you'll busy out your database and that'll stop your JSS flat. Alternatively do it all out of hours.
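If you do end up purging at the database level rather than through the JSS, the same "don't do it all in one go" idea applies: deleting in small batches keeps the table from being locked for long stretches. A rough sketch of that approach (the `log_actions` table name comes from this thread, but the `jamfsoftware` database name, the column name, the 7-day cutoff, and the batch size are all assumptions; verify against your own schema before running anything):

```shell
# Sketch only: batched deletes instead of one huge DELETE, so the table is
# never locked for long. Table/column names and cutoff are assumptions.
cat > purge-logs.sql <<'EOF'
-- Repeat this statement until it affects 0 rows:
DELETE FROM log_actions
 WHERE date_entered_epoch < (UNIX_TIMESTAMP(NOW() - INTERVAL 7 DAY) * 1000)
 LIMIT 10000;
EOF

# Run out of hours, e.g.: mysql -u root -p jamfsoftware < purge-logs.sql
```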
Posted on 07-16-2014 06:34 AM
I've noticed this issue lately. For a while it seemed like everything worked pretty quickly; now doing something like viewing the policy logs for a system loads very slowly. My log retention settings are such that no group of logs is kept more than a week. Flushing occurs at night. The database is on the same server as the JSS. We only have 41 systems in our JSS at the moment.
Is there anything I can do to boost performance? The server has 16 GB of RAM and a gigabit network connection. It is a Windows 2008 R2 server.
Thanks,
Aaron.
Posted on 07-16-2014 07:48 AM
This is something we'd want to get a case going with your Technical Account Manager for, as we'll need to take a look at a full JSS summary, which isn't something we'd want to post on a public forum since it contains quite a bit of information about your environment.
Generally, slowdowns that are related to the JSS itself, and not to failing hardware or a wonky OS (low disk space is the most common non-JSS cause of slowness, especially on Windows), are caused by one or more of the following:
- Huge tables. Most common offenders: applications, application_usage_logs, log_actions, plugins, fonts, unixapps (8.x only), unix_executables (8.x only). Size can be managed by making sure log flushing is turned on and runs on a regular basis, as well as trimming the number of policies that force an immediate inventory update. In general, we don't want a policy to update inventory immediately unless it's absolutely critical that the update happen right away and cannot wait until the computer normally checks in.
- A huge database. This ties into the above; as an off-the-top-of-my-head example, if we've got 500 devices and a 2GB database, the database is considered abnormally large.
- MySQL settings being at the defaults, which are too small for any environment. We want to usually see at least 301 connections and a max_allowed_packet size of at least 512M. For larger environments (1000+) we tend to go with 601 MySQL connections.
- Tomcat settings being a bit too narrow. For 301 MySQL connections, we'd want Tomcat's maxThreads at 752 (threads should be roughly 2.5x the number of MySQL connections). The bare minimum for Tomcat's maximum memory is 2GB, ideally 4GB; for larger environments (over 1000 devices), 6-8GB minimum. We usually like to allocate roughly half the server's available RAM to Tomcat's upper limit.
- If we're still on the 9.2x series, there are defects that will cause specific slow behavior, notably loading policy logs and history in Inventory records; this is usually tied into one or more of the above as well.
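Pulling the MySQL and Tomcat numbers above together, a tuning sketch might look like this (file names and locations are illustrative only; where these settings actually live depends on your platform and installer):

```shell
# MySQL side: raise connections and packet size from the defaults.
# (In practice these lines go into my.cnf / my.ini under [mysqld].)
cat > jss-mysql-tuning.cnf <<'EOF'
[mysqld]
max_connections    = 301
max_allowed_packet = 512M
EOF

# Tomcat side: maxThreads of ~2.5x the MySQL connections (301 -> 752) is set
# on the connector in server.xml; the heap is set via CATALINA_OPTS, 2GB bare
# minimum and ideally 4GB (roughly half the server's RAM as the upper limit).
cat > setenv.sh <<'EOF'
export CATALINA_OPTS="-Xms2G -Xmx4G"
EOF
```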
However, before giving advice on what to check or change, we'd want to have a fresh copy of a full JSS Summary and get a case going with your Technical Account Manager.
Thanks!
Amanda Wulff
JAMF Software Support
Posted on 07-16-2014 07:55 AM
Hi Amanda,
Thank you for the informative response. I'll be happy to go through and check what you have mentioned. I had a Health Check not too long ago, and all was good.
I'd like to have a look at things myself and see if I can find out what is going on. The two things I have noticed slow down on are 1) bringing up the Policy Logs for a system and 2) when I go back to the Policies page, it will hang for a bit and then re-render and become responsive.
It seems like everything else is normal.
I will send a report to my account manager and reference this thread.
Thanks,
Aaron.
Posted on 07-16-2014 08:04 AM
Sounds good!
I took a peek at the summary from back in April and immediately noticed a couple of red flags (most notably under 100 devices, a 1.4GB database, and 77 policies updating inventory); hopefully those were taken care of in the previous health check, though, and I'm just looking at an old, inaccurate snapshot.
That new summary will give a more accurate picture, I'm sure.
Your TAM sits two desks down from me, so I'll catch him and bring him up to speed.
Thanks!
Amanda Wulff
JAMF Software Support
Posted on 07-16-2014 08:20 AM
Amanda,
I guess we should talk, because I was not aware anything was wrong or raising red flags. I'm a bit embarrassed now. I use a lot of recons because so many of my policies are based on smart group membership.
I'm going to send the report in now.
Posted on 07-16-2014 08:59 AM
It's not too bad; I see a lot of customers do that, since nobody really thinks about what a combined potential thousands of inventory updates can do to table sizes over time.
I think the most I've seen before was 467 policies that updated inventory, and most were set to run at regular check in or at login/logout. That made for some MASSIVE (like close to 20GB) applications table sizes as, with the number of devices they had, it was running between 82000 and 115000 inventory updates every day!
Each individual update is tiny, but when there are thousands and thousands going on they can build up pretty fast, even with log flushing turned on.
The JSS only uses the most recent report for its scoping anyway, and each update appends to the inventory records; it doesn't overwrite them or just add the new data, it appends a full inventory record. That's why we tend to recommend that, unless it's critical that an inventory update happen immediately (and sometimes it is), we just let the computer submit its inventory report during its regular check-in. It helps cut down on unnecessary inventory records.
Funnily enough, that was something I didn't know until a few months after I started working in support; I'd just assumed it only updated the records if something changed, AND that it only added the changes. Turns out I was completely wrong about that, but it sure did explain how some of those tables grew so large. :)
I did give your TAM an update as to what was going on and he let me know your summary came in; looked through it and sent recommendations back to him, so he should be reaching out soon if he hasn't already.
Thanks!
Amanda Wulff
JAMF Software Support
Posted on 07-16-2014 09:02 AM
Excellent. Thank you for the information. I will keep that in mind when I design policies. And thank you for reviewing my report and talking with my TAM.
Thanks!
Aaron.
Posted on 07-16-2014 09:17 AM
Hi @amanda.wulff - Thanks for the info in your last post. Good stuff.
I have to admit I wasn't aware that the JSS was adding an entirely new record at each recon collection. I had assumed for years that it simply updated the existing record with the new information.
Can you shed any light on why it works this way? Was this a conscious design decision or simply a limitation in MySQL and the inventory engine?
I guess I don't understand the reason for not just updating the existing record. Is it because of the hardware/software change reports that also get stored? Or some other reason?
I'm also asking because in using the API I see that I can "update" an existing record with very specific bits of data with a PUT command. I'm not collecting all of the data, just one very specific item, dropping that into an XML file and uploading it to the record. The specific item gets updated on the JSS and all older data remains, so it's clearly updating in place and not requiring a new full record. I had assumed that the recon process did something similar, though not quite as crude as what I sometimes do in my scripts with curl commands.
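For anyone following along, the kind of targeted API update described above can be sketched like this (the host, credentials, computer ID, and attribute name are all hypothetical placeholders, not values from this thread):

```shell
# Build a minimal XML payload containing only the field to change.
# "LastAuditDate" is a made-up extension attribute for illustration.
cat > payload.xml <<'EOF'
<computer>
  <extension_attributes>
    <extension_attribute>
      <name>LastAuditDate</name>
      <value>2014-07-16</value>
    </extension_attribute>
  </extension_attributes>
</computer>
EOF

# A PUT replaces only the fields present in the XML; the rest of the record
# is left untouched, unlike a full recon. (Shown as a comment, since the
# JSS host here is hypothetical.)
#   curl -sk -u apiuser:apipass -H "Content-Type: text/xml" \
#        -X PUT -d @payload.xml \
#        https://jss.example.com:8443/JSSResource/computers/id/42
```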
I'd love to understand this better, because I'm not sure if it has ever been explained to us in this way and would probably change some of our methods to avoid the frequent slowness issues we experience today.
Thanks!
Posted on 07-16-2014 10:27 AM
I'm not actually sure about the why; I've worked at various places in the past that used MySQL or SQL databases in one way or another and have seen it set up both ways (a full append on every inventory collection vs. only adding data to a record when something has changed or is new), but never really thought about it beyond, "Oh, okay, that's just the way this particular program was designed to do it."
In the JSS, the old records are still there and can be accessed through reports with date ranges; the JSS just appears to use the most recent inventory report for most of the scoping people do, rather than looking to past reports for that.
As you've seen with the API, you can update certain parts of a record. It's just the recon that does the full record, based on what we have turned on for inventory collection in the JSS preferences, which is what runs when we have Update Inventory checked in a policy (or just run a recon directly).
When I started, they used the 'stacks of cups' analogy to explain it to us during training; I prefer to take the more amusing look of Recon being some guy that just comes into your office/to your desk every few minutes and drops a stack of papers on your desk and says, "Yeah, I'll be back with another set in about thirty minutes, give or take, or whenever I get more."
After a while, your whole office will be overrun with papers, mostly full of similar (or the same) data if you don't clean it out now and again. That Recon guy has no concept of how much clutter he's creating on your desk and in your office; it's like he just walks in, drops his stuff off, and leaves again.
On top of that, if you do need to look at the full history or something from a past time frame, now you've got to dig through the huge stacks of papers that guy left on your desk.
It's a more entertaining mental image than cups being stacked all over the place. :)
That's also why we tend to see the delay in pulling up policy history logs or inventory records when the applications or log_actions (the two main offenders; log_actions is where the policy log records live) tables start to creep up in size. Once we start getting over 25 million lines in a table, it can take a little time to sort through it all.
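A quick way to check whether those tables are the ones ballooning is to ask MySQL directly. A sketch, assuming the default `jamfsoftware` database name (yours may differ):

```shell
# Query the ten largest tables in the JSS database, by data + index size.
cat > table-sizes.sql <<'EOF'
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
  FROM information_schema.tables
 WHERE table_schema = 'jamfsoftware'
 ORDER BY (data_length + index_length) DESC
 LIMIT 10;
EOF

# Run as: mysql -u root -p < table-sizes.sql
```

If applications or log_actions sit at the top with row counts in the millions, that points back at log flushing and inventory-update frequency.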
Having a decent server with a good amount of RAM and having your Tomcat threads and MySQL settings opened up a bit can help, but at some point it'll just be a matter of it flat out being a lot of data to sift through and we'll see noticeable slowness when accessing parts of the JSS that hit those large tables.
In some of the older 9.2x versions we had a couple of defects that caused it as well, due to how the queries were structured, but those have since been fixed; I believe they were taken care of by 9.23.
I can't find your account in our system based on the JN username, but if you want someone to take a look at a full JSS summary to see if anything raises any red flags for slowness in there, you can send one in to your Technical Account Manager and create a case asking if they'd just give it a quick once over to look for any potential problems.
If you haven't had a JSS health check recently, it may be worth it to ask about that as well.
Turns out I've been doing them as part of my routine troubleshooting and cleanup for about 9 months now.
Thanks!
Amanda Wulff
JAMF Software Support