VM JSS vs Physical JSS

New Contributor III

We are currently running our JSS in a VM environment with 11,268 computers and 1,768 mobile devices. We have an external facing JSS as well.

The internal VM runs MySQL and Tomcat on the same machine. Specs:
Windows Server 2008 R2
16GB of RAM
Dual-core processor
All DPs are run off separate servers

The CPU is constantly pegged, so we are considering moving to a physical box to try to get better performance, and we're curious how others have their servers configured.

We have been reading through past threads and nothing seems very consistent. Any info is appreciated.


Valued Contributor II

I assume you've talked to support and they checked your SQL/Tomcat settings? We have max pool size set to 300 at their recommendation as well (I believe in DataBase.xml).
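For anyone looking for where that setting lives, it's in the JSS's DataBase.xml under the Tomcat directory. This is a sketch from memory; the exact element names and file layout may differ in your version, so verify against your own file before editing:

```xml
<!-- DataBase.xml (path and element names assumed; check your install) -->
<DataBase>
    <ServerName>localhost</ServerName>
    <DataBaseName>jamfsoftware</DataBaseName>
    <!-- Connection pool sizing; 300 was support's recommendation mentioned above -->
    <MinPoolSize>5</MinPoolSize>
    <MaxPoolSize>300</MaxPoolSize>
</DataBase>
```

Tomcat needs a restart after changing this file for the new pool size to take effect.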

We're on a VM with the same specs, but with a little more than half as many devices as you, and internal only.

We hope to never go back to a physical box.

Valued Contributor

Cluster. Separate the database from the web server, possibly put a load balancer out front.

What processes specifically do you see taxing the CPU?

New Contributor III

@dpertschi It is both Tomcat and MySQL combined that are killing the system, it trades off between the two at any given time.

We have talked about splitting them onto two separate boxes, since the combination seems to be what is killing it.

New Contributor III

@CasperSally Yes, we spoke with support and have pretty much maxed everything out along those lines. We spent a few hours on the phone with them the other day working out an issue after the 9.62 upgrade. We are seeing a lot of issues on the iPad side, with push services taking up a lot of resources.

Valued Contributor II


Have they run the DB query for pending commands?

I have an issue (for me it's on Macs) that causes tens of thousands of pending commands to build up.

It sounds like you know your issue, having both on the same box, but it might be worth looking at if you think it's push related.
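Support can give you the exact query for your version, but a quick look at pending commands is something along these lines. The table and column names here are assumptions based on the 9.x schema, so double-check them against your own database (and run it read-only) before relying on the numbers:

```sql
-- Count MDM commands by status (table/column names assumed from the
-- 9.x schema: mobile_device_management_commands, apns_result_status).
SELECT apns_result_status, COUNT(*) AS total
FROM mobile_device_management_commands
GROUP BY apns_result_status;
```

If the "Pending" bucket is in the tens of thousands, that backlog alone can keep both Tomcat and MySQL busy.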

Is adding more RAM an option?

New Contributor III


We initially had an issue where the server was crashing because our Wi-Fi profile somehow got removed during the upgrade, and commands were building up for machines we don't even push that profile to.

We cleared that out, but I am still seeing a ton of pending commands; not the initial 15k that built up from the Wi-Fi profile, but still a relatively large amount. This is why I wanted to get specs from others and see what they recommend if they have a similar environment.

We have 16GB of RAM and set the maximum allocation to 12GB the other day. RAM usage is pretty steady at 7-8GB at any given time. It seems to be specifically the processor usage that is killing us.

Contributor III

+1 on what @dpertschi said. Separate the two for sure. Also load balancer and cluster. You will definitely see much better performance if you do this, VM or physical box.

Honored Contributor II

Already been said, but I would +1 a load balancer, with multiple Tomcat servers connecting to a separate MySQL server.

I wouldn't personally switch to physical servers as you will lose a lot of flexibility when it comes to allocating additional hardware resources.

Valued Contributor III

+1 load balancing
+1 database on another box
+1 clustered

VM vs. physical host shouldn't matter. Spreading resources out and lightening the load on the VM will surely help.

New Contributor

42,000 managed OS X devices
5 clustered web apps / 4 client-facing / 1 master, non-client-facing
1 very large RHEL MySQL Enterprise server running InnoDB
Globally network-load-balanced single distribution point, not truly managed by Casper
Windows VMs: 2008 R2, 4 CPUs, 16GB RAM
Linux DB VM: RHEL 5.x, 8 CPUs, 64GB RAM
We set the Tomcat Java heap to start at 1GB and cap at 8GB or 16GB for each web app

This config works great, and as long as our hosting provider doesn't "steal" our VDC resources we generally have no issues. We run daily inventory and hourly check-ins.
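For anyone wondering how to set that 1GB-start / 8GB-cap heap, it maps to the JVM's -Xms/-Xmx flags. On a Linux Tomcat this conventionally goes in a setenv.sh script (the path here is standard Tomcat, not JSS-specific; on Windows the same flags are set through the Tomcat service configuration instead):

```shell
# $CATALINA_BASE/bin/setenv.sh (conventional Tomcat location; adjust for your install)
# -Xms sets the initial Java heap, -Xmx sets the maximum.
export CATALINA_OPTS="$CATALINA_OPTS -Xms1g -Xmx8g"
echo "CATALINA_OPTS is now: $CATALINA_OPTS"
```

Keep -Xmx comfortably below the machine's physical RAM so MySQL and the OS still have headroom.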