Imaging speed fluctuations

tatiang
New Contributor

We are building a laptop config and have begun testing it by netbooting and
then imaging machines connected to a Gigabit switch, which also connects to
the server hosting the JSS and CasperShare. The Base OS package installs at
90-100 MB/sec (according to Data sent/sec in Activity Monitor --> Network on
the server), but as soon as the remaining packages (anything from Adobe
CS 5 to small scripts) start installing, the speed drops to 0-15 MB/sec.
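
(For anyone who wants to reproduce that measurement outside Activity Monitor, a minimal sampler sketch along these lines could be run on the server; it assumes the third-party psutil module is installed and that en0 is the interface facing the switch, so adjust both for your setup.)

import time
import psutil  # third-party module: pip install psutil

INTERFACE = "en0"  # assumed name of the server's gigabit interface

def sample_send_rate(interval=1.0):
    """Return bytes sent per second on INTERFACE over one sampling interval."""
    before = psutil.net_io_counters(pernic=True)[INTERFACE].bytes_sent
    time.sleep(interval)
    after = psutil.net_io_counters(pernic=True)[INTERFACE].bytes_sent
    return (after - before) / interval

if __name__ == "__main__":
    while True:
        print(f"{sample_send_rate() / (1024 * 1024):.1f} MB/s sent")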

When we first tested the configuration (which was slightly different from
now), the full config would install at Gigabit speeds, but now -- as I
mentioned -- it installs at 100 Megabit speeds. I've even disconnected the
server from the rest of our network to rule out the possibility of
non-Casper bandwidth use.

Any ideas why the process slowed down?

Tatian



Tatian Greenleaf
Associate Director of Technology
Saint Mark's School
(415) 472-8000 x1014

12 REPLIES

John_Wetter
Release Candidate Programs Tester

It's likely more of an AFP throughput issue than anything. Can you otherwise push more data just with standard AFP file copies?
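
(Purely as a sketch of what that test could look like: time a plain copy of one large file from the mounted share. The paths below are placeholders for your own CasperShare mount and a big package.)

import os
import shutil
import time

# Placeholder paths: an AFP share mounted at /Volumes/CasperShare and a large test file on it.
SRC = "/Volumes/CasperShare/Packages/AdobeCS5.dmg"
DST = "/tmp/afp_copy_test.dmg"

start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

size_mb = os.path.getsize(DST) / (1024 * 1024)
print(f"Copied {size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MB/s")
os.remove(DST)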

John

--
John Wetter
Technical Services Manager
Educational Technology, Media & Information Services
Hopkins Public Schools

Bukira
Contributor

How much RAM do you have in the server?

Criss Myers
Senior Customer Support Analyst (Mac Services)
iPhone / iPad Developer
Apple Certified Technical Coordinator v10.5
LIS Development Team
Adelphi Building AB28
University of Central Lancashire
Preston PR1 2HE
Ex 5054
01772 895054

tatiang
New Contributor

12 GB. It's a fairly new server (Mac Pro).

Tatian



Tatian Greenleaf
Associate Director of Technology
Saint Mark's School
(415) 472-8000 x1014

tatiang
New Contributor

This happens on a single machine netbooted and imaged via a Gigabit
connection through the switch the server is connected to. I'm guessing RAM
isn't the issue here, but I'll monitor it. Thanks.

Tatian



Tatian Greenleaf
Associate Director of Technology
Saint Mark's School
(415) 472-8000 x1014

tatiang
New Contributor

Thanks, I'll try using a compiled config. Are there any drawbacks to using
compiled configs? We use dmgs and have a mix of small and large packages.
The slowdown seems to happen with anything non-OS, so if I image with the
base OS and any large package, it slows down. Smaller packages are harder
to time, but I suspect they are also going slowly. The difference between
Gigabit and 100 Megabit speeds is significant enough that a visual read of
the status bar is a reliable indicator.

Tatian



Tatian Greenleaf
Associate Director of Technology
Saint Mark's School
(415) 472-8000 x1014

tlarkin
Honored Contributor

I see this problem off and on at my work, and a lot of the time I think it has to do with poor disk I/O on the client side, which in turn throws off AFP throughput.

Not applicable

I believe if you check the box to erase the hard drive, it will do a block copy on the first package that is installed (which is usually the OS). After that package, the rest are installed via a file copy, which will be slower to copy/install.

As you mentioned, it should be a lot faster using a compiled config, since all your packages will be combined into one large one. I have not tried compiled configurations myself yet, but in theory the speed should be more in line with what you would expect.

Travis

Bukira
Contributor

A normal install for me takes 2 hrs; a compiled config takes 1.5 hrs. Since it's block level it's much faster, so I 100% recommend it. All my installs are compiled.

The only drawbacks:

A compiled config cannot include scripts, printers, or Adobe; those get downloaded after the block-level compiled install, but that's life.

If you have pkgs, I think they are slower because each pkg is copied to the local /tmp and then installed, whereas dmgs are mounted from the server and installed directly.

Hence why I asked which you had, and whether they are large or small.

I'm sure AFP would be slower for 100 1 GB files than for one 100 GB file.
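
(If you want to sanity-check that on your own share, a rough sketch along these lines compares copying many small files against one large file of the same total size; the share path and the scaled-down sizes are made up for illustration.)

import os
import shutil
import tempfile
import time

# Assumed mount point for a CasperShare test area; adjust to your environment.
SHARE = "/Volumes/CasperShare/speedtest"

def make_file(path, size_mb):
    """Write size_mb megabytes of random data to path, 1 MB at a time."""
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(os.urandom(1024 * 1024))

def time_copy(paths, dest_dir):
    """Copy every file in paths into dest_dir and return elapsed seconds."""
    start = time.time()
    for p in paths:
        shutil.copy(p, dest_dir)
    return time.time() - start

os.makedirs(SHARE, exist_ok=True)

with tempfile.TemporaryDirectory() as local:
    # Scaled down from the 100 x 1 GB vs 1 x 100 GB comparison above.
    many = []
    for i in range(100):
        path = os.path.join(local, f"small_{i}.bin")
        make_file(path, 10)          # 100 files x 10 MB
        many.append(path)

    big = os.path.join(local, "big.bin")
    make_file(big, 1000)             # 1 file x 1000 MB

    print(f"100 small files: {time_copy(many, SHARE):.1f} s")
    print(f"1 large file:    {time_copy([big], SHARE):.1f} s")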

Bukira
Contributor

Also, compiled configs take ages to build and require extra disk space, and if you amend a config you need to recompile.

donmontalvo
Esteemed Contributor III

That's the route we take. We handle imaging in one location, then we spit out compiled configurations for each client. All development is done from the same development tree in the back office, but once the compiled image is sent to the DP, the imaging guys are free to add components to the install process. This gives back office control of the image while it gives front office folks flexibility. Added plus...it's soooo much faster. We tested a compiled config (20G+) on 10 NetBooted computers. Total time to image (over gigabit) was under 15 minutes.

Don

--
https://donmontalvo.com

tlarkin
Honored Contributor

I create my OS image with instaDMG, then create a compiled configuration of the image plus the apps I want as standard. Every few months I run the instaup2date python script and update the base OS image. Each time I have to recompile, but it allows me to maintain a modular OS + packages. I also have to repackage individual apps before recompiling so everything stays up to date.

However, I now have 6 SUS servers: 1 parent and 5 child software update servers. So all I have to do is update the parent server, let all the children sync, then tell Casper to run updates based on network location.

tatiang
New Contributor

I think we may just go with compiled configs, but it's still frustrating
that the uncompiled packages won't copy faster.

I did check the AFP transfer speed and it was what I would expect: 3
GB/minute via the Gigabit switch. So AFP is transferring data much, much
faster than Casper Imaging.
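
(For reference, converting the units makes the gap obvious; a trivial calculation, nothing environment-specific.)

# 3 GB/minute from the manual AFP copy, versus the 0-15 MB/sec seen during imaging
afp_mb_per_s = 3 * 1024 / 60        # ~51 MB/s
gigabit_limit_mb_per_s = 1000 / 8   # ~125 MB/s theoretical maximum for the link
imaging_mb_per_s = 15               # upper end of what Casper Imaging was showing

print(f"AFP copy:       {afp_mb_per_s:.0f} MB/s")
print(f"Gigabit limit:  {gigabit_limit_mb_per_s:.0f} MB/s")
print(f"Casper Imaging: {imaging_mb_per_s} MB/s")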

Tatian



Tatian Greenleaf
Associate Director of Technology
Saint Mark's School
(415) 472-8000 x1014