Re: mod_perl memory usage

From: olivier Thereaux <ot@w3.org>
Date: Thu, 26 Jul 2007 16:29:47 +0900
Message-Id: <8B2E0829-F474-48F4-9FE0-D4CF773D34F9@w3.org>
To: QA-dev <public-qa-dev@w3.org>

Hi Ville, hi all.

On Jul 26, 2007, at 16:02 , Ville Skyttä wrote:
> On Thursday 26 July 2007, you wrote:
>> mod_perl2 helps with speed a lot, but appears to suck in a lot of
>> memory, and two of our servers died by lack of swap, today...
> I haven't run mod_perl 2 in production and I'm generally a bit rusty with
> this stuff, but at least with mod_perl 1.x on Linux a few years ago, you
> _really_ didn't want the mod_perl httpd processes to be swapped out in any
> circumstances.

Indeed, good to get a confirmation.
I'm afraid we learned that the hard way yesterday...
Since then I lowered MaxClients and was looking at a guide which has a
lot of interesting info, including tools and rules to find out how much
memory each apache+mod_perl2 process takes.

In particular:

fugu:~# ps -o vsize,%cpu,size -C apache2
    VSZ %CPU    SZ
43880 62.8 21224
41568 63.4 19968
41492 67.5 19892
12168  0.0  2644
55200 33.9 32316
39424 53.0 17824
58168 29.0 35260
54220 28.6 30372
48308 35.3 25484
42732 39.2 21132
46292  1.0 22468
49336  1.8 26656
53896  1.6 30048
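As a back-of-the-envelope check, the SZ column above (in KiB) can be
averaged to get a crude per-process figure and a ceiling for MaxClients.
A minimal shell sketch, using the twelve worker rows from the ps output
(the small 2644 KiB row, presumably the parent, is left out) and assuming
a 3 GiB budget for Apache on a 4 GiB box:

```shell
# Crude MaxClients ceiling: memory budget / average worker size.
# SZ values are the worker rows from the ps output above, in KiB;
# the 3 GiB budget is an assumption, not a measured figure.
avg_kb=$(printf '%s\n' 21224 19968 19892 32316 17824 35260 30372 25484 21132 22468 26656 30048 \
  | awk '{s += $1; n++} END {print int(s / n)}')
budget_kb=$((3 * 1024 * 1024))
echo "average worker size: ${avg_kb} KiB"
echo "MaxClients ceiling:  $((budget_kb / avg_kb))"
```

This ignores that SZ double-counts pages shared between workers, and
leaves no headroom for traffic spikes, so a much lower setting is the
safer call in practice.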

The interesting thing, it seems, is that with our aggressive use of
mod_expires, most of the requests we get are for the check script (and
checklink and the feed validator - maybe we should look at running the
former under mod_perl2 too), so the footprint is consistently high.

On the two servers with 4G of memory I've set MaxClients to a safe 56.
On the server with 2G, I've set it down to 30.
(This is a bit conservative, but we can tweak later.)
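For reference, these settings would look something like this in
httpd.conf (the IfModule wrapper is just a common convention, not copied
from our actual config):

```apache
# httpd.conf (prefork MPM) - values from the paragraphs above
<IfModule mpm_prefork_module>
    MaxClients           56     # 30 on the 2G server
    MaxRequestsPerChild  1000
</IfModule>
```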

This may cause some wait time under high traffic, but in my experience
that is still faster than the slowdown you get when the load is too high.

> These processes are often huge compared to usual httpd processes, but
> that's mostly fine as a lot of memory they use is actually shared between
> them. Well, up until they get swapped - that's when IIRC the shared
> memory in them goes non-shared and the total memory usage pretty much
> explodes.  Also, the memory once reserved by a mod_perl httpd process is
> never released back to the OS, but it does get reused.

Ah, ok, so that's probably why, if you set the limit too high, the load
stays up even when the number of connections is low.
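The effect of shared pages going private can be illustrated with toy
numbers (these figures are made up for the sketch, not measured on our
servers):

```shell
# Toy copy-on-write arithmetic: N workers sharing pages vs. N full copies.
shared_kb=18000   # assumed pages shared across all workers
private_kb=7000   # assumed per-worker private pages
workers=56

echo "while shared:     $((shared_kb + workers * private_kb)) KiB"
echo "after un-sharing: $((workers * (shared_kb + private_kb))) KiB"
```

Roughly a 3.4x jump in this toy case - which matches the "pretty much
explodes" description once swapping forces shared pages private.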

> I don't know if there's a mod_perl 2 specific guide to this, but for
> mod_perl 1.x there's good stuff at
> http://perl.apache.org/docs/1.0/guide/performance.html

Looks like the same resource karl and I found. cool :)

> I'd guess things are more or less the same with 2.x, in a nutshell: find
> out how many mod_perl/httpd processes you can have without any of them
> getting swapped out and limit the number of them to that.  See also
> MaxClients and MaxRequestsPerChild in the above performance doc.

Uhm... MaxRequestsPerChild has always been a grey area to me. Right now
it's set to 1000 on our servers, which should still give us the benefit
of process reuse while staying conservative in case any of our stuff is
leaking memory.


It would be good to estimate how much memory is leaked, and apply a  
more appropriate value there.
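One alternative to guessing a MaxRequestsPerChild value: if
Apache2::SizeLimit is available on these servers (an assumption - I
haven't checked), it can recycle only the children that actually grow too
big. A rough httpd.conf sketch, with a placeholder threshold rather than
a tuned value:

```apache
# Sketch only - assumes the Apache2::SizeLimit module is installed;
# the 64 MB unshared-size threshold is a placeholder, not a tuned value.
PerlLoadModule Apache2::SizeLimit
<Perl>
    Apache2::SizeLimit->set_max_unshared_size(64_000); # KiB
</Perl>
PerlCleanupHandler Apache2::SizeLimit
```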

Thanks Ville, your advice is always appreciated.
Received on Thursday, 26 July 2007 07:29:33 UTC