[R-sig-hpc] Managing R CPU/memory usage on Linux

Allan Engelhardt allane at cybaea.com
Tue Dec 7 16:31:43 CET 2010


Control groups are probably the way to go if you have them, as Etienne 
suggested (a sketch follows below), but also consider increasing 
swappiness, e.g.

echo 100 > /proc/sys/vm/swappiness

(or, to make it persistent, the equivalent vm.swappiness setting in 
/etc/sysctl.conf on Ubuntu).  You are quite low on RAM, so you will of 
course want a decent-sized swap file.
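For the control groups route, here is a minimal sketch, assuming a 
kernel new enough to have the cgroup v1 memory controller; the mount 
point, the group name "rjobs", and the 3 GB limit are all illustrative:

# Mount the memory controller (the mount point is arbitrary)
mkdir -p /cgroup/memory
mount -t cgroup -o memory cgroup /cgroup/memory
# Create a group for R batch jobs and cap its memory at 3 GB
mkdir /cgroup/memory/rjobs
echo 3G > /cgroup/memory/rjobs/memory.limit_in_bytes
# Put the current shell (and anything it launches) into the group
echo $$ > /cgroup/memory/rjobs/tasks

Once a process is in the group, the kernel reclaims or OOM-kills 
within that group rather than letting one job take down the machine.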

And without control groups you should set resource limits, cf. man 
limits.conf.
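For instance, a hypothetical pair of entries in 
/etc/security/limits.conf (the ~3 GB figures are illustrative, not a 
recommendation) caps each process's address space for all users:

# <domain> <type> <item> <value in KB>; applied at login via pam_limits
*       soft    as      3000000
*       hard    as      3500000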

You could also try 'ionice -c 3' but the benefit is likely to be small 
('nice -19' already derives an I/O priority equivalent to '-c 2 -n 7' 
in the best-effort class).
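Putting the pieces together, a hypothetical batch invocation (the 
script name is made up) would look like:

# Lowest CPU priority plus idle I/O class; note that on some older
# kernels the idle class (-c 3) requires root
ionice -c 3 nice -n 19 R CMD BATCH analysis.R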

Allan

On 19/11/10 21:41, Davor Cubranic wrote:
> We have some Linux compute servers that our users occasionally overwhelm
> with their R batch jobs to the point where the machines are completely
> unresponsive and have to be rebooted. This seems to be happening more
> often recently, and got me wondering what other people do to manage the
> CPU/memory resources used by R on their servers.
>
> We 'nice -19' the R process, but that doesn't seem to help. Are there
> any other R options or OS settings that would be useful? Or should I
> consider installing a queuing manager and closing the servers to
> interactive logins? As far as I can tell, our users just run existing R
> packages from CRAN, and there is no parallelization or distributed
> computing going on.
>
> The machines are dual-CPU 64-bit Intels with 4GB of RAM and running
> Ubuntu 8.04. So they won't be making the TOP500 list any time soon, but
> I would have hoped the kernel would be a little better at squelching
> down CPU and memory hogs.
>
> Thanks,
>
> Davor
