[R-sig-hpc] Managing R CPU/memory usage on Linux

Gary Artim gartim at genepi.berkeley.edu
Mon Nov 22 18:46:31 CET 2010


There is always /etc/security/limits.conf (Fedora's control file; Ubuntu
uses the same one). It can be set up to warn about or kill off
overconsumers. You also seem a bit light on memory; I would definitely
expand that. I ended up creating a small cluster and having the head node
control the allocation / number of compute nodes. Another option is to
virtualize, but that has its limits too, and it seems you don't yet have
the hardware (memory) for that.
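For example, entries along these lines in /etc/security/limits.conf would
cap each login session's address space and process count -- the numbers
below are only illustrative, tune them for your workload:

    # /etc/security/limits.conf -- illustrative values only
    # cap address space per process at roughly 3 GB (value is in KB)
    *    hard    as       3000000
    # allow at most 20 simultaneous processes per user
    *    hard    nproc    20

You can try the same cap out in a single shell first with
"ulimit -v 3000000" before making it permanent. Keep in mind these are
per-process limits applied at login, not an aggregate per-user memory
quota, so a user running several R jobs can still add up.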


On Fri, Nov 19, 2010 at 01:41:48PM -0800, Davor Cubranic wrote:
>We have some Linux compute servers that our users occasionally overwhelm 
>with their R batch jobs to the point where the machines are completely 
>unresponsive and have to be rebooted. This seems to be happening more 
>often recently, and got me wondering what other people do to manage the 
>CPU/memory resources used by R on their servers.
>
>We 'nice -19' the R process, but that doesn't seem to help. Are there 
>any other R options or OS settings that would be useful? Or should I 
>consider installing a queuing manager and closing the servers to 
>interactive logins? As far as I can tell, out users just run existing R 
>packages from CRAN, and there is no parallelization or distributed 
>computing going on.
>
>The machines are dual-CPU 64-bit Intels with 4GB of RAM and running 
>Ubuntu 8.04. So they won't be making the TOP500 list any time soon, but 
>I would have hoped the kernel would be a little better at squelching 
>down CPU and memory hogs.
>
>Thanks,
>
>Davor
>
>_______________________________________________
>R-sig-hpc mailing list
>R-sig-hpc at r-project.org
>https://stat.ethz.ch/mailman/listinfo/r-sig-hpc
