[R-sig-hpc] OpenMPI vs MPICH2 on debian system

Jonathan Greenberg greenberg at ucdavis.edu
Thu Aug 11 23:06:31 CEST 2011


Stephen:

Thanks!  I guess the issue is that I'm seeing the slaves spawn on only
a single CPU of a 4-core system, which makes me think the problem is
more significant, but I can't tell whether it's part of the OpenMPI
install or the Rmpi call.  I don't recall having this problem on my
other Debian system, but unfortunately I forgot to document how I set
it up there (also, I had root access to the original system, but now
I'm trying to get this installed via my sysadmin).
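
For what it's worth, the quick check I've been using to see where the
slaves land is below -- just a diagnostic sketch (same 4-worker setup as
in the snippet further down), not a fix:

  library(snow)
  cl <- makeCluster(4, type = "MPI")
  # Ask each slave for its hostname and PID, so we can at least confirm
  # they come up as separate processes and see where they are running.
  clusterCall(cl, function() c(host = Sys.info()[["nodename"]],
                               pid = Sys.getpid()))
  stopCluster(cl)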

--j

On Thu, Aug 11, 2011 at 2:02 PM, Stephen Weston
<stephen.b.weston at gmail.com> wrote:
> Hi Jonathan,
>
> I think this is normal.  Consider the following script rmpi.R:
>
> library(Rmpi)
> print(mpi.universe.size())
> mpi.quit()
>
> If I run this using:
>
>  % R --slave -f rmpi.R
>  [1] 1
>
> the universe size is 1.  I also see this if I run the script
> interactively.
>
> Specifying three hosts using orterun:
>
>  % orterun -H localhost,localhost,localhost R --slave -f rmpi.R
>  [1] 3
>  [1] 3
>  [1] 3
>
> In this case, the script is executed three times, and each instance
> sees a universe size of 3, since I specified three hosts.
>
> If I use orterun to run the script only once, using "-n 1", as you
> would for a spawned cluster:
>
>  % orterun -n 1 -H localhost,localhost,localhost R --slave -f rmpi.R
>  [1] 3
>
> you see that one copy of the script is executed, and the universe size
> is still 3.
>
> In other words, if you don't use orterun (or mpirun, mpiexec, etc.), the
> universe size is always 1, at least as far as I've been able to discover.
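>
> The same mechanism carries over to snow.  As a minimal sketch (assuming
> snow and Rmpi are installed; the hostnames and the slave count of 4 are
> just placeholders), launch a single copy of R with extra slots available:
>
>  % orterun -n 1 -H localhost,localhost,localhost,localhost \
>      R --slave -f snow_test.R
>
> where snow_test.R is along the lines of:
>
>  library(Rmpi)
>  library(snow)
>  # spawn 4 slaves (you may prefer mpi.universe.size() - 1, to leave a
>  # slot for the master)
>  cl <- makeCluster(4, type = "MPI")
>  print(mpi.universe.size())  # should now report more than 1
>  stopCluster(cl)
>  mpi.quit()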
>
> - Steve
>
>
>
> On Thu, Aug 11, 2011 at 4:39 PM, Jonathan Greenberg
> <greenberg at ucdavis.edu> wrote:
>> R-sig-hpc'ers:
>>
>> I'm a big fan of the snow/Rmpi packages, and we've recently tried to
>> get them running on a new Debian system in our lab.  My sysadmin is not
>> too keen on non-Debian package installs (you know, ./configure, make,
>> make install), although I can convince him to do an apt-get -b source
>> / dpkg install.  We tried a straightforward binary install of
>> openmpi-dev, but it appears to spawn the slaves on only one CPU
>> (the same one the master is running on) when doing:
>>
>> require("snow")
>> cl <- makeCluster(4, type = "MPI")
>> # look at top/gkrellm or run some stuff, only 1 CPU lights up.
>> mpi.universe.size()
>> [1] 1
>> stopCluster(cl)
>>
>> Does this have to do with something within Rmpi/snow, or is this a
>> "bad" install of openmpi?  Would doing an apt-get -b source
>> openmpi-dev / dpkg install solve this?  Would MPICH2 (or another MPI
>> flavor) be a better choice?
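>>
>> For concreteness, the source-build route I have in mind is roughly the
>> following (a sketch only; the exact source package name may differ from
>> "openmpi-dev"):
>>
>>  % apt-get -b source openmpi-dev
>>  % dpkg -i <resulting .deb files>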
>>
>> Thanks!
>>
>> --j
>>
>>
>> --
>> Jonathan A. Greenberg, PhD
>> Assistant Project Scientist
>> Center for Spatial Technologies and Remote Sensing (CSTARS)
>> Department of Land, Air and Water Resources
>> University of California, Davis
>> One Shields Avenue
>> Davis, CA 95616
>> Phone: 415-763-5476
>> AIM: jgrn307, MSN: jgrn307 at hotmail.com, Gchat: jgrn307
>>
>> _______________________________________________
>> R-sig-hpc mailing list
>> R-sig-hpc at r-project.org
>> https://stat.ethz.ch/mailman/listinfo/r-sig-hpc
>>
>



-- 
Jonathan A. Greenberg, PhD
Assistant Project Scientist
Center for Spatial Technologies and Remote Sensing (CSTARS)
Department of Land, Air and Water Resources
University of California, Davis
One Shields Avenue
Davis, CA 95616
Phone: 415-763-5476
AIM: jgrn307, MSN: jgrn307 at hotmail.com, Gchat: jgrn307


