[R-sig-hpc] Rmpi with Open MPI on Debian
Sklyar, Oleg (London)
osklyar at maninvestments.com
Wed Feb 11 14:49:01 CET 2009
I had the same problem on RHEL5, and the main difficulty was the lack of
documentation. What you might want to look at is
/etc/openmpi-default-hostfile, which in my case had to be populated
with:
mynode1 slots=7
mynode2 slots=7
etc. You might also want to try creating a local, user-specific
hostfile with much the same contents in ~/.openmpi/hostfile, if the
default Open MPI configuration on Debian supports that.
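If that user hostfile is ignored, an alternative (depending on your Open
MPI version) is to set a default hostfile via an MCA parameter. A
sketch, assuming an Open MPI 1.3-style installation and a hostfile in
your home directory (adjust the path), is a line like

orte_default_hostfile = /home/youruser/hostfile

in ~/.openmpi/mca-params.conf; check ompi_info for the parameter name
your version actually uses.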
Both worked for me. However, Rmpi still reports that the universe size
is 1, i.e. in contrast to LAM I could not rely on that value to get the
number of CPUs I can use without load balancing.
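A minimal sketch of working around that, assuming the two 7-slot nodes
above (the fallback value 13, i.e. 2 nodes x 7 slots minus the master,
is only an example):

library(Rmpi)
## mpi.universe.size() may report 1 under Open MPI even when the
## hostfile lists more slots, so fall back to an explicit slave count.
ns <- mpi.universe.size() - 1
if (ns < 1) ns <- 13                   # example: 2 nodes x 7 slots - 1 master
mpi.spawn.Rslaves(nslaves = ns)
mpi.remote.exec(paste("I am", mpi.comm.rank(), "of", mpi.comm.size()))
mpi.close.Rslaves()
mpi.quit()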
Dr Oleg Sklyar
Research Technologist
AHL / Man Investments Ltd
+44 (0)20 7144 3107
osklyar at maninvestments.com
> -----Original Message-----
> From: r-sig-hpc-bounces at r-project.org
> [mailto:r-sig-hpc-bounces at r-project.org] On Behalf Of Ingeborg Schmidt
> Sent: 11 February 2009 13:33
> To: r-sig-hpc at r-project.org
> Subject: [R-sig-hpc] Rmpi with Open MPI on Debian
>
> Hello,
> I wish to use Rmpi with Open MPI on Debian. Slaves should be
> spawned on several computers, which should be able to
> communicate with a single master. However, there does not
> seem to be a default hostfile that Open MPI uses. So when I use
> library(Rmpi)
> mpi.spawn.Rslaves()
> it only spawns one slave on the localhost instead of several
> threads on all my computers. I am unable to find any useful
> documentation of Open MPI (yes, I checked the FAQ on
> open-mpi.org). Is there such a thing as a default hostfile
> that is used when calling mpi.spawn.Rslaves()? Or is there
> any other way to use mpi.spawn.Rslaves() with Open MPI so
> that slaves are spawned across multiple computers?
>
> I am unsure about calling R via orterun. The only tutorials
> regarding orterun and R that I found (e.g.
> http://dirk.eddelbuettel.com/papers/bocDec2008introHPCwithR.pdf)
> seemed to imply that there either is no master or that the
> master identifies itself by looking at its mpi.comm.rank().
> Moreover running
> paste("I am", mpi.comm.rank(), "of", mpi.comm.size())
> via
> orterun --hostfile MYHOSTFILE -n CPUNUMBER Rslaves.sh RTest.R
> testlog needlog /PATH/TO/R
> results in
> "I am 0 of 0"
> on every node.
> This is not what I want; I would like only the master to
> execute my R script and send relevant methods to the slaves
> via mpi.bcast.Robj2slave(). My code contains commands like
> mpi.remote.exec() which I would like to keep. I have not yet
> seen any examples that are able to combine calling R via
> orterun with communication between the slaves with
> mpi.remote.exec() etc.
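> A minimal sketch of that pattern, assuming the slaves are spawned from
> a single master R session rather than all ranks being started by
> orterun (the helper myfun and the slave count 13 are placeholders):
>
> library(Rmpi)
> mpi.spawn.Rslaves(nslaves = 13)        # hosts come from the hostfile
> myfun <- function() Sys.info()[["nodename"]]
> mpi.bcast.Robj2slave(myfun)            # ship the function to every slave
> mpi.remote.exec(myfun())               # evaluate it on the slaves
> mpi.close.Rslaves()
> mpi.quit()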
>
> By the way: can you recommend a method to lower the priority
> of the R slave processes so that other calculations done on the
> same computers are not disturbed? Is placing nice (the Linux
> command to lower process priority) before R in Rslaves.sh
> sufficient when using mpi.spawn.Rslaves()?
>
> Cheers,
> Ingeborg Schmidt