[R-sig-hpc] Working doSNOW foreach openMPI example

Tena Sakai tsakai at gallo.ucsf.edu
Fri Jan 14 02:00:30 CET 2011


Greetings Justin,

Thank you for your post.  The example you show is of high interest
to me, but I can't seem to get it to work with my MPI software.
Below is a bit of feedback.  I am running Red Hat Linux machines
with Open MPI v1.4.3.

First, identical to your example, except for the hostfile specification:

  $ mpirun -n --hostfile myhosts --no-save -f rtest.R
  --------------------------------------------------------------------------
  mpirun was unable to launch the specified application as it could not find
  an executable:

  Executable: myhosts
  Node: vixen.egcrc.org

  while attempting to start process rank 0.
  --------------------------------------------------------------------------
  $ 

I put 1 after -n:

  $ mpirun -n 1 --hostfile myhosts --no-save -f rtest.R
  --------------------------------------------------------------------------
  mpirun was unable to launch the specified application as it could not find
  an executable:

  Executable: --no-save
  Node: 10.255.255.254

  while attempting to start process rank 0.
  --------------------------------------------------------------------------
  $ 

Let me try without --no-save:

  $ mpirun -n 1 --hostfile myhosts -f rtest.R
  --------------------------------------------------------------------------
  mpirun was unable to launch the specified application as it could not find
  an executable:

  Executable: -f
  Node: 10.255.255.254

  while attempting to start process rank 0.
  --------------------------------------------------------------------------
  $ 

Get rid of -f:

  $ mpirun -n 1 --hostfile myhosts rtest.R
  [compute-0-0.local:16448] [[42316,0],1]->[[42316,0],0] mca_oob_tcp_msg_send_handler: writev failed: Bad file descriptor (9) [sd = 9]
  [compute-0-0.local:16448] [[42316,0],1] routed:binomial: Connection to lifeline [[42316,0],0] lost
  $ 

Here's what happens when I run rtest.R interactively:

  $ R --no-save

  R version 2.10.1 (2009-12-14)
                .
                .
  > library( doSNOW )
  Loading required package: foreach
  Loading required package: iterators
  Loading required package: codetools
  foreach: simple, scalable parallel programming from REvolution Computing
  Use REvolution R for scalability, fault tolerance and more.
  http://www.revolution-computing.com
  Loading required package: snow
  > library( panel )
  > 
  > cl <- makeMPIcluster( 3 )
  Loading required package: Rmpi
        3 slaves are spawned successfully. 0 failed.
  > registerDoSNOW( cl )
  > 
  > clusterEvalQ( cl, library(panel) )
  [[1]]
   [1] "panel"     "snow"      "Rmpi"      "methods"   "stats"     "graphics" 
   [7] "grDevices" "utils"     "datasets"  "base"
  
  [[2]]
   [1] "panel"     "snow"      "Rmpi"      "methods"   "stats"     "graphics" 
   [7] "grDevices" "utils"     "datasets"  "base"
  
  [[3]]
   [1] "panel"     "snow"      "Rmpi"      "methods"   "stats"     "graphics" 
   [7] "grDevices" "utils"     "datasets"  "base"
  
  > 
  > res <- clusterCall( cl, function() {
  +                                 Sys.info()["nodename"]
  +                          }
  +                    )
  > print( do.call(rbind, res) )
       nodename    
  [1,] "vixen.egcrc.org"
  [2,] "vixen.egcrc.org"
  [3,] "vixen.egcrc.org"
  > 
  > sme <-  matrix( rnorm(100), 10, 10)
  > clusterExport ( cl, "sme" )
  > 
  > myfun <- function () {
  +                 for ( i in 1:1000 ) {
  +                         x <- eddcmp ( sme )
  +                 }
  +          }
  > 
  > ged <- 0
  > 
  > system.time( {
  +                 ged <- foreach ( i = 1:10 ) %dopar% {
  +                            myfun ()
  +                        }
  +               } )
     user  system elapsed
    0.964   2.744   3.717
  > 
  > system.time( {
  +                 ged <- foreach ( i = 1:10 ) %do% {
  +                            myfun ()
  +                        }
  +               } )
     user  system elapsed
    8.760   0.004   8.803
  There were 50 or more warnings (use warnings() to see the first 50)
  > 
  > system.time( {
  +                 for ( i in 1:10 ) {
  +                    ged <- myfun (  )
  +                 }
  +               } )
     user  system elapsed
    7.385   0.000   7.407
  There were 50 or more warnings (use warnings() to see the first 50)
  > 
  > stopCluster( cl )
  [1] 1
  > mpi.quit()
  $
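Incidentally, the same doSNOW/foreach pattern also runs for me over a
plain socket cluster, which sidesteps mpirun entirely and may help
isolate whether the problem is in the MPI layer.  A minimal sketch,
assuming snow's SOCK transport, with base R's svd() standing in for
your eddcmp() (I don't have the panel package handy):

```r
# Same doSNOW/foreach pattern over a SOCK cluster -- no mpirun needed.
# svd() stands in for panel's eddcmp() here.
library(doSNOW)

cl <- makeCluster(3, type = "SOCK")
registerDoSNOW(cl)

sme <- matrix(rnorm(100), 10, 10)
clusterExport(cl, "sme")

res <- foreach(i = 1:10) %dopar% {
  svd(sme)$d[1]   # leading singular value, just to return something
}
print(unlist(res))

stopCluster(cl)
```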

Regards,

Tena Sakai
tsakai at gallo.ucsf.edu


On 1/13/11 1:08 PM, "Justin Moriarty" <justin300 at hotmail.com> wrote:

> 
> Hi, 
> Just wanted to share a working example of doSNOW and foreach for an openMPI
> cluster.  The function eddcmp() is just an example and returns some innocuous
> warnings.  The example first has each node return its nodename, then runs an
> example comparing dopar, do and a for loop.  In the directory containing
> rtest.R it is run from the command line with:
> "mpirun -n --hostfile /home/hostfile --no-save -f rtest.R"
> 
> Here is the code for rtest.R:
> 
> #################
> library(doSNOW)
> library(panel)
> 
> cl <- makeMPIcluster(3)
> registerDoSNOW(cl)
> 
> clusterEvalQ(cl, library(panel))
> 
> res <- clusterCall(cl, function() { Sys.info()["nodename"] })
> print(do.call(rbind, res))
> 
> sme <- matrix(rnorm(100), 10, 10)
> clusterExport(cl, "sme")
> 
> myfun <- function() {
>   for (i in 1:1000) {
>     x <- eddcmp(sme)
>   }
> }
> 
> ged <- 0
> 
> system.time({
>   ged <- foreach(i = 1:10) %dopar% {
>     myfun()
>   }
> })
> 
> system.time({
>   ged <- foreach(i = 1:10) %do% {
>     myfun()
>   }
> })
> 
> system.time({
>   for (i in 1:10) {
>     ged <- myfun()
>   }
> })
> 
> stopCluster(cl)
> mpi.quit()
> 
> #################
> Cheers,
> Justin
> 
> _______________________________________________
> R-sig-hpc mailing list
> R-sig-hpc at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-hpc


