[R-sig-hpc] [ignore, fixed] Re: SNOW Issues: Error in writing to connection (on localhost)

Cedrick Johnson cedrick at cedrickjohnson.com
Mon Mar 14 07:15:58 CET 2011


Never mind. It was a matter of PEBCAK (problem exists between chair and
keyboard).

Installed ssh on the new machine and now it works. That was the only
difference between the two machines: the working one had an ssh server
running (why snow insists on connecting over port 22 is a different
post altogether).
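For anyone who hits the same error: snow's SOCK clusters start each worker through a remote shell (ssh by default), which apparently holds even for "localhost", so an ssh server must be accepting connections. A minimal sketch for sanity-checking a local cluster once sshd is up (assumes only that snow is installed; snow also documents a `manual=TRUE` option for starting workers by hand if installing sshd is not possible):

```r
library(snow)

# Two workers on localhost; snow launches each one via ssh, so this
# is where "error writing to connection" appears if sshd is absent.
cl <- makeCluster(rep("localhost", 2), type = "SOCK")

# Each worker should answer with its own process ID.
pids <- clusterCall(cl, Sys.getpid)
print(unlist(pids))

stopCluster(cl)
```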

Now it works as intended. I was just testing locally before turning up
the "volume", so to speak, and distributing across multiple machines
(which works as expected too).
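On the clusterExport-vs-%dopar% question in the quoted post below: foreach itself can ship data to the workers via its `.export` argument (and load packages there via `.packages`), so neither approach has to be abandoned. A minimal sketch of the pattern; `mean()` stands in for `auto.arima()` and the toy `HistoricalYields` matrix is made up, purely to keep the example self-contained:

```r
library(snow)
library(doSNOW)
library(foreach)

cl <- makeCluster(rep("localhost", 2), type = "SOCK")
registerDoSNOW(cl)

# Toy stand-in for the real HistoricalYields data.
HistoricalYields <- matrix(rnorm(200), ncol = 2,
                           dimnames = list(NULL, c("y1", "y2")))

# .export copies HistoricalYields to each worker; in the real code,
# .packages = "forecast" would replace the library() call inside
# the worker function.
res <- foreach(dat = colnames(HistoricalYields),
               .export = "HistoricalYields") %dopar%
  mean(HistoricalYields[, dat])

stopCluster(cl)
print(unlist(res))
```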

My apologies.
C

On Mon, Mar 14, 2011 at 12:10 AM, Cedrick Johnson
<cedrick at cedrickjohnson.com> wrote:
> Hi All-
> I am having trouble on a new machine with a fresh install of R and the
> requisite SNOW packages (snow, foreach, doSNOW). When I attempt to
> start 2 instances on localhost and use the %dopar% utility, I get the
> following message:
>
> Error in serialize(data, node$con) : error writing to connection
>
> Here are that system's details:
> OS: Linux Mint 10 (Ubuntu 10.10)
> SessionInfo (relevant packages):
>
> other attached packages:
>  [1] doSNOW_1.0.3                 foreach_1.3.0
> codetools_0.2-8
>  [4] iterators_1.0.3              fogbank_2.1.0
> XML_3.2-0
>  [7] fExoticOptions_2110.77       fOptions_2110.78             snow_0.3-3
>
> When I look at the created cluster object:
>> cl.tmp
> [[1]]
> $con
>                    description                           class
> "<-localhost.localdomain:10187"                      "sockconn"
>                           mode                            text
>                          "a+b"                        "binary"
>                         opened                        can read
>                       "opened"                           "yes"
>                      can write
>                          "yes"
>
> ***************
>
> On another system, the actual hostname of the system is shown (and I'm
> able to run the code successfully):
>
>> cl.tmp2
> [[1]]
> $con
>                    description                           class
> "<-fogb-chi-cj1:10187"                      "sockconn"
>                           mode                            text
>                          "a+b"                        "binary"
>                         opened                        can read
>                       "opened"                           "yes"
>                      can write
>                          "yes"
>
>
> Here's what I'm trying to run. Unfortunately, the data is too large to attach:
>
> xj = colnames(HistoricalYields)
>
> require(snow)
> require(doSNOW)
> require(foreach)
>
> cl.tmp = makeCluster(rep("localhost",2), type="SOCK")
> registerDoSNOW(cl.tmp)
>
> parallel.arima <- function(data) {
>        library(forecast)
> fit = auto.arima(ts(HistoricalYields[,data]),
>                  approximation=TRUE, allowdrift=TRUE, stepwise=TRUE)
> }
> system.time(res <- foreach(dat=xj) %dopar% parallel.arima(dat))
>
> ** Error (as above)
>
>
> One more note: the following appears to work in the single-machine
> case. I'm not quite sure what the ramifications would be if I ran
> this with distributed nodes (I'm going to test that out in a few
> minutes):
>
> clusterExport(cl.tmp, c("HistoricalYields",xj))
> system.time(res <- clusterApply(cl.tmp, fun=parallel.arima, xj))
>
>
> Thanks for any advice/help. Should I be using clusterExport and
> bypassing %dopar% altogether?
>
> Thanks,
> Cedrick
>


