[R-sig-hpc] Question on foreach package

Stephen Weston stephen.b.weston at gmail.com
Mon Jul 18 21:40:23 CEST 2011


The example that you're trying is tiny.  The overhead in most of the
parallel programming packages in R is such that you don't get a
speed improvement for problems that take less than a few seconds,
let alone 0.05 seconds.
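
As a rough sketch (not from your session), a task that takes on the
order of seconds gives %dopar% a chance to pay off; with a parallel
backend registered, something like this should show a difference on a
dual-core machine:

    library(foreach)
    x <- matrix(rnorm(1000 * 1000), 1000, 1000)
    # each solve() call is reasonably expensive
    system.time(foreach(i = 1:8) %do%    solve(x))   # sequential
    system.time(foreach(i = 1:8) %dopar% solve(x))   # parallel (needs a backend)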

Actually, since the parallel version isn't slower, I wonder if
you registered a parallel backend.  Did you, and if so, which
one?
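
For reference, here is a minimal sketch of registering a backend on
Windows, assuming the doParallel package is installed (doSNOW works
similarly; with no backend registered, %dopar% simply runs
sequentially and emits a warning):

    library(doParallel)
    cl <- makeCluster(2)     # two workers for a dual-core machine
    registerDoParallel(cl)
    getDoParName()           # which backend foreach will use
    getDoParWorkers()        # how many workers are registered
    # ... run foreach(...) %dopar% { ... } here ...
    stopCluster(cl)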

- Steve


On Mon, Jul 18, 2011 at 3:25 PM, Megh Dal <megh700004 at yahoo.com> wrote:
> As per the documentation of the foreach package, if I use "%dopar%" then the computation happens in parallel, whereas with "%do%" it happens sequentially. Here, I tried both "%dopar%" and "%do%" on one of the examples given in the help page of ?foreach:
>
>> a <- matrix(1:1600, 40, 40)
>> b <- t(a)
>> system.time(foreach(b=iter(b, by='col'), .combine=cbind) %dopar%   (a %*% b))
>    user  system elapsed
>    0.04    0.00    0.05
>> a <- matrix(1:1600, 40, 40)
>> b <- t(a)
>> system.time(foreach(b=iter(b, by='col'), .combine=cbind) %do%   (a %*% b))
>    user  system elapsed
>    0.05    0.00    0.05
>
> However, surprisingly, I did not see any improvement in the computation time. I am using Windows Vista with a dual-core CPU (I think it is dual core because when I open Task Manager -> Performance, I see two windows for CPU Usage History... it is dual core, right?). Since it is dual core, shouldn't the computation time with "%dopar%" be about half that of "%do%"?
>
> Am I missing something?
>
> Your help will be highly appreciated.
>
> Thanks
>
> _______________________________________________
> R-sig-hpc mailing list
> R-sig-hpc at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-hpc
>


