[R] How to utilise dual cores and multi-processors on WinXP

rhelp.20.trevva at spamgourmet.com
Tue Mar 6 16:33:09 CET 2007


Hello,

I have a question that I'm hoping someone has a fairly straightforward answer to: what is the quickest and easiest way to take advantage of the extra cores / processors that are now commonplace on modern machines? And how do I do that in Windows?

I realise that this is a complex question that is not easily answered, so let me refine it a little. The type of scripts I'm dealing with are well suited to parallelisation - often they involve mapping out parameter space by changing a single parameter, re-running the simulation 10 (or n) times, and then bringing all the results back together at the end for analysis. If I can distribute the runs over both of the processors available in my machine, I should roughly halve the run time. The question is, how to do this?
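For concreteness, the serial version of such a sweep looks roughly like the toy sketch below (runSim and the parameter values are just placeholders standing in for my real simulation code):

  ## placeholder serial sweep: one slow simulation run per parameter value
  params   <- seq(0.1, 1.0, by = 0.1)                # e.g. 10 parameter values
  results  <- lapply(params, function(p) runSim(p))  # runSim() = my model (placeholder)
  combined <- do.call(rbind, results)                # bring everything back together

It is the lapply() step - the independent runs - that I'd like to spread across the processors.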

I've looked at many of the packages in this area: Rmpi, snow, snowFT, rpvm, and taskPR - these all seem to have the functionality that I want, but don't exist for Windows. The best solution is to switch to Linux, but unfortunately that's not an option.

Another option is to divide the task in half from the beginning, spawn two "slave" instances of R (e.g. via Rcmd), let them run, and then collate the results at the end. But how exactly do I do that, and how do I know when the slaves have finished?
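The crude version of that idea I have in mind looks something like the sketch below (the script names, result file names and object names are made up purely for illustration; each slave script would save() its half of the results to an .RData file when it finishes):

  ## launch the two halves of the sweep as background R processes
  system("Rcmd BATCH half1.R half1.Rout", wait = FALSE)
  system("Rcmd BATCH half2.R half2.Rout", wait = FALSE)

  ## crude completion check: poll for the result files the slave
  ## scripts are assumed to save() at the end of their runs
  while (!(file.exists("results1.RData") && file.exists("results2.RData")))
      Sys.sleep(10)

  load("results1.RData")            # loads, say, res1
  load("results2.RData")            # loads, say, res2
  allResults <- c(res1, res2)       # collate for the final analysis

Polling for files like this feels fragile, though, which is partly why I'm asking whether there is a cleaner way.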

Can anyone recommend a nice solution? I'm sure that I'm not the only one who'd love to double their computational speed...

Cheers,

Mark


