[R-sig-hpc] Easiest Road to Parallel R?

Brian G. Peterson brian at braverock.com
Tue Jul 21 17:59:47 CEST 2009


Per the papply manual:
"If Rmpi is not available, or there are no slaves, implements this as a 
non-parallel algorithm."

So it seems that you have most likely not spawned any slaves.
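
For what it's worth, a quick sanity check along these lines (just a 
sketch, assuming Rmpi loads cleanly on your head node) should tell you 
whether papply has any slaves to hand work to:

   library(Rmpi)
   library(papply)

   mpi.comm.size(1)   # fewer than 2 processes (master plus at least one
                      # slave) means papply falls back to serial execution

   # a trivial papply call; with no slaves this behaves like lapply()
   papply(as.list(1:4), function(x) x * 2)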

Dirk has already suggested SNOW on top of Rmpi to simplify the spawning 
of worker processes.  That is probably the most advisable next step.  
(Also, do read the survey paper he suggests.)
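
If you do try snow over Rmpi, something like the following should 
exercise the cluster end to end (a sketch only; the worker count of 4 
is just an example, adjust it to your allocation):

   library(snow)

   # snow takes care of spawning the Rmpi slaves when type = "MPI"
   cl <- makeCluster(4, type = "MPI")

   # confirm the workers really landed on the compute nodes
   clusterCall(cl, function() Sys.info()[["nodename"]])

   # trivial parallel test
   parSapply(cl, 1:8, function(x) x^2)

   stopCluster(cl)

If clusterCall() comes back with the names of your compute nodes rather 
than the head node, message passing is working and the rest should 
follow.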

Barring that, have you called mpi.spawn.Rslaves() to spawn workers?  If 
so, you can also test Rmpi directly with mpi.apply().
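
Roughly, a direct Rmpi test would look like this (again a sketch; four 
slaves is just an example, and mpi.apply() expects exactly one element 
per slave):

   library(Rmpi)

   mpi.spawn.Rslaves(nslaves = 4)

   # have each slave report its rank and the communicator size
   mpi.remote.exec(paste("I am", mpi.comm.rank(), "of", mpi.comm.size()))

   # one task per slave
   mpi.apply(1:4, function(x) x^2)

   mpi.close.Rslaves()
   mpi.quit()   # shuts down MPI and exits the R session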

Regards,

  - Brian

Thomas Hampton wrote:
> Hi Brian,
>
> As I understand it (I will check), MPI is the basis of everything they 
> do on this machine in terms of parallel computing, so we know that it 
> works.
>
> The Rmpi library loads fine, though that took a bit of doing.
>
> papply() produces an error-free status message when it runs a trivial 
> example, but it reports that it is running in serial mode.
>
> I can get on this machine easily and will promptly answer any specific 
> questions you come up with.
>
> Thanks very much for your assistance. I feel that the cavalry has 
> arrived.
>
> Yours,
>
> T
>
> On Jul 21, 2009, at 8:39 AM, Brian G. Peterson wrote:
>
>> Thomas Hampton wrote:
>>> We have a substantial Beowulf cluster and would like to
>>> get parallel R going. Our systems administrators attempted,
>>> without success, to get the R function papply to run
>>> properly. I passed their comments/questions on to this
>>> list in a previous message. The way I understand it, the various 
>>> pieces are there and report no errors, but the final result is that no 
>>> parallelism is achieved.
>>>
>>> Is there some more bullet-proof route to parallel R than MPICH2, 
>>> Rmpi and papply?
>>>
>>> We are on a Beowulf cluster running Red Hat Linux.
>> In the future, it would be best if you simply "Reply All" to your 
>> previous message to keep the thread intact.  It makes it easier to 
>> find the thread.
>>
>> Have you verified that MPI is running correctly and can pass messages 
>> between machines?  In any parallel system made up of many parts, you 
>> need to verify each component independently.  Your prior message was 
>> extremely short on details of what had been tried to establish 
>> communications.
>>
>> Regards,
>>
>> - Brian
>>
>> -- 
>> Brian G. Peterson
>> http://braverock.com/brian/
>> Ph: 773-459-4973
>> IM: bgpbraverock
>>


-- 
Brian G. Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock


