[R-sig-hpc] Creating different working directories for each node?

Brian G. Peterson brian at braverock.com
Mon Aug 11 23:10:50 CEST 2014


You didn't tell us which clustering mechanism you're using, but most 
will allow you to retrieve a unique node id or node number, which you 
could use to create a per-worker subdirectory; a rough sketch is below.
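
For example, with snowfall (which you mention below) each worker could 
build its own scratch directory from its hostname and process id and 
setwd() into it before the qhull calls. This is only a sketch; the 
helper name, the cpus setting, and the "/path/to/scratch" base 
directory are placeholders you would adapt:

library(snowfall)

sfInit(parallel = TRUE, cpus = 4)   # or your MPI-backed sfInit() call

## Hypothetical helper: build a private directory from hostname + PID,
## create it, and switch into it so the dummy files no longer collide.
worker_setup <- function(basedir) {
  wd <- file.path(basedir,
                  paste(Sys.info()[["nodename"]], Sys.getpid(), sep = "_"))
  dir.create(wd, recursive = TRUE, showWarnings = FALSE)
  setwd(wd)
  wd
}

## Run once on every worker before dispatching the real job.
dirs <- sfClusterCall(worker_setup, "/path/to/scratch")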

Also, tempfile() in R will create a vector of unique temporary file 
names (one per pattern you pass it), which you could hand out to the 
nodes or append with a node id; see the second sketch below.
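
Something along these lines, again only a sketch with placeholder 
names. Note that names generated on the master live in the master's 
tempdir(), so across machines it may be safer to let each worker call 
tempfile() itself:

library(snowfall)
sfInit(parallel = TRUE, cpus = 4)

## One unique dummy-file name per worker, generated on the master ...
dummy_files <- tempfile(pattern = paste0("qhull_dummy_", seq_len(sfCpus()), "_"))

## ... handed out so each parallel task reads/writes its own file.
res <- sfClusterApply(dummy_files, function(f) {
  ## replace this body with the real call that uses the dummy file 'f'
  f
})

## Alternative: each worker picks a name in its own local tempdir().
local_names <- sfClusterCall(function() tempfile("qhull_dummy_"))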

Brian

On 08/11/2014 03:59 PM, Novack-Gottshall, Philip M. wrote:
> Greetings,
>
> I'm trying to run some code on a cluster in which an internal qhull
> convex-hull function repeatedly writes then scans a dummy file. (Very
> inefficient, I know, but c'est la vie.) The problem is that all nodes
> share the same default working directory, and get confused because they
> are all trying to read/write the same dummy file. (So far as I know,
> there is no way to specify unique file names for the dummy files in the
> internal .C function, especially with my lack of C prowess.)
>
> I've been playing around with having my function (called using
> snowfall's 'sfClusterApply') specify unique working directories (using
> 'setwd'), say one wd for each CPU, so that the individual dummy files
> are set within unique directories. This seems a plausible work-around so
> long as I confirm that I'm matching each wd with the correct CPU process.
>
> I wanted to check whether anyone has any other recommendations before I
> waste my time on further troubleshooting.
>
> If it's relevant, I'm running my code using the snowfall management package
> on a CentOS/OpenMPI Intel cluster with two hyperthreaded 6-core CPUs in each
> of 16 nodes (allowing 382 functional CPUs/unique working directories for
> the job).
>
> Thanks,
> Phil
>


