[R-sig-hpc] Creating different working directories for each node?
Novack-Gottshall, Philip M.
pnovack-gottshall at ben.edu
Mon Aug 11 22:59:55 CEST 2014
Greetings,
I'm trying to run some code on a cluster in which an internal qhull
convex-hull function repeatedly writes and then re-reads a dummy file.
(Very inefficient, I know, but c'est la vie.) The problem is that all
nodes share the same default working directory, so they trip over one
another trying to read and write the same dummy file. (As far as I can
tell, there is no way to specify unique file names for the dummy files
in the internal .C function, especially given my lack of C prowess.)
I've been playing around with having my function (called via snowfall's
'sfClusterApply') set a unique working directory (using 'setwd'), one
per CPU, so that each dummy file lives in its own directory. This seems
a plausible workaround, so long as I confirm that each working directory
is matched to the correct CPU process.
I wanted to check whether anyone has other recommendations before I
waste more time on troubleshooting.
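For concreteness, here is an untested sketch of what I have in mind
(the /scratch/phil path is just a placeholder for whatever shared
filesystem is available):

```r
## Untested sketch: give each worker its own scratch directory, keyed by
## worker index, so the qhull dummy files don't collide across workers.
library(snowfall)
sfInit(parallel = TRUE, cpus = 382, type = "MPI")

## One call per worker: create and enter a private working directory.
## /scratch/phil is a placeholder; substitute a real shared-filesystem path.
sfClusterApply(seq_len(sfCpus()), function(i) {
  wd <- file.path("/scratch/phil", paste0("worker_", i))
  dir.create(wd, recursive = TRUE, showWarnings = FALSE)
  setwd(wd)    # this worker's dummy files now land here
  getwd()      # return the path so I can verify the matching
})

## ... then run the real jobs with sfLapply(); each worker keeps its wd.
sfStop()
```

Since 'sfClusterApply' hands exactly one element of the list to each
worker, returning getwd() lets me confirm that the directory-to-process
matching actually holds.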
If it's relevant, I'm running the code with the snowfall cluster-management
package on a CentOS/OpenMPI Intel cluster with two hyperthreaded 6-core
CPUs in each of 16 nodes (allowing 382 usable CPUs, and hence 382 unique
working directories, for the job).
Thanks,
Phil
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Phil Novack-Gottshall
Associate Professor
Department of Biological Sciences
Benedictine University
5700 College Road
Lisle, IL 60532
pnovack-gottshall at ben.edu
Phone: 630-829-6514
Fax: 630-829-6547
Office: 332 Birck Hall
Lab: 107 Birck Hall
http://www1.ben.edu/faculty/pnovack-gottshall
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~