[R-sig-hpc] creating many separate streams (more streams than nodes)

A.J. Rossini blindglobe at gmail.com
Fri Apr 22 14:32:53 CEST 2011

On Thu, Apr 21, 2011 at 12:49 AM, Paul Johnson <pauljohn32 at gmail.com> wrote:
>>> Paul
>> Parallel random number generators are supposed to behave well in exactly
>> this scenario.
>> Ross
> The rlecuyer package has this theory behind it.  Suppose the random
> stream is like this:
> ------------------------------------------------------------------------------------------------------
> That is long enough that you can divide it into pieces and use them
> separately for separate jobs.
> There are not really 8000 separate generators; there are 8000 chunks
> out of the one long sequence of numbers.
>   for job      1            2            3            4            5
> |____________|____________|____________|____________|__________
> So if you believe the 1 long stream is good, each individual piece is OK.
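> A minimal sketch of this chunking idea, using base R's 'parallel'
> package (R >= 2.14), which provides the same L'Ecuyer-CMRG stream
> construction that rlecuyer wraps -- the function names and the
> 8000-stream setup here are illustrative, not the rlecuyer API.
> Each call to nextRNGStream() jumps ahead a fixed, astronomically
> large number of steps, so the chunks cannot overlap in practice:
>
> ```r
> ## One long L'Ecuyer-CMRG stream, carved into 8000 chunks
> RNGkind("L'Ecuyer-CMRG")
> set.seed(42)                    # seeds the single long stream
>
> n.streams <- 8000
> streams <- vector("list", n.streams)
> streams[[1]] <- .Random.seed
> for (i in 2:n.streams) {
>   ## jump to the start of the next non-overlapping chunk
>   streams[[i]] <- parallel::nextRNGStream(streams[[i - 1]])
> }
>
> ## Job j draws only from chunk j, independently of all other jobs
> run.job <- function(j) {
>   assign(".Random.seed", streams[[j]], envir = .GlobalEnv)
>   runif(3)
> }
> ```
>
> With this setup, run.job(5) returns the same three numbers whether
> it is called first, last, or repeatedly -- which is exactly the
> "each individual piece is OK" property.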
> This is published in the L'Ecuyer paper I mentioned in the first
> post, and so far as I know nobody has torn it apart.
> In the snowFT code that Hana pointed me toward, the way they do this
> is clever; I would have had to fight with it.  On each node,
> initialize the same 8000 streams, then when you run a job, just have
> the function you use grab the appropriate stream.
> As for publications verifying the SPRNG approach, well, there are
> some, but I can't say whether they are credible.  That approach
> spawns the generators with slightly different parameters.  In
> theory, I find it more appealing, but the folks who know the details
> are more dubious about it.  Here's the one definite citation I have:
> @article{srinivasan_testing_2003,
>   title   = {Testing parallel random number generators},
>   journal = {Parallel Computing},
>   volume  = {29},
>   number  = {1},
>   pages   = {69--94},
>   year    = {2003},
>   author  = {Ashok Srinivasan and Michael Mascagni and David Ceperley}
> }
> Since rsprng is in bad shape, I don't know that a person really ought
> to pursue that at the moment.

There are a number of issues here.  First, we have the theory issue
-- parallel pRNG theory is difficult (decent, practically applicable
theory for standard serial pRNGs is hard enough).  What I've seen,
and have tried to wrap my head around, suggests that L'Ecuyer's
streams approach is about as good as it gets, perhaps with
weaknesses, but that is a very technical discussion.  The older SPRNG
stuff is dicey; I didn't really buy their arguments.  I've not seen
the newer stuff.

Then there is the implementation (the streams and SPRNG
implementations).  They also have some issues, but I'm happier with
the streams implementation.

Then there is the integration (either via the SNOW* approach, or
similar).  This is about assigning the parallel streams of pRNG
output to different components in the system: either the compute
node, the compute job, or the serially-generated order of compute-job
execution.  We did the first and third with SNOW, and the second,
which is "the always reproducible way", with snowFT.
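To make the per-job assignment concrete, here is a hedged sketch (not
the snow/snowFT API; the stream setup uses base R's parallel package,
and names like per.job are made up for illustration).  Because the
stream index is the job id, not the worker id, the draws for job j
are fixed no matter which worker runs it or in what order the
scheduler dispatches the jobs:

```r
## Per-job stream assignment: reproducible regardless of scheduling
RNGkind("L'Ecuyer-CMRG")
set.seed(1)
## ten non-overlapping streams from one long L'Ecuyer-CMRG stream
streams <- Reduce(function(s, i) parallel::nextRNGStream(s),
                  1:9, init = .Random.seed, accumulate = TRUE)

per.job <- function(j) {
  ## stream index == job id, independent of worker and of run order
  assign(".Random.seed", streams[[j]], envir = .GlobalEnv)
  runif(1)
}

r1 <- sapply(1:5, per.job)                 # jobs run in order 1..5
sched <- c(3, 1, 5, 2, 4)                  # a scrambled "schedule"
r2 <- sapply(sched, per.job)[order(sched)] # same jobs, different order
```

r1 and r2 are identical, which is why tying streams to jobs rather
than to nodes or to execution order gives reproducibility by design.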

Just as using a serial pRNG with lots of seeds can be a reasonable
way to do things (accidentally right, not right by design), the first
and third ways are usually reproducible, but by accident, not by
design.

blindglobe at gmail.com
Muttenz, Switzerland.
"Commit early,commit often, and commit in a repository from which we
can easily roll-back your mistakes" (AJR, 4Jan05).

Drink Coffee:  Do stupid things faster with more energy!
