[R-sig-hpc] "chunking" parallel tasks

Martin Morgan mtmorgan at fhcrc.org
Tue Jan 26 18:10:01 CET 2010


On 01/26/2010 07:48 AM, Stephen Weston wrote:
> In parallel computing, "chunking" is used to bundle multiple messages
> together, since one large message can usually be sent much more efficiently
> than many small ones.  That means that in systems like "snow" and "foreach",
> if your task executes very quickly, most of the time may be spent moving
> data around, rather than doing useful work.  By bundling many tasks
> together, you might be able to do the communication efficiently enough so
> that you get a benefit from doing the tasks in parallel.  However, if you
> have short tasks and large inputs and/or outputs, chunking won't really
> help you, since it isn't going to make the communication more efficient.
> You need to figure out some way to decrease the amount of data that is
> being moved around.
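
With snow, for example, the difference between sending one element per
message and sending chunks looks roughly like this (an untested sketch;
f here is just a stand-in for a cheap per-element task):

    library(snow)
    cl <- makeCluster(10, type = "SOCK")

    x <- 1:10000
    f <- function(xi) xi^2                 # cheap per-element task

    ## one element per message: 10000 small messages
    res1 <- clusterApply(cl, x, f)

    ## manual chunking: 10 chunks of 1000, one message per chunk
    chunks <- split(x, rep(1:10, each = 1000))
    res2 <- unlist(clusterApply(cl, chunks,
                                function(chunk) sapply(chunk, f)))

    stopCluster(cl)

(snow's parLapply does essentially the second version for you, one
chunk per node.)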

Another, perhaps more common, use is for 'cheap' load balancing, where the
individual tasks take variable amounts of time (e.g., during simulations).
Say each of the first 100 tasks takes 10 times longer than each of the
remaining 900. Dividing the 1000 tasks equally among 10 nodes means that
the entire computation is limited by the first node, which takes
100 x 10 = 1000 time units, versus (900 x 1) / 9 = 100 time units for each
of the remaining 9 nodes. Now choose a smaller chunk size, say 10: each
node starts with 10 of the big tasks and, when those complete, comes back
for more. Ignoring communication, the elapsed time is roughly
(100 x 10) / 10 = 100 time units (the 100 big jobs, each taking 10 time
units, divided among the 10 nodes) plus (900 x 1) / 10 = 90 time units
(the remaining 900 small jobs, each taking 1 time unit, divided among the
10 nodes), about 190 in all instead of 1000.
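
In code, ignoring communication, the two schedules work out to:

    times <- c(rep(10, 100), rep(1, 900))    # 100 big tasks, 900 small ones

    ## static split: 100 consecutive tasks per node; node 1 gets all big ones
    static <- split(times, rep(1:10, each = 100))
    max(sapply(static, sum))                 # 1000 time units

    ## chunks of 10, each handed to the least-loaded node -- a rough model
    ## of nodes coming back for more work as they finish
    chunks <- split(times, rep(1:100, each = 10))
    load <- numeric(10)
    for (ch in chunks) {
        i <- which.min(load)
        load[i] <- load[i] + sum(ch)
    }
    max(load)                                # 190 time units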

Since it is not mentioned explicitly elsewhere in this thread: the Rmpi
mpi.*apply functions have a job.num argument for this purpose.
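
Something like the following (untested, and from memory of the docs;
slow_sim is just a stand-in for the user's simulation function):

    library(Rmpi)
    mpi.spawn.Rslaves(nslaves = 10)

    slow_sim <- function(i) { Sys.sleep(runif(1)); i }   # stand-in task

    ## 1000 tasks sent out as 100 jobs of 10; slaves that finish early
    ## come back for more, as in the load-balancing example above
    res <- mpi.parSapply(1:1000, slow_sim, job.num = 100)

    mpi.close.Rslaves()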

Martin

> 
> The nws package supports "chunking" via the eachElem "chunkSize" element
> of the "eo" argument.  The multicore package supports chunking as an
> all-or-nothing option via the "mc.preschedule" argument to mclapply.  The doMC
> package uses the backend-specific "preschedule" option, which it passes on
> to mclapply via the "mc.preschedule" argument.  The doMPI package uses
> the backend-specific "chunkSize" option to specify any chunk size, much
> like nws.
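
For the archives, a doMPI loop with an explicit chunk size looks roughly
like this (untested), with the multicore equivalent being the
all-or-nothing mc.preschedule mentioned above:

    library(foreach)
    library(doMPI)
    cl <- startMPIcluster(count = 10)
    registerDoMPI(cl)

    ## 10000 tiny tasks shipped to the workers 100 at a time
    res <- foreach(i = 1:10000, .combine = c,
                   .options.mpi = list(chunkSize = 100)) %dopar% sqrt(i)

    closeCluster(cl)

    ## multicore: tasks divided up front among the cores, all or nothing
    res <- multicore::mclapply(1:10000, sqrt, mc.preschedule = TRUE)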
> 
> The iterators and itertools packages contain various functions that create
> iterators that allow you to split up data in chunks, so they support "chunking"
> in their own way.  That allows you to do manual chunking, as I call it, with
> any of the foreach backends.
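
For example (untested, and assuming I have the argument names right),
isplitVector from itertools hands any foreach backend chunks rather than
single elements:

    library(foreach)
    library(itertools)

    x <- runif(10000)

    ## the backend sees 10 tasks, each carrying one piece of x
    res <- foreach(xs = isplitVector(x, chunks = 10), .combine = c) %dopar%
        sapply(xs, function(xi) xi^2)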
> 
> The snow package has some internal functions that split vectors and matrices
> into chunks.  They are used in functions such as parMM, parCapply, and
> parRapply.
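
For instance, parRapply ships each worker one block of rows rather than
one message per row; roughly:

    library(snow)
    cl <- makeCluster(4, type = "SOCK")

    m <- matrix(rnorm(10000 * 10), nrow = 10000)

    ## row sums computed in 4 row-blocks, one per worker
    rs <- parRapply(cl, m, sum)

    stopCluster(cl)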
> 
> - Steve
> 
> 
> On Tue, Jan 26, 2010 at 9:44 AM, Brian G. Peterson <brian at braverock.com> wrote:
>> Mark Kimpel wrote:
>>>
>>> I have seen references on this list to "chunking" parallel tasks. If I am
>>> interpreting this correctly, that is to decrease the overhead of multiple
>>> system calls. For instance, if I  have a loop of 10000 simple tasks and 10
>>> processors, then 10 chunks of 1000 would be processed.
>>>
>>> Which of the parallel packages has the ability to take "chunk" (or its
>>> equivalent) as an argument? I've googled chunk with R and come up with
>>> everything but what I'm interested in.
>>>
>>
>> Google "nesting foreach loops"
>>
>> The foreach package will do what you want.  Steve Weston has posted some
>> examples to this list on this topic as well.
>>
>> Regards,
>>
>>  - Brian
>>


-- 
Martin Morgan
Computational Biology / Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N.
PO Box 19024 Seattle, WA 98109

Location: Arnold Building M1 B861
Phone: (206) 667-2793


