[R-sig-hpc] work splitting method on quadcore CPU
Brian G. Peterson
brian at braverock.com
Tue Aug 11 13:00:33 CEST 2009
Matteo Mattiuzzi wrote:
> Hello!
>
> I'm processing raster images, splitting the work across a quadcore CPU using snowfall.
> The problem is that some areas of the images need a lot of CPU work and others very little. Using "sfInit(parallel=TRUE, cpus=4)", some CPUs have very much work to do (~12 h) while others finish in just a few minutes.
>
> The function I use is "sfLapply()" because it had the best "system.time()" results. I tried "sfClusterApplyLB()", but the CPUs then ran at quite low utilization (~15% each) and it took more time.
>
> With "sfClusterSplit()" the vector is divided into 4 consecutive parts; I'm looking for a different method to divide that vector:
>
> Example: not V1 = 1,2,3; V2 = 4,5,6 but V1 = 1,3,5; V2 = 2,4,6, or dividing at random would be fine too.
>
> Thanks to all, greetings Matteo
>
Split your task into more than 4 pieces: use a vector whose length is
some multiple of 4, so the work is divided into smaller chunks. The
pieces will then be distributed round-robin to the CPUs, which should
minimize the unequal loading.
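As a sketch of that idea (process_piece() below is a hypothetical
stand-in for your per-image worker, not part of snowfall):

```r
library(snowfall)

sfInit(parallel = TRUE, cpus = 4)

# Hypothetical worker; replace with your raster-processing function.
process_piece <- function(i) sum(seq_len(i))   # dummy, uneven workload

pieces <- 1:16   # length is a multiple of 4, with more pieces than CPUs
# snow's clusterApply (used underneath sfLapply) hands elements out
# round-robin, so CPU 1 gets pieces 1, 5, 9, 13; CPU 2 gets 2, 6, 10, 14;
# and so on. Expensive and cheap pieces end up mixed across the CPUs.
res <- sfLapply(pieces, process_piece)

sfStop()
```

If you do want the explicit interleaved split from the example, base R
can build it directly: split(1:6, rep(1:2, length.out = 6)) yields a
list of c(1, 3, 5) and c(2, 4, 6), which you could feed to sfLapply()
one group per element.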
- Brian
--
Brian G. Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock