[R-sig-hpc] Matrix multiplication

Brian G. Peterson brian at braverock.com
Tue Mar 13 13:54:45 CET 2012


> On Mar 13, 2012, at 7:27 AM, Simon Urbanek <simon.urbanek at r-project.org> wrote:
> 
> On Mar 12, 2012, at 5:40 AM, Patrik Waldmann wrote:
> 
> > Dear members,
> > 
> > I noticed that there isn't a function for matrix multiplication in the new parallel library. What would be the most efficient way to do a matrix multiplication there?
> > 
> 
> The parallel package is for *explicit* parallelization. R already does implicit parallelization (using OpenMP or multi-threaded BLAS or both) automatically - this includes matrix multiplication.

On Tue, 2012-03-13 at 10:23 +0100, Patrik Waldmann wrote:
> What does automatically mean? Is X%*%t(X) parallelized?

Matrix multiplication via %*% is handled by the BLAS, as Simon and Claudia
already told you.

So, if your BLAS does multithreaded matrix multiplication, it will use
multiple threads 'implicitly', as Simon pointed out.
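
A quick way to see this for yourself (the size and the toy example here are
just an illustration) is to time a large multiplication and watch a CPU
monitor; with a multithreaded BLAS you will see several cores light up:

  set.seed(1)
  n <- 2000                      # arbitrary example size
  X <- matrix(rnorm(n * n), n, n)
  system.time(X %*% t(X))        # the operation from the original question
  system.time(tcrossprod(X))     # same result as X %*% t(X), one BLAS call

tcrossprod() avoids forming t(X) explicitly, so it is worth using for
X %*% t(X) regardless of how many threads your BLAS uses.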

Because the actual matrix multiplication is carried out by the BLAS, R
doesn't really care how the BLAS does it: it could run on one thread
(non-parallel), on multiple threads (as with GotoBLAS or OpenBLAS
configured that way), or on a GPU (as with the MAGMA BLAS).

'Explicit' parallelization is for taking some other code in R and
explicitly telling R to use a certain number of worker nodes to
accomplish the task.  This type of parallelization is often used for
simulation and optimization, where the block of code to be parallelized
may be very large.
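
A minimal sketch of the explicit style with the parallel package (the worker
count and the toy per-replicate task are made up for illustration):

  library(parallel)
  cl <- makeCluster(4)                         # e.g. 4 workers; adjust to your machine
  res <- parLapply(cl, 1:100, function(i) {
      X <- matrix(rnorm(100 * 100), 100, 100)  # toy work done on each worker
      sum(X %*% t(X))
  })
  stopCluster(cl)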

Be aware that there can be unintended negative interactions between
implicit and explicit parallelization.  On cluster nodes I tend to
configure the BLAS to use only one thread to avoid resource contention
when all cores are doing explicit parallelization.
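
How you pin the BLAS to one thread depends on which BLAS you have; the usual
route (an assumption about your setup) is an environment variable exported
before R starts, for example in the job script:

  ## in the job script, before R is launched:
  ##   export OPENBLAS_NUM_THREADS=1   # OpenBLAS
  ##   export GOTO_NUM_THREADS=1       # GotoBLAS
  ##   export OMP_NUM_THREADS=1        # OpenMP-based BLAS
  ## from within R you can at least confirm what the workers see:
  Sys.getenv(c("OPENBLAS_NUM_THREADS", "GOTO_NUM_THREADS", "OMP_NUM_THREADS"))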


-- 
Brian G. Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock
