[R-sig-hpc] How to configure R to work with GPUs transparently
Norm Matloff
matloff at cs.ucdavis.edu
Fri Mar 19 18:42:25 CET 2010
On Fri, Mar 19, 2010 at 10:25:49AM -0700, Norm Matloff wrote:
> For example, there is the issue of having R sense that one actually does
> have a suitable GPU, and that CUDA (or OpenCL) is installed. One could
> have the innards of lm(), say, do this sensing, but it would mean a
> significant delay. As an alternative, the sensing could be done when R
> starts up, with R then setting some environment variable; I don't see
> any problems with this, but there could be some.
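
Concretely, that startup-time sensing could look something like the
sketch below.  The environment variable name R_HAS_GPU and the test for
the nvidia-smi utility are just illustrations, not anything standard:

   ## in .Rprofile or Rprofile.site, so the (possibly slow) check runs
   ## once per session rather than inside every call to lm() and friends
   .First <- function() {
      ## crude presence test: is the nvidia-smi utility on the PATH?
      has_cuda <- nzchar(Sys.which("nvidia-smi"))
      Sys.setenv(R_HAS_GPU = if (has_cuda) "yes" else "no")
   }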
For those who may not be familiar with GPU programming and who might
wonder if there is a disconnect between my reply and Dirk's, I should
clarify:
I took Brad's posting to mean that he is proposing that many of the more
computation-intensive R functions be extended, so that the code in lm(),
say, would first check whether a GPU and the GPU software are present.
The code would then take different actions in the two cases (present and
absent).  This is in contrast to a situation in which any R code would
automatically use GPUs, which, as Dirk points out, is not possible.
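
To make that concrete, the per-function branching might look roughly
like the following.  The wrapper name myLm() and the R_HAS_GPU variable
are made up for illustration; gpuLm() from the gputools package is one
existing GPU-based fitter that such a wrapper could call:

   myLm <- function(formula, data, ...) {
      if (identical(Sys.getenv("R_HAS_GPU"), "yes"))
         gputools::gpuLm(formula, data = data, ...)  # GPU path
      else
         lm(formula, data = data, ...)               # ordinary CPU path
   }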
By the way, I should also add that while GPUs are great for
"embarrassingly parallel" applications, it's hard to make them work well
for other kinds of parallel apps. See for instance an analysis of parallel
graph algorithms on GPUs at http://arxiv4.library.cornell.edu/abs/1002.4482
Norm