[Rd] R external pointer and GPU memory leak problem

Charles Determan cdetermanjr at gmail.com
Mon May 16 14:42:38 CEST 2016


Hi Yuan,

I think this is likely more appropriate for the r-sig-hpc mailing list.
However, regarding your design and your comment about R's 'current' GPU
package (I am not sure which one you mean; gputools?), I think you should
look at two other packages. I believe gmatrix
(https://cran.r-project.org/web/packages/gmatrix/index.html) implements
exactly what you are trying to do, with NVIDIA-specific code. There is
also the gpuR package
(https://cran.r-project.org/web/packages/gpuR/index.html), which likewise
implements the 'on GPU' object functionality you want, but in OpenCL, so
it works on 'all' GPUs.

If you really want to continue your own development, I strongly recommend
you look into using Rcpp and its XPtr class for external pointers. XPtr
handles the pointer protection and finalizer registration so that you do
not need to worry about them yourself.
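
As a very rough, untested sketch (assuming the same cudaMalloc/cudaFree
calls that appear in your code below; gpuFinalizer and gpuVec are just
placeholder names), your create function written with XPtr could look
something like this:

#include <Rcpp.h>
#include <cuda_runtime.h>

/* run by R's garbage collector when the external pointer is collected */
static void gpuFinalizer(double *ptr)
{
    if (ptr) cudaFree(ptr);
}

/* external pointer type that frees device memory in its finalizer */
typedef Rcpp::XPtr<double, Rcpp::PreserveStorage, gpuFinalizer> gpuVec;

// [[Rcpp::export]]
SEXP createGPU(Rcpp::NumericVector input)
{
    double *x = NULL;
    /* allocate device memory and copy the R vector to the GPU */
    cudaMalloc((void **)&x, input.size() * sizeof(double));
    cudaMemcpy(x, input.begin(), input.size() * sizeof(double),
               cudaMemcpyHostToDevice);
    /* XPtr does the PROTECT/UNPROTECT dance and registers the
       finalizer for us */
    return gpuVec(x, true);
}

One caveat: any finalizer-based scheme, XPtr or hand-rolled, only frees
device memory when R's garbage collector actually runs, and R triggers
collections based on its own heap usage, not on GPU memory pressure.
That is probably why your explicit gc() calls appear to 'fix' the leak:
your finalizer does run, just not soon enough.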

Regards,
Charles

On Sat, May 14, 2016 at 10:43 AM, Yuan Li <i2222222 at hotmail.com> wrote:

> My question is based on a project I have partially done, but there is
> still something I'm not clear about.
>
> My goal is to create an R package that contains GPU functions (some from
> the NVIDIA CUDA libraries, some my own self-defined CUDA functions).
>
> My design is quite different from R's current GPU packages: I want to
> create an R object (an external pointer) that points to a GPU address,
> and run my GPU functions directly on the GPU side without transferring
> data back and forth between CPU and GPU.
>
> I used an R external pointer to implement my design, but I found I have
> a memory leak problem on the GPU side. I can work around it by calling
> the gc() function explicitly on the R side, but I'm wondering if I
> missed something in my C code. Would you please point out my mistake?
> This is my first time writing an R package, and I could have made some
> terrible mistakes.
>
> Actually, I have written a bunch of GPU functions which run on the GPU
> side with the object created by the following create function, but the
> memory leak kills me when I need to deal with a huge dataset.
>
> Here is my create function: I create a GPU pointer x and allocate GPU
> memory for it, then make an R external pointer ext based on x, and copy
> the CPU vector input to my GPU external pointer ext.
>
>
> /*
> define a function to create a vector on the GPU
> by transferring an R vector to the GPU.
> input is an R vector and its length,
> output is an R external pointer
> pointing to the GPU vector (device)
> */
> static void _finalizer(SEXP ext);   /* defined below */
>
> SEXP createGPU(SEXP input, SEXP n)
> {
>     int *len = INTEGER(n);
>     PROTECT(input = AS_NUMERIC(input));
>     double *temp = REAL(input);
>     double *x;    /* here is the step which causes the memory leak */
>     cudacall(cudaMalloc((void **)&x, *len * sizeof(double)));
>     /* protect the R external pointer from garbage collection */
>     SEXP ext = PROTECT(R_MakeExternalPtr(x, R_NilValue, R_NilValue));
>     R_RegisterCFinalizerEx(ext, _finalizer, TRUE);
>
>     /* copy from CPU to GPU */
>     cublascall(cublasSetVector(*len, sizeof(double), temp, 1,
>                                R_ExternalPtrAddr(ext), 1));
>     UNPROTECT(2);
>     return ext;
> }
>
>
>
> Here is the finalizer for my create function:
>
> /*
> define the finalizer for the R external pointer;
> the input is the R external pointer, and the function
> frees the GPU memory when the pointer is no longer in use.
> */
> static void _finalizer(SEXP ext)
> {
>     if (!R_ExternalPtrAddr(ext))
>         return;
>     double *ptr = (double *) R_ExternalPtrAddr(ext);
>     Rprintf("finalizer invoked once\n");
>     cudacall(cudaFree(ptr));
>     R_ClearExternalPtr(ext);
> }
>
>
> My create function runs smoothly, but if I run it too many times my GPU
> device reports out of memory, which clearly implies a memory leak. Can
> anybody help? Thanks a lot in advance!
> ______________________________________________
> R-devel at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>



