[Rd] R external pointer and GPU memory leak problem

Yuan Li i2222222 at hotmail.com
Sat May 14 17:43:48 CEST 2016


My question is based on a project I have partially completed, but there is still something I am not clear about.

My goal is to create an R package that contains GPU functions (some from NVIDIA's CUDA libraries, some self-defined CUDA functions).

My design is quite different from current R GPU packages: I want to create an R object (an external pointer) that points to a GPU address, and run my GPU functions directly on the GPU side without transferring data back and forth between CPU and GPU.

I used an R external pointer to implement this design, but I found I have a memory leak on the GPU side. I can work around it by calling gc() explicitly on the R side, but I wonder whether I missed something in my C code. Would you please point out my mistake? This is my first time writing an R package, and I may have made some terrible mistakes.

Actually, I have written a bunch of GPU functions that run on the GPU side with the object created by the create function below, but the memory leak kills me when I need to deal with huge datasets.

Here is my create function. I create a GPU pointer x and allocate GPU memory for it, then make an R external pointer ext based on x, and copy the CPU input vector to the GPU through ext:


/*
Create a vector on the GPU by transferring an R vector
to the device. Input is an R numeric vector and its length;
output is an R external pointer to the GPU vector (device).
*/
static void _finalizer(SEXP ext);   /* defined below */

SEXP createGPU(SEXP input, SEXP n)
{
	int *len = INTEGER(n);
	PROTECT(input = AS_NUMERIC(input));
	double *temp = REAL(input);

	double *x;   /* here is the step which causes the memory leak */
	cudacall(cudaMalloc((void **)&x, *len * sizeof(double)));

	/* wrap the device pointer and register the finalizer */
	SEXP ext = PROTECT(R_MakeExternalPtr(x, R_NilValue, R_NilValue));
	R_RegisterCFinalizerEx(ext, _finalizer, TRUE);

	/* copy host -> device */
	cublascall(cublasSetVector(*len, sizeof(double), temp, 1,
	                           R_ExternalPtrAddr(ext), 1));
	UNPROTECT(2);
	return ext;
}



Here is the finalizer for my create function:

/*
Finalizer for the R external pointer: frees the device
memory once the pointer is no longer in use.
*/
static void _finalizer(SEXP ext)
{
	if (!R_ExternalPtrAddr(ext))
		return;
	double *ptr = (double *) R_ExternalPtrAddr(ext);
	Rprintf("finalizer invoked once\n");
	cudacall(cudaFree(ptr));
	R_ClearExternalPtr(ext);
}


My create function runs smoothly, but if I call it too many times, the GPU device runs out of memory, which clearly implies a memory leak. Can anybody help? Thanks a lot in advance!


More information about the R-devel mailing list