[Rd] R external pointer and GPU memory leak problem
Simon Urbanek
simon.urbanek at r-project.org
Mon May 16 19:58:54 CEST 2016
Yuan,
AFAICS things are all working as designed. If everything gets collected properly after a gc() then your finalizers are correct. You have to remember that R relies on garbage collection to release memory, so it will only run a garbage collection when R itself requires more memory. The problem with interfaces to external memory (such as a GPU) is that R has no idea that anything large is attached to your tiny external pointer, so as far as R is concerned there is no need to run a garbage collection. Therefore you have to manage your collection points accordingly: your interface should trigger a garbage collection based on its own memory needs, just as R does for the memory it allocates. You may want to add that logic to your allocation code, either by keeping track of the amount of memory you allocate on the GPU side or by running a GC when a new GPU allocation fails.
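
For illustration only, a minimal sketch of such a GC-aware allocation helper could look like the following (gpu_alloc_or_gc is a made-up name for this example; cudaMalloc and cudaGetErrorString are CUDA runtime calls, and R_gc() and Rf_error() are part of R's C API declared via Rinternals.h):

#include <Rinternals.h>
#include <cuda_runtime.h>

/* Allocate n doubles on the device. If the device is out of memory,
   run R's garbage collector so that finalizers on unreachable
   external pointers can free their GPU memory, then retry once. */
static double *gpu_alloc_or_gc(size_t n)
{
    double *ptr = NULL;
    cudaError_t err = cudaMalloc((void **)&ptr, n * sizeof(double));
    if (err == cudaErrorMemoryAllocation) {
        R_gc();   /* runs pending finalizers on collectable external pointers */
        err = cudaMalloc((void **)&ptr, n * sizeof(double));
    }
    if (err != cudaSuccess)
        Rf_error("cudaMalloc failed: %s", cudaGetErrorString(err));
    return ptr;
}

The same idea works proactively: keep a running total of the bytes your package has handed out on the GPU and call R_gc() once that total crosses a threshold of your choosing, before the device actually runs out.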
Cheers,
Simon
On May 14, 2016, at 11:43 AM, Yuan Li <i2222222 at hotmail.com> wrote:
> My question is based on a project I have partially completed, but there is still something I'm not clear about.
>
> My goal is to create an R package that contains GPU functions (some from NVIDIA's CUDA libraries, some are my own CUDA kernels).
>
> My design is quite different from the existing R GPU packages: I want to create an R object (an external pointer) that points to a GPU address, and run my GPU functions directly on the GPU side without transferring data back and forth between CPU and GPU.
>
> I used an R external pointer to implement this design, but I found I have a memory leak problem on the GPU side. I can work around it by calling gc() explicitly on the R side, but I'm wondering whether I missed something in my C code. Would you please point out my mistake? This is my first time writing an R package, and I could easily have made some terrible mistakes.
>
> Actually, I have written a bunch of GPU functions that run on the GPU side with the object created by the create function below, but the memory leak kills me when I need to deal with a huge dataset.
>
> Here is my create function: I declare a device pointer x, allocate GPU memory for it, wrap it in an R external pointer ext, and copy the CPU vector input into the GPU memory behind ext.
>
>
> /*
>  * Create a vector on the GPU by transferring an R vector to the device.
>  * Input: an R numeric vector and its length.
>  * Output: an R external pointer to the GPU (device) vector.
>  */
> SEXP createGPU(SEXP input, SEXP n)
> {
>     int *lenth = INTEGER(n);
>     PROTECT(input = AS_NUMERIC(input));
>     double *temp = REAL(input);
>     double *x;  /* here is the step which causes the memory leak */
>     cudacall(cudaMalloc((void**)&x, *lenth * sizeof(double)));
>
>     /* protect the R external pointer from garbage collection
>        and register the finalizer that frees the device memory */
>     SEXP ext = PROTECT(R_MakeExternalPtr(x, R_NilValue, R_NilValue));
>     R_RegisterCFinalizerEx(ext, _finalizer, TRUE);
>
>     /* copy the data from CPU to GPU */
>     cublascall(cublasSetVector(*lenth, sizeof(double), temp, 1,
>                                R_ExternalPtrAddr(ext), 1));
>     UNPROTECT(2);
>     return ext;
> }
>
>
>
> Here is my finalizer for the create function:
>
> /*
>  * Finalizer for the R external pointer: frees the device memory
>  * when the external pointer is no longer reachable.
>  */
> static void _finalizer(SEXP ext)
> {
>     if (!R_ExternalPtrAddr(ext))
>         return;
>     double *ptr = (double *) R_ExternalPtrAddr(ext);
>     Rprintf("finalizer invoked once\n");
>     cudacall(cudaFree(ptr));
>     R_ClearExternalPtr(ext);
> }
>
>
> My create function runs smoothly, but if I call it too many times the GPU device reports out of memory, which clearly suggests a memory leak. Can anybody help? Thanks a lot in advance!