[R-sig-hpc] Installing gputools fails on Ubuntu 14.04LTS

Charles Determan cdetermanjr at gmail.com
Mon Jul 6 14:18:42 CEST 2015


Erol,

Glad to hear you got it working.  The warnings, I believe, are a consequence
of how the code was written, not of anything you have done.  They mean that
host code is directly reading variables that live on the device, which
should instead be done through CUDA's symbol-copy API.  I believe most of
the functions should still work, but it isn't something to ignore either.
There has been some recent development on the github repo for the package
(https://github.com/nullsatz/gputools), so perhaps the code is being updated
to meet CUDA's standards.  I would submit issues there if you have any
further problems with the functions themselves.
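
To illustrate what nvcc is complaining about, here is a minimal sketch
(variable names are hypothetical, not the actual gputools source): a
`__device__` variable lives in GPU global memory, so host code has to fetch
it with `cudaMemcpyFromSymbol` rather than dereference it directly.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Lives in GPU global memory; host code cannot dereference it directly.
__device__ float dist_d[4];

__global__ void fill(void) { dist_d[threadIdx.x] = 1.0f; }

int main(void) {
    fill<<<1, 4>>>();

    // float x = dist_d[0];  // <-- this is what triggers the nvcc warning

    // Correct: copy the device symbol into host memory first.
    float host_copy[4];
    cudaMemcpyFromSymbol(host_copy, dist_d, sizeof(host_copy));
    printf("%f\n", host_copy[0]);
    return 0;
}
```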

A quick self-plug: if you are just exploring GPU-related applications within
R, you may wish to check out my gpuR (https://github.com/cdeterman/gpuR) and
gpuRcuda (https://github.com/cdeterman/gpuRcuda) packages, which provide
OpenCL and CUDA backends.  My intention with these packages is to make GPU
computing as simple as possible for the R user.  They are still in
development and I need to add some more functions before I release them, but
I would love for these packages to get some exposure so people can submit
what they would like to see implemented and contribute ideas to make them
better.

Regards,
Charles


On Thu, Jul 2, 2015 at 7:13 PM, Erol Biceroglu <
erol.biceroglu at alumni.utoronto.ca> wrote:

> Hello Charles,
>
> Great news: it worked, this is wonderful.  I opened RStudio, loaded the
> library, and ran gpuCor, so it looks like it's working.
>
> There were a lot of warnings in the output (I apologize in advance for the
> large block of text), I'm hoping they're harmless as it appears to be
> working.
>
> Thank you very much for your help.
>
>
>
> * installing to library ‘/home/erol/R/library’
> * installing *source* package ‘gputools’ ...
> checking "CUDA compiler"... "environment variable NVCC not set"
> checking for nvcc... /usr/local/cuda/bin/nvcc
> "using NVCC=/usr/local/cuda/bin/nvcc"
> checking "root of the CUDA install directory"... "environment variable
> CUDA_HOME not set"
> "using CUDA_HOME=/usr/local/cuda"
> checking "location of CUDA libraries"... checking for
> "/usr/local/cuda/lib/libcublas.so"... no
> checking for "/usr/local/cuda/lib64/libcublas.so"... yes
> checking "R"... "using /usr/lib/R for the root of the R install directory"
> "using /usr/lib/R/include for R header files"
> checking for rpath flag style... checking for cc... cc
> checking whether the C compiler works... yes
> checking for C compiler default output file name... a.out
> checking for suffix of executables...
> checking whether we are cross compiling... no
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether cc accepts -g... yes
> checking for cc option to accept ISO C89... none needed
> rpath flag style... gnu
> checking build system type... x86_64-unknown-linux-gnu
> checking host system type... x86_64-unknown-linux-gnu
> configure: creating ./config.status
> config.status: creating src/Makefile
> ** libs
> ** arch -
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" rinterface.cu -o rinterface.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" mi.cu -o mi.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" sort.cu -o sort.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" granger.cu -o granger.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" qrdecomp.cu -o qrdecomp.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" correlation.cu -o correlation.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" hcluster.cu -o hcluster.o
> hcluster.cu(449): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(457): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(468): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(516): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(517): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(518): warning: a __device__ variable "hcluster_min_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(518): warning: a __device__ variable "hcluster_min_col_d"
> cannot be directly read in a host function
>
> hcluster.cu(522): warning: a __device__ variable "hcluster_min_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(523): warning: a __device__ variable "hcluster_min_col_d"
> cannot be directly read in a host function
>
> hcluster.cu(523): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(523): warning: a __device__ variable "hcluster_sub_d" cannot
> be directly read in a host function
>
> hcluster.cu(524): warning: a __device__ variable "hcluster_sup_d" cannot
> be directly read in a host function
>
> hcluster.cu(524): warning: a __device__ variable "hcluster_merge_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(530): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(531): warning: a __device__ variable "hcluster_sub_d" cannot
> be directly read in a host function
>
> hcluster.cu(532): warning: a __device__ variable "hcluster_sup_d" cannot
> be directly read in a host function
>
> hcluster.cu(532): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(532): warning: a __device__ variable "hcluster_merge_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(539): warning: a __device__ variable "hcluster_sub_d" cannot
> be directly read in a host function
>
> hcluster.cu(541): warning: a __device__ variable "hcluster_sup_d" cannot
> be directly read in a host function
>
> hcluster.cu(543): warning: a __device__ variable "hcluster_merge_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(548): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(549): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(550): warning: a __device__ variable "hcluster_min_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(551): warning: a __device__ variable "hcluster_min_col_d"
> cannot be directly read in a host function
>
> hcluster.cu(552): warning: a __device__ variable "hcluster_sub_d" cannot
> be directly read in a host function
>
> hcluster.cu(553): warning: a __device__ variable "hcluster_sup_d" cannot
> be directly read in a host function
>
> hcluster.cu(554): warning: a __device__ variable "hcluster_merge_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(561): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly written in a host function
>
> hcluster.cu(579): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(589): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(636): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(637): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(638): warning: a __device__ variable "hcluster_min_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(638): warning: a __device__ variable "hcluster_min_col_d"
> cannot be directly read in a host function
>
> hcluster.cu(642): warning: a __device__ variable "hcluster_min_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(643): warning: a __device__ variable "hcluster_min_col_d"
> cannot be directly read in a host function
>
> hcluster.cu(643): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(643): warning: a __device__ variable "hcluster_sub_d" cannot
> be directly read in a host function
>
> hcluster.cu(644): warning: a __device__ variable "hcluster_sup_d" cannot
> be directly read in a host function
>
> hcluster.cu(644): warning: a __device__ variable "hcluster_merge_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(650): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(651): warning: a __device__ variable "hcluster_sub_d" cannot
> be directly read in a host function
>
> hcluster.cu(652): warning: a __device__ variable "hcluster_sup_d" cannot
> be directly read in a host function
>
> hcluster.cu(652): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(652): warning: a __device__ variable "hcluster_merge_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(659): warning: a __device__ variable "hcluster_sub_d" cannot
> be directly read in a host function
>
> hcluster.cu(661): warning: a __device__ variable "hcluster_sup_d" cannot
> be directly read in a host function
>
> hcluster.cu(663): warning: a __device__ variable "hcluster_merge_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(668): warning: a __device__ variable "hcluster_dist_d" cannot
> be directly read in a host function
>
> hcluster.cu(669): warning: a __device__ variable "hcluster_count_d"
> cannot be directly read in a host function
>
> hcluster.cu(670): warning: a __device__ variable "hcluster_min_val_d"
> cannot be directly read in a host function
>
> hcluster.cu(671): warning: a __device__ variable "hcluster_min_col_d"
> cannot be directly read in a host function
>
> hcluster.cu(672): warning: a __device__ variable "hcluster_sub_d" cannot
> be directly read in a host function
>
> hcluster.cu(673): warning: a __device__ variable "hcluster_sup_d" cannot
> be directly read in a host function
>
> hcluster.cu(674): warning: a __device__ variable "hcluster_merge_val_d"
> cannot be directly read in a host function
>
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" distance.cu -o distance.o
> distance.cu(829): warning: a __constant__ variable "distance_vg_a_d"
> cannot be directly read in a host function
>
> distance.cu(837): warning: a __constant__ variable "distance_vg_a_d"
> cannot be directly read in a host function
>
> distance.cu(838): warning: a __constant__ variable "distance_vg_a_d"
> cannot be directly read in a host function
>
> distance.cu(838): warning: a __device__ variable "distance_d_d" cannot be
> directly read in a host function
>
> distance.cu(843): warning: a __constant__ variable "distance_vg_b_d"
> cannot be directly read in a host function
>
> distance.cu(848): warning: a __constant__ variable "distance_vg_a_d"
> cannot be directly read in a host function
>
> distance.cu(848): warning: a __constant__ variable "distance_vg_b_d"
> cannot be directly read in a host function
>
> distance.cu(849): warning: a __device__ variable "distance_d_d" cannot be
> directly read in a host function
>
> distance.cu(853): warning: a __device__ variable "distance_d_d" cannot be
> directly read in a host function
>
> distance.cu(858): warning: a __constant__ variable "distance_vg_a_d"
> cannot be directly read in a host function
>
> distance.cu(859): warning: a __constant__ variable "distance_vg_b_d"
> cannot be directly read in a host function
>
> distance.cu(860): warning: a __device__ variable "distance_d_d" cannot be
> directly read in a host function
>
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" matmult.cu -o matmult.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" lsfit.cu -o lsfit.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" kendall.cu -o kendall.o
> /usr/local/cuda/bin/nvcc -c -Xcompiler "-fpic  -g -O2 -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security
> -D_FORTIFY_SOURCE=2 -g" -I. -I"/usr/local/cuda/include"
> -I"/usr/lib/R/include" cuseful.cu -o cuseful.o
> cuseful.cu(55): warning: result of call is not used
>
> cuseful.cu(55): warning: result of call is not used
>
> cuseful.cu: In function ‘float* getMatFromFile(int, int, const char*)’:
> cuseful.cu:55:24: warning: ignoring return value of ‘int fscanf(FILE*,
> const char*, ...)’, declared with attribute warn_unused_result
> [-Wunused-result]
>    fscanf(matFile, " \n ");
>                         ^
> /usr/local/cuda/bin/nvcc -shared -Xlinker -rpath="/usr/local/cuda/lib64"
>  -L"/usr/local/cuda/lib64" -lcublas  rinterface.o mi.o sort.o granger.o
> qrdecomp.o correlation.o hcluster.o distance.o matmult.o lsfit.o kendall.o
> cuseful.o -o gputools.so
> installing to /home/erol/R/library/gputools/libs
> ** R
> ** preparing package for lazy loading
> ** help
> *** installing help indices
> ** building package indices
> ** testing if installed package can be loaded
> * DONE (gputools)
>
>
>
> Erol Biceroglu
>
>
> *erol.biceroglu at alumni.utoronto.ca <erol.biceroglu at alumni.utoronto.ca>*
>
> On Thu, Jul 2, 2015 at 8:08 AM, Charles Determan <cdetermanjr at gmail.com>
> wrote:
>
>> Are you zipping the extracted package back up?  It doesn't look like it,
>> as your output doesn't show any compilation by the nvcc compiler.  You
>> don't need to run the configure script yourself; R will run it for you if
>> it is present.  Try the following:
>>
>> 1. Move gputools_0.5.tar.gz to some backup directory (you don't want it)
>> 2. Switch to the directory with the gputools directory and run `tar czf
>> gputools.tar.gz gputools` to compress it (assuming that is where you
>> created the 'configure' file)
>> 3. Then try to install the newly zipped source file `R CMD INSTALL
>> gputools.tar.gz`
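
The three steps above can be sketched as shell commands (the stand-in
directory contents here are illustrative placeholders so the snippet is
self-contained; in the real case you already have a gputools/ tree from git):

```shell
# Stand-ins so this sketch runs anywhere; replace with the real tree.
mkdir -p gputools_backup gputools/src
touch gputools/src/Makefile.in gputools_0.5.tar.gz

# 1. Move the stale tarball to a backup directory.
mv gputools_0.5.tar.gz gputools_backup/

# 2. Re-compress the source directory (configure script included).
tar czf gputools.tar.gz gputools

# 3. Install from the freshly zipped source (requires R and CUDA):
# R CMD INSTALL gputools.tar.gz
```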
>>
>> Charles
>>
>> On Wed, Jul 1, 2015 at 7:29 PM, Erol Biceroglu <
>> erol.biceroglu at alumni.utoronto.ca> wrote:
>>
>>> Hi Gary,
>>>
>>> I've run the lines of code
>>>
>>> *sudo aptitude install r-base-dev*
>>> *cd /usr/lib/R*
>>> *sudo ln -s /usr/share/R/include .*
>>>
>>> which upgraded my R to 3.2.1.
>>>
>>> I then attempted to install with command:
>>>
>>> *R CMD INSTALL gputools_0.5.tar.gz*
>>>
>>> which yields the same error.
>>>
>>> ....[I omitted many lines of output since it's the same as before]
>>> *Error in library.dynam(lib, package, package.lib) : *
>>> *  shared object ‘gputools.so’ not found*
>>>
>>> Thanks for your time and effort on this, very much appreciate it.
>>>
>>>
>>> Erol Biceroglu
>>>
>>>
>>> *erol.biceroglu at alumni.utoronto.ca <erol.biceroglu at alumni.utoronto.ca>*
>>>
>>> On Wed, Jul 1, 2015 at 6:05 PM, gartim <gartim at genepi.berkeley.edu>
>>> wrote:
>>>
>>> > check this also
>>> > http://superuser.com/questions/568349/how-to-install-gputools-in-r
>>> > -- gary
>>> >
>>> > On Wed, Jul 01, 2015 at 04:31:15PM -0400, Erol Biceroglu wrote:
>>> > >Hello,
>>> > >
>>> > >Thank you both for the help.
>>> > >
>>> > >The short answer is that it's still not working, with the same error
>>> > >message.  Here are the actions I took.
>>> > >
>>> > >1) So in my /gputools folder I see a 'configure.ac' file.
>>> > >
>>> > >I opened it in a text editor, and changed all the AC_HELP_STRING lines
>>> > >with the appropriate paths (lines 6, 18, 42, 55, 64, respectively):
>>> > >
>>> > >  AC_HELP_STRING([--with-nvcc=/usr/local/cuda/bin/nvcc],
>>> > >  AC_HELP_STRING([--with-cuda=/usr/local/cuda],
>>> > >  AC_HELP_STRING([--with-r=/usr/lib/R],
>>> > >  AC_HELP_STRING([--with-r-include=/usr/share/R/include],
>>> > >  AC_HELP_STRING([--with-r-lib=/usr/lib/R/lib],
>>> > >
>>> > >saved the file...
>>> > >
>>> > >2)
>>> > >So then I ran
>>> > >*autoconf configure.ac > configure*
>>> > >and then
>>> > >*chmod +x configure*
>>> > >
>>> > >I get a "configure" file in my /gputools folder.
>>> > >
>>> > >3)
>>> > >I then ran the executable by typing:
>>> > >*./configure*
>>> > >
>>> > >and then got the following output:
>>> > >
>>> > >*checking "CUDA compiler"... "environment variable NVCC not set"*
>>> > >*checking for nvcc... /usr/local/cuda/bin/nvcc*
>>> > >*"using NVCC=/usr/local/cuda/bin/nvcc"*
>>> > >*checking "root of the CUDA install directory"... "environment variable
>>> > >CUDA_HOME not set"*
>>> > >*"using CUDA_HOME=/usr/local/cuda"*
>>> > >*checking "location of CUDA libraries"... checking for
>>> > >"/usr/local/cuda/lib/libcublas.so"... no*
>>> > >*checking for "/usr/local/cuda/lib64/libcublas.so"... yes*
>>> > >*checking "R"... "using /usr/lib/R for the root of the R install
>>> > >directory"*
>>> > >*"using /usr/lib/R/include for R header files"*
>>> > >*checking for rpath flag style... checking for cc... cc*
>>> > >*checking whether the C compiler works... yes*
>>> > >*checking for C compiler default output file name... a.out*
>>> > >*checking for suffix of executables... *
>>> > >*checking whether we are cross compiling... no*
>>> > >*checking for suffix of object files... o*
>>> > >*checking whether we are using the GNU C compiler... yes*
>>> > >*checking whether cc accepts -g... yes*
>>> > >*checking for cc option to accept ISO C89... none needed*
>>> > >*rpath flag style... gnu*
>>> > >*checking build system type... x86_64-unknown-linux-gnu*
>>> > >*checking host system type... x86_64-unknown-linux-gnu*
>>> > >*configure: creating ./config.status*
>>> > >*config.status: creating src/Makefile*
>>> > >
>>> > >4)
>>> > >so then, I go back to my home directory and run:
>>> > >* R CMD INSTALL gputools_0.5.tar.gz*
>>> > >
>>> > >and then I get:
>>> > >
>>> > >* installing to library ‘/home/erol/R/library’
>>> > >* installing *source* package ‘gputools’ ...
>>> > >** libs
>>> > >Warning: no source files found
>>> > >** R
>>> > >** preparing package for lazy loading
>>> > >** help
>>> > >*** installing help indices
>>> > >** building package indices
>>> > >** testing if installed package can be loaded
>>> > >Error in library.dynam(lib, package, package.lib) :
>>> > >  shared object ‘gputools.so’ not found
>>> > >Error: loading failed
>>> > >Execution halted
>>> > >ERROR: loading failed
>>> > >* removing ‘/home/erol/R/library/gputools’
>>> > >
>>> > >My apologies if I've missed something or misinterpreted anything.
>>> > >
>>> > >I do want to add that I re-ran *deviceQuery* and it passed, as well as
>>> > >*bandwidthTest*, which also passed.  Let me know if the output of the
>>> > >two tests would be helpful.
>>> > >
>>> > >Thanks very much for your advice and help.
>>> > >
>>> > >Regards,
>>> > >
>>> > >
>>> > >Erol Biceroglu
>>> > >
>>> > >
>>> > >*erol.biceroglu at alumni.utoronto.ca <erol.biceroglu at alumni.utoronto.ca>*
>>> > >
>>> > >On Wed, Jul 1, 2015 at 2:37 PM, Charles Determan <cdetermanjr at gmail.com>
>>> > >wrote:
>>> > >
>>> > >> If you download the directory from the github repo you aren't provided
>>> > >> with a 'configure' file, only the 'configure.ac' file (which is in the
>>> > >> src/ directory of your gputools directory).  As such, none of the
>>> > >> compilation instructions, which are rather complex for CUDA, are being
>>> > >> passed to your compiler, which is what creates the gputools.so file
>>> > >> you are looking for.  You first need to use the 'autoconf' program to
>>> > >> create the 'configure' file.
>>> > >>
>>> > >> autoconf configure.ac > configure
>>> > >> # make executable
>>> > >> chmod +x configure
>>> > >>
>>> > >> Then try it again, report back if you have further problems.
>>> > >>
>>> > >> Regards,
>>> > >>
>>> > >> Charles
>>> > >>
>>> > >> On Wed, Jul 1, 2015 at 12:25 PM, Erol Biceroglu <
>>> > >> erol.biceroglu at alumni.utoronto.ca> wrote:
>>> > >>
>>> > >>> Hello,
>>> > >>>
>>> > >>> I'm trying to install gputools on Ubuntu 14.04LTS and I'm not having
>>> > >>> much luck.  I'm not sure if it helps, but here's the R info that's
>>> > >>> output when I run it:
>>> > >>>
>>> > >>>
>>> > >>> R version 3.2.0 (2015-04-16) -- "Full of Ingredients"
>>> > >>> Copyright (C) 2015 The R Foundation for Statistical Computing
>>> > >>> Platform: x86_64-pc-linux-gnu (64-bit)
>>> > >>>
>>> > >>>
>>> > >>> Here are the steps I've taken so far:
>>> > >>> *1) Run the following in the terminal:*
>>> > >>>
>>> > >>> *git clone https://github.com/nullsatz/gputools.git*
>>> > >>>
>>> > >>> -This creates a "gputools" folder in my /home directory
>>> > >>>
>>> > >>> *2)  Then run the following in the terminal:*
>>> > >>>
>>> > >>> *R CMD build gputools*
>>> > >>>
>>> > >>> -This creates the gputools_0.5.tar.gz in my home folder
>>> > >>>
>>> > >>> *3) Then I run the following command (which is causing issues)*
>>> > >>>
>>> > >>> *R CMD INSTALL --configure-args="--with-nvcc=/usr/local/cuda/bin/nvcc
>>> > >>> --with-r-lib=/usr/lib/R/lib --with-r=/usr/lib/R/ " gputools_0.5.tar.gz*
>>> > >>>
>>> > >>> and I get the following output:
>>> > >>>
>>> > >>> * installing to library ‘/home/erol/R/library’
>>> > >>> * installing *source* package ‘gputools’ ...
>>> > >>> ** libs
>>> > >>> Warning: no source files found
>>> > >>> ** R
>>> > >>> ** preparing package for lazy loading
>>> > >>> ** help
>>> > >>> *** installing help indices
>>> > >>> ** building package indices
>>> > >>> ** testing if installed package can be loaded
>>> > >>> Error in library.dynam(lib, package, package.lib) :
>>> > >>>   shared object ‘gputools.so’ not found
>>> > >>> Error: loading failed
>>> > >>> Execution halted
>>> > >>> ERROR: loading failed
>>> > >>> * removing ‘/home/erol/R/library/gputools’
>>> > >>> * restoring previous ‘/home/erol/R/library/gputools’
>>> > >>>
>>> > >>> I've checked the paths, and found my '*libR.so*' tucked away all
>>> > >>> alone in */usr/lib/R/lib*, but I don't know where *gputools.so* is
>>> > >>> and I can't find it by searching.
>>> > >>>
>>> > >>> Any feedback on how to proceed would be greatly appreciated.
>>> > >>>
>>> > >>> Thanks for your help.
>>> > >>>
>>> > >>> Regards,
>>> > >>>
>>> > >>> Erol Biceroglu
>>> > >>>
>>> > >>>
>>> > >>> *erol.biceroglu at alumni.utoronto.ca <erol.biceroglu at alumni.utoronto.ca>*
>>> > >>>
>>> > >>>         [[alternative HTML version deleted]]
>>> > >>>
>>> > >>> _______________________________________________
>>> > >>> R-sig-hpc mailing list
>>> > >>> R-sig-hpc at r-project.org
>>> > >>> https://stat.ethz.ch/mailman/listinfo/r-sig-hpc
>>> > >>
>>> > >>
>>> > >>
>>> > >
>>> >
>>>
>>>
>>
>>
>



