[R] [EXT] Mac ARM for lm() ?

Jeff Newmiller jdnewmil at dcn.davis.ca.us
Sat Nov 16 14:55:26 CET 2024


Mostly yes. GPU computing is much less flexible than CPU computing. Sometimes a small algorithmic adjustment, such as the one Martin suggested, is enough. Also, if you divide your work into a smallish number of chunks, you can benefit from the parallel package built into R.
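A minimal sketch of that chunking idea (illustrative names and toy data, not from your problem; pre-chunking avoids scheduling millions of tiny tasks individually, and .lm.fit() skips lm()'s formula machinery):

```r
library(parallel)
set.seed(42)

## toy data: 1000 independent regressions, ~60 observations each
sims <- replicate(1000, {
  x <- rnorm(60)
  list(x = x, y = 2 + 3 * x + rnorm(60))
}, simplify = FALSE)

fit_chunk <- function(chunk) {
  ## .lm.fit() is much faster than lm() for bare-bones coefficient fits
  t(vapply(chunk,
           function(d) .lm.fit(cbind(1, d$x), d$y)$coefficients,
           numeric(2)))
}

nc     <- detectCores()
chunks <- split(sims, cut(seq_along(sims), nc, labels = FALSE))
coefs  <- do.call(rbind, mclapply(chunks, fit_chunk, mc.cores = nc))
dim(coefs)  # one (intercept, slope) row per regression
```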

That said, Google sez [1], but I haven't used it myself. It seems highly specialized, requires that you set up Python (a nontrivial task if you don't already know how to configure Python), and only works with Nvidia hardware.
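From the package's documentation, the idea appears to be a drop-in matrix class; an untested sketch (I have not run this, and it assumes gpu.matrix() and its overloaded solve()/crossprod() methods as the docs describe, plus working GPU hardware):

```r
## Untested sketch of the GPUmatrix idea from [1] -- needs a configured
## torch/tensorflow backend and supported GPU hardware to actually run.
library(GPUmatrix)

X <- gpu.matrix(cbind(1, rnorm(60)))          # design matrix on the device
y <- gpu.matrix(matrix(rnorm(60), ncol = 1))  # response on the device

## normal equations solved on the GPU; crossprod() and solve() are
## overloaded for gpu.matrix objects per the package docs
beta <- solve(crossprod(X), crossprod(X, y))
```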

[1] https://cran.r-project.org/package=GPUmatrix


On November 15, 2024 4:07:15 PM PST, ivo welch <ivo.welch using ucla.edu> wrote:
>Thanks, and all well taken.  But are my beautiful GPUs (with integrated
>memory architecture) really nothing more than a cooling area for the chip?
>
>On Fri, Nov 15, 2024 at 6:06 AM Martin Maechler <maechler using stat.math.ethz.ch>
>wrote:
>
>> >>>>> Andrew Robinson via R-help
>> >>>>>     on Thu, 14 Nov 2024 12:45:44 +0000 writes:
>>
>>     > Not a direct answer but you may find lm.fit worth
>>     > experimenting with.
>>
>> Yes, lm.fit() is already faster, and
>>     .lm.fit() {added to base R by me, when a similar question
>>     was asked years ago ...}
>>     is even an order of magnitude faster  in some cases.
>>
>> See ?lm.fit
>> and notably
>>
>> example(lm.fit)
>>
>> which uses pkg microbenchmark for timing and  after which
>>
>>    png("lmfit-ex.png")
>>    boxplot(mb, notch=TRUE)
>>    dev.off()
>>
>> produces the attached nice image.
>>
>>     > Also try the high-performance computing task view on CRAN
>>
>>     > Cheers,
>>     > Andrew
>>
>>     > --
>>     > Andrew Robinson Chief Executive Officer, CEBRA and
>>     > Professor of Biosecurity, School/s of BioSciences and
>>     > Mathematics & Statistics University of Melbourne, VIC 3010
>>     > Australia Tel: (+61) 0403 138 955 Email:
>>     > apro using unimelb.edu.au Website:
>>     > https://researchers.ms.unimelb.edu.au/~apro@unimelb/
>>
>>     > I acknowledge the Traditional Owners of the land I
>>     > inhabit, and pay my respects to their Elders.  On 14 Nov
>>     > 2024 at 1:13 PM +0100, Ivo Welch <ivo.welch using gmail.com>,
>>     > wrote: External email: Please exercise caution
>>
>>     > I have found more general questions, but I have a specific
>>     > one. I have a few million (independent) short regressions
>>     > that I would like to run (each reg has about 60
>>     > observations, though they can have missing observations
>>     > [yikes]). So, I would like to be running as many `lm` and
>>     > `coef(lm)` in parallel as possible. my hardware is Mac,
>>     > with nice GPUs and integrated memory --- and so far
>>     > completely useless to me. `mclapply` is obviously very
>>     > useful, but I want more, more, more cores.
>>
>>     > is there a recommended plug-in library to speed up just
>>     > `lm` by also using the GPU cores?
>>
>>
>>
>
>
>______________________________________________
>R-help using r-project.org mailing list -- To UNSUBSCRIBE and more, see
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide https://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.

-- 
Sent from my phone. Please excuse my brevity.
