[R-SIG-Mac] Compiler options for R binary

Amos B. Elberg amos.elberg at gmail.com
Thu Nov 20 17:42:33 CET 2014


I got a speed bump from recompiling on Mac. The CRAN version is built with an LLVM that doesn't support OpenMP. Apart from the benefits of the choice of BLAS, the benefit from just recompiling is real but not huge: on the order of 10-15%. I set -mtune and the related flags to "auto."
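For concreteness, here is a minimal sketch of the kind of config.site one might drop into the R source tree before running ./configure for such a recompile. The specific flag values and compiler choices are illustrative assumptions, not the settings used for the figures above:

```shell
# Hypothetical config.site for building R from source on a Mac.
# Flag values are illustrative; adjust for your toolchain.
CC="clang"
CFLAGS="-O3 -mtune=native"      # tune for the build machine instead of core2
CXX="clang++"
CXXFLAGS="-O3 -mtune=native"
FC="gfortran"
FFLAGS="-O3 -mtune=native"
```

Then the usual `./configure && make && make check`; configure's `--with-blas` and `--with-lapack` options are how an optimized BLAS gets picked up at build time.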

Optimization can also be pushed beyond the CRAN settings without breaking make check, and there are compiler optimizations that provide a larger boost but break odd parts of make check; I'm not sure those actually break any functional part of R.

Overall, apart from the BLAS, the speed change is modest enough that I really wonder whether the game is worth the candle.
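One way to sanity-check whether a rebuild paid off is to time the same BLAS-heavy operation under each build. This is a generic benchmark sketch under the assumption that each build's Rscript is on the PATH; it is not the measurement behind the 10-15% figure above:

```shell
# Run the identical snippet under each R build and compare elapsed times.
# Point Rscript at the build being tested (e.g. /usr/local/bin/Rscript
# for a self-compiled R vs. the CRAN binary's Rscript).
Rscript -e 'set.seed(1)
  a <- matrix(rnorm(2000 * 2000), 2000)
  print(system.time(crossprod(a)))'
```

A dense crossprod spends nearly all its time in the BLAS, so differences here mostly reflect the BLAS choice; pure-R workloads are where the compiler flags themselves would show up.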


> On Nov 20, 2014, at 11:17 AM, Braun, Michael <braunm at mail.smu.edu> wrote:
> 
> I run R on a recent Mac Pro (Ivy Bridge architecture), and before that, on a 2010-version (Nehalem architecture).  For the last few years I have been installing R by compiling from source.  The reason is that I noticed in the etc/Makeconf file that the precompiled binary is compiled with the -mtune=core2 option.  I had thought that since my system uses a processor with a more recent architecture and instruction set, I would be leaving performance on the table by using the binary.
> 
> My self-compiled R has worked well for me, for the most part. But sometimes little things pop up, like difficulty using RStudio, an occasional permissions problem related to the Intel BLAS, etc.  And there is a time investment in installing R this way.  So even though I want to exploit as much of the computing power on my desktop as I can, now I am questioning whether self-compiling R is worth the effort.
> 
> My questions are these:
> 
> 1.  Am I correct that the R binary for Mac is tuned to Core2 architecture?  
> 2.  In theory, should tuning the compiler for Sandy Bridge (SSE4.2, AVX instructions, etc) generate a faster R?
> 3.  Has anyone tested the theory in Item 2?
> 4.  Is the reason for setting -mtune=core2 to support older machines?  If so, are enough people still using pre-Nehalem 64-bit Macs to justify this?
> 5.  What would trigger a decision to start tuning the R binary for a more advanced processor?
> 6.  What are some other implications of either self-compiling or using the precompiled binary that I might need to consider?  
> 
> tl;dr:  My Mac Pro has an Ivy Bridge processor.  Is it worthwhile to compile R myself, instead of using the binary?
> 
> Thanks,
> 
> Michael
> 
> 
> --------------------------
> Michael Braun
> Associate Professor of Marketing
> Cox School of Business
> Southern Methodist University
> Dallas, TX 75275
> braunm at smu.edu
> 
> _______________________________________________
> R-SIG-Mac mailing list
> R-SIG-Mac at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-mac
