[R-SIG-Mac] Compiler options for R binary

Simon Urbanek simon.urbanek at r-project.org
Fri Nov 21 02:59:21 CET 2014


On Nov 20, 2014, at 11:17 AM, Braun, Michael <braunm at mail.smu.edu> wrote:

> I run R on a recent Mac Pro (Ivy Bridge architecture), and before that on a 2010 model (Nehalem architecture).  For the last few years I have been installing R by compiling from source.  The reason is that I noticed in the etc/Makeconf file that the precompiled binary is compiled with the -mtune=core2 option.  I had thought that since my system uses a processor with a more recent architecture and instruction set, I would be leaving performance on the table by using the binary.
> 
> My self-compiled R has worked well for me, for the most part. But sometimes little things pop up, like difficulty using RStudio, an occasional permissions problem related to the Intel BLAS, etc.  And there is a time investment in installing R this way.  So even though I want to exploit as much of the computing power of my desktop as I can, I am now questioning whether self-compiling R is worth the effort.
> 
> My questions are these:
> 
> 1.  Am I correct that the R binary for Mac is tuned to Core2 architecture?  
> 2.  In theory, should tuning the compiler for Sandy Bridge (SSE4.2, AVX instructions, etc) generate a faster R?

In theory, yes, but in practice often the opposite is true (in particular for AVX).
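
To check what a given R installation was actually built with, you can inspect its etc/Makeconf from within R itself; a minimal sketch:

    # show any lines of the running R's Makeconf that mention -mtune
    # (R.home("etc") resolves to the etc/ directory of the current installation)
    grep("mtune", readLines(file.path(R.home("etc"), "Makeconf")), value = TRUE)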


> 3.  Has anyone tested the theory in Item 2?
> 4.  Is the reason for setting -mtune=core2 to support older machines?  If so, are enough people still using pre-Nehalem 64-bit Macs to justify this?

Only partially. In fact, the flags are there explicitly to raise the tuning level - the default is even lower. Last time I checked, there were no significant benefits to compiling with more aggressive flags anyway. (If you want to go there, Jan De Leeuw used to publish the most aggressive flags possible.) The one change that typically does yield a gain - relaxing strict math-operation compatibility (e.g., with -ffast-math) - is not an option, because you start getting wrong results from math ops.

You also have to be very careful with benchmarking: from experience, optimizations often yield speed-ups in some areas but introduce slowdowns in others, so it's not always a gain (one example on the extreme end is AVX: when enabled, some ops can even take twice as long, believe it or not...), and even the gains are typically in the single-digit percent range.
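
If you do benchmark, repeat each measurement and compare medians across builds rather than trusting a single run; a rough sketch (the kernel and sizes here are arbitrary placeholders):

    # time a floating-point-heavy kernel several times; single runs are noisy
    x <- runif(1e7)
    elapsed <- replicate(10, system.time(sum(log(x)))["elapsed"])
    median(elapsed)  # compare this across differently compiled builds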


> 5.  What would trigger a decision to start tuning the R binary for a more advanced processor?
> 6.  What are some other implications of either self-compiling or using the precompiled binary that I might need to consider?  
> 

When you compile from sources, you're entirely on your own and you have to take care of all dependencies (libraries) and compilation yourself. Most Mac users don't want to go there since they typically prefer to spend their time elsewhere ;).

BTW: if you really care about speed, the real gains come from using a parallel BLAS, the Intel OpenMP runtime, and enabling R's built-in threading support.
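
A dense matrix multiply is essentially pure BLAS (dgemm), so timing one before and after switching BLAS is a quick way to see the difference on your own machine; a rough sketch (the size is an arbitrary choice):

    # a 2000x2000 multiply spends nearly all its time in the BLAS,
    # so the elapsed time directly reflects the BLAS implementation
    n <- 2000
    A <- matrix(rnorm(n * n), n, n)
    system.time(B <- A %*% A)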

Cheers,
Simon


> tl;dr:  My Mac Pro has an Ivy Bridge processor.  Is it worthwhile to compile R myself, instead of using the binary?
> 
> Thanks,
> 
> Michael
> 
> 
> --------------------------
> Michael Braun
> Associate Professor of Marketing
> Cox School of Business
> Southern Methodist University
> Dallas, TX 75275
> braunm at smu.edu
> 


