[R-SIG-Mac] Compiler options for R binary

Rainer M Krug Rainer at krugs.de
Sat Nov 22 16:57:57 CET 2014


Simon Urbanek <simon.urbanek at r-project.org> writes:

> On Nov 21, 2014, at 3:47 AM, Rainer M Krug <Rainer at krugs.de> wrote:
>> 
>> Simon Urbanek <simon.urbanek at r-project.org> writes:
>> 
>>> On Nov 20, 2014, at 11:17 AM, Braun, Michael <braunm at mail.smu.edu> wrote:
>>> 
>>>> I run R on a recent Mac Pro (Ivy Bridge architecture) and, before
>>>> that, on a 2010 model (Nehalem architecture).  For the last few
>>>> years I have been installing R by compiling from source.  The reason
>>>> is that I noticed in the etc/Makeconf file that the precompiled
>>>> binary is built with the -mtune=core2 option.  I had thought that,
>>>> since my system uses a processor with a more recent architecture and
>>>> instruction set, I would be leaving performance on the table by
>>>> using the binary.
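
For reference, the flags a given R binary was built with can be checked
without opening Makeconf by hand (the path below assumes the standard
CRAN framework install; a sketch, not the only way):

    R CMD config CC
    R CMD config CFLAGS
    # or grep the file directly:
    grep -E 'mtune|march' /Library/Frameworks/R.framework/Resources/etc/Makeconf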
>>>> 
>>>> My self-compiled R has worked well for me, for the most part.  But
>>>> sometimes little things pop up, like difficulty using RStudio or an
>>>> occasional permissions problem related to the Intel BLAS.  And
>>>> there is a time investment in installing R this way.  So even though
>>>> I want to exploit as much of the computing power of my desktop as I
>>>> can, I am now questioning whether self-compiling R is worth the
>>>> effort.
>>>> 
>>>> My questions are these:
>>>> 
>>>> 1.  Am I correct that the R binary for Mac is tuned to Core2 architecture?  
>>>> 2.  In theory, should tuning the compiler for Sandy Bridge (SSE4.2, AVX instructions, etc) generate a faster R?
>>> 
>>> In theory, yes, but in practice the opposite is often true (in
>>> particular for AVX).
>>> 
>>> 
>>>> 3.  Has anyone tested the theory in Item 2?
>>>> 4.  Is the reason for setting -mtune=core2 to support older
>>>> machines?  If so, are enough people still using pre-Nehalem 64-bit
>>>> Macs to justify this?
>>> 
>>> Only partially. In fact, the flags are there explicitly to increase
>>> the tuning level - the default is even lower. Last time I checked,
>>> there were no significant benefits to compiling with more aggressive
>>> flags anyway. (If you want to go there, Jan De Leeuw used to publish
>>> the most aggressive flags possible.) You cannot relax math-op
>>> compatibility - the one piece that typically yields a gain - because
>>> you start getting wrong results. You also have to be very careful
>>> with benchmarking: from experience, optimizations often yield
>>> speed-ups in some areas but introduce slowdowns in others - it's not
>>> always a net gain (one example on the extreme end is AVX: when
>>> enabled, some ops can take twice as long, believe it or not...), and
>>> even the gains are typically in the single-digit percent range.
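
To make the benchmarking caveat concrete, a minimal before/after timing
sketch (the workload and flag choices are illustrative only): build R
twice, once with the default flags and once with, say,
CFLAGS='-O3 -march=native' set in config.site, then run the identical
workload under each build:

    # same fixed workload, timed under each build:
    Rscript -e 'set.seed(1); m <- matrix(rnorm(1e6), 1000); print(system.time(for (i in 1:20) crossprod(m)))'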
>>> 
>>> 
>>>> 5.  What would trigger a decision to start tuning the R binary for a more advanced processor?
>>>> 6.  What are some other implications of either self-compiling or
>>>> using the precompiled binary that I might need to consider?
>>>> 
>>> 
>>> When you compile from source, you're entirely on your own and you
>>> have to take care of all the dependencies (libraries) and the
>>> compilation yourself. Most Mac users don't want to go there, since
>>> they typically prefer to spend their time elsewhere ;).
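
For anyone who does want to go there, the core build itself is short
even if the dependency hunt is not (the version number and flags below
are placeholders; --enable-R-shlib builds the shared library that
RStudio typically needs):

    curl -O https://cran.r-project.org/src/base/R-3/R-3.1.2.tar.gz
    tar xzf R-3.1.2.tar.gz && cd R-3.1.2
    ./configure --enable-R-shlib CFLAGS='-O2 -mtune=native'
    make && make check
    sudo make install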
>> 
>> I have to mention homebrew [1] here - by tuning the recipe used to
>> install R, one could (I guess) adjust the compiler options and
>> recompile without any fuss. The R installation with homebrew worked
>> for me out of the box, and re-compilation and installation is a
>> single command.
>> 
>> The recipes are simple ruby scripts and can easily be changed.
>> 
>> OK - I come from a Linux background, but I like the homebrew approach
>> and it works flawlessly for me.
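
In concrete terms, the tune-and-rebuild cycle looks something like this
(the tap and formula name reflect where the R recipe currently lives;
treat this as a sketch rather than a tested procedure):

    brew tap homebrew/science
    brew edit r                      # formulae are plain ruby; adjust flags here
    brew reinstall --build-from-source r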
>> 
>
> As others have said - if you don't mind the crashes, then it's ok.

Well - I use R via ESS and almost never through the GUI, so I can't
say anything from that side, but I have had no crashes of R since
switching to homebrew - though I might just be lucky.

> I actually like homebrew - it's good for small tools when you're in a
> pinch, but it doesn't tend to work well for complex things like R (or
> any package that has many options). Also, as I said, you'll have to
> take care of packages and dependencies yourself - not impossible, but
> certainly extra work.

> However, if you don't mind getting your hands dirty, then I would
> recommend Homebrew over the alternatives.

As I said, I come from the Linux side of things (though I always used
the binaries there...), so I don't mind compiling, and I prefer the
better control and understanding homebrew gives me. And my hands never
got as dirty as when I tried to compile under Linux :-)

Cheers,

Rainer


>
> Cheers,
> Simon
>
>
>  
>
>> Cheers,
>> 
>> Rainer
>> 
>>> 
>>> BTW: if you really care about speed, the real gains come from using
>>> a parallel BLAS, the Intel OpenMP runtime, and enabling the built-in
>>> threading support in R.
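
On the CRAN binary there is a documented shortcut for the first of
these: the framework ships both a reference BLAS and a multithreaded
vecLib BLAS, and you can switch by repointing a single symlink (back up
the original first; the path assumes the standard install):

    cd /Library/Frameworks/R.framework/Resources/lib
    ln -sf libRblas.vecLib.dylib libRblas.dylib
    # quick sanity/speed check:
    Rscript -e 'm <- matrix(rnorm(4e6), 2000); print(system.time(crossprod(m)))'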
>>> 
>>> Cheers,
>>> Simon
>>> 
>>> 
>>>> tl;dr:  My Mac Pro has an Ivy Bridge processor.  Is it worthwhile to compile R myself, instead of using the binary?
>>>> 
>>>> Thanks,
>>>> 
>>>> Michael
>>>> 
>>>> 
>>>> --------------------------
>>>> Michael Braun
>>>> Associate Professor of Marketing
>>>> Cox School of Business
>>>> Southern Methodist University
>>>> Dallas, TX 75275
>>>> braunm at smu.edu
>>>> 
>>>> _______________________________________________
>>>> R-SIG-Mac mailing list
>>>> R-SIG-Mac at r-project.org
>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mac
>>>> 
>> 
>> 
>> Footnotes: 
>> [1]  http://brew.sh
>> 
>> -- 
>> Rainer M. Krug
>> email: Rainer<at>krugs<dot>de
>> PGP: 0x0F52F982

-- 
Rainer M. Krug
email: Rainer<at>krugs<dot>de
PGP: 0x0F52F982