[R-SIG-Mac] Compiler options for R binary
Braun, Michael
braunm at mail.smu.edu
Thu Nov 20 17:17:03 CET 2014
I run R on a recent Mac Pro (Ivy Bridge architecture) and, before that, on a 2010 model (Nehalem architecture). For the last few years I have been installing R by compiling from source, because I noticed in the etc/Makeconf file that the precompiled binary is compiled with the -mtune=core2 option. I had thought that, since my system uses a processor with a more recent architecture and instruction set, I would be leaving performance on the table by using the binary.
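For reference, the flags are recorded in etc/Makeconf, so anyone can check their own install with something like the line below (the path assumes the standard CRAN framework install; adjust it if yours lives elsewhere):

  ## Print the compiler/flag lines from the installed R's Makeconf
  grep -E '^(CC|CFLAGS|CXXFLAGS|FFLAGS)' \
      /Library/Frameworks/R.framework/Resources/etc/Makeconf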
My self-compiled R has worked well for me, for the most part. But sometimes little things pop up, like difficulty using RStudio, an occasional permissions problem related to the Intel BLAS, and so on. And there is a time investment in installing R this way. So even though I want to exploit as much of the computing power of my desktop as I can, I am now questioning whether self-compiling R is worth the effort.
My questions are these:
1. Am I correct that the R binary for Mac is tuned for the Core 2 architecture?
2. In theory, should tuning the compiler for Sandy Bridge (SSE4.2, AVX instructions, etc.) generate a faster R? (A sketch of the flags I have in mind follows this list.)
3. Has anyone tested the theory in item 2? (A crude timing sketch appears after the tl;dr at the end of this message.)
4. Is the reason for setting -mtune=core2 to support older machines? If so, are enough people still using pre-Nehalem 64-bit Macs to justify this?
5. What would trigger a decision to start tuning the R binary for a more advanced processor?
6. What are some other implications of either self-compiling or using the precompiled binary that I might need to consider?
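On item 2, one clarification from my own reading of the compiler docs: -mtune only adjusts instruction scheduling for a given target, while -march (or -msse4.2 / -mavx) is what actually lets the compiler emit the newer instructions, at the cost of a binary that will not run on older CPUs. When I talk about tuning for the local processor, I mean something like the config.site sketch below, dropped into the top of the R source tree before running ./configure (flags are illustrative, not a recommendation; -march=native requires a compiler that supports it):

  ## config.site: illustrative flags for a source build tuned to the local CPU
  CC=clang
  CFLAGS="-O3 -march=native"
  CXX=clang++
  CXXFLAGS="-O3 -march=native"
  F77=gfortran
  FFLAGS="-O3 -march=native"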
tl;dr: My Mac Pro has an Ivy Bridge processor. Is it worthwhile to compile R myself instead of using the binary?
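One way I could answer that for my own workload: run the same crude timings under each build and compare. A sketch of what I have in mind is below; note that matrix arithmetic mostly exercises whichever BLAS is linked, so the BLAS choice may well swamp any -mtune effect:

  ## Toy benchmark: run under each R build and compare elapsed times
  Rscript -e '
    set.seed(1)
    a <- matrix(rnorm(4e6), 2000, 2000)                    # 2000 x 2000
    print(system.time(crossprod(a)))                       # BLAS-dominated
    print(system.time(for (i in 1:50) qnorm(runif(1e5))))  # R/libm-dominated
  '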
Thanks,
Michael
--------------------------
Michael Braun
Associate Professor of Marketing
Cox School of Business
Southern Methodist University
Dallas, TX 75275
braunm at smu.edu