[Rd] 32 vs 64 bit difference?

Prof Brian Ripley ripley at stats.ox.ac.uk
Sat Nov 26 12:37:09 CET 2011


On 26/11/2011 09:23, peter dalgaard wrote:
>
> On Nov 26, 2011, at 05:20 , Terry Therneau wrote:
>
>> I've spent the last few hours baffled by a test suite inconsistency.
>>
>> The exact same library code gives slightly different answers on the home
>> and work machines  - found in my R CMD check run.  I've recopied the entire
>> directory to make sure it's really identical code.
>>   The data set and fit in question have a pretty flat "top" to the likelihood.
>> I put print statements into the "f()" function called by optim, and the
>> two parameters and the likelihood track perfectly for 48 iterations, then
>> start to drift ever so slightly:
>> <  theta= -3.254176 -6.201119 ilik= -16.64806
>>> theta= -3.254176 -6.201118 ilik= -16.64806
>>
>> And at the end of the iteration:
>> <  theta= -3.207488 -8.583329 ilik= -16.70139
>>> theta= -3.207488 -8.583333 ilik= -16.70139
>>
>> As you can see, they get to the same max, but with just a slightly
>> different path.
>>
>>   The work machine is running 64 bit Unix (CentOS) and the home one 32 bit
>> Ubuntu.
>> Could this be enough to cause the difference?  Most of my tests are
>> based on all.equal, but I also print out 1 or 2 full solutions; perhaps
>> I'll have to modify that?
>
> We do see quite a lot of that, yes; even when running 32- and 64-bit builds on the same machine, and sometimes to the extent that an algorithm converges on one architecture and diverges on the other (just peek over on R-sig-ME). The comparisons by "make check" on R itself also give off quite a bit of "last decimal chatter" when the architecture is switched. For some reason, OS X builds seem more consistent than Windows and Linux, although I have only anecdotal evidence of that.
>
> However, the basic point is that compilers don't define the sequence of FPU operations down to the last detail: an internal extended-precision register may or may not be used, the order of terms in a sum may be changed, and so on. Since 64-bit code has different performance characteristics from 32-bit code (because you shift more data around for pointers), the FPU instructions may be optimized differently too.
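
To make Peter's general point concrete (a toy illustration, not tied to
any particular build): merely reassociating a sum changes the last bits
of a double, so two compilations that order the operations differently
will not agree exactly.

   ## both expressions print as 0.6 by default, but are not the same double
   (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)   # FALSE
   print((0.1 + 0.2) + 0.3, digits = 17)    # 0.60000000000000009
   print(0.1 + (0.2 + 0.3), digits = 17)    # 0.59999999999999998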

However, the main difference is that all x86_64 chips have SSE2 
registers, and so gcc makes use of them.  Not all i686 chips do, so 
32-bit builds on Linux and Windows only use the FPU registers.

This matters at the ABI level: arguments are passed and values
returned in SSE registers, so we can't decide to support only later
i686 CPUs and make use of SSE2 without re-compiling all the system
libraries (though a Linux distributor could).

And the FPU registers are 80-bit and use extended precision (at least
the way we set things up on Windows, and on every Linux system I have
seen); the SSE* registers are 2x64-bit.
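
For scale (just an illustration, nothing R-specific): a double carries a
53-bit significand and the 80-bit extended format a 64-bit one, so x87
temporaries are rounded much less aggressively than SSE2 ones.

   .Machine$double.eps   ## 2^-52, about 2.2e-16: relative spacing of doubles
   2^-63                 ## about 1.1e-19: the corresponding spacing for
                         ## 80-bit extended-precision temporaries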

I believe that all Intel Macs are 'Core' or later and so do have SSE2, 
although I don't know how much Apple relies on that.

(The reason I know that this is the 'main difference' is that you can 
often turn off the use of SSE2 on x86_64 and reproduce the i686 results. 
But because of the ABI differences, you may get crashes: in R this 
matters most often for complex numbers, which are 128-bit C99 double 
complex and are passed around in an SSE register.)
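
As for the original question about the tests, one possible sketch (the
numbers are the ones from the trace above): loosen the comparison and
round whatever gets printed into the .Rout file being compared.

   ## default all.equal tolerance is about 1.5e-8; the difference here is ~1.6e-7
   all.equal(-6.201119, -6.201118, tolerance = 1e-5)   # TRUE
   -6.201119 == -6.201118                              # FALSE
   ## both builds' estimates agree once rounded to 6 significant digits
   signif(c(-3.254176, -6.201119), 6)                  # -3.25418 -6.20112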

>>
>> Terry Therneau
>>
>> ______________________________________________
>> R-devel at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>


-- 
Brian D. Ripley,                  ripley at stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595


