[R-sig-ME] non-deterministic behaviour of lme4 on Mac OS X R 32bit [Was: Failure to load lme4 on Mac]

Daniel Myall daniel.lists at zeno.co.nz
Mon Jul 19 23:16:48 CEST 2010


Hi Jochen,

Thanks for the extended example.  The example below shows that the 
differences in fits aren't always small and that they correspond to 
different numbers of iterations (in this case 53 versus 41 on 32-bit R; 
the 64-bit R fit is also shown), with slight differences already 
observable at the first iteration of the verbose output.

My main concern was that this issue may have influenced several fits of 
real data that I have done recently with lme4 under 32-bit R on Mac OS X. 
However, in contrast to these slightly pathological examples, I don't 
observe this non-deterministic behaviour with real data.

(Mac OS X 10.6.4, R 2.11.1, using the vecLib BLAS; the problem disappears 
with the R BLAS on 10.6.4 but not with the R BLAS on 10.5.8.)
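
For anyone who wants to confirm which BLAS a given build is actually 
using, here is a quick check from within R. This is only a sketch based 
on the library layout described in the R for Mac OS X FAQ (and shown 
further down this thread), where libRblas.dylib is a symlink pointing at 
either the R BLAS or the vecLib BLAS:

# Sketch: inspect the libRblas.dylib symlink to see which BLAS is active.
# Assumes the standard CRAN binary layout on Mac OS X, with
# libRblas.0.dylib = reference R BLAS and libRblas.vecLib.dylib = vecLib BLAS.
Sys.readlink(file.path(R.home("lib"), "libRblas.dylib"))
# "libRblas.0.dylib"      -> R BLAS
# "libRblas.vecLib.dylib" -> vecLib BLAS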

Daniel


library(lme4)
set.seed(555)
D <- data.frame(y = rnorm(20, 10), ff = gl(4, 5), x1 = rnorm(20, 3),
                x2 = rnorm(20, 7))

# Run several times on Mac OS X with 32-bit R to observe two different
# fits. Fine on 64-bit R.
lmer(y ~ (x1 + x2) | ff, data = D, control = list(msVerbose = TRUE))
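
As a quick check for this kind of non-determinism, the same model can be 
refitted a number of times and the REML log-likelihoods tabulated; more 
than one distinct value means the runs are converging to different 
solutions. This is just a sketch added for illustration (it is not part 
of the original test) and only uses logLik() and fixef():

# Sketch: refit the same model repeatedly and tabulate the log-likelihoods.
# Two or more distinct values indicate runs ending at different optima.
fits <- replicate(10, lmer(y ~ (x1 + x2) | ff, data = D), simplify = FALSE)
table(round(sapply(fits, function(f) as.numeric(logLik(f))), 4))

# Fixed effects can also be compared with a numerical tolerance rather
# than bit-for-bit:
all.equal(fixef(fits[[1]]), fixef(fits[[2]]))
identical(fixef(fits[[1]]), fixef(fits[[2]]))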


 > lmer(y ~ (x1 + x2)|ff, data = D,control=list(msVerbose=TRUE))
##SOLUTION 1 - R 32 bit
Linear mixed model fit by REML
Formula: y ~ (x1 + x2) | ff
    Data: D
    AIC   BIC logLik deviance REMLdev
  66.97 74.94 -25.48     50.1   50.97
Random effects:
  Groups   Name        Variance Std.Dev. Corr
  ff       (Intercept) 6.795131 2.60675
           x1          0.225402 0.47477  -0.812
           x2          0.029834 0.17272  -0.906  0.488
  Residual             0.539207 0.73431
Number of obs: 20, groups: ff, 4

Fixed effects:
             Estimate Std. Error t value
(Intercept)   9.9852     0.2616   38.16

##SOLUTION 2 - R 32 bit
 > lmer(y ~ (x1 + x2)|ff, data = D,control=list(msVerbose=TRUE))
Linear mixed model fit by REML
Formula: y ~ (x1 + x2) | ff
    Data: D
    AIC   BIC logLik deviance REMLdev
  67.07 75.04 -25.54    49.89   51.07
Random effects:
  Groups   Name        Variance  Std.Dev. Corr
  ff       (Intercept) 0.5087343 0.713256
           x1          0.2099050 0.458154 -1.000
           x2          0.0015519 0.039395  1.000 -1.000
  Residual             0.6031291 0.776614
Number of obs: 20, groups: ff, 4

Fixed effects:
             Estimate Std. Error t value
(Intercept)   9.8047     0.2239   43.79

##SOLUTION - R 64 bit
Linear mixed model fit by REML
Formula: y ~ (x1 + x2) | ff
    Data: D
    AIC   BIC logLik deviance REMLdev
  67.07 75.04 -25.54    49.89   51.07
Random effects:
  Groups   Name        Variance  Std.Dev. Corr
  ff       (Intercept) 0.5087061 0.71324
           x1          0.2099093 0.45816  -1.000
           x2          0.0015523 0.03940   1.000 -1.000
  Residual             0.6031284 0.77661
Number of obs: 20, groups: ff, 4

Fixed effects:
             Estimate Std. Error t value
(Intercept)   9.8047     0.2239   43.79




##SOLUTION 1 R 32 bit verbose output
   0:     54.397032: 0.730297 0.237125 0.101177  0.00000  0.00000  0.00000
   1:     53.424891: 0.694582 0.208795  0.00000 -0.0845121 -0.225232 -0.0647230
   2:     52.309724: 0.525523 0.173934  0.00000 -0.0956795 -0.0435683 -0.245102
   3:     51.796132: 0.531816 0.275691 2.50806e-09 -0.0723777 -0.0177521 -0.193844
   4:     51.510783: 0.501953 0.339499 0.0185560 -0.102619 -0.0999522 -0.229227
   5:     51.340266: 0.495931 0.417092  0.00000 -0.108591 -0.0380971 -0.270879
   6:     51.314044: 0.495003 0.441668  0.00000 -0.119973 -0.0710709 -0.245133
   7:     51.279576: 0.495433 0.439788  0.00000 -0.121056 -0.0677952 -0.267080
   8:     51.266170: 0.494992 0.453146  0.00000 -0.137109 -0.0606157 -0.264009
   9:     51.261334: 0.497575 0.462626 0.00215758 -0.143005 -0.0579236 -0.273609
  10:     51.257651: 0.500835 0.470608  0.00000 -0.149210 -0.0580369 -0.264854
  11:     51.247426: 0.512961 0.479519  0.00000 -0.171565 -0.0572242 -0.271917
  12:     51.243462: 0.514494 0.482739 7.97212e-12 -0.172317 -0.0470170 -0.273164
  13:     51.242221: 0.517730 0.489447 1.14978e-09 -0.179539 -0.0492034 -0.270588
  14:     51.238508: 0.522108 0.489827 0.00134272 -0.188105 -0.0461837 -0.274515
  15:     51.236056: 0.526621 0.491497  0.00000 -0.196288 -0.0424196 -0.271936
  16:     51.197261: 0.682790 0.501074  0.00000 -0.489111 0.0294043 -0.264454
  17:     51.189778: 0.787526 0.487157  0.00000 -0.480229 0.0535125 -0.268740
  18:     51.185330: 0.690942 0.426891  0.00000 -0.320089 -0.0126689 -0.262818
  19:     51.129963: 0.860167 0.405773  0.00000 -0.525289 0.0349497 -0.249824
  20:     51.108450: 0.858513 0.399743  0.00000 -0.441274 0.0241085 -0.249921
  21:     51.085509: 0.960111 0.361481  0.00000 -0.465470 0.0255299 -0.241661
  22:     51.076072:  1.02659 0.333386  0.00000 -0.462479 0.0181507 -0.236373
  23:     51.075348:  1.03803 0.329761 0.00171915 -0.459560 0.0180648 -0.235031
  24:     51.074205:  1.04897 0.324847  0.00000 -0.458212 0.0173642 -0.234406
  25:     51.073789:  1.06032 0.320322  0.00000 -0.456404 0.0155610 -0.233229
  26:     51.073094:  1.07187 0.315634 0.000162410 -0.455188 0.0154937 -0.232546
  27:     51.072409:  1.10979 0.300717 0.000274572 -0.451556 0.0131734 -0.229526
  28:     51.072379:  1.11039 0.300508 2.31012e-05 -0.449857 0.0128583 -0.229280
  29:     51.072368:  1.11129 0.300351  0.00000 -0.451022 0.0133420 -0.228826
  30:     51.072355:  1.11173 0.300610 1.04130e-05 -0.449943 0.0130218 -0.228551
  31:     51.072344:  1.11328 0.301363 5.89851e-06 -0.448036 0.0125698 -0.228082
  32:     51.072337:  1.11511 0.302273 5.99925e-06 -0.446401 0.0122070 -0.227862
  33:     51.072309:  1.12266 0.306077 5.60258e-06 -0.440161 0.0108298 -0.227340
  34:     51.072241:  1.15331 0.321328 4.37217e-06 -0.415740 0.00543811 -0.225849
  35:     51.072028:  1.18520 0.332126  0.00000 -0.395151 0.000770566 -0.225396
  36:     51.071488:  1.31516 0.374814  0.00000 -0.316139 -0.0169819 -0.225801
  37:     51.068467:  1.45961 0.395095  0.00000 -0.254872 -0.0315079 -0.231321
  38:     51.066090:  1.48207 0.389322  0.00000 -0.256881 -0.0303985 -0.235516
  39:     51.062974:  1.57032 0.408708 8.11318e-05 -0.229093 -0.0348309 -0.240098
  40:     51.057825:  1.66297 0.416006 0.000186021 -0.211495 -0.0397993 -0.242146
  41:     51.038652:  2.03831 0.430554 0.000524841 -0.187182 -0.0497647 -0.245336
  42:     51.030401:  2.16630 0.435868 0.000426435 -0.172095 -0.0514469 -0.247531
  43:     51.024146:  2.22436 0.432647 0.000255846 -0.182843 -0.0474585 -0.247490
  44:     51.005388:  2.84789 0.426666  0.00000 -0.150432 -0.0571510 -0.253099
  45:     50.995550:  3.24173 0.389308  0.00000 -0.151364 -0.0610219 -0.249748
  46:     50.977018:  3.15752 0.389444  0.00000 -0.157012 -0.0565075 -0.257137
  47:     50.973292:  3.28034 0.387458  0.00000 -0.150868 -0.0583208 -0.260005
  48:     50.971023:  3.40321 0.381883  0.00000 -0.148687 -0.0593800 -0.261335
  49:     50.969882:  3.54757 0.376528  0.00000 -0.147883 -0.0601091 -0.263958
  50:     50.969844:  3.55395 0.376877  0.00000 -0.147903 -0.0600074 -0.264721
  51:     50.969841:  3.54867 0.377061  0.00000 -0.148001 -0.0599892 -0.264719
  52:     50.969841:  3.54994 0.376990  0.00000 -0.147962 -0.0600032 -0.264723
  53:     50.969841:  3.54994 0.376985 9.92829e-09 -0.147966 -0.0600016 -0.264720

##SOLUTION 2 R 32 bit verbose output
   0:     54.397032: 0.730297 0.237125 0.101177  0.00000  0.00000  0.00000
   1:     53.424876: 0.694582 0.208795  0.00000 -0.0845112 -0.225231 -0.0647248
   2:     52.309714: 0.525524 0.173936  0.00000 -0.0956772 -0.0435674 -0.245105
   3:     51.796133: 0.531818 0.275692  0.00000 -0.0723750 -0.0177522 -0.193847
   4:     51.510778: 0.501954 0.339501 0.0185563 -0.102616 -0.0999511 -0.229228
   5:     51.340266: 0.495933 0.417093  0.00000 -0.108587 -0.0380969 -0.270878
   6:     51.314066: 0.495004 0.441675  0.00000 -0.119974 -0.0710830 -0.245129
   7:     51.279573: 0.495434 0.439793  0.00000 -0.121055 -0.0678019 -0.267079
   8:     51.266170: 0.494987 0.453150  0.00000 -0.137106 -0.0606079 -0.264006
   9:     51.261310: 0.497595 0.462675 0.00218985 -0.143053 -0.0579286 -0.273610
  10:     51.257629: 0.500873 0.470682  0.00000 -0.149297 -0.0580353 -0.264852
  11:     51.247359: 0.513031 0.479574  0.00000 -0.171743 -0.0571635 -0.271925
  12:     51.243406: 0.514565 0.482778 1.08518e-09 -0.172496 -0.0469748 -0.273173
  13:     51.243392: 0.517517 0.489117  0.00000 -0.179386 -0.0500068 -0.269610
  14:     51.240453: 0.518420 0.488706 0.000131212 -0.180792 -0.0482817 -0.274468
  15:     51.237928: 0.523090 0.491233 0.000456325 -0.188991 -0.0439936 -0.272385
  16:     51.190208: 0.621321 0.496717  0.00000 -0.391327 0.00657277 -0.266122
  17:     51.167675: 0.675696 0.492156  0.00000 -0.454025 0.0277453 -0.267639
  18:     51.135392: 0.842086 0.420798  0.00000 -0.575192 0.0601573 -0.259493
  19:     51.116745: 0.836737 0.395554 1.10233e-09 -0.563068 0.0497940 -0.267516
  20:     51.093702: 0.891244 0.324336  0.00000 -0.559277 0.0469394 -0.258530
  21:     51.088675: 0.891291 0.324441 3.90944e-12 -0.560389 0.0429698 -0.256607
  22:     51.087235: 0.893191 0.320990 0.000470701 -0.559711 0.0442643 -0.254926
  23:     51.078996: 0.936686 0.263380 0.00268585 -0.567754 0.0411852 -0.250071
  24:     51.072813: 0.975622 0.204411  0.00000 -0.565440 0.0387248 -0.239708
  25:     51.072273: 0.985859 0.187342  0.00000 -0.570844 0.0390460 -0.237792
  26:     51.071812: 0.989314 0.167512  0.00000 -0.573474 0.0389454 -0.233659
  27:     51.071496: 0.983768 0.151969  0.00000 -0.581341 0.0409441 -0.229549
  28:     51.071256: 0.972827 0.137479  0.00000 -0.590418 0.0429409 -0.226161
  29:     51.070957: 0.947293 0.108716  0.00000 -0.612238 0.0482764 -0.219427
  30:     51.070952: 0.927248 0.0855910 0.000952352 -0.628362 0.0522636 -0.214125
  31:     51.070821: 0.916369 0.0748359 0.000262673 -0.636437 0.0544307 -0.211681
  32:     51.070780: 0.909515 0.0610305 6.27899e-05 -0.644316 0.0562519 -0.208965
  33:     51.070734: 0.924013 0.0656458  0.00000 -0.635847 0.0541799 -0.210599
  34:     51.070620: 0.920001 0.0492322  0.00000 -0.639701 0.0548962 -0.207517
  35:     51.070588: 0.916679  0.00000  0.00000 -0.644959 0.0554623 -0.198822
  36:     51.070430: 0.923618  0.00000  0.00000 -0.637526 0.0535969 -0.199476
  37:     51.070401: 0.916820 0.00470934 6.27813e-05 -0.643440 0.0555798 -0.199608
  38:     51.070397: 0.918316 0.00160787 9.36880e-06 -0.642337 0.0552253 -0.199194
  39:     51.070397: 0.918458 0.000122311 2.95739e-06 -0.642295 0.0552150 -0.198938
  40:     51.070397: 0.918418 3.27547e-05 9.93605e-08 -0.642341 0.0552323 -0.198916
  41:     51.070397: 0.918418 3.27547e-05 9.93605e-08 -0.642341 0.0552323 -0.198916

##SOLUTION R 64 bit verbose output
   0:     54.397032: 0.730297 0.237125 0.101177  0.00000  0.00000  0.00000
   1:     53.424870: 0.694584 0.208795  0.00000 -0.0845130 -0.225230 -0.0647244
   2:     52.309727: 0.525529 0.173935  0.00000 -0.0956826 -0.0435657 -0.245105
   3:     51.796134: 0.531823 0.275692  0.00000 -0.0723803 -0.0177510 -0.193847
   4:     51.450927: 0.500902 0.342360  0.00000 -0.102403 -0.0972944 -0.233869
   5:     51.371013: 0.481020 0.438866 0.0243591 -0.123910 -0.0480737 -0.266176
   6:     51.277917: 0.481373 0.442344  0.00000 -0.129746 -0.0638011 -0.269659
   7:     51.271522: 0.481853 0.448398  0.00000 -0.130622 -0.0622441 -0.263198
   8:     51.267093: 0.482735 0.454038  0.00000 -0.132903 -0.0607989 -0.269674
   9:     51.260552: 0.485842 0.469711  0.00000 -0.136772 -0.0539558 -0.266486
  10:     51.253821: 0.491137 0.477787  0.00000 -0.148551 -0.0495027 -0.275170
  11:     51.247634: 0.496346 0.480217  0.00000 -0.165182 -0.0515957 -0.271526
  12:     51.206567: 0.595026 0.516835  0.00000 -0.331950 -0.00372166 -0.270981
  13:     51.193179: 0.619994 0.514666  0.00000 -0.366758 0.00374120 -0.276707
  14:     51.181583: 0.645429 0.494263  0.00000 -0.389967 0.0206792 -0.270429
  15:     51.165924: 0.672599 0.486667 0.000763299 -0.423409 0.0201332 -0.266818
  16:     51.149622: 0.766528 0.462641 0.00204744 -0.527999 0.0511090 -0.259707
  17:     51.095452: 0.840507 0.344529 0.00241554 -0.567718 0.0502609 -0.241032
  18:     51.090315: 0.858861 0.304546  0.00000 -0.553297 0.0476001 -0.238051
  19:     51.077540: 0.899093 0.277922  0.00000 -0.555000 0.0405539 -0.234436
  20:     51.074770: 0.940670 0.253043 0.00144442 -0.560290 0.0400183 -0.230554
  21:     51.072690: 0.979609 0.226943  0.00000 -0.550970 0.0351800 -0.227689
  22:     51.072236:  1.00996 0.213705  0.00000 -0.539170 0.0326290 -0.225695
  23:     51.071959:  1.00403 0.213148  0.00000 -0.546726 0.0343060 -0.225637
  24:     51.071943:  1.00239 0.212546  0.00000 -0.545324 0.0336297 -0.225716
  25:     51.071926:  1.00223 0.211238  0.00000 -0.546573 0.0340303 -0.225548
  26:     51.071884: 0.999479 0.205777  0.00000 -0.551286 0.0352392 -0.225004
  27:     51.071748: 0.986381 0.185445  0.00000 -0.568493 0.0393525 -0.223204
  28:     51.071738: 0.940895 0.119931  0.00000 -0.622375 0.0519225 -0.217755
  29:     51.071431: 0.945604 0.146434  0.00000 -0.604402 0.0480598 -0.220346
  30:     51.071239: 0.934417 0.127923  0.00000 -0.616544 0.0506460 -0.218960
  31:     51.071110: 0.925195 0.111673  0.00000 -0.626599 0.0527393 -0.217760
  32:     51.070996: 0.918666 0.0929372  0.00000 -0.634293 0.0539966 -0.216502
  33:     51.070890: 0.924688 0.0907819  0.00000 -0.628645 0.0521726 -0.216661
  34:     51.070801: 0.918347 0.0775752  0.00000 -0.633475 0.0531133 -0.216173
  35:     51.070673: 0.917184 0.0611515  0.00000 -0.636366 0.0536126 -0.215674
  36:     51.070514: 0.914608 0.0308008  0.00000 -0.643298 0.0552587 -0.214750
  37:     51.070469: 0.920407 0.0253583  0.00000 -0.639986 0.0549125 -0.215139
  38:     51.070416: 0.919351 0.0170298  0.00000 -0.641940 0.0552652 -0.214860
  39:     51.070408: 0.918746 0.00749333 1.81821e-05 -0.643165 0.0554947 -0.214674
  40:     51.070402: 0.918715 0.00987454  0.00000 -0.642126 0.0551905 -0.214811
  41:     51.070399: 0.918283 0.00617101  0.00000 -0.642457 0.0552644 -0.214784
  42:     51.070397: 0.918260 0.000523215  0.00000 -0.642508 0.0552671 -0.214764
  43:     51.070397: 0.918309 0.000283276  0.00000 -0.642415 0.0552520 -0.214779
  44:     51.070397: 0.918393 1.04493e-05  0.00000 -0.642366 0.0552410 -0.214780
  45:     51.070397: 0.918393 1.04493e-05  0.00000 -0.642366 0.0552410 -0.214780
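
To locate where two runs first diverge without reading the traces by 
eye, the verbose output can be captured and compared programmatically. 
This is a sketch only, and it assumes that control=list(msVerbose=TRUE) 
writes one line per iteration to R's standard output so that 
capture.output() can collect it:

# Sketch: capture the optimizer trace of two fresh fits (using library(lme4)
# and the data frame D defined at the top of this message) and report the
# first iteration line at which the two traces differ.
trace1 <- capture.output(f1 <- lmer(y ~ (x1 + x2) | ff, data = D,
                                    control = list(msVerbose = TRUE)))
trace2 <- capture.output(f2 <- lmer(y ~ (x1 + x2) | ff, data = D,
                                    control = list(msVerbose = TRUE)))
n <- min(length(trace1), length(trace2))
first.diff <- which(trace1[seq_len(n)] != trace2[seq_len(n)])[1]
c(lines.run1 = length(trace1), lines.run2 = length(trace2),
  first.differing.line = first.diff)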





On 20/07/10 3:44 AM, jochen laubrock wrote:
> Hi,
>
> talking about deterministic behavior, what puzzles me even more is why a call to identical sometimes seems to eliminate the (small) numerical differences between coefficients, and why this depends on the order in which identical and coef/ranef are called. See the example (and lengthy output, which will probably differ each time it is run) below.
>
> In TEST1 and TEST2, the difference is always zero although identical often returns FALSE. TEST3, which should do exactly the same as TEST2 up to the second call of identical, often produces a difference larger than zero, as does TEST4, in which identical is called after the computation of the difference in coefficients. This does not seem to depend on the order in which TEST1 to TEST4 are performed (not shown).
>
> Well, at least identical never returns TRUE when the difference is nonzero…
>
> This is running in 32-bit mode on Mac OS X, R --arch i386; no worries if run in 64-bit mode.
>
>
> cheers, jochen
>
>
> # script
> library(lme4)
> y<- (1:20)*pi
> x<- (1:20)^2
> group<- gl(2,10)
>
>
> ntestsPerRun<- 10
> cat("\n--------TEST1--------\n")
> for (i in 1:ntestsPerRun) {
> 	cat(paste("--", "run", i, "--\n"))
> 	M1<- lmer (y ~     x + (x | group))
> 	M2<- lmer (y ~     x + (x | group))
> 	print(identical(ranef(M1),ranef(M2)))
> 	print(identical(coef(M1),coef(M2)))
> 	print(ranef(M1)$group - ranef(M1)$group)
> 	print(coef(M1)$group - coef(M1)$group)
> }
>
> cat("\n--------TEST2--------\n")
> for (i in 1:ntestsPerRun) {
> 	cat(paste("--", "run", i, "--\n"))
> 	M1<- lmer (y ~     x + (x | group))
> 	M2<- lmer (y ~     x + (x | group))
> 	print(identical(ranef(M1),ranef(M2)))
> 	print(ranef(M1)$group - ranef(M1)$group)
> 	print(identical(coef(M1),coef(M2)))
> 	print(coef(M1)$group - coef(M1)$group)
> }
>
> cat("\n--------TEST3--------\n")
> for (i in 1:ntestsPerRun) {
> 	cat(paste("--", "run", i, "--\n"))
> 	M1<- lmer (y ~     x + (    x | group))
> 	M2<- lmer (y ~     x + (    x | group))
> 	print(identical(ranef(M1),ranef(M2)))
> 	print(ranef(M1)$group-ranef(M2)$group)
> }
>
> cat("\n--------TEST4--------\n")
> for (i in 1:ntestsPerRun) {
> 	cat(paste("--", "run", i, "--\n"))
> 	M1<- lmer (y ~     x + (x | group))
> 	M2<- lmer (y ~     x + (x | group))
> 	print(ranef(M1)$group - ranef(M2)$group)
> 	print(coef(M1)$group - coef(M2)$group)
> 	print(identical(ranef(M1),ranef(M2)))
> 	print(identical(coef(M1),ranef(M2)))
> }
> # output
>
>
> --------TEST1--------
> -- run 1 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 2 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 3 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 4 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 5 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 6 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 7 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 8 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 9 --
> [1] TRUE
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 10 --
> [1] FALSE
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
>
> --------TEST2--------
> -- run 1 --
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 2 --
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 3 --
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 4 --
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 5 --
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 6 --
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 7 --
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 8 --
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 9 --
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 10 --
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] FALSE
>    (Intercept) x
> 1           0 0
> 2           0 0
>
> --------TEST3--------
> -- run 1 --
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 2 --
> [1] FALSE
>      (Intercept)             x
> 1  6.613450e-06 -6.898626e-08
> 2 -6.613462e-06  6.898637e-08
> -- run 3 --
> [1] FALSE
>      (Intercept)             x
> 1  6.613450e-06 -6.898626e-08
> 2 -6.613462e-06  6.898637e-08
> -- run 4 --
> [1] FALSE
>      (Intercept)             x
> 1  6.613450e-06 -6.898626e-08
> 2 -6.613462e-06  6.898637e-08
> -- run 5 --
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 6 --
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 7 --
> [1] TRUE
>    (Intercept) x
> 1           0 0
> 2           0 0
> -- run 8 --
> [1] FALSE
>      (Intercept)             x
> 1  6.613450e-06 -6.898626e-08
> 2 -6.613462e-06  6.898637e-08
> -- run 9 --
> [1] FALSE
>      (Intercept)             x
> 1  6.613450e-06 -6.898626e-08
> 2 -6.613462e-06  6.898637e-08
> -- run 10 --
> [1] FALSE
>      (Intercept)             x
> 1  6.613450e-06 -6.898626e-08
> 2 -6.613462e-06  6.898637e-08
>
> --------TEST4--------
> -- run 1 --
>      (Intercept)             x
> 1 -6.613450e-06  6.898626e-08
> 2  6.613462e-06 -6.898637e-08
>     (Intercept)             x
> 1 3.719195e-06 -4.640498e-08
> 2 1.694611e-05 -1.843776e-07
> [1] FALSE
> [1] FALSE
> -- run 2 --
>      (Intercept)             x
> 1 -6.613450e-06  6.898626e-08
> 2  6.613462e-06 -6.898637e-08
>     (Intercept)             x
> 1 3.719195e-06 -4.640498e-08
> 2 1.694611e-05 -1.843776e-07
> [1] FALSE
> [1] FALSE
> -- run 3 --
>      (Intercept)             x
> 1 -6.613450e-06  6.898626e-08
> 2  6.613462e-06 -6.898637e-08
>     (Intercept)             x
> 1 3.719195e-06 -4.640498e-08
> 2 1.694611e-05 -1.843776e-07
> [1] FALSE
> [1] FALSE
> -- run 4 --
>      (Intercept)             x
> 1 -6.613450e-06  6.898626e-08
> 2  6.613462e-06 -6.898637e-08
>     (Intercept)             x
> 1 3.719195e-06 -4.640498e-08
> 2 1.694611e-05 -1.843776e-07
> [1] FALSE
> [1] FALSE
> -- run 5 --
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] TRUE
> [1] FALSE
> -- run 6 --
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] TRUE
> [1] FALSE
> -- run 7 --
>      (Intercept)             x
> 1 -6.613450e-06  6.898626e-08
> 2  6.613462e-06 -6.898637e-08
>     (Intercept)             x
> 1 3.719195e-06 -4.640498e-08
> 2 1.694611e-05 -1.843776e-07
> [1] FALSE
> [1] FALSE
> -- run 8 --
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] TRUE
> [1] FALSE
> -- run 9 --
>      (Intercept)             x
> 1 -6.613450e-06  6.898626e-08
> 2  6.613462e-06 -6.898637e-08
>     (Intercept)             x
> 1 3.719195e-06 -4.640498e-08
> 2 1.694611e-05 -1.843776e-07
> [1] FALSE
> [1] FALSE
> -- run 10 --
>    (Intercept) x
> 1           0 0
> 2           0 0
>    (Intercept) x
> 1           0 0
> 2           0 0
> [1] TRUE
> [1] FALSE
>
>
>    
>> sessionInfo()
>>      
> R version 2.11.1 Patched (2010-06-14 r52281)
> i386-apple-darwin9.8.0
>
> locale:
> [1] de_DE.UTF-8/de_DE.UTF-8/C/C/de_DE.UTF-8/de_DE.UTF-8
>
> attached base packages:
> [1] stats     graphics  grDevices utils     datasets  methods   base
>
> other attached packages:
> [1] lme4_0.999375-33   Matrix_0.999375-40 lattice_0.18-8
>
> loaded via a namespace (and not attached):
> [1] grid_2.11.1  nlme_3.1-96  tools_2.11.1
>
>
>
>
> On Jul 18, 2010, at 13:20 , Reinhold Kliegl wrote:
>
>    
>> Actually, what makes a difference on my machine for this test example
>> is switching to
>> `R --arch x86_64'
>>
>> After this switch, linking either the R BLAS or the vecLib BLAS checked out
>> fine--at least for 100 tests of the example.
>> ( I think ...).
>>
>> Reinhold Kliegl
>>
>> On Sun, Jul 18, 2010 at 12:28 PM, Reinhold Kliegl
>> <reinhold.kliegl at gmail.com>  wrote:
>>      
>>> First of all, I really appreciate these efforts.
>>>
>>> Changing to R BLAS did not do the trick for me on Mac OS X (10.5.8).
>>> Here is the protocol.
>>>
>>> ###
>>> Last login: Sun Jul 18 12:04:10 on ttys000
>>> Reinhold:~ kliegl$ bash
>>>
>>> ~>  cd /Library/Frameworks/R.framework/Resources/lib
>>> /Library/Frameworks/R.framework/Resources/lib>  ls
>>> i386                    libRblas.vecLib.dylib   libreadline.5.2.dylib
>>> libR.dylib              libRlapack.dylib        libreadline.dylib
>>> libRblas.0.dylib        libgcc_s.1.dylib        ppc
>>> libRblas.dylib          libgfortran.2.dylib     x86_64
>>>
>>> /Library/Frameworks/R.framework/Resources/lib>  ln -sf
>>> libRblas.0.dylib libRblas.dylib
>>> /Library/Frameworks/R.framework/Resources/lib>  R
>>>
>>> R version 2.11.1 Patched (2010-07-16 r52550)
>>> Copyright (C) 2010 The R Foundation for Statistical Computing
>>> ISBN 3-900051-07-0
>>>
>>> <  clipped>
>>>
>>>        
>>>> library(lme4)
>>>>          
>>> <clipped>
>>>
>>>        
>>>> y<- (1:6)*pi # 3.14
>>>> x<- (1:6)^2
>>>> group<- gl(2,3)
>>>> for (i in 1:10) {
>>>>          
>>> + M1<- lmer (y ~     x + (x | group))
>>> + M2<- lmer (y ~     x + (x | group))
>>> + print(identical(ranef(M1),ranef(M2)))
>>> + print(identical(coef(M1),coef(M2)))
>>> +  }
>>> [1] TRUE
>>> [1] TRUE
>>> [1] TRUE
>>> [1] TRUE
>>> [1] FALSE
>>> [1] FALSE
>>> [1] TRUE
>>> [1] TRUE
>>> [1] FALSE
>>> [1] FALSE
>>> [1] FALSE
>>> [1] FALSE
>>> [1] TRUE
>>> [1] TRUE
>>> [1] FALSE
>>> [1] FALSE
>>> [1] TRUE
>>> [1] TRUE
>>> [1] TRUE
>>> [1] TRUE
>>>
>>>        
>>>> sessionInfo()
>>>>          
>>> R version 2.11.1 Patched (2010-07-16 r52550)
>>> i386-apple-darwin9.8.0
>>>
>>> locale:
>>> [1] de_DE/de_DE/C/C/de_DE/de_DE
>>>
>>> attached base packages:
>>> [1] stats     graphics  grDevices utils     datasets  methods   base
>>>
>>> other attached packages:
>>> [1] lme4_0.999375-34   Matrix_0.999375-41 lattice_0.18-8
>>>
>>> loaded via a namespace (and not attached):
>>> [1] grid_2.11.1   nlme_3.1-96   stats4_2.11.1
>>>        
>>>>          
>>> #####
>>>
>>> Did I do something wrong in getting the correct library? Is there a
>>> command to check whether R BLAS is used?
>>>
>>> Using require(lme4) instead of library(lme4) did not make a difference either.
>>>
>>> Many thanks again,
>>> Reinhold Kliegl
>>>
>>>
>>> On Sun, Jul 18, 2010 at 11:35 AM, Daniel Myall<daniel.lists at zeno.co.nz>  wrote:
>>>        
>>>> Hi Simon,
>>>>
>>>> The Apple vecLib BLAS appears to be the cause of the strangeness (at
>>>> least on my machine).
>>>>
>>>> Following:
>>>> http://cran.r-project.org/bin/macosx/RMacOSX-FAQ.html#Which-BLAS-is-used-and-how-can-it-be-changed_003f
>>>>
>>>>
>>>> I changed to the R BLAS and everything now works as it should (i.e., for
>>>> certain source data, lme4 now gives the same solution on multiple runs
>>>> instead of randomly giving one of two different "solutions" on each
>>>> run). The FAQ does note in relation to vecLib BLAS " Although fast, it
>>>> is not under our control and may possibly deliver inaccurate results."
>>>> which appears to be true, at least with lme4.
>>>>
>>>> What BLAS does the CRAN build machine use? If it does use the vecLib
>>>> BLAS is there a case for changing to R BLAS (maybe only when
>>>> building/checking lme4)?
>>>>
>>>> Cheers,
>>>> Daniel
>>>>
>>>> On 18/07/10 9:50 AM, Daniel Myall wrote:
>>>>          
>>>>> Hi Simon,
>>>>>
>>>>> Unfortunately I don't think this is the full story (there are actual small
>>>>> differences in the fits).
>>>>>
>>>>> To match the lme4 tests the example should actually be:
>>>>>
>>>>> library(lme4)
>>>>> y<- (1:20)*pi
>>>>> x<- (1:20)^2
>>>>> group<- gl(2,10)
>>>>> for (i in 1:10) {
>>>>> M1<- lmer (y ~     x + (    x | group))
>>>>> M2<- lmer (y ~     x + (    x | group))
>>>>> print(identical(ranef(M1),ranef(M2)))
>>>>> print(ranef(M1)$group-ranef(M2)$group)
>>>>> }
>>>>>
>>>>> Which gives me (the number of TRUE/FALSE and the order change on each
>>>>> run, even if restarting R):
>>>>>
>>>>> [1] FALSE
>>>>>      (Intercept)             x
>>>>> 1  6.613450e-06 -6.898626e-08
>>>>> 2 -6.613462e-06  6.898637e-08
>>>>> [1] TRUE
>>>>>    (Intercept) x
>>>>> 1           0 0
>>>>> 2           0 0
>>>>> [1] FALSE
>>>>>      (Intercept)             x
>>>>> 1 -6.613450e-06  6.898626e-08
>>>>> 2  6.613462e-06 -6.898637e-08
>>>>> [1] FALSE
>>>>>      (Intercept)             x
>>>>> 1  6.613450e-06 -6.898626e-08
>>>>> 2 -6.613462e-06  6.898637e-08
>>>>> [1] TRUE
>>>>>    (Intercept) x
>>>>> 1           0 0
>>>>> 2           0 0
>>>>> [1] FALSE
>>>>>      (Intercept)             x
>>>>> 1 -6.613450e-06  6.898626e-08
>>>>> 2  6.613462e-06 -6.898637e-08
>>>>> [1] FALSE
>>>>>      (Intercept)             x
>>>>> 1 -6.613450e-06  6.898626e-08
>>>>> 2  6.613462e-06 -6.898637e-08
>>>>> [1] FALSE
>>>>>      (Intercept)             x
>>>>> 1 -6.613450e-06  6.898626e-08
>>>>> 2  6.613462e-06 -6.898637e-08
>>>>> [1] TRUE
>>>>>    (Intercept) x
>>>>> 1           0 0
>>>>> 2           0 0
>>>>> [1] TRUE
>>>>>    (Intercept) x
>>>>> 1           0 0
>>>>> 2           0 0
>>>>>
>>>>>
>>>>> Although only small differences, I assume lme4 should be deterministic?
>>>>>
>>>>> Cheers,
>>>>> Daniel
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 18/07/10 8:23 AM, Simon Urbanek wrote:
>>>>>            
>>>>>> Ok, I think I found the issue. I'm not sure why this varies by
>>>>>> platform but the mismatch is due to the @env slot. Two environments
>>>>>> are only identical if it is *the* same environment (i.e. the same
>>>>>> pointer). However, M1 and M2 have different environments. The content
>>>>>> of those environments is identical, but that is irrelevant as it's
>>>>>> not the same pointer. Hence identical(M1, M2) fails (and serialize
>>>>>> comparison succeeds as it cares only about the content).
>>>>>>
>>>>>> So the short story is don't use identical() to compare the models
>>>>>> (unless you remove @env first). The long story raises the question
>>>>>> whether identical() should really return FALSE for environments like
>>>>>>              
>>>>>>> identical(new.env(),new.env())
>>>>>>>                
>>>>>> [1] FALSE
>>>>>> I can see arguments both ways but for the purposes of comparing
>>>>>> values there should be an option that the above is TRUE.
>>>>>>
>>>>>> To be honest I don't see why this has not shown up on other platforms
>>>>>> as that is a global issue... (I hope this is the full story - I
>>>>>> didn't try all the combinations to see if setting @env to the same
>>>>>> environment will appease identical() for all the models)
>>>>>>
>>>>>> Cheers,
>>>>>> Simon
>>>>>>
>>>>>>
>>>>>> On Jul 17, 2010, at 3:49 PM, Simon Urbanek wrote:
>>>>>>
>>>>>>              
>>>>>>> Daniel,
>>>>>>>
>>>>>>> thanks for the test case. I did run it in valgrind but nothing
>>>>>>> showed up, however ...
>>>>>>>
>>>>>>> I'm starting to have a suspicion that this has something to do with
>>>>>>> identical() - look at this:
>>>>>>>
>>>>>>>                
>>>>>>>> identical(M1,M2)
>>>>>>>>                  
>>>>>>> [1] FALSE
>>>>>>>                
>>>>>>>> all(serialize(M1,NULL)==serialize(M2,NULL))
>>>>>>>>                  
>>>>>>> [1] TRUE
>>>>>>>                
>>>>>>>> identical(unserialize(serialize(M1,NULL)),unserialize(serialize(M2,NULL)))
>>>>>>>>
>>>>>>>>                  
>>>>>>> [1] FALSE
>>>>>>>                
>>>>>>>> identical(unserialize(serialize(M1,NULL)),unserialize(serialize(M1,NULL)))
>>>>>>>>
>>>>>>>>                  
>>>>>>> [1] FALSE
>>>>>>>
>>>>>>> So I think this may be a bug in identical() mainly because of the
>>>>>>> last one. I'll need to take identical() apart to see where it fails
>>>>>>> ... I'm CCing this to R-devel as the current issue seems more like
>>>>>>> an R issue so more eyes can have a look ...
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Simon
>>>>>>>
>>>>>>>
>>>>>>> [FWIW this is tested in today's R-devel (with valgrind level 2) on
>>>>>>> x86_64 OS X 10.6.4 with lme4 from CRAN and Matrix form R-devel
>>>>>>> Recommended]
>>>>>>>
>>>>>>>
>>>>>>> On Jul 17, 2010, at 4:50 AM, Daniel Myall wrote:
>>>>>>>
>>>>>>>                
>>>>>>>> I've done some further testing (R 2.11.1) and the issue is not
>>>>>>>> limited to Leopard.
>>>>>>>>
>>>>>>>> Using the test:
>>>>>>>>
>>>>>>>> library(lme4)
>>>>>>>> y<- (1:20)*pi
>>>>>>>> x<- (1:20)^2
>>>>>>>> group<- gl(2,10)
>>>>>>>> for (i in 1:10) {
>>>>>>>> M1<- lmer (y ~     x + (    x | group))
>>>>>>>> M2<- lmer (y ~     x + (    x | group))
>>>>>>>> print(identical(M1,M2))
>>>>>>>> }
>>>>>>>>
>>>>>>>> For CRAN lme4 and Matrix:
>>>>>>>>
>>>>>>>> 32 bit on Leopard: R CMD check fails; different results (on most runs)
>>>>>>>> 32 bit on Snow Leopard: R CMD check passes; different results (on
>>>>>>>> some runs).
>>>>>>>> 64 bit on Snow Leopard: R CMD check passes; identical results
>>>>>>>>
>>>>>>>> For SVN version of Matrix with CRAN lme4:
>>>>>>>>
>>>>>>>> 32 bit on Snow Leopard: different results (on all runs).
>>>>>>>> 64 bit on Snow Leopard: different results (on all runs)
>>>>>>>>
>>>>>>>> For SVN version of Matrix with SVN lme4a:
>>>>>>>>
>>>>>>>> 32 bit on Snow Leopard: different results (on all runs).
>>>>>>>> 64 bit on Snow Leopard: identical results
>>>>>>>>
>>>>>>>> I couldn't reproduce on Linux 32/64bit. Is it time to jump into
>>>>>>>> valgrind to try and find the cause?
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Daniel
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 17/07/10 5:51 PM, John Maindonald wrote:
>>>>>>>>                  
>>>>>>>>> In principle, maybe a Snow Leopard version might be posted
>>>>>>>>> as an alternative, if someone can provide one.  But I take it
>>>>>>>>> that the issue is now a bit wider than tests that fail on Leopard
>>>>>>>>> vs passing on Snow Leopard?
>>>>>>>>>
>>>>>>>>>                    
>>>>>>>>                  
>>>>>>              
>>>>> _______________________________________________
>>>>> R-sig-mixed-models at r-project.org mailing list
>>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>>>>            
>>>>
>>>>         [[alternative HTML version deleted]]
>>>>
>>>> _______________________________________________
>>>> R-sig-mixed-models at r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>>>
>>>>          
>>>        
>> _______________________________________________
>> R-sig-mixed-models at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>      
>



