[R-sig-ME] Implausible estimate of random effects variance in lmer (lme4 devel version), sensible estimate in CRAN version

Marko Bachl marko.bachl at uni-hohenheim.de
Thu Sep 19 21:16:54 CEST 2013


Dear Kevin,
Thank you for your fast reply. Setting the optimizer to bobyqa (i.e.,
control = lmerControl(optimizer = "bobyqa")) does solve the problem. I
can't get the "verbose = T" setting to work with bobyqa, but that
seems less relevant now.
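For reference, a minimal sketch of the working call. The optCtrl part is an assumption: bobyqa does not appear to honor lmer's verbose argument, but minqa's own iprint control (which optCtrl passes through to the optimizer) may provide comparable progress output.

```r
library(lme4)  # assumes lme4 >= 1.0, with minqa installed for bobyqa

## Refit the model from below with the bobyqa optimizer instead of the
## new default Nelder_Mead. 'd1' is the data frame from the linked .RData.
m0 <- lmer(rtr2 ~ turnsec + (turnsec | kombiid) + (turnsec | turnid) +
             (turnsec | idnr),
           data = d1,
           control = lmerControl(optimizer = "bobyqa",
                                 ## assumption: iprint is minqa's trace
                                 ## level (0 = silent, higher = more output)
                                 optCtrl = list(iprint = 2)))
```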

More generally: can we learn something from this example about which
optimizer to use in which settings? I had never thought about using a
different optimizer, and I don't know anything about statistical
computing. Is there a paper or other resource on this topic that could
be recommended to advanced applied researchers with no background in
statistical computing (like myself)? Or is it easier just to accept
that some models work with some optimizers and not with others?
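Absent such a resource, one pragmatic check is to refit the same model with each candidate optimizer and compare the fits. This is a sketch, not official lme4 advice; it assumes a fitted merMod object m0 as in the quoted message below:

```r
## Refit with each optimizer and compare log-likelihoods; a clearly
## lower value suggests that optimizer stopped at a poor local optimum.
optimizers <- c("Nelder_Mead", "bobyqa")
fits <- lapply(optimizers, function(opt)
  update(m0, control = lmerControl(optimizer = opt)))
setNames(sapply(fits, logLik), optimizers)
```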

Thanks again!
Marko


2013/9/19 Kevin Wright <kw.stat at gmail.com>:
> Try the bobyqa optimizer and see what happens.  It was used by the old
> lme4; the new version uses Nelder_Mead.
>
> Kevin
>
>
>
> On Thu, Sep 19, 2013 at 1:25 PM, Marko Bachl <marko.bachl at uni-hohenheim.de>
> wrote:
>>
>> Dear list, dear lme4-Developers,
>> first of all, thanks a lot for the terrific work developing lme4 and
>> explaining it here on this list.
>>
>> Recently, I installed the most recent version of lme4 from github. I
>> re-ran an older model that worked well with lme4_0.999999-0 from CRAN.
>> With the development version lme4_1.1-0 I get an implausibly large
>> estimate for one of the random effects variances.
>>
>> Short version:
>> My model is: m0 = lmer(rtr2 ~ turnsec + (turnsec | kombiid) + (turnsec
>> | turnid) + (turnsec | idnr), verbose = TRUE, data = d1)
>> The R data file can be downloaded from
>> https://dl.dropboxusercontent.com/u/3262123/data.RData (1.3 MB).
>>
>> The CRAN version gives 2.11 (Intercept) and 0.026 (turnsec) as
>> estimates for the random effects in "turnid". These estimates are
>> sensible and in line with the estimates from a model with only random
>> intercepts. The devel version gives 102.9740 (Intercept) and 81.0018
>> (turnsec), which are implausibly large. All other estimates are
>> approximately equal in both versions.
>>
>> Do you have any suggestions why the one variance estimate of the devel
>> version differs so drastically from the CRAN version? And can the
>> sensible result of the CRAN version be trusted?
>>
>>
>> More in detail: I analyze how respondents continuously rate a
>> politician during 34 answers of a televised debate using a response
>> dial on a scale from -50 to 50. The rating is recorded every second
>> for the approx. 30 seconds of each answer.
>>
>> My model is: m0 = lmer(rtr2 ~ turnsec + (turnsec | kombiid) + (turnsec
>> | turnid) + (turnsec | idnr), verbose = TRUE, data = d1)
>>
>> rtr2 is the rating; turnsec is the second of the answer, starting at
>> 0. kombiid is a unique identifier for each combination of respondent
>> and answer (n = 4762), turnid is a unique identifier for each answer
>> (n = 34), and idnr is a unique identifier for each respondent (n = 172).
>> As every respondent rates every answer, turnid and idnr are crossed
>> (but not balanced, due to missing data for some combinations of answers
>> and respondents). The R data file can be downloaded from
>> https://dl.dropboxusercontent.com/u/3262123/data.RData (1.3 MB).
>>
>> The variance estimates from the lme4_0.999999-0 version are
>> theoretically sensible and in line with the estimates from a model
>> with only random intercepts.
>>
>> Random effects:
>>  Groups   Name        Variance  Std.Dev. Corr
>>  kombiid  (Intercept) 45.821653 6.76917
>>           turnsec      0.494588 0.70327  -0.250
>>  idnr     (Intercept)  8.307388 2.88225
>>           turnsec      0.138710 0.37244  0.272
>>  turnid   (Intercept)  2.110807 1.45286
>>           turnsec      0.026125 0.16163  -0.062
>>  Residual             40.675410 6.37773
>>
>>
>> The same model using the same data with lme4_1.1-0 gives these estimates:
>>
>> Random effects:
>>  Groups   Name        Variance Std.Dev. Corr
>>  kombiid  (Intercept)  45.6992  6.7601
>>           turnsec       0.4946  0.7033  -0.25
>>  idnr     (Intercept)   8.8504  2.9750
>>           turnsec       0.1386  0.3723  0.27
>>  turnid   (Intercept) 102.9740 10.1476
>>           turnsec      81.0018  9.0001  0.18
>>  Residual              40.6540  6.3760
>>
>> The variance estimates for the groups kombiid and idnr are almost the
>> same, but the estimate for turnid is implausibly large.
>>
>> Thanks a lot for any advice
>> Marko
>>
>> _______________________________________________
>> R-sig-mixed-models at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
>
>
>
> --
> Kevin Wright



-- 
www.komm.uni-hohenheim.de/bachl


