[R-sig-ME] Fw: lme4

Ebi Safaie safaie124 at yahoo.com
Thu Oct 16 17:41:47 CEST 2014



Dear Ben Bolker, 
Thank you very much for your informative reply.
Yes, I followed Barr et al (2013).

I ran the check you kindly sent me. I'm not sure I've done it correctly, but it came out FALSE.

It would be a good idea to check for a singular fit, i.e.

  t <- getME(mod.15,"theta")
  lwr <- getME(mod.15,"lower")
  any(t[lwr==0]< 1e-6)



  t <- getME(mod.15,"theta")
  lwr <- getME(mod.15,"lower")
  any(t[lwr==0] < 1e-6)
  [1] FALSE
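Is it also worth looking at the estimated variance components directly? My (possibly wrong) understanding is that near-zero standard deviations or correlations of +/-1 would point to the same problem; just a sketch of what I mean:

  ## print the estimated random-effects standard deviations and correlations;
  ## near-zero SDs or correlations of +/-1 would also suggest a (near-)singular fit
  VarCorr(mod.15)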


I increased the number of iterations as you suggested:

summary(mod.15 <- glmer(ErrorRate ~ 1 + cgroup*cgrammaticality*cHeadNoun*cVerbType +
    (1 | itemF) + (1 + grammaticality*HeadNoun*VerbType | participantF),
    data = e3, family = "binomial", na.action = na.exclude,
    control = glmerControl(optCtrl = list(maxfun = 1e6))))

but I got the following warnings:

Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 0.113924 (tol = 0.001, component 29)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge: degenerate Hessian with 1 negative eigenvalues
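Would it also make sense to restart the fit from the previous estimates? Something like the following is what I have in mind (just a sketch; the maxfun value is arbitrary and mod.15b is only a name I made up):

  ## restart from the parameter estimates of the previous fit
  ss <- getME(mod.15, c("theta", "fixef"))
  mod.15b <- update(mod.15, start = ss,
                    control = glmerControl(optCtrl = list(maxfun = 2e5)))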
Actually, the following two interactions are important to me because they represent two hypotheses.

Two-way interaction:

cgroup*cgrammaticality

Four-way interaction:

cgroup*cgrammaticality*cHeadNoun*cVerbType

Earlier, I used odds ratios to calculate the effect sizes (table below) and was able to
dissociate between these two interactions (i.e., the two hypotheses) via their effect sizes.
Because the 95% CI for the four-way interaction spans a much wider range between its lower
and upper limits, I took this as support for the two-way interaction.
Am I on the right track?
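For reference, the odds ratios and CIs in the table can be reproduced from the fixed effects roughly like this (a sketch assuming Wald-type intervals with a 1.96 multiplier):

  cc <- fixef(mod.15)             # estimates on the log-odds scale
  se <- sqrt(diag(vcov(mod.15)))  # Wald standard errors
  OR <- exp(cc)                   # odds ratios
  LL <- exp(cc - 1.96 * se)       # lower 95% limit
  UL <- exp(cc + 1.96 * se)       # upper 95% limit
  round(cbind(OR, LL, UL), 2)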
Given that I want to use the newer version of lme4 (as you recommended),
I would really appreciate your advice on what to do with this really
complex design.


Thanks for your help in advance
Kind regards,
Ebrahim




  
Table 9. Experiment 1a: fixed effects from the mixed-effects logistic regression model
fit to data from both NSs and NNSs for S-V agreement (main analysis).

Fixed effect                                         Estimate  Std. Error  z value  Pr(>|z|)        OR  95% CI for OR (LL, UL)
(Intercept)                                           -1.9745      0.1274  -15.494   < 2e-16 ***  0.14   0.11,   0.18
Group (NNSs)                                           1.5843      0.1789    8.854   < 2e-16 ***  4.88   3.43,   6.92
Grammaticality (Ungrammatical)                         0.5245      0.2182    2.404    0.0162 *    1.69   1.10,   2.59
Head Noun (SG)                                        -0.2720      0.1853   -1.468    0.1422      0.76   0.53,   1.10
Verb Type (THEMA)                                      0.7591      0.2326    3.263    0.0011 **   2.14   1.35,   3.37
Group (NNSs) × Grammaticality (Ungrammatical)          1.5796      0.3586    4.404  1.06e-05 ***  4.85   2.40,   9.80
Group (NNSs) × Head Noun (SG)                          0.0475      0.3537    0.134    0.8932      1.05   0.52,   2.10
Grammaticality (Ungrammatical) × Head Noun (SG)        0.5368      0.4338    1.237    0.2159      1.71   0.73,   4.00
Group (NNSs) × Verb Type (THEMA)                      -0.2441      0.3472   -0.703    0.4821      0.78   0.40,   1.55
Grammaticality (Ungrammatical) × Verb Type (THEMA)    -0.4861      0.4185   -1.162    0.2454      0.61   0.27,   1.40
Head Noun (SG) × Verb Type (THEMA)                    -0.1563      0.3969   -0.394    0.6936      0.86   0.39,   1.86
Group (NNSs) × Grammaticality (Ungrammatical) × Head Noun (SG)
                                                       0.2659      0.7161    0.371    0.7104      1.30   0.32,   5.31
Group (NNSs) × Grammaticality (Ungrammatical) × Verb Type (THEMA)
                                                      -0.4691      0.6945   -0.675    0.4994      0.63   0.16,   2.44
Group (NNSs) × Head Noun (SG) × Verb Type (THEMA)      0.7661      0.6916    1.108    0.2679      2.15   0.55,   8.34
Grammaticality (Ungrammatical) × Head Noun (SG) × Verb Type (THEMA)
                                                       0.9104      0.9147    0.995    0.3196      2.49   0.41,  14.93
Group (NNSs) × Grammaticality (Ungrammatical) × Head Noun (SG) × Verb Type (THEMA)
                                                       3.1326      1.3994    2.239    0.0252 *   22.93   1.48, 356.16
summary(mod.15 <- glmer(ErrorRate ~ 1 + cgroup*cgrammaticality*cHeadNoun*cVerbType +
    (1 | itemF) + (1 + grammaticality*HeadNoun*VerbType | participantF),
    data = e3, family = "binomial", na.action = na.exclude))





 
On Wednesday, October 15, 2014 8:30 PM, Ben Bolker <bbolker at gmail.com> wrote:
  



> Dear Ben Bolker,
> 
> Thanks for the quick answer. Yes, I admit that I did not mention my
> problem clearly and do apologize for that. It was just because I came
> across the same error messages in the Qs & As that I did when my
> package was updated.  I was using the earlier version of lme4 and did
> not have any problems with it. For instance, for the following code I
> came to the following calculation without any error messages. (my
> dependent variable is binary and fixed factors are categorical)
> 
> summary(mod.15<-glmer(ErrorRate~1+
>    cgroup*cgrammaticality*cHeadNoun*cVerbType+(1|itemF)+
>     (1+grammaticality*HeadNoun*VerbType|participantF),data=e3,
>        family="binomial",na.action=na.exclude))

   Note that this is a very large (15*15) random-effects
variance-covariance matrix to estimate: I know that this is
recommended by Barr et al 2013, but see recent discussion on this
list, e.g.
http://article.gmane.org/gmane.comp.lang.r.lme4.devel/12492/

  It would be a good idea to check for a singular fit, i.e.

  t <- getME(mod.15,"theta")
  lwr <- getME(mod.15,"lower")
  any(t[lwr==0]< 1e-6)
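   Purely as an illustration (not a specific recommendation for your
data), a reduced random-effects structure could look something like the
sketch below: it drops the interaction slopes and, via the double-bar
notation, the correlations among the remaining terms. I'm assuming here
that the centered numeric contrasts can serve as the slope variables.

  mod.15r <- glmer(ErrorRate ~ 1 + cgroup*cgrammaticality*cHeadNoun*cVerbType +
                     (1 | itemF) +
                     (1 + cgrammaticality + cHeadNoun + cVerbType || participantF),
                   data = e3, family = "binomial", na.action = na.exclude,
                   control = glmerControl(optCtrl = list(maxfun = 1e6)))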

> Fixed effects:
>                                             Estimate Std. Error z value Pr(>|z|)
> (Intercept)                                  -1.9745     0.1274 -15.494  < 2e-16 ***
> cgroup                                        1.5843     0.1789   8.854  < 2e-16 ***
> cgrammaticality                               0.5245     0.2182   2.404   0.0162 *
> cHeadNoun                                    -0.2720     0.1853  -1.468   0.1422
> cVerbType                                     0.7591     0.2326   3.263   0.0011 **
> cgroup:cgrammaticality                        1.5796     0.3586   4.404 1.06e-05 ***
> cgroup:cHeadNoun                              0.0475     0.3537   0.134   0.8932
> cgrammaticality:cHeadNoun                     0.5368     0.4338   1.237   0.2159
> cgroup:cVerbType                             -0.2441     0.3472  -0.703   0.4821
> cgrammaticality:cVerbType                    -0.4861     0.4185  -1.162   0.2454
> cHeadNoun:cVerbType                          -0.1563     0.3969  -0.394   0.6936
> cgroup:cgrammaticality:cHeadNoun              0.2659     0.7161   0.371   0.7104
> cgroup:cgrammaticality:cVerbType             -0.4691     0.6945  -0.675   0.4994
> cgroup:cHeadNoun:cVerbType                    0.7661     0.6916   1.108   0.2679
> cgrammaticality:cHeadNoun:cVerbType           0.9104     0.9147   0.995   0.3196
> cgroup:cgrammaticality:cHeadNoun:cVerbType    3.1326     1.3994   2.239   0.0252 *
> 
   These estimated effects look only very slightly different to me
from the ones below (i.e., only a few percent difference in the point
estimates, always much smaller than the estimated standard errors, and
no qualitative differences in the z/p values).  Can you specify whether
there are any differences that particularly concern you?
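   If you still have both fits available (here under the hypothetical
names mod_old and mod_new), a side-by-side comparison of the fixed
effects is straightforward:

  ## hypothetical object names: mod_old = fit from the old lme4, mod_new = current fit
  cbind(old      = fixef(mod_old),
        new      = fixef(mod_new),
        rel.diff = abs(fixef(mod_new) - fixef(mod_old)) / abs(fixef(mod_old)))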

> 
> But as soon as I updated the package to a new version, for the same
> code I got the following error messages, and some calculations do not
> match those from the earlier version (as shown below). I don't know
> exactly which version the previous one was, but I guess I was using
> 2013 packages.
> 
> Warning messages:
> 1: In commonArgs(par, fn, control, environment()) :
>   maxfun < 10 * length(par)^2 is not recommended.

   Relatively harmless

> 2: In optwrap(optimizer, devfun, start, rho$lower, control = control,  :
>   convergence code 1 from bobyqa: bobyqa -- maximum number of function evaluations exceeded
> 3: In (function (fn, par, lower = rep.int(-Inf, n), upper = rep.int(Inf,  :
>   failure to converge in 10000 evaluations
> 
   You definitely need to increase the number of iterations: see
?glmerControl, specifically the "optCtrl" setting (e.g.
control = glmerControl(optCtrl = list(maxfun = 1e6)))

> 4: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
>   Model failed to converge with max|grad| = 0.0928109 (tol = 0.001, component 28)
> 5: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
>   Model failed to converge: degenerate Hessian with 1 negative eigenvalues
> 
  These are convergence *warnings*.  They do not indicate that your fit
is actually any worse than previously, just that we have increased the
sensitivity of the tests.  Can you specify what version you are using?
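   For reference, the installed version can be checked with:

  packageVersion("lme4")
  sessionInfo()    ## full set of loaded package versions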

   I wouldn't recommend moving back to an earlier version of lme4,
but you could check out https://github.com/lme4/lme4/blob/master/README.md
for instructions on how to install the lme4.0 package if you really
want ...

> Fixed effects:
>                                             Estimate Std. Error z value Pr(>|z|)
> (Intercept)                                 -1.97217    0.13810 -14.281  < 2e-16 ***
> cgroup                                       1.58614    0.18262   8.686  < 2e-16 ***
> cgrammaticality                              0.52725    0.24544   2.148   0.0317 *
> cHeadNoun                                   -0.28061    0.21350  -1.314   0.1887
> cVerbType                                    0.75503    0.25615   2.948   0.0032 **
> cgroup:cgrammaticality                       1.57010    0.36695   4.279 1.88e-05 ***
> cgroup:cHeadNoun                             0.05736    0.36138   0.159   0.8739
> cgrammaticality:cHeadNoun                    0.55238    0.47616   1.160   0.2460
> cgroup:cVerbType                            -0.24665    0.35618  -0.692   0.4886
> cgrammaticality:cVerbType                   -0.49272    0.45732  -1.077   0.2813
> cHeadNoun:cVerbType                         -0.14235    0.44553  -0.319   0.7493
> cgroup:cgrammaticality:cHeadNoun             0.24468    0.73223   0.334   0.7383
> cgroup:cgrammaticality:cVerbType            -0.45695    0.70627  -0.647   0.5176
> cgroup:cHeadNoun:cVerbType                   0.75837    0.70763   1.072   0.2839
> cgrammaticality:cHeadNoun:cVerbType          0.88375    0.98856   0.894   0.3713
> cgroup:cgrammaticality:cHeadNoun:cVerbType   3.15344    1.42351   2.215   0.0267 *
> 
> Because I am using RStudio I have just two options when I want to
> install or to update packages. When I use CRAN and let RStudio install
> lme4 automatically, it installs the most recent one. As such, it
> downloads the new lme4 package, which is problematic as I understand it
> (sorry, I might be wrong about that because I don't have any expertise,
> but I'm talking from my observations). So my suggestion is to keep the
> earlier version of lme4 on CRAN so that when users install it they
> automatically get the one that was not problematic. Another option for
> me is to download the earlier version and install it from my PC. But
> when I use this option from RStudio, lme4 does not install and comes
> up with the following message:
> 
> install.packages("~/lme4_1.0-4.tar.gz", repos = NULL, type = "source")
> Installing package into 'C:/Users/Azad/Documents/R/win-library/3.0'
> (as 'lib' is unspecified)
> * installing *source* package 'lme4' ...
> ** package 'lme4' successfully unpacked and MD5 sums checked
> ** libs

  If you want to install 1.0-4 you can either get the tarball from here:
http://cran.r-project.org/src/contrib/Archive/lme4/lme4_1.0-4.tar.gz

but you will either need to be able to install it from source (i.e.
have compilers etc. installed) or modify the DESCRIPTION file to
make yourself the maintainer and ship it off to
ftp://win-builder.r-project.org.

*OR* (possibly a better idea) you can retrieve a binary/.zip file from

http://lme4.r-forge.r-project.org/repos/bin/windows/contrib/3.0/lme4_1.0-4.zip

and install it.
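  Once downloaded, installing from the local file should work along
these lines (the path below is only an example):

  ## hypothetical local path to the downloaded binary .zip
  install.packages("C:/Users/Azad/Downloads/lme4_1.0-4.zip",
                   repos = NULL, type = "win.binary")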

  (You didn't specify your actual error messages from the attempted
lme4 installation.)

> Sorry for the inconvenience and hope that I've made things clear now.
> Best wishes
> Ebrahim
>

_______________________________________________
R-sig-mixed-models at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models