[R-sig-ME] Question about inclusion of a random effect

Chad Newbolt newboch at auburn.edu
Tue Aug 8 20:06:48 CEST 2017


When I include (1|Question) I receive the dreaded convergence warning...

In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 0.00303355 (tol = 0.001, component 1)

If I remove (1|Question) there is no convergence warning.  Is this an indication that the variance of this random effect is 0 thereby creating a problem with the optimizer?  Does this warrant removing this random effect?  If not, any suggestions on how to proceed with the convergence issues?  
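[Editor's note: a common way to investigate a warning like this is to inspect the fitted variance components and to refit with a different optimizer or with the full set of available optimizers. The sketch below assumes the `results` model and `datum` data frame from the original post further down the thread; it uses only standard lme4 tools.]

```r
library(lme4)

## Inspect the estimated variance components: a variance at or very
## near 0 for (1 | Question) would suggest a boundary (singular) fit
VarCorr(results)

## Refit with a different optimizer and a larger iteration budget;
## bobyqa is often more robust for glmer fits
results2 <- update(results,
                   control = glmerControl(optimizer = "bobyqa",
                                          optCtrl = list(maxfun = 2e5)))

## Refit with every available optimizer; if the log-likelihoods and
## estimates agree closely across optimizers, the warning is likely
## a false positive rather than a real convergence failure
aa <- allFit(results)
summary(aa)$llik
```

If all optimizers land on essentially the same fit, the small gradient (max|grad| = 0.003 against a tolerance of 0.001) is usually tolerable; if they disagree, or if the Question variance is estimated at zero, that points to a genuine estimation problem rather than a modeling error.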

 

Chad Newbolt

Research Associate

School of Forestry And Wildlife Sciences

Auburn University

334-332-4864

________________________________________
From: R-sig-mixed-models <r-sig-mixed-models-bounces at r-project.org> on behalf of Chad Newbolt <newboch at auburn.edu>
Sent: Tuesday, August 8, 2017 12:50 PM
To: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] Question about inclusion of a random effect

Thanks to everyone for the clarification and quick responses!!!

________________________________
From: Alday, Phillip <Phillip.Alday at mpi.nl>
Sent: Tuesday, August 8, 2017 12:43 PM
To: Chad Newbolt; r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] Question about inclusion of a random effect


Yes, it makes sense. This is what is often called an "item" in discussions of crossed random effects, and leaving it out can distort inferences - see Clark 1973, "The language-as-fixed-effect fallacy", and more recent work by Westfall and Judd (I'm thinking of their 2012 paper on this, but I can't think of the title or author order and I'm not at my desk to look it up).

Phillip
________________________________
From: Chad Newbolt <newboch at auburn.edu>
Sent: Aug 8, 2017 7:25 PM
To: r-sig-mixed-models at r-project.org
Subject: [R-sig-ME] Question about inclusion of a random effect


All,



I'm working on analyzing a data set from a survey.  In the survey, I asked a group of respondents to view a series of 94 images, or test questions, and I'm in the process of evaluating the influence of various factors on their ability to correctly identify an item in an image.  The test questions likely vary considerably in difficulty, with some being harder to answer correctly than others.  I understand that I clearly should include a random effect for each respondent (ID); however, I'm not sure whether it is appropriate to also include a random effect for question (1|Question) to account for that variation.  I may be overthinking this one, but including and removing (1|Question) dramatically changes my results, so I want to make sure to get this one right.



My basic model is shown below for reference:



  results <- glmer(Y ~ X1 + X2 + X3 + X4 + X5 + X6 + (1 | ID) + (1 | Question),
                   data = datum, na.action = na.omit, family = binomial)



Thanks in advance for the help


_______________________________________________
R-sig-mixed-models at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models



