[R-sig-ME] interpretation of categorical crossed effect in lme4

Ken Beath ken.beath at mq.edu.au
Sat Dec 6 23:57:25 CET 2014


Your interpretation of the x2 random effect is correct, and the two models
are identical.
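
If you want to check the equivalence yourself, something like the following
should do it (a minimal sketch, assuming a data frame called dat holding y,
x1, x2 and group; the names are placeholders for yours):

library(lme4)

# Hypothetical data frame `dat` with columns y, x1, x2 (0/1) and group.
m1 <- lmer(y ~ x1 + x2 + (1 + x2 | group), data = dat)
m2 <- lmer(y ~ x1 + x2 + (x2 | group),     data = dat)

# The two formulas expand to the same model (the intercept is implicit),
# so the fits should match.
logLik(m1)
logLik(m2)
VarCorr(m1)
VarCorr(m2)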

These models can do some strange things. I would try coding x2 as -0.5,0.5
and see what happens, and possibly also centering x1.
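
For example (again assuming the data frame is called dat, x2 is stored as
0/1, and the new column names x2c and x1c are just made up for illustration):

# lme4 already loaded. Effect-code x2 as -0.5/0.5 and centre x1.
dat$x2c <- ifelse(dat$x2 == 1, 0.5, -0.5)
dat$x1c <- dat$x1 - mean(dat$x1, na.rm = TRUE)

m3 <- lmer(y ~ x1c + x2c + (1 | group) + (0 + x2c | group), data = dat)

With -0.5/0.5 coding the random intercept describes the group effect at the
midpoint of the two x2 levels rather than at x2 = 0, which can change how
the two sets of predicted random effects relate to each other.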

If all else fails, simulating some data sets and seeing what happens can be
very illuminating.
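
Here is the kind of simulation I mean (a rough sketch only; the sample and
effect sizes are made up to loosely match your description): generate data
in which the intercept and x2 random effects are truly uncorrelated, refit
your model, and see how correlated the extracted random effects look,
including the u1 vs (u1 + u2) comparison from your question (b).

library(lme4)
set.seed(1)

ngroup <- 250                                  # about 250 groups
nper   <- 7                                    # ~7 observations per x2 level
group  <- factor(rep(seq_len(ngroup), each = 2 * nper))
x2     <- rep(rep(0:1, each = nper), ngroup)
x1     <- rnorm(length(x2))

u1 <- rnorm(ngroup, sd = 1.0)                  # random intercepts
u2 <- rnorm(ngroup, sd = 0.5)                  # random x2 effects, simulated independently of u1
y  <- 0.5 * x1 + 1.0 * x2 +
      u1[as.integer(group)] + u2[as.integer(group)] * x2 +
      rnorm(length(x2))

simdat <- data.frame(y, x1, x2, group)
fit <- lmer(y ~ x1 + x2 + (1 | group) + (0 + x2 | group), data = simdat)

re <- ranef(fit)$group                         # conditional modes (BLUPs) per group
cor(re[["(Intercept)"]], re[["x2"]])           # question (a): true correlation is zero here
cor(re[["(Intercept)"]], re[["(Intercept)"]] + re[["x2"]])   # question (b): u1 vs u1 + u2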

On 7 December 2014 at 08:04, Andrew McAleavey <andrew.mcaleavey at gmail.com>
wrote:

> Thanks for the reply,
>
> I take from your message that my interpretation of the x2 random effect is
> correct - it is the variance of the deviation from the group mean with x2=1
> compared with x2=0. So the total effect of group when x2=1 would be the sum
> of both random effects, yes?
>
> The model you suggest:
> y ~ x1 + x2 + (1 + x2 | group);
> is identical to:
> y ~ x1 + x2 + (x2 | group),
> right? It seems to be, based on my testing and understanding of lmer
> defaults. In any case, that model does not improve the model fit according
> to AIC/BIC and the LRT, so I went with the one I described in my first
> email.
>
> My problem is that these random effects should not be correlated (because
> there is no covariance between them in the model, right?), though the
> estimates (pulled from ranef()) seem to be meaningfully correlated. Is this
> just a chance occurrence or an artifact, like when factor scores from
> uncorrelated factors are highly correlated? Should I avoid interpreting the
> high observed correlation, given the lack of formal modeling and the
> nonsignificant improvement in fit? Am I just capitalizing on chance?
>
> Thanks!
> Andrew
>
> On Sat, Dec 6, 2014 at 3:23 PM, Ken Beath <ken.beath at mq.edu.au> wrote:
>
>> The random effect for x2 gives the variation in the effect of x2, that
>> is, the difference between levels (from x2=0 to x2=1), across groups.
>>
>> I would first try the model below and see if it improves the AIC.
>>
>> y ~ x1 + x2 + (1 + x2 | group)
>>
>> This now allows the random effects for the intercept and x2 to be
>> correlated.
>>
>> On 7 December 2014 at 02:12, Andrew McAleavey <andrew.mcaleavey at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have an lmer model of the form:
>>> y ~ x1 + x2 + (1 | group) + (0 + x2 | group)
>>> where x1 is continuous, x2 is dichotomous and dummy-coded, and group has
>>> about 250 levels (each with a minimum of 3 observations per x2 level, but
>>> the average is more like 7 per x2 level, and over 15 observations per
>>> group on average, ignoring x2). My understanding is that this model
>>> separately estimates variance components for each level of x2 across
>>> groups, and does not model any correlation between them.
>>>
>>> This was a better fit to the data than the structure:
>>> y ~ x1 + x2 + (x2 | group);
>>> and I came to this model based on a series of threads on this list. Note
>>> that under the (x2 | group) model the correlation between the random
>>> effects for x2 and the intercept was .67, and as far as I can tell
>>> convergence was not a problem in either model, as it might be in some
>>> cases with smaller numbers of groups.
>>>
>>> However, I would like to interpret, at least tentatively, the random
>>> effects, and especially the relationship between them. My central
>>> substantive question is whether groups vary in their differential
>>> effectiveness across x2 levels (e.g., some groups were effective with
>>> x2=0 but not x2=1, while others were highly effective with both).
>>> Extracting the random effects and plotting them suggests that even though
>>> the model does not explicitly include a correlation, the two random
>>> effects are correlated at about r = .56.
>>>
>>> My questions are these:
>>> a) Is a significant correlation like r = .56 common under the conditions
>>> of my model, in which the correlation between these effects was not
>>> modeled?
>>> b) To interpret the random effects, I think I may need to treat them as
>>> additive and correlate u1 with (u1 + u2), which leads to an even higher
>>> correlation (r > .8). Am I correct in this? My thinking is that u2, as
>>> the effect of a dummy-coded variable, represents the deviation of x2 = 1
>>> from x2 = 0, but is that incorrect?
>>>
>>> Thanks very much,
>>> Andrew
>>>
>>> --
>>> Andrew McAleavey, M.S.
>>> Department of Psychology
>>> The Pennsylvania State University
>>> 346 Moore Building
>>> University Park, PA 16802
>>> aam239 at psu.edu
>>>
>>> _______________________________________________
>>> R-sig-mixed-models at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>>
>>
>>
>>
>> --
>>
>> *Ken Beath*
>> Lecturer
>> Statistics Department
>> MACQUARIE UNIVERSITY NSW 2109, Australia
>>
>> Phone: +61 (0)2 9850 8516
>>
>> Building E4A, room 526
>> http://stat.mq.edu.au/our_staff/staff_-_alphabetical/staff/beath,_ken/
>>
>> CRICOS Provider No 00002J
>> This message is intended for the addressee named and may contain
>> confidential information.  If you are not the intended recipient, please
>> delete it and notify the sender.  Views expressed in this message are those
>> of the individual sender, and are not necessarily the views of the Faculty
>> of Science, Department of Statistics or Macquarie University.
>>
>>
>
>
> --
> Andrew McAleavey, M.S.
> Department of Psychology
> The Pennsylvania State University
> 346 Moore Building
> University Park, PA 16802
> aam239 at psu.edu
>



-- 

*Ken Beath*
Lecturer
Statistics Department
MACQUARIE UNIVERSITY NSW 2109, Australia

Phone: +61 (0)2 9850 8516

Building E4A, room 526
http://stat.mq.edu.au/our_staff/staff_-_alphabetical/staff/beath,_ken/

CRICOS Provider No 00002J


