[R-sig-ME] Removing random intercepts before random slopes

Maarten Jung Maarten.Jung sending from mailbox.tu-dresden.de
Tue Sep 4 01:31:03 CEST 2018


Thank you, Jake, this makes total sense to me and reminds me to choose
contrasts where the intercept corresponds to the overall mean (which seems
to be handy not only in this case...)

On Sat, Sep 1, 2018 at 4:52 PM Jake Westfall <jake.a.westfall using gmail.com>
wrote:

> Hi Maarten,
>
> I should point out that all of this, both what I said and what the Barr et
> al. paper said, is contingent on the fixed predictor being
> contrast/deviation coded, NOT treatment/dummy coded. This is sort of
> mentioned in the Barr paper in footnote 12 (attached to the paragraph you
> cited on p. 276), but it's not 100% clear, and I probably should have
> reminded you about it too.
>
> If you add `options(contrasts=c("contr.helmert", "contr.poly"))` to the
> top of your script you'll see the expected results.
>
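> For concreteness, a minimal sketch of the comparison using lme4 and the
> Machines data from nlme (the c1/c2 helper columns are mine, just to make
> dropping the random intercept explicit):
>
>   options(contrasts = c("contr.helmert", "contr.poly"))
>   library(lme4)
>   data("Machines", package = "nlme")
>
>   ## Build the contrast columns by hand so the random intercept can be
>   ## dropped while the random slopes on the contrasts are kept:
>   X <- model.matrix(~ Machine, Machines)
>   Machines$c1 <- X[, 2]
>   Machines$c2 <- X[, 3]
>
>   m_full     <- lmer(score ~ c1 + c2 + (1 + c1 + c2 | Worker),
>                      data = Machines)
>   m_no_icept <- lmer(score ~ c1 + c2 + (0 + c1 + c2 | Worker),
>                      data = Machines)
>
>   ## With deviation coding, the fixed-effect standard errors of the two
>   ## fits should be nearly identical:
>   coef(summary(m_full))
>   coef(summary(m_no_icept))
>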
> The reason the coding matters in this way is that if we're using
> contrast/deviation codes and the design is approximately balanced, then
> removing the random intercepts is the same as constraining all units to
> have the same overall mean response -- visually, this just vertically
> shifts each of the unit-specific regression lines (actually planes in this
> case) so that they all intersect X=0 at the same Y -- but this shift
> doesn't have much impact on any of the unit-specific slopes, and thus
> doesn't change the random slope variance much. Since the random slope
> variance enters the standard errors of the fixed slopes while the random
> intercept variance does not (because the fixed slopes are effectively
> difference scores that subtract out the unit-specific means), this means
> that the standard errors are mostly unchanged by this shift.
>
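> Here's a toy check of my own with plain lm(): with a centered predictor,
> pinning a line's intercept at any fixed value leaves its least-squares
> slope exactly unchanged, because sum(x) = 0 makes the intercept drop out
> of the slope estimate.
>
>   set.seed(42)
>   x <- c(-1, 0, 1)                 # centered (deviation-coded) predictor
>   y <- 60 + 3 * x + rnorm(3)       # one unit's responses
>
>   unconstrained <- coef(lm(y ~ x))["x"]
>   ## Pin the intercept at an arbitrary value (here 60) by subtracting it
>   ## and dropping the intercept term:
>   constrained <- coef(lm(I(y - 60) ~ 0 + x))["x"]
>
>   all.equal(unconstrained, constrained)   # TRUE: slope unaffected
>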
> As you point out, the residual variance does expand a little bit to soak
> up some of the ignored random intercept variance. But this has very little
> impact on the standard errors because, in the standard error expression,
> the residual variance is divided by the total number of observations, so
> its contribution to the entire expression is negligible except for tiny
> data sets (which to some extent is true of the Machines dataset).
>
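> You can see both points in the Machines sketch above (this assumes the
> m_full and m_no_icept fits from that snippet) by comparing the variance
> components alongside the fixed-effect tables:
>
>   VarCorr(m_full)
>   VarCorr(m_no_icept)   # residual SD should come out somewhat larger,
>                         # absorbing the dropped intercept variance
>
> while the standard errors in coef(summary(...)) barely move.
>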
> Now on the other hand, if we use treatment/dummy codes, removing the
> random intercepts corresponds to a completely different constraint,
> specifically we constrain all units to have the same response *in one of
> the experimental conditions*, and the random slopes are left to be whatever
> they now need to be to fit the other experimental conditions. This can have
> a big impact on the unit-specific regression lines, generally increasing
> their variance, possibly by a lot, which has a big impact on the standard
> errors of the fixed slopes.
>
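> And here is the same sketch under treatment/dummy coding (again just an
> illustration; d1/d2 are my own helper columns):
>
>   options(contrasts = c("contr.treatment", "contr.poly"))
>   library(lme4)
>   data("Machines", package = "nlme")
>
>   X <- model.matrix(~ Machine, Machines)
>   Machines$d1 <- X[, 2]   # Machine B vs. A
>   Machines$d2 <- X[, 3]   # Machine C vs. A
>
>   m_full2     <- lmer(score ~ d1 + d2 + (1 + d1 + d2 | Worker),
>                       data = Machines)
>   m_no_icept2 <- lmer(score ~ d1 + d2 + (0 + d1 + d2 | Worker),
>                       data = Machines)
>
>   ## Dropping the random intercept now pins every worker to the same
>   ## mean in the reference condition; the fixed-effect standard errors
>   ## should shift noticeably:
>   coef(summary(m_full2))
>   coef(summary(m_no_icept2))
>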
> This is a lot easier to understand using pictures (and maybe a few
> equations) rather than quickly typed words, but it's Saturday morning and I
> want to do fun stuff, so... anyway, I hope this helps a little.
>
> Finally, in answer to your follow-up question of why it might be
> conceptually strange to have random slopes but not random intercepts, maybe
> this image from the Gelman & Hill textbook, showing what that implies for
> the unit-specific regression lines, will make it clearer. I hope you agree
> that the middle panel is strange. The image is specifically in a dummy
> coding context, but it's not much less strange even if we use
> contrast/deviation codes.
>
>
> Jake
>



