[R-sig-ME] A conceptual question regarding fixed effects

sree datta sreedta8 at gmail.com
Fri Aug 13 05:55:20 CEST 2021


Thanks for sharing this, Phillip; it's a fabulous and very helpful document!

Sree

On Thu, Aug 12, 2021 at 9:19 PM Phillip Alday <me at phillipalday.com> wrote:

> Here's a second try with the link:
>
> http://www.drizopoulos.com/courses/EMC/CE08.pdf
>
> On 12/8/21 6:17 pm, Simon Harmel wrote:
> > Dear Phillip,
> >
> > Thank you very much. Unfortunately, I couldn't open the link you shared
> > (I get: This site can’t be reached). So, I mainly want to focus on the
> > second paragraph of your answer. My focus is only on LMMs.
> >
> > To be clear, I gather you think it is correct to view a fixed-effect
> > coefficient as some kind of (weighted) average of the individual
> > regression fits at each level of a grouping variable, and that this is
> > what helps prevent Simpson's-paradox-type conclusions?
> >
> > Now, suppose X is a continuous predictor. It can vary across levels of
> > ID1 and ID2, where ID2 is nested in ID1.
> >
> > I fit three models with X:
> >
> > 1) y ~ X + (X | ID1)
> > 2) y ~ X + (X | ID1 / ID2)
> > 3) y ~ X
> >
> > Can I interpret X in (1) as the change in y for a 1-unit change in X,
> > averaged across levels of ID1 but disregarding ID1-ID2 combinations?
> > Can I interpret X in (2) as the change in y for a 1-unit change in X,
> > averaged across levels of ID1 and across ID1-ID2 combinations?
> > Can I interpret X in (3) as the change in y for a 1-unit change in X,
> > disregarding both levels of ID1 and ID1-ID2 combinations?
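> >
> > For concreteness, a minimal lme4 sketch of the three fits (assuming a
> > hypothetical data frame dat with columns y, X, ID1, ID2; the names are
> > illustrative only, not from the thread):
> >
> >     library(lme4)
> >     m1 <- lmer(y ~ X + (X | ID1),       data = dat)
> >     m2 <- lmer(y ~ X + (X | ID1 / ID2), data = dat)
> >     m3 <- lm(y ~ X,                     data = dat)
> >     # fixed-effect slope for X under each random-effects choice
> >     c(fixef(m1)["X"], fixef(m2)["X"], coef(m3)["X"])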
> >
> > Thanks,
> > Simon
> >
> > On Thu, Aug 12, 2021 at 5:46 PM Phillip Alday <me at phillipalday.com> wrote:
> >
> >     This differs somewhat depending on whether you're assuming an
> >     identity link (as in linear mixed models) or a non-identity link (as
> >     in generalized linear mixed models); see e.g. Dimitris Rizopoulos'
> >     explanation of conditional vs. marginal effects on pdf-page 346 /
> >     slide 321 of his course notes http://drizopoulos.com/courses/EMC/CE08.pdf.
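> >
> >     As a rough numerical illustration of that distinction (a sketch
> >     with made-up numbers, not taken from the course notes): for a
> >     logistic GLMM with only a random intercept of variance sigma2, a
> >     widely used approximation attenuates the subject-specific
> >     (conditional) slope to get the population-averaged (marginal) one,
> >     whereas with an identity link the two coincide.
> >
> >         beta_cond <- 1.0  # hypothetical conditional (subject-specific) slope
> >         sigma2    <- 4    # hypothetical random-intercept variance
> >         beta_marg <- beta_cond / sqrt(1 + 0.346 * sigma2)  # logit-link approximation
> >         beta_marg         # roughly 0.65, attenuated relative to beta_cond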
> >
> >     For LMMs, once you've added a by-group intercept term, the biggest
> >     change you'll generally see from adding additional by-group slopes
> >     is in the standard errors of the fixed effects. The by-group
> >     intercept term matters a lot because it begins to separate within-
> >     vs. between-group effects and thus 'overcomes' Simpson's paradox.
> >     More directly, introducing a by-group intercept allows the groups to
> >     have individual lines instead of sharing one line, and thus you get
> >     a separation of within- vs. between-group effects. (Actually, this
> >     matters for any first term, whether the intercept or not, but the
> >     first RE term is usually the intercept.)
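> >
> >     To make that concrete, a toy simulation (made-up data, not the
> >     linked example) in which the pooled slope is positive but every
> >     subject's own slope is negative; adding just a by-subject intercept
> >     recovers the within-subject sign:
> >
> >         library(lme4)
> >         set.seed(1)
> >         subj <- factor(rep(1:10, each = 10))
> >         x    <- rep(1:10 * 2, each = 10) + rnorm(100)   # each subject sits in its own x range
> >         y    <- rep(1:10 * 2, each = 10) - 0.5 * x + rnorm(100, sd = 0.3)
> >         coef(lm(y ~ x))["x"]                  # pooled slope: positive (between-subject trend)
> >         fixef(lmer(y ~ x + (1 | subj)))["x"]  # within-subject slope: negative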
> >
> >     In Statistical Rethinking, Richard McElreath introduces random
> >     effects as being a type of interaction, which is actually a fair
> >     intuition (although there are substantial differences in estimation
> >     and formal details). If you add in higher-order effects, you also
> >     change the precise interpretation of the lower-order effects,
> >     potentially along with their estimates and standard errors. The same
> >     holds approximately for adding in random effects.
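> >
> >     A rough way to see that intuition in code (assuming a hypothetical
> >     data frame dat with columns y, X, and a grouping factor ID1, as in
> >     the models above): a fixed-effects interaction gives every ID1 level
> >     its own unpooled slope, while a random slope shrinks each level's
> >     slope toward the single fixed effect.
> >
> >         library(lme4)
> >         m_int <- lm(y ~ X * ID1, data = dat)         # no pooling: one free slope per level
> >         m_re  <- lmer(y ~ X + (X | ID1), data = dat) # partial pooling
> >         coef(m_re)$ID1      # per-level (shrunken) intercepts and slopes
> >         fixef(m_re)["X"]    # the slope the per-level slopes are shrunk toward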
> >
> >     Note that for the linked example, both the LM and the LMM offer
> >     coefficients with potentially meaningful interpretations. Generally,
> >     for a bigger stimulus you would expect a bigger response, which is a
> >     good prediction if you don't know which subject a given observation
> >     came from. And thus the LM tells you just that, because it doesn't
> >     know which subject each observation came from. But if you want to
> >     know how a given subject will respond to a larger stimulus, then the
> >     effect is paradoxically reversed. And that's what the mixed model
> >     captures.
> >
> >     Or in yet other words, the LM assumes that there are no differences
> >     between subjects and thus any differences are due to stimulus alone.
> >     This isn't true, so it doesn't give a good estimate for different
> >     subjects. Your choice of random effects is a statement about where
> >     you assume differences to exist (and be measurable / distinguishable
> >     from observation-level variance).
> >
> >     Note that there is one confound in the simulated data there: each
> >     subject only saw stimuli within a relatively small range. If each
> >     subject had seen stimuli across a wider range, then I suspect that
> >     each subject's 2 very low response values would have had sufficient
> >     leverage to flatten out the LM's slope estimate. (Such confounds of
> >     course exist in reality in many practical contexts, but for a
> >     repeated-measures design in biology/psychology/neuroscience, it
> >     would be great to have a bit more control....)
> >
> >     Phillip
> >
> >     On 12/8/21 4:52 pm, Simon Harmel wrote:
> >     > Dear Colleagues,
> >     >
> >     > Can we say that in mixed-effects models a fixed-effect coefficient
> >     > is some kind of (weighted) average of its individual regression
> >     > counterparts fit to each level of a grouping variable, and that
> >     > this is why fixed-effect coefficients in mixed-effects models can
> >     > prevent things like a Simpson's paradox case
> >     > (https://stats.stackexchange.com/a/478580/140365) from happening?
> >     >
> >     > If yes, would this also mean that if we fit models with the exact
> >     > same fixed-effects specification but differing random-effects
> >     > specifications, the fixed coefficients can be expected to differ
> >     > not only in value but also in meaning (i.e., in what kind of
> >     > [weighted] average they represent)?
> >     >
> >     > For example, would the meaning of a fixed-effect coefficient for
> >     > variable X change if it has a corresponding random effect in the
> >     > model vs. when it doesn't, or if we allow X to vary across levels
> >     > of one grouping variable (X | ID1) vs. those of two nested
> >     > grouping variables (X | ID1/ID2)?
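> >     >
> >     > One concrete way to probe this (a sketch assuming a hypothetical
> >     > data frame dat with columns y, X, ID1) is to compare the per-level
> >     > slopes from entirely separate regressions with the LMM's fixed
> >     > effect and its conditional (shrunken) per-level slopes:
> >     >
> >     >     library(lme4)
> >     >     sep <- lmList(y ~ X | ID1, data = dat)  # one OLS fit per ID1 level, no pooling
> >     >     mm  <- lmer(y ~ X + (X | ID1), data = dat)
> >     >     coef(sep)[, "X"]     # unpooled per-level slopes
> >     >     fixef(mm)["X"]       # the single fixed-effect slope
> >     >     coef(mm)$ID1[, "X"]  # conditional slopes, shrunk toward the fixed effect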
> >     >
> >     > Many thanks for helping me understand this better,
> >     > Simon
> >     >
> >
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
