[R-sig-ME] Removing random intercepts before random slopes

Maarten Jung Maarten.Jung at mailbox.tu-dresden.de
Wed Aug 29 17:56:28 CEST 2018


Dear Jake,

thanks for your answer, that makes sense to me. I think removing the random
intercepts should mostly inflate the residual error and thus, if anything,
increase the SEs of the fixed effects. Is this correct?

Why exactly would it be conceptually strange to have random slopes but not
random intercepts?
Is it because intercepts often represent some kind of baseline, and units
such as subjects will probably have different baselines (and thus a
corresponding variance component estimated as > 0) whenever their slopes
(i.e., effects) vary? Or is there some other statistical reason why most
people remove the random slopes first?
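
A minimal sketch of the comparison I have in mind (y, c1, c2, group, and
dat are placeholders; assumes lme4):

library(lme4)

## fit with and without the random intercept, correlations suppressed
m_full <- lmer(y ~ 1 + c1 + c2 + (1 + c1 + c2 || group), data = dat)
m_nori <- lmer(y ~ 1 + c1 + c2 + (0 + c1 + c2 || group), data = dat)

## compare the standard errors of the fixed effects
cbind(full  = coef(summary(m_full))[, "Std. Error"],
      no_ri = coef(summary(m_nori))[, "Std. Error"])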

Best,
Maarten

On Wed, Aug 29, 2018 at 3:34 PM Jake Westfall <jake.a.westfall at gmail.com>
wrote:

> Maarten,
>
> Regarding whether it makes conceptual sense to have a model with random
> slopes but not random intercepts: I believe the context of this
> recommendation is an experiment where the goal is to do a confirmatory test
> of whether the associated fixed slope = 0. In that case, as long as the
> experiment is fairly balanced, the random slope variance appears in (and
> expands) the standard error for the fixed effect of interest, while the
> random intercept variance has little or no effect on that standard error
> (again, assuming the experiment is close to balanced). So we'd like to keep
> the random slopes in the model if possible so that the type 1 error rate
> won't exceed the nominal alpha level by too much. Keeping the random
> intercepts in the model is less important because it should have little or
> no impact on the type 1 error rate either way, although it would be
> conceptually strange to have random slopes but not random intercepts. So,
> anyway, that's the line of thinking as I understand it, and I don't think
> it's crazy.
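>
> A quick way to check this in a balanced case is to simulate (everything
> below is a hypothetical sketch; assumes lme4) and compare the fixed-slope
> SE with and without the random intercept:
>
> library(lme4)
> set.seed(1)
> ## balanced within-subject design, centered contrast for condition
> d <- expand.grid(subj = factor(1:40), cond = c(-0.5, 0.5), rep = 1:10)
> u0 <- rnorm(40, sd = 2)   # subject random intercepts
> u1 <- rnorm(40, sd = 1)   # subject random slopes
> d$y <- u0[d$subj] + (0.3 + u1[d$subj]) * d$cond + rnorm(nrow(d))
> m_both  <- lmer(y ~ cond + (1 + cond || subj), data = d)
> m_slope <- lmer(y ~ cond + (0 + cond | subj), data = d)
> ## compare the standard errors of the fixed slope
> c(both       = coef(summary(m_both))["cond", "Std. Error"],
>   slope_only = coef(summary(m_slope))["cond", "Std. Error"])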
>
> Jake
>
> On Wed, Aug 29, 2018 at 7:18 AM Maarten Jung <
> Maarten.Jung at mailbox.tu-dresden.de> wrote:
>
> > Sorry, hit the send button too fast:
> >
> > # here c1 and c2 represent the two contrasts/numeric covariates
> > # defined for the three levels of a categorical predictor
> > m1 <- y ~ 1 + c1 + c2 + (1 + c1 + c2 || group)
> >
> > On Wed, Aug 29, 2018 at 2:07 PM Maarten Jung <
> > Maarten.Jung at mailbox.tu-dresden.de> wrote:
> >
> > >
> > > On Wed, Aug 29, 2018 at 12:41 PM Phillip Alday <phillip.alday at mpi.nl>
> > > wrote:
> > > >
> > > > Focusing on just the last part of your question:
> > > >
> > > > > And, is there any difference between LMMs with categorical and LMMs
> > > > > with continuous predictors regarding this?
> > > >
> > > > Absolutely! Consider the trivial case of only one categorical
> > > > predictor with dummy coding and no continuous predictors in a
> > > > fixed-effects model.
> > > >
> > > > Then ~ 0 + cat.pred and ~ 1 + cat.pred produce identical models in
> > > > some sense, but in the former each level of the predictor is
> > > > estimated as an "absolute" value, while in the latter one level is
> > > > coded as the intercept and estimated as an "absolute" value, and the
> > > > other levels are coded as offsets from that value.
> > > >
> > > > For a really interesting example, try this:
> > > >
> > > > data(Oats, package = "nlme")
> > > > summary(lm(yield ~ 1 + Variety, Oats))
> > > > summary(lm(yield ~ 0 + Variety, Oats))
> > > >
> > > > Note that the residual error is identical, but all of the summary
> > > > statistics -- R2, F -- are different.
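> > > >
> > > > To see the two parameterizations side by side (a hypothetical
> > > > follow-up, using only base R):
> > > >
> > > > head(model.matrix(~ 1 + Variety, Oats))  # intercept + offset columns
> > > > head(model.matrix(~ 0 + Variety, Oats))  # one column per level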
> > >
> > > Sorry, I just realized that I didn't make clear what I was talking
> > > about.
> > > I know that ~ 0 + cat.pred and ~ 1 + cat.pred in the fixed-effects
> > > part are just reparameterizations of the same model.
> > > As I'm working with afex::lmer_alt(), which converts categorical
> > > predictors to numeric covariates (via model.matrix()) by default, I
> > > was talking about removing random intercepts before removing random
> > > slopes in such a model, especially one without correlation parameters
> > > [e.g. m1], and whether this is conceptually different from removing
> > > random intercepts before removing random slopes in an LMM with
> > > continuous predictors.
> > > I.e., I would like to know whether it makes sense in this case, vs.
> > > doesn't make sense in this case but does for continuous predictors,
> > > vs. never makes sense.
> > >
> > > # here c1 and c2 represent the two contrasts/numeric covariates
> > > # defined for the three levels of a categorical predictor
> > > m1 <- y ~ 1 + c1 + c2 + (1 + c1 + c2 || cat.pred)
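> > >
> > > A toy illustration of that conversion (hypothetical names; assumes
> > > afex is installed):
> > >
> > > library(afex)
> > > ## with a factor f in the random-effects term, lmer_alt() first
> > > ## expands f into numeric covariates via model.matrix(), so the
> > > ## double-bar syntax can actually suppress the correlations
> > > m <- lmer_alt(y ~ f + (f || subj), data = dat)  # f, subj, dat: placeholders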
> > >
> > > Best,
> > > Maarten
> > >