[R-sig-ME] Models and Power Analysis for Within-subject Design in LMER

Phillip Alday ph||||p@@|d@y @end|ng |rom mp|@n|
Fri Apr 3 23:47:28 CEST 2020


Hi Lei,

It's been a while, but I haven't seen a response to your email go by,
and I'm trying small productivity tasks to get the stats juices
flowing, so here goes ....

On 14/10/19 5:08 pm, Fan, L. via R-sig-mixed-models wrote:
> Hi,
> This is Lei from VU Amsterdam.
> Recently I am proposing a new within-subject study based on my former results of a between-subject one. While planning the proposal, I found it a little bit confusing to create the model.
> In the first between-subject study, I used a basic model like this with some other random slopes:
> Emotion Value ~ Emotion Type * WTR + (1|Subject) + (1|Scenario)
> Each participant would have one WTR index and finish 2 emotion assessments based on a single scenario from the pool. The EV and ET were originally created in repeated-measures format from the emotion assessments. The effect we focused on is the interaction.
> In the new study, we would like to make the study within-subject by asking the participants to repeat the procedure 3 times with different scenarios and WTR conditions (here the conditions are a manipulation for maximizing the range of the WTRs, not an IV). Then, here come the questions:
> 1. I tried to write down the model for the new study, but it seemed to be the same as the between-subject one. Does this mean I made a mistake in creating the model?

In both cases, each subject and each scenario is measured multiple
times, so it makes sense to have each as a blocking variable. The
nesting/crossing structure of subjects and scenarios doesn't have to be
specified explicitly, so this model won't change. Note that if a
manipulation had changed from between- to within-subject (i.e., it
could now have a random slope), then the model would usually change.

In other words, your model didn't change because you didn't have the
manipulation encoded in the random effects, and lme4 infers the
nesting/crossing of random effects from the data rather than from the
model specification.
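A minimal sketch of the point above (the column names EmotionValue, EmotionType, WTR, Subject, and Scenario are assumed from your description, and the data frame `d` is hypothetical):

```r
library(lme4)

## The same formula serves both designs: lme4 works out the
## crossing/nesting of Subject and Scenario from the data itself.
m <- lmer(EmotionValue ~ EmotionType * WTR +
            (1 | Subject) + (1 | Scenario),
          data = d)

## What *would* change the model is encoding a within-subject
## manipulation as a by-subject random slope, e.g. for WTR:
m_within <- lmer(EmotionValue ~ EmotionType * WTR +
                   (1 + WTR | Subject) + (1 | Scenario),
                 data = d)
```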

> 2. Using the current model for calculating the sample size, the result should be the same number of required observations as for the between-subject design (the model never changed). Shouldn't it be smaller, as the design is within-subject? The only possibility is that the model did not count the within part. How can I make it work?

The between-subject and between-scenario variability didn't change, so
why would the power change? :) With real data, I suspect you will see a
change, because the repeated measures will give you a better estimate of
the variability introduced by subjects vs. the residual variability.

> 3. As for the between-subject study, we used a data-simulation approach to calculate the sample size. Here, since it is a follow-up study, is there some other, more convincing way to conduct the a priori power analysis?
>
Nope, simulation is considered the standard for power analysis in
mixed-effects models.
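For reference, a bare-bones version of that simulation workflow, using `simulate()` and `refit()` from lme4 on a model `m` fitted to pilot (or fully simulated) data; the variable names follow the formula above, and the 0.05 threshold is just convention:

```r
library(lme4)

## Estimate power for the interaction by parametric simulation:
## simulate responses from the fitted model, refit, and test.
n_sim <- 500
pvals <- numeric(n_sim)
for (i in seq_len(n_sim)) {
  y_star <- simulate(m)[[1]]                        # new responses from the fitted model
  fit    <- refit(m, y_star)                        # refit full model to simulated data
  fit0   <- update(fit, . ~ . - EmotionType:WTR)    # drop the effect of interest
  pvals[i] <- anova(fit0, fit)$`Pr(>Chisq)`[2]      # likelihood-ratio test p-value
}
mean(pvals < 0.05)  # proportion of significant simulations = estimated power
```

The simr package wraps essentially this loop (e.g. `powerSim()`, and `extend()` for varying sample size), if you'd rather not roll it by hand.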

Best,

Phillip


> Thanks a lot in advance!
>
> Best,
> Lei Fan
>
>
> _______________________________________________
> R-sig-mixed-models using r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models


