[R-sig-ME] Help with determining effect sizes
Maarten Jung
maarten.jung at mailbox.tu-dresden.de
Sat Oct 5 14:10:26 CEST 2019
As far as I remember, the formulas in Westfall, Kenny, and Judd (2014) (and
thus probably the calculations in the web app) are based on models with
contrast-coded predictors only.
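In case it helps anyone reading along: R's default is dummy (treatment) coding, so the contrasts would need to be set explicitly. A minimal sketch (the data frame and values here are made up for illustration):

```r
# Sum-to-zero ("deviation") contrasts are what the Westfall, Kenny, and
# Judd (2014) formulas assume; contr.treatment (the R default) is dummy coding.
d <- data.frame(Group          = factor(c("NS", "L2", "HL")),
                Grammaticality = factor(c("gr", "ungr", "gr")))
contrasts(d$Group)          <- contr.sum(3)   # 3 levels
contrasts(d$Grammaticality) <- contr.sum(2)   # 2 levels
contrasts(d$Grammaticality)                   # gr coded 1, ungr coded -1
```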
Best, Maarten
On Sat, 5 Oct 2019, 13:10 João Veríssimo <jl.verissimo at gmail.com> wrote:
> See the web app by Jake Westfall:
> https://jakewestfall.shinyapps.io/crossedpower/
>
> And their JEP:General paper:
> http://doi.org/10.1037/xge0000014
>
> If I'm not mistaken, you would standardise the estimates of differences
> by the sum of all variances (random intercepts and slopes + residual),
> but you'll need to make sure that's the right formula (given your
> design).
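>
> As a sketch of that standardisation (hedged: which variance components
> belong in the denominator depends on the design, as noted above; `m` is
> a hypothetical name for the fitted lme4 model):
>
> ```r
> # Westfall-style d: a fixed-effect estimate divided by the square root
> # of the summed random-effect variances plus the residual variance.
> library(lme4)
> vc <- as.data.frame(VarCorr(m))         # one row per (co)variance component
> totvar <- sum(vc$vcov[is.na(vc$var2)])  # keep variances, drop covariances
> d_ungr <- fixef(m)["Grammaticalityungr"] / sqrt(totvar)
> ```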
>
> João
>
> On Sat, 2019-10-05 at 12:13 +0200, Maarten Jung wrote:
> > Dear Francesco,
> >
> > I don't think there is a "standard" way to calculate effect sizes for
> > linear mixed models due to the way the variance is partitioned (see
> > e.g. [1]).
> > One way to compute something similar to Cohen's d would be to divide
> > the difference between the estimated means of two conditions by a
> > rough estimate of the standard deviation of the response variable
> > which you can get by
> > sd(predict(your_model_name))
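> >
> > Spelled out for a fitted model (a sketch; `m` is a hypothetical model
> > name, and -213.87 is the HL grammaticality estimate from the output
> > below):
> >
> > ```r
> > # Rough d-like standardisation: estimated difference divided by the
> > # standard deviation of the model's fitted values.
> > d_rough <- -213.87 / sd(predict(m))
> > ```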
> >
> > Best,
> > Maarten
> >
> > [1] https://afex.singmann.science/forums/topic/compute-effect-sizes-for-mixed-objects#post-295
> >
> >
> > On Sat, Oct 5, 2019 at 10:01 AM Francesco Romano <fbromano77 at gmail.com> wrote:
> > >
> > > Dear all,
> > >
> > > A journal has asked that I determine the effect sizes for a series
> > > of dummy-coded contrasts from the following mixed-effects model:
> > >
> > > RT ~ Group * Grammaticality + (1 + Grammaticality | Participant) +
> > > (1 + Group | item)
> > >
> > > Here RT is my continuous outcome variable measured in milliseconds,
> > > Group
> > > is a factor with 3 levels (NS, L2, and HL), and Grammaticality a
> > > factor
> > > with 2 levels (gr and ungr). After relevelling (note: I am
> > > deliberately omitting the call for each new relevelled model here),
> > > I obtained a series of contrasts, tabulated below (I am not sure the
> > > whole table will display correctly):
> > >
> > >
> > > Reference level | Contrast   | Estimate (ms) | Effect size (Cohen's d) |  SE |    df |      t | p
> > > HL              | GR vs UNGR |          -213 |                         |  89 | 72.13 | -2.399 | < .05*
> > > L2              | GR vs UNGR |          -408 |                         |  90 | 74.18 | -4.513 | < .001***
> > > L1              | GR vs UNGR |          -111 |                         |  73 | 70.02 | -1.520 | > .05
> > > GR              | HL > L2    |           -25 |                         | 191 | 43.48 | -0.135 | > .05
> > > GR              | L1 > HL    |           400 |                         | 175 | 43.81 |  2.286 | < .05*
> > > GR              | L1 > L2    |           374 |                         | 179 | 43.59 |  2.092 | < .05*
> > > UNGR            | HL > L2    |          -219 |                         | 179 | 42.70 | -1.226 | > .05
> > > UNGR            | L1 > HL    |           298 |                         | 164 | 43    |  1.817 | > .05
> > > UNGR            | L1 > L2    |            77 |                         | 166 | 42.03 |  0.469 | > .05
> > >
> > > How would I go about determining the Cohen's *d* for each of the
> > > contrasts?
> > >
> > > The full model output is:
> > >
> > > Linear mixed model fit by REML. t-tests use Satterthwaite's method
> > > ['lmerModLmerTest']
> > > Formula: RT ~ Group * Grammaticality + (1 + Grammaticality | Participant) +
> > >     (1 + Group | item)
> > > Data: RTanalysis
> > >
> > > REML criterion at convergence: 52800
> > >
> > > Scaled residuals:
> > > Min 1Q Median 3Q Max
> > > -2.1696 -0.6536 -0.1654 0.5060 5.0134
> > >
> > > Random effects:
> > >  Groups      Name               Variance Std.Dev. Corr
> > >  item        (Intercept)           71442  267.29
> > >              GroupL2                1144   33.82   0.80
> > >              GroupNS                9951   99.76  -0.43 -0.88
> > >  Participant (Intercept)          235216  484.99
> > >              Grammaticalityungr    50740  225.25  -0.39
> > >  Residual                         378074  614.88
> > > Number of obs: 3342, groups: item, 144; Participant, 46
> > >
> > > Fixed effects:
> > >                             Estimate Std. Error      df t value Pr(>|t|)
> > > (Intercept)                  2801.98     136.70   48.85  20.498   <2e-16 ***
> > > GroupL2                       -25.86     191.20   43.48  -0.135   0.8931
> > > GroupNS                      -400.63     175.22   43.81  -2.286   0.0271 *
> > > Grammaticalityungr           -213.87      89.17   72.13  -2.399   0.0190 *
> > > GroupL2:Grammaticalityungr   -194.57     107.25   42.55  -1.814   0.0767 .
> > > GroupNS:Grammaticalityungr    102.31      99.39   43.45   1.029   0.3090
> > > ---
> > > Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> > >
> > > Correlation of Fixed Effects:
> > > (Intr) GropL2 GropNS Grmmtc GrL2:G
> > > GroupL2 -0.672
> > > GroupNS -0.744 0.526
> > > Grmmtcltyng -0.404 0.222 0.260
> > > GrpL2:Grmmt 0.259 -0.391 -0.205 -0.589
> > > GrpNS:Grmmt 0.299 -0.202 -0.392 -0.702 0.540
> > > convergence code: 0
> > > Model failed to converge with max|grad| = 0.0477764 (tol = 0.002, component 1)
> > >
> > > The distribution of the outcome is fairly normal. The overall mean
> > > (ignoring the two fixed effects) is very close to the mean of each
> > > of the three groups (ignoring Grammaticality), as well as to the
> > > means of the two Grammaticality levels (ignoring Group).
> > >
> > > The simr package can simulate data to determine power, amongst
> > > other things, but I am not sure how to do this for models with
> > > interactions such as mine.
> > >
> > > Use of simr is recommended by Brysbaert and Stevens (2018):
> > > https://www.journalofcognition.org/articles/10.5334/joc.10/.
> > > Perhaps there is a simpler way of extracting *d* from the stats I
> > > already know?
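> > >
> > > For the simr part, power for a single fixed effect from a model with
> > > interactions can be simulated along these lines (a sketch, untested
> > > on this data; `m` is the fitted model above):
> > >
> > > ```r
> > > # Simulate power for the Grammaticality effect by refitting the model
> > > # to responses simulated from the fitted parameters.
> > > library(simr)
> > > powerSim(m, test = fixed("Grammaticalityungr", "t"), nsim = 100)
> > > ```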
> > >
> > > Any help would be greatly appreciated,
> > >
> > > Francesco
> > >
> > >
> > > _______________________________________________
> > > R-sig-mixed-models at r-project.org mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
> >