[R-sig-ME] [External] Emmeans Effectsizes and Equivalence Tests

Jakob Aschauer jakob.aschauer at uni-konstanz.de
Thu Feb 17 09:39:20 CET 2022


Hello Russ,

It’s been a while now, but I finally got back to those tests and noticed some remaining issues.

When specifying delta = 0.2 * pi / sqrt(3), as suggested (which equals 0.36 on the logit scale, i.e. an odds ratio of 1.44), I obtain the following results:
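For reference, the quoted numbers are just this arithmetic (base R only; the rescaling uses the standard-logistic SD pi/sqrt(3)):

```r
# Equivalence threshold: Cohen's d of 0.2 rescaled to the logit scale
# by the standard-logistic residual SD, pi/sqrt(3).
delta <- 0.2 * pi / sqrt(3)
delta        # ~0.36 on the logit scale
exp(delta)   # ~1.44 as an odds ratio
```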



As you can see, the estimate is lower than the threshold, but the p values are still not significant. Is that possible?
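It can be: with the left-tailed statistic t = (|estimate - null| - delta) / SE that Russ describes below, an estimate inside the threshold still gives a nonsignificant p value whenever the SE is large relative to the gap between |estimate| and delta. A minimal base-R illustration (equiv_p and all numbers are made up for demonstration, not taken from the model above):

```r
# Equivalence-test p value following the formula quoted below:
# t = (|estimate - null| - delta) / SE, p = left-tailed probability.
equiv_p <- function(estimate, delta, se, df, null = 0) {
  t <- (abs(estimate - null) - delta) / se
  pt(t, df)  # left tail: t far below 0 (estimate well inside delta) -> small p
}

delta <- 0.2 * pi / sqrt(3)                                    # ~0.36
equiv_p(estimate = 0.20, delta = delta, se = 0.05, df = 100)   # small p: equivalence supported
equiv_p(estimate = 0.20, delta = delta, se = 0.50, df = 100)   # same estimate, large SE: not significant
```

So an estimate below the threshold is necessary but not sufficient; the whole (|estimate| - delta)/SE margin has to be convincingly negative.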

I’m using a beta-binomial model:



Since I have a long list of comparisons, I calculate the contrasts via the following function:
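(The function itself did not survive in the archive. As an illustration only — make_contrasts and its arguments are hypothetical, not the poster's actual code — a helper that builds the kind of named coefficient list that contrast(method = ...) accepts might look like this:)

```r
# Hypothetical sketch: build a named list of contrast coefficient
# vectors, one per comparison, usable as contrast(emm, method = cons).
make_contrasts <- function(levels, comparisons) {
  lapply(comparisons, function(pair) {
    v <- setNames(numeric(length(levels)), levels)
    v[pair[1]] <-  1   # condition entering with +1
    v[pair[2]] <- -1   # condition entering with -1
    v
  })
}

cons <- make_contrasts(
  levels      = c("A", "B", "C"),
  comparisons = list("A - B" = c("A", "B"), "A - C" = c("A", "C"))
)
# then e.g.: contrast(emm, method = cons, delta = 0.2 * pi / sqrt(3))
# (per Russ's reply below, omitting `side` yields the two-sided equivalence test)
```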




Many thanks for having a look!

Cheers
Jakob Aschauer



> Am 05.02.2022 um 15:51 schrieb Lenth, Russell V <russell-lenth using uiowa.edu>:
> 
> I guess the people who like standardized effect sizes might also go for standardized equivalence thresholds. I'm not one of those people, though I can understand that there are narrow contexts that dictate working in terms of norms.
> 
> Russ Lenth
> 
> Sent from my iPad
> 
>> On Feb 5, 2022, at 5:01 AM, Jakob Aschauer <jakob.aschauer using uni-konstanz.de> wrote:
>> 
>> Hello Russ,
>> 
>> Thanks a lot for the response; that clarifies my uncertainties about the equivalence tests. I was already hoping for something like what you said, but wasn’t quite sure.
>> 
>> So in your eyes, does it also make sense to define delta as depending on sigma, following the Cohen’s d calculation used by eff_size()? I wonder whether it applies to GLMs as well. Maybe I’ll just play around with the other formula I mentioned and see what that changes (d = b × √3/π). The only problem is that I think it assumes equal group sizes.
>> 
>> Cheers
>> Jakob 
>> 
>> 
>> 
>>> Am 04.02.2022 um 21:51 schrieb Lenth, Russell V <russell-lenth using uiowa.edu>:
>>> 
>>> 
>>> I will answer the part about equivalence tests.
>>> 
>>> If you do not specify a side, you *do* obtain a two-sided test of equivalence. As documented in the help page for 'summary.emmGrid' (in the section on noninferiority, nonsuperiority, and equivalence), the test statistic is
>>> 
>>>  t = (|estimate - null| - delta) / SE
>>> 
>>> and the P value is the left-tailed probability. This is equivalent to the TOST method because the absolute value makes it test the less significant of the two one-sided tests. You can confirm this by looking at the separate cases where estimate < null and estimate > null.
>>> 
>>> Russ Lenth
>>> 
>>> -----Original Message-----
>>> 
>>> Hello everyone,
>>> 
>>> Here’s another set of emmeans-related problems and uncertainties. I’m using a beta-binomial model to analyze a set of correctly vs. incorrectly given answers in a psychological experiment. Although the first is more of a statistical question, I’ll give it a try here. I guess there should be some experts for all of this among you.
>>> 
>>> To compare experimental conditions, I defined some custom emmeans contrasts (using contrast(method = list(c(…)))). For beta-binomial GLMs, emmeans calculates comparisons on the logit scale and thus provides logs of odds ratios. Additionally, I’m using the eff_size() function to determine standardized mean differences (with sigma = stats::sigma(model), edf = df.residual(model)). This should give me Cohen’s d as calculated via d = b/σ. However, I’m a little confused, since on the internet I’ve also seen the formula d = b × √3/π for converting logits to Cohen’s d. Where does this contradiction come from? And in general, I wonder which makes more sense in this case: reporting OR, or d, or both?
>>> For the analysis I also need equivalence tests, and I’m using two one-sided tests for that via emmeans::contrast(delta = 0.3 * stats::sigma(model)), in order to classify all differences smaller than a small effect size (d = 0.3) as equal (using the Cohen’s d to logits conversion used by emmeans itself). I found that suggestion somewhere, but in case you have different suggestions, please let me know.
>>> However, I can’t implement the "equivalence" side argument, since the output always states that p values are left-tailed, not two-sided. I’ve been playing around with placing the side argument within contrast() or summary(), also with method "pairwise" instead of my custom contrasts, but it won’t work. The emmeans vignette says on p. 70: "The misc slot in object may contain default values for by, calc, infer, level, adjust, type, null, side, and delta", which might be the reason. But I can’t manage to change anything about that using the update method as suggested. Does anyone know a solution?
>>> Thanks a lot for helping out!
>>> 
>>> Cheers
>>> Jakob
>>> 
>> 



More information about the R-sig-mixed-models mailing list