[R-meta] [EXT] Re: Interpreting meta-regression results for dummy-coded variables
Viechtbauer, Wolfgang (NP)
wolfgang.viechtbauer sending from maastrichtuniversity.nl
Mon Jun 20 15:20:58 CEST 2022
I just added a post to the metafor package website that discusses the difference (and possible discrepancy) between the omnibus test of a factor as a whole and the tests of the individual contrasts. See here:
https://www.metafor-project.org/doku.php/tips:diff_omnibus_vs_indiviual_tests
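In case a concrete example helps, here is a minimal sketch along those lines, using one of the example datasets shipped with metafor (a hypothetical choice of dataset and moderator, not the model from the paper under discussion):

library(metafor)

# example data shipped with metafor; 'grade' is a 4-level factor
dat <- dat.bangertdrowns2004
dat$grade <- factor(dat$grade)

# meta-regression with the factor dummy-coded against its first level
res <- rma(yi, vi, mods = ~ grade, data = dat)
summary(res)  # coefficient table: each level vs. the reference level

# omnibus test of the factor as a whole (all three dummy coefficients jointly);
# this is the QM-test that rma() reports by default when an intercept is included
anova(res, btt = 2:4)

# a specific linear hypothesis, e.g. level 2 vs. level 3 of the factor
anova(res, X = c(0, 1, -1, 0))

As the post above explains, the omnibus (QM) test and the individual contrasts can disagree, so a significant dummy coefficient does not by itself establish that the factor as a whole is a significant moderator.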
Maybe this helps to address the question.
Best,
Wolfgang
>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>Behalf Of Michael Dewey
>Sent: Monday, 20 June, 2022 14:44
>To: Acar, Selcuk; r-sig-meta-analysis using r-project.org
>Subject: Re: [R-meta] [EXT] Re: Interpreting meta-regression results for dummy-
>coded variables
>
>Please keep the list in all correspondence (I have added it back), as
>someone else on the list may understand your reply, which, sadly, I do
>not, and so be able to answer it.
>
>Michael
>
>On 20/06/2022 08:42, Acar, Selcuk wrote:
>> Michael,
>>
>> Thanks for your response--I am glad to have it corrected.
>>
>> I do not usually run univariate analyses alongside a meta-regression,
>> but from this perspective they could help me figure out whether a
>> moderator as a whole is significant, rather than whether a specific
>> pair of categories differs significantly. I do not run them because
>> meta-regression provides more stringent evidence than univariate
>> analyses, and when the two conflict I would still base the
>> interpretation on the meta-regression results. So I question the
>> usefulness of such univariate analyses beyond checking whether a
>> moderator whose sub-categories turned out significant in the
>> meta-regression is itself significant.
>>
>> Am I correct in my thinking/understanding of it?
>>
>> Selcuk Acar, Ph.D.
>> Associate Professor
>> Department of Educational Psychology
>> University of North Texas
>> ------------------------------------------------------------------------
>> *From:* Michael Dewey <lists using dewey.myzen.co.uk>
>> *Sent:* Sunday, June 19, 2022 7:30 AM
>> *To:* Acar, Selcuk <Selcuk.Acar using unt.edu>;
>> r-sig-meta-analysis using r-project.org <r-sig-meta-analysis using r-project.org>
>> *Subject:* [EXT] Re: [R-meta] Interpreting meta-regression results for
>> dummy-coded variables
>> Comments in-line
>>
>> On 18/06/2022 23:14, Acar, Selcuk wrote:
>>> Hi,
>>>
>>> I ran a meta-regression in the metafor package with both continuous and
>>> dummy-coded moderators. For some moderators, when only one dummy code was
>>> significant, we interpreted the moderator with several categories as
>>> significant. For example, we had a "participant group" moderator
>>> consisting of "elementary", "middle", "high", and "undergraduate"
>>> categories, with "undergraduate" as the reference group. We thought this
>>> moderator would be significant even when only one of the dummy codes,
>>> "undergraduate vs. elementary", was significant, without a separate test
>>> (linear hypothesis testing).
>>>
>>> One of the reviewers provided the following feedback:
>>>
>>> "Because of the dummy-coding these coefficients are differences in Fisher-z-
>transformed correlations between the coded category and the reference category.
>Hence, this reporting could be more accurately reflect this. In addition, these
>tests of coefficients are not a substitute for an overall test of the moderator.
>In other
>> words, the fact that one coefficient related to a moderator is
>> significant, does not imply that the moderator is significant. For
>> example, an overall test for Index of Creativity can be non-significant
>> even when a single coefficient such as the one for flexibility vs.
>> fluency is significant. Overall, moderator tests could be done by means
>> of linear hypothesis testing, for example."
>>
>> That is correct. An overall test of the moderator is needed.
>>
>>> In my opinion, running separate tests for each moderator defeats the
>>> point of a meta-regression, and the meta-regression should be the basis
>>> of the interpretation, including for these dummy-coded variables. I
>>> thought a categorical moderator would be significant whenever even one of
>>> its dummy codes turned out significant.
>>
>> I am afraid your thought is a misconception, albeit a common one.
>>
>> Michael
>>
>>> Who is correct here? Is there a good source that I could cite?
>>>
>>> I would appreciate input on this.
>>>
>>> Selcuk Acar, Ph.D.
>>> Associate Professor
>>> Department of Educational Psychology
>>> University of North Texas