[R-meta] Compiling different designs in the same meta-analysis
Philippe Tadger
philippetadger at gmail.com
Wed May 5 16:26:42 CEST 2021
Thank you Wolfgang for the clarification.
So if we test for the need for a 3-level vs. a 2-level MA, would we in that case
use a mixture of chi-square distributions?
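
For concreteness, a minimal sketch of the kind of comparison I have in mind
(using the dat.konstantopoulos2011 data that ships with metafor, purely as an
illustration):

library(metafor)
dat <- dat.konstantopoulos2011

# 3-level model: between-district and within-district (between-study) heterogeneity
res3 <- rma.mv(yi, vi, random = ~ 1 | district/school, data=dat)

# 2-level model: the district-level variance component fixed to 0
res2 <- rma.mv(yi, vi, random = ~ 1 | district/school, sigma2=c(0,NA), data=dat)

# likelihood ratio test; here H0 places the variance on the boundary of the
# parameter space, so the reference distribution is a 50:50 mixture of chi^2(0)
# and chi^2(1), i.e., one can simply halve the p-value reported by anova()
anova(res3, res2)
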
On 04/05/2021 12:59, Viechtbauer, Wolfgang (SP) wrote:
> You can do a likelihood ratio test. Using the same example:
>
> library(metafor)
> dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
> dat$alloc <- ifelse(dat$alloc == "random", "random", "other")
> # res1 allows a separate tau^2 per 'alloc' level; res0 assumes a single tau^2
> res1 <- rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="DIAG", data=dat, digits=3)
> res0 <- rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="ID", data=dat, digits=3)
> # likelihood ratio test of H0: tau^2_1 = tau^2_2
> anova(res1, res0)
>
> The issue you mention at the end is not relevant here, since we are testing H0: tau^2_1 = tau^2_2, not something like H0: tau^2 = 0. In the latter case, the parameter is at the boundary of the parameter space under the null hypothesis, which leads to the issue you mention.
>
> Best,
> Wolfgang
>
>> -----Original Message-----
>> From: Philippe Tadger [mailto:philippetadger at gmail.com]
>> Sent: Tuesday, 04 May, 2021 12:52
>> To: Viechtbauer, Wolfgang (SP); r-sig-meta-analysis at r-project.org
>> Subject: Re: [R-meta] Compiling different designs in the same meta-analysis
>>
>> Thanks Wolfgang! A simple and elegant explanation in your post.
>>
>> Is it possible to check which assumption fits the data better (the same variance
>> vs. a different variance in each subgroup)?
>>
>> The "different variances" option is more general than "same variance". So is it
>> possible to say that the models are nested and to compare them with an ANOVA?
>>
>> I wonder whether, for such a "variance test", the usual chi-square distribution
>> does not apply and a mixture of chi-squares is required instead, as has been
>> proposed previously for tests of random-effects variances.
>>
>> On 04/05/2021 12:14, Viechtbauer, Wolfgang (SP) wrote:
>> If one runs separate meta-analyses, one can also test for subgroup differences.
>> This is not a distinguishing characteristic. The main difference is that separate
>> meta-analyses automatically allow all parameters (including any variance
>> components) to differ across analyses, while a single meta-regression model (with
>> a categorical moderator) will by default assume that all parameters, except of
>> course the subgroup means, are the same across subgroups. But even this
>> assumption can be relaxed, and one can fit a meta-regression model that gives
>> results identical to fitting separate meta-analyses within subgroups.
>> See here:
>>
>> https://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
>>
>> The same idea generalizes to models such as those that can be fitted with
>> rma.mv().
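>>
>> For instance, a minimal sketch along those lines, using the dat.bcg example
>> (with the 'alloc' variable dichotomized as in the code further up in this thread):
>>
>> library(metafor)
>> dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
>> dat$alloc <- ifelse(dat$alloc == "random", "random", "other")
>>
>> # separate random-effects meta-analyses within the two subgroups
>> res.r <- rma(yi, vi, data=dat, subset=alloc=="random")
>> res.o <- rma(yi, vi, data=dat, subset=alloc=="other")
>>
>> # one meta-regression model with a separate tau^2 per subgroup; the coefficients
>> # and variance components should match the two separate fits above
>> res.all <- rma.mv(yi, vi, mods = ~ alloc - 1, random = ~ alloc | trial,
>>                   struct="DIAG", data=dat)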
>>
>> Best,
>> Wolfgang
>>
>> -----Original Message-----
>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>> Behalf Of Philippe Tadger
>> Sent: Tuesday, 04 May, 2021 12:04
>> To: r-sig-meta-analysis at r-project.org
>> Subject: Re: [R-meta] Compiling different designs in the same meta-analysis
>>
>> Thanks Gerta for such a simple and important reminder.
>>
>> Apart from providing a test for subgroup differences, what other advantages does
>> a subgroup analysis (with the moderator in a meta-regression) have compared to
>> separate meta-analyses? I am assuming the moderator is categorical.
--
Kind regards
*Philippe Tadger*
Statistician (Msc.). Orthopedic Manual Therapist (Mag.).
Physical Therapist (Bsc.)
+32498774742.