[R-meta] [EXTERNAL] Re: question about effect size estimates using Berkey
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Sun Dec 30 16:37:56 CET 2018
See responses below.
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Van Meter, Anna
>Sent: Monday, 24 December, 2018 16:25
>To: Michael Dewey; r-sig-meta-analysis at r-project.org
>Subject: Re: [R-meta] [EXTERNAL] Re: question about effect size estimates
>Thank you - this is very helpful!
>A few follow-up questions for the group:
>*I was using the Berkey method to account for the fact that individuals
>might be represented in multiple prevalence groups. Is that still
>necessary with this approach that allows the heterogeneity to differ
>across subgroups, or is it better to estimate with the
>Konstantopoulos (yi, vi) approach?
Whether one should allow for different amounts of heterogeneity across subgroups or not is an empirical question (i.e., it depends on whether the amount of heterogeneity differs across subgroups or not), so without seeing the data, I cannot answer that. However, you could do LRTs (using the anova() function) to compare models where the amount differs versus not and see whether the model with different amounts of heterogeneity gives a significantly better fit.
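A sketch of the LRT comparison described above, using the dat.berkey1998 example data that ships with metafor as a stand-in for the actual data (kidtall1 and the berkeyV matrix are not available here); struct="ID" (one tau^2 shared across subgroups) is nested in struct="DIAG" (a separate tau^2 per subgroup), so anova() gives the relevant test:

```r
# Sketch: LRT for equal vs. different amounts of heterogeneity across
# subgroups, using dat.berkey1998 (shipped with metafor) as a stand-in.
library(metafor)

dat <- dat.berkey1998
# block-diagonal V built from the within-study (co)variances
V <- bldiag(lapply(split(dat[, c("v1i", "v2i")], dat$trial), as.matrix))

# one tau^2 shared by both outcomes (struct = "ID") ...
res.id <- rma.mv(yi, V, mods = ~ outcome - 1,
                 random = ~ outcome | trial, struct = "ID", data = dat)
# ... versus a separate tau^2 per outcome (struct = "DIAG")
res.diag <- rma.mv(yi, V, mods = ~ outcome - 1,
                   random = ~ outcome | trial, struct = "DIAG", data = dat)

# LRT: does allowing different amounts of heterogeneity fit better?
anova(res.id, res.diag)
```

Since both models are fit with REML and have identical fixed effects, the REML-based LRT is appropriate here.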
>*If it is still better to use Berkey (which I suspect is the case), does
>the struct="DIAG" command account for the block diagonal or is
DIAG would assume that the true effects within studies are uncorrelated, which seems not very plausible (but again, this is an empirical question).
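Whether the zero-correlation assumption of DIAG holds can itself be checked empirically: struct="DIAG" is nested in struct="UN", which additionally estimates the correlation between the true effects. A sketch, again using dat.berkey1998 in place of the actual data:

```r
# Sketch: testing the DIAG assumption (rho = 0) against struct = "UN",
# which also estimates the correlation between true effects.
library(metafor)

dat <- dat.berkey1998
V <- bldiag(lapply(split(dat[, c("v1i", "v2i")], dat$trial), as.matrix))

res.diag <- rma.mv(yi, V, mods = ~ outcome - 1,
                   random = ~ outcome | trial, struct = "DIAG", data = dat)
res.un <- rma.mv(yi, V, mods = ~ outcome - 1,
                 random = ~ outcome | trial, struct = "UN", data = dat)

anova(res.diag, res.un)  # LRT for rho = 0
```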
>*I had been using ML as the estimator, but this example is using REML.
>I'm sure this topic has been addressed before, so if someone could point
>me to information about which is best in this scenario, I would
In general, I would use REML. ML estimates of variance components are known to be negatively biased, so they will tend to be too small on average. REML counteracts this and usually gives (approximately) unbiased estimates.
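A quick illustration of the difference, using the classic BCG vaccine data included with metafor (a univariate rma() model, just to show the direction of the effect):

```r
# Illustration: ML vs. REML estimates of tau^2 with the BCG data from
# metafor; the ML estimate is typically the smaller of the two.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res.reml <- rma(yi, vi, data = dat, method = "REML")
res.ml   <- rma(yi, vi, data = dat, method = "ML")

c(REML = res.reml$tau2, ML = res.ml$tau2)  # ML estimate is smaller here
```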
>*Finally, when I run a simple model with only the prevalence type
>(threegroup) as a moderator:
>resmvberkeyhybrid<-rma.mv(yi, berkeyV, mods = ~ newgroup, random = ~
>newgroup | articleno, struct="DIAG", data=kidtall1, digits=3)
>I get separate coefficients for two of the three groups, but when I
>include other moderators in the model:
>resmvberkeyhybridmods<-rma.mv(yi, berkeyV, mods=cbind(newgroup,
>yearpub_center, USA, multiinformantyn, broad, age_center, lifetime),
>random = ~ newgroup | articleno, struct="DIAG", data=kidtall1, digits=3)
>I get only one coefficient for threegroup. I'm not sure why this would
Using '~ newgroup' makes use of formula syntax, so the entire machinery in R that deals with formulas is used. For example, factors get coded appropriately. If you just pass a matrix to mods (as you do with cbind()), then you have to do the coding yourself. In most cases, you want to use formulas. So, use:
mods = ~ newgroup + yearpub_center + USA + multiinformantyn + broad + age_center + lifetime
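The difference can be seen with base R alone (the three-level factor below is made up for illustration): a factor in a formula is expanded into dummy variables, while passing it through cbind() collapses it to a single numeric column, which is why only one coefficient appears.

```r
# Base-R illustration of formula coding vs. a raw cbind() matrix.
newgroup <- factor(c("low", "mid", "high"))

model.matrix(~ newgroup)                # intercept + 2 dummy columns
cbind(newgroup = as.numeric(newgroup))  # just 1 numeric column
```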