[R-meta] rule of thumb minimum number of studies per factor level meta-regression
Lukasz Stasielowicz
lukasz.stasielowicz at uni-osnabrueck.de
Mon Mar 28 19:26:28 CEST 2022
Dear Lena,
As Wolfgang has pointed out, it's a rather complex topic.
In general, power is rather low in moderator analyses and depends on
several factors.
You'll find a short introduction in the following book (e.g. Chapter 6):
Pigott, T. (2012). Advances in meta-analysis. Springer Science &
Business Media. https://doi.org/10.1007/978-1-4614-2278-5
Since you're working in Germany, you can probably access Springer ebooks
for free.
Best,
Lukasz
--
Lukasz Stasielowicz
Osnabrück University
Institute for Psychology
Research methods, psychological assessment, and evaluation
Seminarstraße 20
49074 Osnabrück (Germany)
On 28.03.2022 at 12:58, r-sig-meta-analysis-request using r-project.org wrote:
>
> Today's Topics:
>
>    1. Re: rule of thumb minimum number of studies per factor level
>       meta-regression (Viechtbauer, Wolfgang (SP))
> 2. Re: Question on three-level meta-analysis
> (Viechtbauer, Wolfgang (SP))
> 3. Re: Dealing with missing data in bivariate analysis
> (Viechtbauer, Wolfgang (SP))
>    4. Notable difference between traditional and bootstrap 95% CI
>       for sigma2: which one is preferred? (towhidi)
> 5. Re: Dealing with missing data in bivariate analysis
> (Olina Ngwenya)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 28 Mar 2022 10:04:05 +0000
> From: "Viechtbauer, Wolfgang (SP)"
> <wolfgang.viechtbauer using maastrichtuniversity.nl>
> To: Lena Pollerhoff <lena using pollerhoff.de>,
> "r-sig-meta-analysis using r-project.org"
> <r-sig-meta-analysis using r-project.org>
> Subject: Re: [R-meta] rule of thumb miminum number of studies per
> factor level meta-regression
> Message-ID: <3a7e1084ce6d4a5db1b4bc0284322f1b using UM-MAIL3214.unimaas.nl>
> Content-Type: text/plain; charset="us-ascii"
>
> Dear Lena,
>
> Just for the record, the '10 studies per covariate' rule comes from here:
>
> https://training.cochrane.org/handbook/current/chapter-10#section-10-11-5-1
>
> where it says:
>
> "It is very unlikely that an investigation of heterogeneity will produce useful findings unless there is a substantial number of studies. Typical advice for undertaking simple regression analyses: that at least ten observations (i.e. ten studies in a meta-analysis) should be available for each characteristic modelled. However, even this will be too few when the covariates are unevenly distributed across studies."
>
> I have no idea where the 10 per covariate rule comes from (there is also no reference in the Cochrane Handbook) and I am not aware of any empirical support for it. I suspect it was just taken over from similar rules that have been formulated in other contexts (e.g., regression models with primary data, prediction models, factor analysis) where these rules have often been formulated without much, if any, empirical support.
>
> Given what it says in the Cochrane Handbook, one could read this to imply that at least 10 studies per covariate are needed to 'produce useful findings'. Without a definition of 'useful findings', I don't even know how to evaluate whether such a rule is sensible or not.
>
> I am not trying to rag on the Cochrane Handbook. The question about 'k per moderator' (or k in general for a meta-analysis) is one of the questions that *always* comes up in any course on meta-analysis I teach. It is a good question and I have no good answer for it, except to mention that such rules exist (e.g., '10 per covariate'), but that they lack empirical support.
>
> Analogously, I am not aware of any evidence-based guidelines with respect to your 'k per level' question.
>
> So, in the end, I am doing the same thing I always do when I get this question: providing no good answer.
>
> Best,
> Wolfgang
>
>> -----Original Message-----
>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>> Behalf Of Lena Pollerhoff
>> Sent: Monday, 28 March, 2022 10:40
>> To: r-sig-meta-analysis using r-project.org
>> Subject: [R-meta] rule of thumb minimum number of studies per factor level
>> meta-regression
>>
>> Dear list member,
>>
>> I am conducting meta-regressions in metafor at the moment and have a short
>> question about rules of thumb for categorical predictors in meta-regression.
>> While we are aware of the rule of thumb that meta-regression should not be
>> considered with fewer than ten studies per covariate (e.g., Cochrane
>> Handbook), we were wondering whether such a rule of thumb also exists for
>> the minimum number of studies per factor level of a categorical variable?
>>
>> In my case, I am conducting meta-regressions where the number of studies per
>> factor level is sometimes unevenly distributed: For example, k = 22, and I have
>> one categorical predictor with three factor levels, with the first one
>> represented by only one study, the second one by three studies, and the third one
>> including 18 studies.
>>
>> Thanks in advance and have a nice day!
>> Lena Pollerhoff
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 28 Mar 2022 10:08:16 +0000
> From: "Viechtbauer, Wolfgang (SP)"
> <wolfgang.viechtbauer using maastrichtuniversity.nl>
> To: David Pedrosa <pedrosac using staff.uni-marburg.de>,
> "r-sig-meta-analysis using r-project.org"
> <r-sig-meta-analysis using r-project.org>
> Subject: Re: [R-meta] Question on three-level meta-analysis
> Message-ID: <7c36dc18db5d4397b760368d494fb655 using UM-MAIL3214.unimaas.nl>
> Content-Type: text/plain; charset="utf-8"
>
> Dear David,
>
> I don't quite understand your question. What variance-covariance-matrices are you referring to and how would you stick them into the model?
>
> Best,
> Wolfgang
>
>> -----Original Message-----
>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>> Behalf Of David Pedrosa
>> Sent: Monday, 28 March, 2022 10:02
>> To: r-sig-meta-analysis using r-project.org
>> Subject: [R-meta] Question on three-level meta-analysis
>>
>> Dear list,
>>
>> there is one question I have not been able to get my head around: whether
>> estimating variance-covariance matrices in a nested/multilevel hierarchical
>> model makes sense. To put things in a
>> context, we have ~60 studies for which we could estimate a standardised
>> mean difference and these studies are with minor exceptions all
>> independent. Yet, there are 6 categories of interventions with something
>> between 2 and 30 studies nested within, so that we have individuals,
>> studies and category_of_intervention. We also added two moderators to the
>> model: quality of studies and whether it's an RCT or an NRCT, which
>> resulted in the following:
>>
>> res <- rma.mv(yi, vi,
>>               random = ~ 1 | category/study_id,
>>               mods = ~ qualsyst * factor(study_type),
>>               data = dat)
>>
>> If there were studies in which some participants received different
>> treatments (i.e. many of them were not independent), I guess the
>> estimation of a different vcov should make sense. But I think it's
>> possibly only 3-5 studies within all 60 of them. So is it conceptually
>> correct to estimate the vcov for the level 'category' and stick it into
>> the model or is that already included as I use category as random
>> effect? I don't think it makes a huge difference, but I'm not sure about it.
>>
>> Thanks for your help,
>>
>> David
>>
>> --
>>
>>
>> PD Dr. David Pedrosa
>> Leitender Oberarzt der Klinik für Neurologie,
>> Leiter der Sektion Bewegungsstörungen, Universitätsklinikum Gießen und
>> Marburg
>>
>> Tel.: (+49) 6421-58 65299 Fax: (+49) 6421-58 67055
>>
>> Adresse: Baldingerstr., 35043 Marburg
>>
>> Web: https://www.ukgm.de/ugm_2/deu/umr_neu/index.html
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 28 Mar 2022 10:06:09 +0000
> From: "Viechtbauer, Wolfgang (SP)"
> <wolfgang.viechtbauer using maastrichtuniversity.nl>
> To: Olina Ngwenya <olina.ngwenya using manchester.ac.uk>,
> "r-sig-meta-analysis using r-project.org"
> <r-sig-meta-analysis using r-project.org>
> Subject: Re: [R-meta] Dealing with missing data in bivariate analysis
> Message-ID: <50d80e6cdac2422b9b05dd52a5cb3fae using UM-MAIL3214.unimaas.nl>
> Content-Type: text/plain; charset="us-ascii"
>
> Dear Olina,
>
> Depends a bit on what is missing. If values of predictor/moderator variables are missing, then one possibility is to use some kind of imputation technique. See, for example:
>
> https://www.metafor-project.org/doku.php/tips:multiple_imputation_with_mice_and_metafor
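A rough sketch of the workflow described on that page: impute the missing moderator with 'mice', fit the meta-regression in each completed dataset, and pool the results with Rubin's rules. The data and the moderator name 'mod1' below are made-up placeholders of mine, not Olina's data.

```r
# Multiple imputation of a missing moderator, then pooling across imputations.
library(metafor)
library(mice)
set.seed(123)

# made-up data: 30 effect sizes with ~20% of the moderator values missing
dat <- data.frame(yi   = rnorm(30, 0.3, 0.2),
                  vi   = runif(30, 0.01, 0.05),
                  mod1 = rnorm(30))
dat$mod1[sample(30, 6)] <- NA

imp  <- mice(dat, m = 20, print = FALSE)       # 20 imputed datasets
fits <- with(imp, rma(yi, vi, mods = ~ mod1))  # fit model in each dataset
summary(pool(fits))                            # pooled estimates (Rubin's rules)
```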
>
> Best,
> Wolfgang
>
>> -----Original Message-----
>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>> Behalf Of Olina Ngwenya
>> Sent: Monday, 28 March, 2022 11:29
>> To: r-sig-meta-analysis using r-project.org
>> Subject: [R-meta] Dealing with missing data in bivariate analysis
>>
>> Dear R-sig-meta-analysts
>>
>> I have been doing bivariate meta-analysis using rma.mv(), but one of my outcomes
>> has missing values and I am getting this message "Rows with NAs omitted from
>> model fitting". My question is: are there other ways of dealing with missing
>> data in meta-analysis instead of discarding rows with missing values?
>>
>> Thank you
>>
>> Olina Ngwenya
>> Research Assistant
>> Centre for Biostatistics | School of Health Sciences | Faculty of Biology,
>> Medicine and Health | University of Manchester
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 28 Mar 2022 15:23:41 +0430
> From: towhidi <towhidi using ut.ac.ir>
> To: r sig meta-analysis list <r-sig-meta-analysis using r-project.org>
> Subject: [R-meta] Notable difference between traditional and bootstrap
> 	95% CI for sigma2: which one is preferred?
> Message-ID: <4b4739676d9df544101d47427c066d71 using ut.ac.ir>
> Content-Type: text/plain; charset="us-ascii"; Format="flowed"
>
> Dear all,
>
> I am working on a dataset with a multilevel structure: 185 SMDs, nested
> in 108 outcomes, nested in 41 comparisons (to address multi-armed trials)
> nested in 34 studies (random = ~1 |
> stud_id/cont_id/outcome_id/occasion).
>
> For some of the sigma^2 values, the CI from confint() differs considerably
> from the bootstrap CI, e.g., for a sigma^2 = .04, the upper limit from
> confint() is .38, while the boot CI upper limit is .21.
>
> (1) What does this difference imply?
>
> (2) When such differences exist between traditional and boot CIs, which
> one is more reliable?
>
> For calculating boot CI I used the following:
>
> sim <- simulate(res, nsim=300)
> sav <- lapply(sim, function(x) {
>    tmp <- try(rma.mv(x, vi, data = dat, random = res$random), silent=TRUE)
>    if (inherits(tmp, "try-error")) NULL else tmp  # 'next' only works in loops
> })
> sav <- sav[!sapply(sav, is.null)]  # drop runs that failed to converge
>
> sigma2.l4 <- sapply(sav, function(x) x$sigma2[2])
>
> quantile(sigma2.l4, c(0.025, .975))
>
> Of note, I have checked the profile plot and there seemed to be no
> convergence problem.
>
> I also have another related question:
> (3) Can the general formula for I^2 for multilevel models
> (https://www.metafor-project.org/doku.php/tips:i2_multilevel_multivariate)
> be applied to RVE without any modifications?
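For reference, the computation on that linked page boils down to the following sketch, here applied to a metafor example dataset rather than the poster's model ('res' and 'dat' below are stand-ins):

```r
# I^2 for a multilevel model, following the linked metafor page: total
# heterogeneity relative to heterogeneity plus a 'typical' sampling variance.
library(metafor)

dat <- dat.konstantopoulos2011
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat)

W <- diag(1 / dat$vi)                 # inverse sampling variances
X <- model.matrix(res)
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
I2 <- 100 * sum(res$sigma2) /
      (sum(res$sigma2) + (res$k - res$p) / sum(diag(P)))
round(I2, 1)
```

Whether this carries over unchanged to RVE is exactly question (3), so the sketch only reproduces the multilevel formula itself.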
>
> Thank you.
>
>
More information about the R-sig-meta-analysis mailing list