[R-meta] 3-level meta with robust errors
Valeria Ivaniushina
v.ivaniushina using gmail.com
Thu Dec 3 13:03:35 CET 2020
Thank you!
On Thu, Dec 3, 2020 at 11:04 AM Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> Dear Valeria,
>
> You can use conf_int() from clubSandwich to get CIs.
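>
> For example, a minimal sketch (assuming the fitted rma.mv object is
> called based_inf and the clustering variable is wb$ID_database, as in
> your code below):
>
> # CIs from the CR2 cluster-robust variance estimator with
> # Satterthwaite degrees of freedom
> library(clubSandwich)
> conf_int(based_inf, vcov = "CR2", cluster = wb$ID_database, level = 0.95)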
>
> As for detecting outliers: I am not aware of any such rules (that are
> actually validated).
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org]
> >On Behalf Of Valeria Ivaniushina
> >Sent: Wednesday, 02 December, 2020 17:03
> >To: James Pustejovsky; R meta
> >Subject: Re: [R-meta] 3-level meta with robust errors
> >
> >ATTACHMENT(S) REMOVED: ATT00001.txt | MetaAnallysis avSim.sav | script for
> >avSim_SH.R
> >
> >Dear James,
> >Thank you!
> >
> >Attached are the code and the database.
> >And here are some results:
> >> summary(based_inf)
> >Multivariate Meta-Analysis Model (k = 17; method: REML)
> > logLik Deviance AIC BIC AICc
> >-14.2991 28.5983 34.5983 36.9161 36.5983
> >
> >Variance Components:
> > estim sqrt nlvls fixed factor
> >sigma^2.1 0.0000 0.0000 12 no ID_study
> >sigma^2.2 4.1629 2.0403 5 no ID_database
> >
> >Test for Heterogeneity:
> >Q(df = 16) = 48.1411, p-val < .0001
> >
> >Model Results:
> >estimate se tval pval ci.lb ci.ub
> > 2.0905 0.9704 2.1542 0.0468 0.0333 4.1478 *
> >
> >
> >> coef_test(based_inf, vcov = "CR2",
> >+ cluster = wb$ID_database)
> > Coef. Estimate SE t-stat d.f. p-val (Satt) Sig.
> >1 intrcpt 2.09 0.954 2.19 3.91 0.0952 .
> >
> >I think I found out where our mistake was.
> >
> >The sandwich correction doesn't report confidence intervals, so we
> >calculated them ourselves as estimate +/- 1.96*SE.
> >Stupid, I know.
> >Still, even now I am not sure how to correctly calculate the CI here --
> >could you please explain?
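> >
> >(For reference, this is the difference the multiplier makes with the
> >numbers from the coef_test output above -- I am not sure which is
> >appropriate:)
> >
> ># Satterthwaite df is about 3.91, so the t-based multiplier is much
> ># larger than 1.96
> >qt(0.975, df = 3.91)                        # roughly 2.8
> >2.09 + c(-1, 1) * qt(0.975, 3.91) * 0.954   # roughly -0.6 to 4.8
> >2.09 + c(-1, 1) * 1.96 * 0.954              # 0.22 to 3.96, much narrower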
> >
> >And another question.
> >There are several methods for outlier detection: Cook's distance,
> >residuals, hat values. Rather often a study is problematic with one
> >method but OK with others. Are there any guidelines on which studies
> >should be removed -- e.g., only when at least two methods flag a study
> >as an outlier?
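> >
> >(This is roughly what we look at now -- a minimal sketch, assuming the
> >fitted rma.mv object is called based_inf as above:)
> >
> ># per-study outlier/influence diagnostics for an rma.mv fit (metafor)
> >cd <- cooks.distance(based_inf)   # Cook's distances
> >rs <- rstandard(based_inf)        # standardized residuals
> >hv <- hatvalues(based_inf)        # leverages (hat values)
> >which(abs(rs$z) > 1.96)           # e.g. flag large standardized residuals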
> >
> >Best,
> >Valeria
> >
> >On Tue, Dec 1, 2020 at 9:10 PM James Pustejovsky <jepusto using gmail.com> wrote:
> >Valeria,
> >
> >These are indeed perplexing results. Based on the information you've
> >provided, it's hard to say what could be going on. Could you provide
> >examples of the code you're using and the results of your analyses?
> >Doing so will help to identify potential problems or coding errors.
> >
> >Kind Regards,
> >James
> >
> >On Tue, Dec 1, 2020 at 10:45 AM Valeria Ivaniushina
> ><v.ivaniushina using gmail.com> wrote:
> >Dear list members,
> >
> >We are conducting several meta-analyses using the metafor package in R
> >(Viechtbauer 2010) because of the 3-level data structure, followed by a
> >sandwich-type estimator with a small-sample adjustment to get
> >cluster-robust standard errors.
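> >
> >(In outline, the workflow looks like this -- a sketch with hypothetical
> >names yi and vi for the effect sizes and their sampling variances and
> >dat for the data frame; the exact random-effects structure depends on
> >the data:)
> >
> >library(metafor)
> >library(clubSandwich)
> >
> ># multilevel model: random effects for studies nested within databases
> >fit <- rma.mv(yi, vi, random = ~ 1 | ID_database/ID_study,
> >              data = dat, method = "REML")
> >
> ># cluster-robust (CR2) SEs with small-sample (Satterthwaite) corrections
> >coef_test(fit, vcov = "CR2", cluster = dat$ID_database)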
> >
> >There are some things that puzzle me, and I hope to get answers from the
> >community.
> >
> >1. We calculate a 95% CI for our mean effect size, and a p-value is
> >reported as part of the output. While the CI always indicates a
> >significant mean effect size, the p-value is often > 0.05.
> >- Should I report both the CI and the p-value?
> >- How should I interpret such a discrepancy?
> >
> >2. When I draw a forest plot for a meta-analysis of 8 models, I can see
> >that the 95% CI for every individual coefficient contains zero (for
> >example, -0.40 to 0.84). However, the 95% CI for the mean coefficient is
> >well above zero (0.28 to 0.45). How is this possible?
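> >
> >(A minimal toy example that reproduces this pattern, with hypothetical
> >numbers, not our data: eight imprecise estimates that all point the same
> >way give a much more precise pooled mean:)
> >
> >library(metafor)
> ># each estimate is 0.36 with SE 0.30, so every individual CI contains
> ># zero, but the pooled SE is about 0.30/sqrt(8) = 0.11
> >yi <- rep(0.36, 8); vi <- rep(0.30^2, 8)
> >rma(yi, vi, method = "FE")   # pooled CI roughly 0.15 to 0.57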
> >
> >3. Theoretically, the data have a 3-level structure (model; article;
> >database). But sometimes I see that there is no variance at one or two of
> >the levels. Should I refit the analysis with only 2 levels or 1 level,
> >depending on how the variance is distributed?
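> >
> >(Would a comparison like the following be a sensible way to decide? A
> >sketch, assuming the two-component structure from the output above and
> >the same hypothetical yi, vi, dat as before; sigma2 = c(0, NA) fixes the
> >first variance component at zero:)
> >
> >full    <- rma.mv(yi, vi, random = list(~ 1 | ID_study, ~ 1 | ID_database),
> >                  data = dat, method = "REML")
> >reduced <- rma.mv(yi, vi, random = list(~ 1 | ID_study, ~ 1 | ID_database),
> >                  data = dat, method = "REML", sigma2 = c(0, NA))
> >anova(full, reduced)   # LRT for the ID_study variance component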
> >
> >Best, Valeria
>