[R-meta] Outlier and influential case analysis for multilevel meta-analysis with RVE

Maximilian Steininger maximilian.steininger using univie.ac.at
Mon Aug 26 07:54:19 CEST 2024


Dear James, 

Thanks a lot for your input. Then I will do the diagnostics before applying RVE!

I agree that it’s quite challenging to define what an outlier is in the multilevel context. Since I have very few effect sizes per study (at most 3, mostly 1) and substantial between-study (but little within-study) heterogeneity, my approach would be to identify outlying effect sizes with respect to the overall distribution. Do you have any suggestions for further reading on this topic?
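
In metafor, this could look roughly as follows (a minimal sketch, assuming the three-level working model is stored in an object res before robust() is applied; the |z| > 1.96 cutoff is only a rule of thumb, not a formal test):

# studentized deleted residuals of the individual effect sizes, i.e.,
# how far each estimate lies from the overall distribution implied by
# the working model
sdr <- rstudent(res)

# flag estimates with unusually large residuals
which(abs(sdr$z) > 1.96)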

Best and many thanks!

Max
——

Mag. Maximilian Steininger
  PhD candidate

  Social, Cognitive and Affective Neuroscience Unit
  Faculty of Psychology
  University of Vienna

  Liebiggasse 5
  1010 Vienna, Austria

  e: maximilian.steininger using univie.ac.at
  w: http://scan.psy.univie.ac.at

> On 24 Aug 2024, at 14:40, James Pustejovsky via R-sig-meta-analysis <r-sig-meta-analysis using r-project.org> wrote:
> 
> I think it makes sense to do the analysis of outliers and influential cases
> before applying RVE. One way to think about this approach is that you are
> examining the assumptions _of the working model_, to understand the extent
> to which those assumptions are reasonable, even if you will later use RVE
> to protect against model misspecification.
> 
> I think this approach is advantageous because it gives access to a richer
> set of diagnostic tools, whereas the other approach is just a single
> rule-of-thumb (one which I don't think has a strong statistical rationale
> in the first place).
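
A minimal sketch of this order of operations (res is a hypothetical rma.mv working model and dat$study a hypothetical study identifier, as in the workflow quoted further below):

# diagnostics are computed on the (non-robust) rma.mv working model ...
rstudent(res)                             # studentized deleted residuals
cooks.distance(res, cluster = dat$study)  # per-study influence

# ... and RVE is layered on top afterwards, once the working model
# assumptions look reasonable
robust(res, cluster = dat$study, clubSandwich = TRUE)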
> 
> A further challenge here that I don't think has been addressed thoroughly
> in the meta-analysis methods literature is how to think about outliers in
> the multilevel context. When there is both between-study and within-study
> variation, one could imagine there being outlying studies, outlying effect
> sizes with respect to the overall distribution, or outlying effect sizes
> relative to the distribution of effects within the same study. Perhaps
> others on the list know of guidance about how to diagnose these features.
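
One way to look at two of these levels at once (a sketch, not established guidance): the cluster argument of rstudent() for rma.mv objects returns both observation-level and cluster-level deleted residuals, so outlying effect sizes and outlying studies can be inspected side by side.

# observation-level and study-level deleted residuals in one call
sdr <- rstudent(res, cluster = dat$study, progbar = TRUE)
sdr$obs      # one standardized deleted residual per effect size
sdr$cluster  # one aggregated statistic per study (cluster)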
> 
> Best,
> James
> 
> On Fri, Aug 23, 2024 at 6:09 AM Maximilian Steininger via
> R-sig-meta-analysis <r-sig-meta-analysis using r-project.org> wrote:
> 
>> Dear all,
>> 
>> I am conducting a three-level meta-analysis in which I have different
>> dependency structures in my data. I model the dependency by approximating
>> the variance-covariance matrix, then estimate a three-level model, and
>> finally apply robust variance estimation to obtain my results (as
>> suggested, e.g., here:
>> https://wviechtb.github.io/metafor/reference/misc-recs.html#general-workflow-for-meta-analyses-involving-complex-dependency-structures)
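
In code, that workflow looks roughly like this (a sketch with hypothetical columns yi, vi, study, and esid, and an assumed within-study correlation of 0.6):

library(metafor)

# 1. approximate the var-cov matrix of the sampling errors
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.6, data = dat)

# 2. fit the three-level working model
res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)

# 3. apply cluster-robust variance estimation on top of the working model
robust(res, cluster = dat$study, clubSandwich = TRUE)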
>> 
>> I wanted to run some sensitivity analyses on this model in the form of
>> outlier and influence diagnostics. However, most of the proposed
>> diagnostics do not work on "robust.rma" objects.
>> 
>> So far I have done some model diagnostics by calculating Cook's distances
>> and hat values for my robust model (see, e.g.,
>> https://wviechtb.github.io/metafor/reference/influence.rma.mv.html). But
>> as far as I can tell, these "only" give me information on influential
>> cases, not on outliers.
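
For reference, a sketch of how these influence diagnostics can be obtained from the underlying (non-robust) rma.mv working model res, since they are not available for "robust.rma" objects:

cd <- cooks.distance(res, cluster = dat$study)  # per-study Cook's distances
hv <- hatvalues(res)                            # leverages of the individual estimates
db <- dfbetas(res)                              # change in coefficients when removing estimates

plot(cd, type = "o", ylab = "Cook's distance", xlab = "Study")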
>> 
>> What is the best approach to check for outliers when using robust models?
>> Are the two options below sensible?
>> 
>> 1. According to this source, a possible but rather conservative approach
>> is to label as outliers all studies whose confidence intervals do not
>> overlap with the confidence interval of the pooled effect (see:
>> https://cjvanlissa.github.io/Doing-Meta-Analysis-in-R/detecting-outliers-influential-cases.html).
>> 
>> 2. Is it feasible to perform the outlier diagnostics on the non-robust
>> model, as suggested, e.g., by Viechtbauer & Cheung (2010; 10.1002/jrsm.11)?
>> My approach here would be to identify outliers based on the non-robust
>> model --> exclude the outliers --> rerun the whole analysis without the
>> outliers (i.e., approximate the var-cov matrix, estimate the three-level
>> model, and apply robust variance estimation to the subset of studies). A
>> sketch of such a refit is given below this list.
>> 
>> Or are there other, more elegant ways to do this?
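
A sketch of how the refit in option 2 might look, assuming the hypothetical objects from above (the |z| > 1.96 cutoff is again only a rule of thumb):

# flag effect sizes with large studentized deleted residuals
# in the non-robust working model
sdr <- rstudent(res)
out <- abs(sdr$z) > 1.96

# rerun the whole pipeline without the flagged estimates
dat2 <- dat[!out, ]
V2   <- vcalc(vi, cluster = study, obs = esid, rho = 0.6, data = dat2)
res2 <- rma.mv(yi, V2, random = ~ 1 | study/esid, data = dat2)
robust(res2, cluster = dat2$study, clubSandwich = TRUE)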
>> 
>> Best and many thanks!
>> ——
>> 
>> Mag. Maximilian Steininger
>>  PhD candidate
>> 
>>  Social, Cognitive and Affective Neuroscience Unit
>>  Faculty of Psychology
>>  University of Vienna
>> 
>>  Liebiggasse 5
>>  1010 Vienna, Austria
>> 
>>  e: maximilian.steininger using univie.ac.at
>>  w: http://scan.psy.univie.ac.at
>> 


