[R-meta] Prediction intervals for multilevel meta-analysis

Hanel, Paul H P p.hanel using essex.ac.uk
Mon Apr 11 16:50:58 CEST 2022


Hi James,

Thank you, this is very useful.

Best,
Paul

From: James Pustejovsky <jepusto using gmail.com>
Sent: 07 April 2022 15:23
To: Hanel, Paul H P <p.hanel using essex.ac.uk>
Cc: r-sig-meta-analysis using r-project.org
Subject: Re: [R-meta] Prediction intervals for multilevel meta-analysis

Hi Paul,

I would suggest reporting a prediction interval for whatever model specification you think is appropriate for the synthesis. Which levels to include in the model is a broader question that you'll need to address for the overall synthesis, considering the structure of the data you're working with and your research aims/questions.

I tend to think of prediction intervals as a tool for model interpretation--they help you (and readers) to interpret the implications of the model estimates, including both the estimated mean effect and the degree of heterogeneity. Simply reporting the percentiles of the ES estimates isn't as useful, in my opinion, because the distribution will be more dispersed due to sampling error of the ES estimates. For instance, consider a fixed (common) effect model where every study contributes an estimate of the same underlying effect size parameter and all the studies are fairly small. Those estimates will still be dispersed because of sampling error, so the percentiles will still span a potentially wide range of estimates. And that would remain true even if you have many many studies, so that there is very little uncertainty about the average effect size. All that said, I don't think it hurts anything to report raw ES percentiles, just as a further point of reference, in addition to a prediction interval from an appropriately specified model.
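As a concrete illustration of the contrast (not the data from this thread), here is a small R sketch using the dat.konstantopoulos2011 dataset that ships with metafor: predict() reports the model-based prediction interval, while quantile() gives the raw percentiles of the observed estimates, which also reflect the sampling error of each estimate.

library(metafor)

# Illustrative only: a multilevel dataset bundled with metafor,
# not the data discussed in this thread.
dat <- dat.konstantopoulos2011

# Three-level model: estimates nested within schools nested within districts.
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat, test = "t")

predict(res)                       # pi.lb / pi.ub = model-based prediction interval
quantile(dat$yi, c(0.025, 0.975))  # raw 2.5th/97.5th percentiles of the estimates,
                                   # inflated by the sampling error of each estimate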

James

--------------------------
James Pustejovsky
https://jepusto.com


On Wed, Apr 6, 2022 at 4:47 PM Hanel, Paul H P <p.hanel using essex.ac.uk> wrote:
Hi James,

Thank you, that is very useful.

Your answer does, however, make me wonder whether there is much point in reporting a single prediction interval, since its width seems to depend quite strongly on the number of levels. In the example I used three levels, but I could have added more levels (e.g., papers nested in authors, authors nested in countries), which would presumably have further increased the width of the PI.

Would it be more straightforward, and less subjective, to report some descriptive statistics for the observed effect sizes, such as the 2.5th and 97.5th percentiles?

Paul

From: James Pustejovsky <jepusto using gmail.com>
Sent: 06 April 2022 14:55
To: Hanel, Paul H P <p.hanel using essex.ac.uk>
Cc: r-sig-meta-analysis using r-project.org
Subject: Re: [R-meta] Prediction intervals for multilevel meta-analysis


Hi Paul,

In short, yes. The prediction interval incorporates two sources of uncertainty: uncertainty from the estimate of the mean effect (the center of the PI) and uncertainty from there being a distribution of effects about the mean (measured by the sum of the random effects variance components). In your case, my guess is that the change in the prediction intervals is driven by the first source. When you add in additional levels, you are acknowledging that there is additional dependence in the data structure, and this dependence leads to more uncertainty about the average effect. You should see this in how the standard error of the average effect increases across the three models you described. Particularly if the data include only a small number of top-level units, the increase in SE can lead to fairly big changes in the width of the PI.
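To make that first source concrete, one could compare the standard error of the average effect across the three specifications. This is only a sketch, assuming a data frame df with the column names (yi, vi, effectID, StudyID, PaperID) used in the original question quoted below:

library(metafor)

# Sketch: the same data fit with one, two, and three levels of random effects.
m1 <- rma(yi, vi, data = df)
m2 <- rma.mv(yi, vi, random = list(~ 1 | effectID, ~ 1 | StudyID),
             test = "t", data = df)
m3 <- rma.mv(yi, vi, random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID),
             test = "t", data = df)

# The SE of the average effect typically grows as more dependence is modeled,
# which is what widens the prediction interval here.
c(one_level = m1$se, two_level = m2$se, three_level = m3$se)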

In the example you gave, it's interesting (at least to me 🤓) to see that the second source of uncertainty doesn't actually change very much. We can see this by comparing the sums of the variance components (the sigma-squared values rather than the sigmas):
- Model 1 total RE variance: 0.41^2 = 0.1681
- Model 2 total RE variance: 0.279^2 + 0.317^2 = 0.1783 (square root = 0.4223)
- Model 3 total RE variance: 0.275^2 + 0.114^2 + 0.309^2 = 0.1841 (square root = 0.4291)
The total variance increases slightly, but not enough to affect the width of the PI by all that much. The differences between the models amount to a variance decomposition: Model 1 estimates the total variance, Model 2 splits that total between level 2 and level 1, and Model 3 further breaks out how much of the total sits at level 3, level 2, and level 1.
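A few lines of plain R confirm the arithmetic; the formula in the comment is the approximate form of the prediction interval half-width, with crit a normal or t critical value:

# Check the variance sums:
0.41^2                             # Model 1: 0.1681
0.279^2 + 0.317^2                  # Model 2: 0.1783
0.275^2 + 0.114^2 + 0.309^2        # Model 3: 0.1841
sqrt(c(0.1681, 0.1783, 0.1841))    # 0.410, 0.422, 0.429

# The prediction interval half-width behaves roughly like
#   crit * sqrt(sum(sigma^2) + SE(average effect)^2),
# so a change in total variance from 0.168 to 0.184 alone cannot explain the
# widening from 1.60 to 2.74; the growing SE of the average effect does most of it.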

James

--------------------------
James Pustejovsky
https://jepusto.com

On Wed, Apr 6, 2022 at 6:19 AM Hanel, Paul H P <p.hanel using essex.ac.uk> wrote:
Why do prediction intervals get so much wider when a multi-level approach is used?

Prediction intervals are usually computed as the average effect +/- 1.96 * tau. Obtaining tau is straightforward when doing a random-effects meta-analysis (e.g., with the rma() function in metafor).
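For the single-level case, the manual calculation and metafor's own prediction interval can be compared directly. A minimal sketch, assuming the same data frame df with columns yi and vi as in the commands below:

library(metafor)

res <- rma(yi, vi, data = df)

# Manual version: average effect +/- 1.96 * tau
# (ignores the uncertainty in the average effect itself).
coef(res) + c(-1, 1) * 1.96 * sqrt(res$tau2)

# metafor's prediction interval additionally folds in the SE of the average:
predict(res)  # see pi.lb / pi.ub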

When running a multilevel meta-analysis, things are a bit more complicated. According to Wolfgang Viechtbauer, it is possible to take the sum of the tau² values (or sigma², as the variance components are called in the output of the rma.mv() function). However, this results in even wider prediction intervals. For a random-effects meta-analysis with over 300 effect sizes, the width of the prediction interval is 1.60 (tau = 0.41). Command used: rma(yi, vi, data = df)
When I run a multilevel meta-analysis with effect sizes nested in studies, the width of the prediction interval is 2.34 (tau/sigma level 1 = .279, level 2 = .317). Command used: rma.mv(yi, vi, random = list(~ 1 | effectID, ~ 1 | StudyID), tdist = TRUE, data = df)
If I add yet another level, papers (i.e., effect sizes nested within studies, studies nested within papers), the width of the prediction interval gets even wider: 2.74 (tau/sigma level 1 = .275, level 2 = .114, level 3 = .309). Command used: rma.mv(yi, vi, random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID), tdist = TRUE, data = df)
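For the multilevel fits, predict() reports the prediction interval directly, so the sigmas do not have to be summed by hand; it combines the variance components with the squared SE of the average effect. A sketch for the three-level specification above (test = "t" is the current spelling of tdist = TRUE):

library(metafor)

m3 <- rma.mv(yi, vi,
             random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID),
             test = "t", data = df)

predict(m3)     # pi.lb / pi.ub: prediction interval for the three-level model
sum(m3$sigma2)  # total random-effects variance across the three levels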

Is it plausible that the prediction intervals get that much wider?

Thanks,
Paul




