From nikoletta.symeonidou at uni-mannheim.de Wed Jan 8 11:45:33 2025
From: nikoletta.symeonidou at uni-mannheim.de (Nikoletta Symeonidou)
Date: Wed, 8 Jan 2025 10:45:33 +0000
Subject: [R-meta] Treating multiple conditions in within-subjects designs
Message-ID:

Dear list-members,

We are currently conducting a meta-analysis using a hierarchical three-level model with the metafor package, where the highest level of the hierarchy is "study" (i.e., outcomes nested within studies). We face a challenge with studies that include more than one control condition and are unsure how to address this issue in within-subjects designs (i.e., the same participants complete all conditions).

Specifically, several studies use within-subjects designs with more than one control condition, which leads to multiple comparisons of interest per study (e.g., experimental condition vs. control condition 1, and experimental condition vs. control condition 2). Our main concern is how to account for the dependence that arises when the same experimental condition is used to calculate multiple effect sizes within a single study.

- Does the hierarchical three-level model inherently account for this dependence?
- If not, what strategies would you recommend for appropriately handling such cases (with the metafor package)?

We greatly appreciate any guidance you can provide. Thank you!

Best regards,
Nikoletta Symeonidou (University of Mannheim)

From wolfgang.viechtbauer at maastrichtuniversity.nl Wed Jan 8 14:09:15 2025
From: wolfgang.viechtbauer at maastrichtuniversity.nl (Viechtbauer, Wolfgang (NP))
Date: Wed, 8 Jan 2025 13:09:15 +0000
Subject: [R-meta] Treating multiple conditions in within-subjects designs
In-Reply-To:
References:
Message-ID:

Dear Nikoletta,

Please see below for my responses.
Best,
Wolfgang

> -----Original Message-----
> From: R-sig-meta-analysis On Behalf Of Nikoletta Symeonidou via R-sig-meta-analysis
> Sent: Wednesday, January 8, 2025 11:46
> To: r-sig-meta-analysis at r-project.org
> Cc: Nikoletta Symeonidou
> Subject: [R-meta] Treating multiple conditions in within-subjects designs
>
> Dear list-members,
>
> We are currently conducting a meta-analysis using a hierarchical three-level
> model with the metafor package, where the highest level of the hierarchy is
> "study" (i.e., outcomes nested within studies). We face a challenge with studies
> that include more than one control condition and we are unsure how to address
> this issue in within-subjects designs (i.e., same participants in all
> conditions).
>
> Specifically, several studies use within-subjects designs with more than one
> control condition, which leads to multiple comparisons of interest per study
> (e.g., experimental condition vs. control condition 1, and experimental
> condition vs. control condition 2). Our main concern is how to account for the
> dependence that arises when the same experimental condition is used to calculate
> multiple effect sizes within a single study.
>
> - Does the hierarchical three-level model inherently account for this
> dependence?

No, or at least not fully. The three-level model accounts for potential dependence in the underlying true effects (in the example you gave, there is in theory a true effect for experimental condition vs. control condition 1 and a true effect for experimental condition vs. control condition 2). Whether such dependence is present or not is an empirical question. However, the model does not automatically account for the correlation (or covariance) between the sampling errors of the two effect sizes (i.e., using information from the experimental condition to compute the two effect sizes induces such a correlation).
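[As a minimal illustration of the workflow discussed in this thread (not part of the original exchange; all data values and the column names 'study', 'esid', 'yi', 'vi' are hypothetical), one way to handle such correlated sampling errors with metafor is to build an approximate V matrix with vcalc() and add cluster-robust inference on top:]

```r
library(metafor)

# Hypothetical data: each row is one comparison (experimental condition
# vs. one of the control conditions); 'study' identifies the study and
# 'esid' the effect size within the study.
dat <- data.frame(
  study = c(1, 1, 2, 2, 3),
  esid  = c(1, 2, 1, 2, 1),
  yi    = c(0.35, 0.28, 0.41, 0.19, 0.52),
  vi    = c(0.04, 0.04, 0.05, 0.05, 0.06)
)

# Approximate V matrix, assuming a correlation of 0.5 between the
# sampling errors of effect sizes that share the experimental condition
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.5, data = dat)

# Three-level model using V as the (working) sampling covariance matrix
res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)

# Cluster-robust inference as an additional safeguard
# (requires the clubSandwich package)
robust(res, cluster = study, clubSandwich = TRUE)
```

[Within a study, vcalc() fills in Cov(y1, y2) = rho * sqrt(v1 * v2) for pairs of effect sizes, while effect sizes from different studies remain independent.]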
Roughly, that correlation is 0.5, but one can compute it more accurately, depending on the specific effect size measure used and the group sizes. The chapter by Gleser and Olkin (2009) in 'The handbook of research synthesis and meta-analysis' (2nd ed.) provides equations for several effect size measures. See also https://www.metafor-project.org/doku.php/analyses:gleser2009 for code to reproduce the calculations from that chapter. The vcalc() function from the metafor package also provides functionality to compute the covariance. This is what goes into the infamous 'V' matrix that can be used as the second argument to rma.mv().

> - If not, what strategies would you recommend for appropriately handling such
> cases (with the metafor package)?

See above. In addition or alternatively (since constructing the V matrix can be a challenge), one could also skip constructing V, fit the three-level model assuming independent sampling errors as the 'working model', and then use cluster-robust inference methods (robust variance estimation) to fix things up (using robust(..., clubSandwich=TRUE)).

> We greatly appreciate any guidance you can provide. Thank you!
>
> Best regards,
> Nikoletta Symeonidou (University of Mannheim)

From muetzeh at uni-bremen.de Thu Jan 9 11:18:57 2025
From: muetzeh at uni-bremen.de (Hanna Mütze)
Date: Thu, 9 Jan 2025 11:18:57 +0100
Subject: [R-meta] Sigma in Bayesian Multi-Level Meta-Analytic Models (brms)
Message-ID:

Dear all,

I conducted a Bayesian 3-level meta-analysis with brms and noticed that sigma is consistently estimated as 0, even in 2-level and 4-level models. I observed the same behavior in this tutorial: https://mvuorre.github.io/posts/2016-09-29-bayesian-meta-analysis/#ref-mcelreathStatisticalRethinkingBayesian2020. How should I interpret this result?
Am I correct in assuming that Bayesian models do not estimate the sampling error because it is assumed to be known based on the sample size? Unfortunately, I could not find references supporting this interpretation and would appreciate any clarification or guidance.

Thank you for your time and help!

Best regards,
Hanna Mütze

From info at physalia-courses.org Thu Jan 9 11:43:08 2025
From: info at physalia-courses.org
Date: Thu, 9 Jan 2025 11:43:08 +0100 (CET)
Subject: [R-meta] Limited Seats Available for "Meta-analysis in R" (10-14 February)
Message-ID: <1736419388.402331099@webmail.jimdo.com>

Dear all,

There are only a few seats left for our upcoming online course, "Meta-analysis in R", taking place from 10-14 February 2025. This course offers a comprehensive introduction to modern methods for evidence synthesis, focusing on systematic review and meta-analysis in ecology and evolution.

Course website: https://www.physalia-courses.org/courses-workshops/metain-r/

Highlights:
- Learn how to design and conduct systematic reviews and meta-analyses in R.
- Hands-on sessions using real meta-analytic datasets.
- Explore advanced techniques like multilevel meta-analysis, meta-regression, and accounting for phylogeny.
- Master the R packages metafor and orchaRd for your analyses.
- Adhere to open science principles with guidance on PRISMA EcoEvo standards.

This course combines lectures and practical exercises to ensure you gain both theoretical knowledge and hands-on experience.
Best regards,
Carlo

--------------------
Carlo Pecoraro, Ph.D.
Physalia-courses DIRECTOR
info at physalia-courses.org
mobile: +49 17645230846

From wolfgang.viechtbauer at maastrichtuniversity.nl Thu Jan 9 13:43:14 2025
From: wolfgang.viechtbauer at maastrichtuniversity.nl (Viechtbauer, Wolfgang (NP))
Date: Thu, 9 Jan 2025 12:43:14 +0000
Subject: [R-meta] Sigma in Bayesian Multi-Level Meta-Analytic Models (brms)
In-Reply-To:
References:
Message-ID:

Dear Hanna,

Correct, when you use something like 'yi | se(sei) ~ ...' in brms, the known standard errors replace the sigma parameter, which is simply set to 0 (and shown as such in the output). So sigma here is not an estimate but a fixed value. See help(brmsformula).

It is possible to use 'yi | se(sei, sigma=TRUE) ~ ...', in which case sigma will be estimated. However, this just adds a regular error term to the model, which would be redundant with an estimate-level random effect (i.e., (1 | estimate)), which should be part of the model anyway.

Best,
Wolfgang

> -----Original Message-----
> From: R-sig-meta-analysis On Behalf Of Hanna Mütze via R-sig-meta-analysis
> Sent: Thursday, January 9, 2025 11:19
> To: r-sig-meta-analysis at r-project.org
> Cc: Hanna Mütze
> Subject: [R-meta] Sigma in Bayesian Multi-Level Meta-Analytic Models (brms)
>
> Dear all,
>
> I conducted a Bayesian 3-level meta-analysis with brms and noticed
> that sigma is consistently estimated as 0, even in 2-level and 4-level
> models. I observed the same behavior in this tutorial:
> https://mvuorre.github.io/posts/2016-09-29-bayesian-meta-analysis/#ref-mcelreathStatisticalRethinkingBayesian2020.
> How should I interpret this result?
>
> Am I correct in assuming that Bayesian models do not estimate
> the sampling error because it is assumed to be known based on the sample size?
> Unfortunately, I could not find references supporting this interpretation
> and would appreciate any clarification or guidance.
>
> Thank you for your time and help!
>
> Best regards,
> Hanna Mütze
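[To illustrate the se()/sigma behavior discussed in this reply, the two model specifications can be sketched as follows (a minimal example, not from the original thread; 'yi', 'sei', 'study', and 'esid' are hypothetical column names):]

```r
library(brms)

# Known sampling standard errors supplied via se(): sigma is fixed at 0
# and reported as such in the output.
f1 <- bf(yi | se(sei) ~ 1 + (1 | study) + (1 | study:esid))

# Variant in which sigma is freely estimated; as noted in the reply
# above, this is redundant with the estimate-level random effect
# (1 | study:esid).
f2 <- bf(yi | se(sei, sigma = TRUE) ~ 1 + (1 | study) + (1 | study:esid))

# fit <- brm(f1, data = dat)  # not run: fitting requires a Stan backend
```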