[R-meta] Publication bias with multivariate meta analysis

Norman DAURELLE norman.daurelle at agroparistech.fr
Tue Aug 31 00:07:40 CEST 2021


Dear Gerta, dear list members,

thank you for your answer. I read the paper you indicated, and found it very interesting to learn about what is discussed in it. 

In a way it gives me more questions than answers: I used to think that heterogeneity was required for a meta-analysis. In my mind, if there was no heterogeneity, there was no point in even using the meta-analytic method (I thought no heterogeneity more or less meant that all studies "reported the same result", i.e. that they all "agreed"). 

The experience I have with data is related to one research question (something close to: "What is, according to the published scientific literature and to a dataset gathered between 2013 and 2016 throughout Australia, the relationship between blackleg disease severity and canola crop yield?"). In that context, I gathered the studies that reported a relationship between a measure of disease severity (individual-plant-based, but averaged over individuals chosen to represent a cultivated plot) and crop yield in kg/ha or t/ha. In some cases I established the relationship myself from data reported in papers that did not report the relationship, even though their data could be used that way. I ended up with 10 studies plus our own (11 in total), yielding 24 observed effects (simple yield vs. disease severity linear regression slope coefficients). 
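
In R terms, the last step was essentially the following (a simplified, untested sketch; the data frame `dat` and its columns `study`, `effect`, `slope` and `se` are stand-ins for my actual data):

library(metafor)

# one row per observed effect (slope), with multiple effects per study;
# the squared standard errors serve as the sampling variances
res <- rma.mv(yi = slope, V = se^2,
              random = ~ 1 | study/effect, data = dat)
summary(res)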

I wanted to make sure I was not doing anything that did not make scientific sense, and I "found out" (or rather, perceived) that funnel plots were a way of getting an indication of the quality / correctness of the analysis. I had read about the "apples and oranges" problem, and I thought this might be a way of finding out whether what I considered comparable really was comparable. It now seems to me that a funnel plot may be neither the easiest nor the best way of determining whether the observed effects truly are comparable. 

Nevertheless, thinking of things that way at the time, I made a funnel plot showing the effect sizes (or "observed effects", as far as I understand the vocabulary; most accurately, "slopes" in my case) on the x axis and the associated standard errors on the y axis. The resulting plot was asymmetrical. 
Based on what I knew at the time, I thought this indicated publication bias (apparently because I knew / understood too little about meta-analysis). But now it seems to me that the asymmetry might instead be due to all the experimental differences between the studies from which I collected slopes (and their standard errors), or from which I collected the data that I then turned into slopes and standard errors. 
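
(The plot itself was nothing special; with metafor, for the model `res` sketched above, it amounts to:

funnel(res)   # observed slopes on the x axis, standard errors on the y axis

)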

I was fully aware that my model (a simple linear regression) could not accurately account for all the sources of variation in yield, and therefore could not account for all these sources of variation in the yield-disease relationship either; but I could not use more than one quantitative predictor of yield, because no other predictor was common to all the studies I gathered. 

With the data specific to our study (gathered by one member of the lab over 4 years with Australian crop growers), I used a mixed linear model, and even though there might still be nuances in the data that I did not capture, I compared three approaches (sketched below): 

1) a simple linear regression on our data; 
2) a mixed-effects linear model including two rainfall variables as fixed-effect quantitative predictors of yield (along with disease severity, which was present in all models) and 3 factors (year, location and cultivar) as random effects; 
3) a simple linear regression (yield ~ disease severity) for each year x location x cultivar subset of the data (a partition that fully divided the dataset, neither leaving out nor duplicating observations), followed by a meta-analysis of the slopes obtained that way. 
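
In rough R terms, the three approaches were along these lines (an untested sketch; the data frame `dat` and the rainfall variable names are made up for illustration):

library(lme4)     # mixed-effects model
library(metafor)  # meta-analysis of the per-subset slopes

# 1) simple linear regression
m1 <- lm(yield ~ severity, data = dat)

# 2) mixed model: rainfall covariates as fixed effects,
#    year, location and cultivar as random intercepts
m2 <- lmer(yield ~ severity + rain_autumn + rain_spring +
             (1 | year) + (1 | location) + (1 | cultivar), data = dat)

# 3) one regression per year x location x cultivar subset,
#    then a meta-analysis of the resulting slopes
subsets <- split(dat, list(dat$year, dat$location, dat$cultivar), drop = TRUE)
slopes  <- t(sapply(subsets, function(d)
              summary(lm(yield ~ severity, data = d))$coefficients["severity", 1:2]))
m3 <- rma(yi = slopes[, "Estimate"], sei = slopes[, "Std. Error"])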

This (from a narrative, if not entirely logical, perspective) leads me to my point: I did not understand why the funnel plot of my literature-based meta-analysis was asymmetrical, given that 1) I thought I had gathered all the existing literature (or at least I had included everything I had found), and 2) I thought asymmetry meant publication bias. 

So: after reading the paper you suggested, is it possible that what I observed in the asymmetry of my funnel plot was actually mainly the heterogeneity of the data (say, differences in slopes due to temperature, rainfall, soil, etc. in the different studies carried out in different countries)? 
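
For instance, I imagine one could check this with a meta-regression (again an untested sketch; `country` stands in for whatever study-level condition is actually recorded):

# does a study-level moderator explain part of the heterogeneity?
res_mod <- rma.mv(yi = slope, V = se^2, random = ~ 1 | study/effect,
                  mods = ~ country, data = dat)
summary(res_mod)

# for a model with moderators, funnel() shows the residuals,
# i.e. the asymmetry remaining after accounting for the moderator
funnel(res_mod)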

I still believe there might be a slight publication bias (because we expect yield to decrease with disease severity, and if the data do not show that, we will tend to think that we did not observe "the effect of disease severity, all else being equal"), but I am no longer sure that publication bias is the main reason why the funnel plot of this meta-analysis is asymmetrical. 

It seems there may be more than one reason, and some of these reasons (from what I understand now, though I might still be wrong) I may never be able to identify. 

However, I would definitely like to be able to offer possible reasons why it is asymmetrical, and to say what this means. 

Sorry for this long and not very to-the-point e-mail, 

I would very much be grateful for an answer. 

Best wishes, 
Norman 






De: "Dr. Gerta Rücker" <ruecker using imbi.uni-freiburg.de> 
À: "Norman DAURELLE" <norman.daurelle using agroparistech.fr>, "Wolfgang Viechtbauer, SP" <wolfgang.viechtbauer using maastrichtuniversity.nl> 
Cc: "r-sig-meta-analysis" <r-sig-meta-analysis using r-project.org>, "Huang Wu" <huang.wu using wmich.edu> 
Envoyé: Lundi 30 Août 2021 11:40:23 
Objet: Re: [R-meta] Publication bias with multivariate meta analysis 

Dear Norman, 

If there is funnel plot asymmetry, there is always some relation between 
the observed effects and their standard errors; the question is what 
causes this relationship. Possible causes are discussed in Sterne et al. 
(2011), see https://www.bmj.com/content/343/bmj.d4002 
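
One tool discussed there is the contour-enhanced funnel plot, which helps judge whether the "missing" studies fall in regions of non-significance (pointing towards publication bias) or not (pointing towards other causes). With metafor, such a plot can be produced along these lines (illustrative call for a fitted model `res`):

funnel(res, level = c(90, 95, 99),
       shade = c("white", "gray55", "gray75"),
       refline = 0, legend = TRUE)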

Best wishes, 

Gerta 

On 30.08.2021 at 10:22, Norman DAURELLE wrote: 
> Dear list members, dear Huang and Wolfgang, 
> 
> thank you for explaining that there is no method for testing for publication bias, or more accurately, for explaining that a relationship between observed effects and their standard errors does not necessarily indicate publication bias (meaning that there are other reasons why one could encounter such a relationship). 
> 
> Aside from Huang's question: does funnel plot asymmetry necessarily indicate a relationship between observed effects and their standard errors? 
> 
> I am going to read more thoroughly at https://www.metafor-project.org/ but I would be grateful for an answer. 
> 
> Best wishes, 
> Norman 
> 
> 
> De: "Wolfgang Viechtbauer, SP" <wolfgang.viechtbauer using maastrichtuniversity.nl> 
> À: "Huang Wu" <huang.wu using wmich.edu>, "r-sig-meta-analysis" <r-sig-meta-analysis using r-project.org> 
> Envoyé: Samedi 28 Août 2021 15:37:20 
> Objet: Re: [R-meta] Publication bias with multivariate meta analysis 
> 
> Dear Huang, 
> 
> Please find my comments below. 
> 
> Best, 
> Wolfgang 
> 
>> -----Original Message----- 
>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On 
>> Behalf Of Huang Wu 
>> Sent: Saturday, 28 August, 2021 3:19 
>> To: r-sig-meta-analysis using r-project.org 
>> Subject: [R-meta] Publication bias with multivariate meta analysis 
>> 
>> Hi all, 
>> 
>> I am conducting a multivariate meta-analysis using rma.mv. I want to test for 
>> publication bias. 
>> I noticed that in a previous post Dr. Pustejovsky provided the following code for 
>> Egger's test. 
>> 
>> egger_multi <- rma.mv(yi = yi, V = sei^2, random = ~ 1 | studyID/effectID, 
>>                       mods = ~ sei, data = dat) 
>> coef_test(egger_multi, vcov = "CR2") 
>> 
>> Because I conducted a multivariate meta-analysis assuming rho = 0.8, I wonder, for 
>> Egger's test, whether I need to set V equal to the imputed covariance matrix. 
>> Would anyone help me check whether my following code is correct? Thanks. 
>> 
>> V_listm <- impute_covariance_matrix(vi = meta$dv, 
>>                                     cluster = meta$Study.ID, 
>>                                     r = 0.8) 
>> egger_multi <- rma.mv(yi = Cohen.s.d, V = V_listm, random = ~ 1 | Study.ID/IID, 
>>                       mods = ~ sqrt(dv), data = meta) 
>> coef_test(egger_multi, vcov = "CR2") 
> If you used such an approximate V matrix for your analyses, then I would also use it in this model. 
> 
>> Also, I have tried V = V_listm and V = dv, but it gave me different results. When 
>> I use V = V_listm, my results suggest the effect is no longer statistically 
>> significant, but when I use V = dv, my result is still significant. 
>> Does that mean my results were sensitive to the value of rho? Thanks. 
> Yes, although it's not clear to me what exactly you mean by "the effect". The coefficient corresponding to 'sqrt(dv)'? 
> 
>> By the way, does anyone have any suggestions/codes for other methods of testing 
>> publication bias? Many thanks. 
> Just a pedantic note: There are no methods for testing for publication bias. One can for example test if there is a relationship between the observed effects and their standard errors (as done above), which could result from publication bias, but there could be other explanations for such a relationship besides publication bias. 
> 
> This aside, one can also examine if there is a relationship at the study level (not at the level of the individual estimates, as done above). A simple approach for this would be to aggregate the estimates to the study level, using the aggregate() function. In fact, at that point, you could apply all of the methods available in metafor or other packages related to the issue of publication bias (including things like trim-and-fill, selection models, and so on). 
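> 
> For example (an untested sketch, assuming the estimates are in an escalc object 'dat' with a cluster variable 'studyID' and the same rho = 0.8 as above): 
> 
> # aggregate the estimates to one per study (aggregate() method for 
> # escalc objects; rho is the assumed within-study correlation) 
> agg <- aggregate(dat, cluster = studyID, rho = 0.8) 
> 
> # standard univariate random-effects model at the study level 
> res_agg <- rma(yi, vi, data = agg) 
> 
> # study-level methods related to publication bias 
> regtest(res_agg)                                    # Egger-type regression test 
> trimfill(res_agg)                                   # trim-and-fill 
> selmodel(res_agg, type = "stepfun", steps = 0.05)   # a simple selection model 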
> 
> Best, 
> Wolfgang 

-- 

Dr. rer. nat. Gerta Rücker, Dipl.-Math. 

Institute of Medical Biometry and Statistics, 
Faculty of Medicine and Medical Center - University of Freiburg 

Zinkmattenstr. 6a, D-79108 Freiburg, Germany 

Mail: ruecker using imbi.uni-freiburg.de 
Homepage: https://www.uniklinik-freiburg.de/imbi-en/employees.html?imbiuser=ruecker 



