[R-meta] Differences between metafor / robumeta / lme4

Bernard Fernou bernard.fernou at gmail.com
Sat Jul 11 13:15:59 CEST 2020

Hi everyone,
I hope this email finds you well.

I am trying to model the relationship between depression and BMI over
several studies conducted in our research lab. We have access to all
individual patient data.

We have 7 studies in which BMI is always assessed using a digital scale
and a meter, but in which we used various measures to assess depression
(self-reported questionnaire / interview / medical file). Each study
included one, two, or all three measures of depression. The effect size
used is the correlation.
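For reference, this is the Fisher's z transform that metafor's escalc()
applies to a correlation (the r and n below are made-up illustration
values, not our data):

```r
library(metafor)

# Fisher's z for a hypothetical r = .25 from n = 100:
# yi = atanh(r), sampling variance vi = 1/(n - 3)
escalc(measure = "ZCOR", ri = 0.25, ni = 100)
```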

We analyzed our data using 3 methods (two two-stage approaches: robu from
robumeta, and a 3-level meta-analysis with robust SEs using metafor; and
one one-stage approach: a mixed model using lme4).

A complete reproducible example is provided below.

Results from metafor and lme4 agreed (the p-values were non-significant
and very similar: p = .13 and .14). However, the results from robumeta
are quite different (the p-value is just below threshold, p = .03). I
tried several rho values in robumeta, but it did not change anything.
Note that heterogeneity/inconsistency (assessed using tau² / I²) is very
low.
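Here is the rho sensitivity check I ran, as a minimal sketch (the effect
sizes, variances, and study IDs below are made up for illustration, not
our actual data):

```r
library(robumeta)

# hypothetical Fisher's z correlations (yi), sampling variances (vi),
# and study IDs -- illustration only
toy <- data.frame(
  yi    = c(0.20, 0.25, 0.18, 0.22, 0.30, 0.15),
  vi    = c(0.010, 0.012, 0.011, 0.009, 0.013, 0.010),
  Study = c(1, 1, 2, 2, 3, 3)
)

# refit across a grid of assumed within-study correlations rho;
# rho only enters through the weights, so the results move very little
pvals <- sapply(seq(0, 1, by = 0.2), function(r) {
  robu(yi ~ 1, data = toy, studynum = Study,
       var.eff.size = vi, rho = r, small = TRUE)$reg_table$prob
})
round(pvals, 3)
```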

Unfortunately, as this meta-analysis did not follow any systematic
review, we made the mistake of not preregistering it and analyzed the
data directly. We therefore have no record of which analysis was intended
as primary. Even though the effect sizes are similar across the 3
methods, readers will probably be sensitive to the p-values. The choice
of statistical analysis is thus critical.

I know that one-stage and two-stage approaches can differ because of the
model specification (Burke et al., 2017). Thus, I suspect that we missed
a specification that differs between robumeta on the one side and both
metafor and lme4 on the other.
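One candidate I noticed in the robumeta documentation (this is a guess,
not a confirmed diagnosis): robu() defaults to modelweights = "CORR"
(correlated-effects weighting), whereas the metafor model is
hierarchical; robumeta also offers hierarchical weights via modelweights
= "HIER". A sketch with made-up data:

```r
library(robumeta)

# hypothetical effect sizes (yi), variances (vi), study IDs -- illustration only
toy <- data.frame(
  yi    = c(0.20, 0.25, 0.18, 0.22, 0.30, 0.15),
  vi    = c(0.010, 0.012, 0.011, 0.009, 0.013, 0.010),
  Study = c(1, 1, 2, 2, 3, 3)
)

fit.corr <- robu(yi ~ 1, data = toy, studynum = Study,
                 var.eff.size = vi, modelweights = "CORR", small = TRUE)
fit.hier <- robu(yi ~ 1, data = toy, studynum = Study,
                 var.eff.size = vi, modelweights = "HIER", small = TRUE)

# compare intercept estimate, SE, and p across the two weighting schemes
rbind(CORR = fit.corr$reg_table[, c("b.r", "SE", "prob")],
      HIER = fit.hier$reg_table[, c("b.r", "SE", "prob")])
```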

I do not know if this mailing list is the right place to ask for help,
but if someone could give me feedback on the code used and on the best
approach to analyze our data, it would be greatly appreciated!



library(dplyr); library(tidyr); library(metafor); library(robumeta); library(lme4)

set.seed(123)  # make the simulated example reproducible

BMI <- rnorm(1000, 23, 4)
Depression <- rnorm(1000, 10, 2) + 0.1 * BMI
Study <- rep(1:6, times = c(100, 100, 100, 300, 200, 200))
Outcome <- c(rep(c("Questionnaire", "Interview"), times = c(100, 200)),       # studies 1-3
             rep(c("Questionnaire", "Interview", "MedicalFile"), each = 100), # study 4
             rep(c("Questionnaire", "Interview"), each = 100, times = 2))     # studies 5-6
df <- data.frame(BMI, Depression, Study, Outcome)

# see the structure of the data (rows per study x outcome cell)
df %>%
  group_by(Study, Outcome) %>%
  tally()

# helper (not shown in the original post): correlation and N per cell
corFUNC <- function(x) c(cor(x$BMI, x$Depression), nrow(x))

df.cor <- split(df, list(df$Study, df$Outcome), drop = TRUE)

df.meta.1 <- as.data.frame(do.call(rbind, lapply(df.cor, corFUNC)))

df.meta <- df.meta.1 %>%
  dplyr::mutate(Study.Outcome = row.names(df.meta.1)) %>%
  tidyr::separate(Study.Outcome, into = c("Study", "Outcome")) %>%
  dplyr::rename("Cor" = V1, "N" = V2) %>%
  dplyr::arrange(Study)

# Fisher's z effect sizes (yi) and sampling variances (vi)
df.meta <- escalc(measure = "ZCOR", ri = as.numeric(Cor),
                  ni = as.numeric(N), data = df.meta)

# Approach 1: two-stage robust variance estimation with robumeta
Approach1 <- robu(
    formula = yi ~ 1,
    var.eff.size = vi,
    studynum = Study,
    small = TRUE,
    data = df.meta)
Approach1

df$BMI <- as.numeric(df$BMI); df$Depression <- as.numeric(df$Depression)

# Approach 2: one-stage mixed model on the individual patient data
# (one possible random-effects specification)
Approach2 <- lmer(Depression ~ BMI + (1 | Study/Outcome), data = df)
summary(Approach2)


# Approach 3: three-level meta-analysis with cluster-robust SEs (metafor)
Approach3 <- rma.mv(yi, vi, random = ~ 1 | Study/Outcome, data = df.meta)
robust(Approach3, cluster = df.meta$Study)


More information about the R-sig-meta-analysis mailing list