[R-meta] Question about a meta-analysis of 2 studies

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Nov 14 15:33:09 CET 2024


Dear Adelina,

I read IntHout et al. (2014) a while ago, but never noticed this part in Appendix 3. To quote:

"The usual approach to perform an HKSJ analysis with metafor is based on study effects combined with fixed effects weights or standard errors."

This doesn't make sense. If you use the HKSJ method in the context of a fixed-effects model, metafor will actually issue a warning about this (as the authors acknowledge a few sentences later).

"In our examples the HKSJ method must be applied on random effects weights instead of fixed effects weights. This can be done by choosing a fixed effects analysis (method="FE") in combination with the HKSJ method."

This is all completely backwards. As the name implies, method="FE" uses a fixed-effects model and hence NOT random-effects model weights.

"This will result in warnings, because in general the HKSJ adjustment is not meant to be used in combination with a fixed effects analysis. In this case, the warnings can be neglected."

So, I don't think you should do what is suggested here.
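
For reference, the standard way to apply the HKSJ adjustment in metafor is in the context of a random-effects model. A minimal sketch, using the df_inci data frame with ln_inci and ln_SE as set up in your code further down in this thread:

library(metafor)

# random-effects model with the Knapp-Hartung adjustment (its intended use);
# test="knha" is the current syntax, knha=TRUE is the older equivalent
res <- rma(yi = ln_inci, sei = ln_SE, data = df_inci, method = "REML", test = "knha")
summary(res)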

Leaving this issue aside, as Michael already pointed out, the HKSJ will give extremely wide CIs by construction when k=2.
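
To see why (a minimal illustration): with k = 2 studies, the HKSJ interval is based on a t-distribution with k - 1 = 1 degree of freedom, so the critical value is about 12.7 instead of the usual 1.96:

qt(0.975, df = 1)   # ~12.71, the multiplier used by HKSJ when k = 2
qnorm(0.975)        # ~1.96, the usual normal-based multiplier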

You might find these articles of interest:

Röver, C., Knapp, G., & Friede, T. (2015). Hartung-Knapp-Sidik-Jonkman approach and its modification for random-effects meta-analysis with few studies. BMC Medical Research Methodology, 15, 99. https://doi.org/10.1186/s12874-015-0091-1

Gonnermann, A., Framke, T., Grosshennig, A., & Koch, A. (2015). No solution yet for combining two independent studies in the presence of heterogeneity. Statistics in Medicine, 34(16), 2476-2480. https://doi.org/10.1002/sim.6473

Bender, R., Friede, T., Koch, A., Kuss, O., Schlattmann, P., Schwarzer, G., & Skipka, G. (2018). Methods for evidence synthesis in the case of very few studies. Research Synthesis Methods, 9(3), 382-392. https://doi.org/10.1002/jrsm.1297

My take on this: either use a fixed-effects model (without the HKSJ method) and make a conditional inference (see: https://wviechtb.github.io/metafor/reference/misc-models.html), or consider a Bayesian model, where a prior on tau^2 can help to stabilize things. The bayesmeta package is a nice choice for the latter.
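
A rough sketch of both options, again using the df_inci data frame from your code below (the half-normal prior with scale 0.5 is only an illustrative choice, not a recommendation):

library(metafor)
library(bayesmeta)

# option 1: equal-effects model (method="EE", equivalent to the older "FE"), conditional inference
fe <- rma(yi = ln_inci, sei = ln_SE, data = df_inci, method = "EE")
summary(fe)

# option 2: Bayesian random-effects model with a weakly informative half-normal prior on tau
bm <- bayesmeta(y = df_inci$ln_inci, sigma = df_inci$ln_SE,
                labels = df_inci$idd_count,
                tau.prior = function(t) dhalfnormal(t, scale = 0.5))
bm
# results are on the log scale; back-transform the pooled estimate with exp()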

Best,
Wolfgang

> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On Behalf
> Of Adelina Artenie via R-sig-meta-analysis
> Sent: Thursday, November 14, 2024 14:57
> To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis using r-
> project.org>
> Cc: Adelina Artenie <adelina.artenie using bristol.ac.uk>
> Subject: Re: [R-meta] Question about a meta-analysis of 2 studies
>
> Hi Michael,
>
> Thanks for the reply. In my code, I referenced the paper which recommends this
> (counter-intuitive) approach: appendix 3 (I can't seem to be able to attach
> here).
> There are different ways of implementing the same method. For example, we could
> also do:
> meta_inci <- metagen(TE = ln_inci,
>                          lower = ln_LB,
>                          upper = ln_UB,
>                          studlab = idd_count,
>                          data = df_inci,
>                          sm = "IRLN",
>                          method.tau ="SJ" ,
>                          comb.fixed = FALSE,
>                          comb.random = TRUE, backtransf = TRUE,
>                            hakn = TRUE,
>                          text.random = "Overall")
> summary(meta_inci)
>
> Both approaches produce the same results, so it does not seem to be a coding
> problem.
> Agree the variance is expected to be large but the estimated 95%CI are
> unrealistic (0 - >1000).
> Adelina
>
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> on behalf
> of Michael Dewey via R-sig-meta-analysis <r-sig-meta-analysis using r-project.org>
> Date: Thursday, 14 November 2024 at 13:42
> To: Adelina Artenie via R-sig-meta-analysis <r-sig-meta-analysis using r-project.org>
> Cc: Michael Dewey <lists using dewey.myzen.co.uk>
> Subject: Re: [R-meta] Question about a meta-analysis of 2 studies
> Dear Adelina
>
> You state that you are interested in the HKSJ method but I do not see an
> example of that in your code. You are also doing something which metafor
> regards as incompatible (knha with FE).
>
> But the main problem is that you are trying to estimate a variance
> (tau^2) based on only two observations. This is in general very imprecise.
>
> If you can clarify what your underlying scientific goal is it may be
> that somebody, quite likely not me, can offer a way forward.
>
> Michael
>
> On 14/11/2024 11:11, Adelina Artenie via R-sig-meta-analysis wrote:
> > Hello,
> >
> > The HKSJ <https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-
> 2288-14-25#MOESM1> approach has been proposed for use when the number of
> studies to pool is small, instead of more traditional meta-analysis methods.
> > I have to pool several estimates in cases where there are only 2 estimates,
> often quite different from each other and with varying levels of precision.
> > In pretty much all cases, the HKSJ method seems to break down, leading to
> unrealistic 95%CI (this seems to improve as soon as I have at least 3 estimates
> and gets better with more estimates).
> > Conceptually, I get it: we have only 2 studies and the estimates are very
> different, so a meta-analysis is not ideal. But if I still want to do it, do you
> know of other methods that could better account for heterogeneity than
> traditional methods, even if imperfect?
> > I included some example code below.
> > Thanks
> > Adelina
> >
> >
> > library(meta)
> > library(metafor)
> >
> > idd_count <- c(1, 2)
> >
> > inci <- c(11.1849, 1.484536956)
> > CI95_LB <- c(6.8522, 1.042335486)
> > CI95_UB <- c(18.2571, 1.985159973)
> > df_inci <- data.frame(idd_count, inci, CI95_LB, CI95_UB)
> >
> > # DL estimator for tau
> > df_inci$ln_inci <- log(df_inci$inci)
> > df_inci$ln_LB <-log(df_inci$CI95_LB)
> > df_inci$ln_UB <-log(df_inci$CI95_UB)
> >
> > meta_inci <- metagen(TE = ln_inci,
> >                           lower = ln_LB,
> >                           upper = ln_UB,
> >                           studlab = idd_count,
> >                           data = df_inci,
> >                           sm = "IRLN",
> >                           method.tau = "DL", # switching between estimators
> (eg, REML, PM) gives the same results
> >                           comb.fixed = FALSE,
> >                           comb.random = TRUE, backtransf = TRUE,
> >                           text.random = "Overall")
> > summary(meta_inci)
> >
> >
> > # HKSJ approach:
> https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-14-
> 25#MOESM1
> >
> > df_inci$ln_SE <- (df_inci$ln_inci - df_inci$ln_LB) /  1.96
> >
> > meta_modellll <- rma.uni(yi = ln_inci,
> >                           sei = ln_SE,
> >                           method = "FE",  # intentionally set as FE, following
> recommendations by Inthout et al 2014
> >                           knha=TRUE,
> >                           data = df_inci)
> > summary(meta_modellll)
> >
> > point_estimate <- exp(meta_modellll$b)
> > lower_bound <- exp(meta_modellll$ci.lb)
> > upper_bound <- exp(meta_modellll$ci.ub)
> > cat("Point Estimate:", point_estimate, "\n")
> > cat("95% CI Lower Bound:", lower_bound, "\n")
> > cat("95% CI Upper Bound:", upper_bound, "\n")

