[R-meta] Question on effect sizes
Lukasz Stasielowicz
lukasz.stasielowicz at uni-osnabrueck.de
Fri Jan 21 17:25:57 CET 2022
Hi,
a couple of ideas that may be obvious to you, but the provided
description is rather short, so I don't know whether you have already
thought about the following points:
1. Did you try to contact the authors of the studies? Maybe they will be
willing to provide the missing statistics or the data set. Willingness
obviously varies between researchers (and research areas), but it is
often worth the effort.
One could contact the corresponding author and ask for the statistics or
the data set (offering the choice can increase the success rate). If
you don't receive an answer within several days (e.g. one week), then
one can try to contact the other authors. Recently I used this strategy for
two different meta-analyses and approximately 80% - 90% of the research
teams wrote back. Obviously, not all of them could provide answers or
data (hard drive failure etc.) but approximately 30% - 50% of the
authors provided additional information.
2. If you have already explored the first strategy and the relevant
information is still missing, then one could try to reconstruct it. This
seems to be what you were referring to, but the description is rather
short, so I cannot infer what is meant by the pooled SD etc.
One could try to rearrange the formulas to compute the missing
information manually, but if there are two unknowns (e.g. both the SD
and the M for one group are missing) then this is not possible.
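To illustrate the rearranging idea: if a study reports a two-sample t statistic (or a standard error of the mean) together with the group sizes, the missing SD can be recovered algebraically. A minimal base-R sketch with purely hypothetical numbers:

```r
# Pooled SD recovered from a reported two-sample t statistic:
# t = (m1 - m2) / (sd_pooled * sqrt(1/n1 + 1/n2)), rearranged for sd_pooled
m1 <- 5.0; m2 <- 4.2   # reported group means
n1 <- 30;  n2 <- 30    # reported group sizes
tval <- 2.5            # reported t statistic
sd_pooled <- (m1 - m2) / (tval * sqrt(1/n1 + 1/n2))

# SD recovered from a reported standard error of the mean (se = sd / sqrt(n))
se <- 0.25; n <- 30
sd_from_se <- se * sqrt(n)
```

With two unknowns in one equation (e.g. both M and SD missing), no such rearrangement exists, which is where the guesstimates below come in.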
Nevertheless, one could try to make some guesstimates in order to impute
the missing data (e.g. are the SDs for both groups similar in the other
studies? If so, one could make a corresponding guesstimate for the
missing information).
One could even make several guesstimates and test these different
scenarios to assess the robustness of the findings. Another sensitivity
analysis would be to compare meta-analytic results based only on the
studies without missing information against the scenarios with guesstimates.
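Such a scenario comparison could be sketched with metafor along the following lines. The data and the imputation rule (mean of the SDs observed in the other studies) are hypothetical and only for illustration:

```r
library(metafor)

# Hypothetical two-group data; two studies are each missing one SD
dat <- data.frame(
  study = 1:4,
  m1 = c(5.1, 4.8, 5.5, 5.0), sd1 = c(1.2, 1.1, NA, 1.3), n1 = c(30, 25, 40, 35),
  m2 = c(4.2, 4.0, 4.6, 4.1), sd2 = c(1.3, 1.0, 1.2, NA), n2 = c(30, 25, 40, 35)
)

# Guesstimate a missing SD as the mean of the SDs observed in the other studies
fill_sd <- function(x) { x[is.na(x)] <- mean(x, na.rm = TRUE); x }

scenarios <- list(
  complete_cases = dat[complete.cases(dat), ],
  imputed        = transform(dat, sd1 = fill_sd(sd1), sd2 = fill_sd(sd2))
)

# Fit a random-effects model per scenario and compare the pooled estimates
fits <- lapply(scenarios, function(d) {
  es <- escalc(measure = "SMD", m1i = m1, sd1i = sd1, n1i = n1,
               m2i = m2, sd2i = sd2, n2i = n2, data = d)
  rma(yi, vi, data = es)
})
sapply(fits, function(f) c(estimate = coef(f), se = f$se))
```

If the pooled estimate and its standard error are similar across scenarios, the conclusions are robust to the guesstimates.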
3. It is probably obvious to you, but dropping the studies with missing
information is also a possibility. However, this could bias the results
(if the dropped studies differ systematically from the included studies).
Hope it helps!
Best wishes,
--
Lukasz Stasielowicz
Osnabrück University
Institute for Psychology
Research methods, psychological assessment, and evaluation
Seminarstraße 20
49074 Osnabrück (Germany)
Am 18.01.2022 um 12:00 schrieb r-sig-meta-analysis-request using r-project.org:
> Send R-sig-meta-analysis mailing list submissions to
> r-sig-meta-analysis using r-project.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
> or, via email, send a message with subject or body 'help' to
> r-sig-meta-analysis-request using r-project.org
>
> You can reach the person managing the list at
> r-sig-meta-analysis-owner using r-project.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of R-sig-meta-analysis digest..."
>
>
> Today's Topics:
>
> 1. Re: Bivariate generalized linear mixed model with {metafor}
> (Arthur Albuquerque)
> 2. Question on effect sizes (David Pedrosa)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 17 Jan 2022 23:53:15 -0300
> From: Arthur Albuquerque <arthurcsirio using gmail.com>
> To: "r-sig-meta-analysis using r-project.org"
> <r-sig-meta-analysis using r-project.org>, Michael Dewey
> <lists using dewey.myzen.co.uk>, "Viechtbauer, Wolfgang (SP)"
> <wolfgang.viechtbauer using maastrichtuniversity.nl>
> Subject: Re: [R-meta] Bivariate generalized linear mixed model with
> {metafor}
> Message-ID: <8132eb7d-78f5-48cf-a81c-535aba9618e1 using Spark>
> Content-Type: text/plain; charset="utf-8"
>
> Dear Wolfgang,
>
> We had this discussion back in October, so you might not remember. In brief, I wanted to fit a bivariate model and you pointed me towards Model 6 in your excellent article:
>
> Jackson, D., Law, M., Stijnen, T., Viechtbauer, W., & White, I. R. (2018). A comparison of seven random-effects models for meta-analyses that estimate the summary odds ratio. Statistics in Medicine, 37(7), 1059-1085. https://doi.org/10.1002/sim.7588
>
> In this article, you fitted the model using the command:
>
> lme4::glmer(cbind(event,n-event)~factor(treat)+(control+treat-1|study), data=thedata1, family=binomial(link="logit"))
>
> Today, I found a page on the metafor website (http://www.metafor-project.org/doku.php/analyses:vanhouwelingen2002) fitting the same Model 6 mentioned above. However, there you used metafor, not lme4 (of course), and the random-effects structure seems a little bit different:
>
> res <- rma.mv(yi, vi, mods = ~ group - 1, random = ~ group | trial, struct="UN", data=dat.long, method="ML")
>
> Thus, I would first like to confirm whether these are indeed the same model. If not, what are their differences, and what would be the major implications?
>
> Thank you very much,
>
> Arthur M. Albuquerque
>
> Medical student
> Universidade Federal do Rio de Janeiro, Brazil
>
> On Oct 18, 2021, 2:53 PM -0300, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer using maastrichtuniversity.nl>, wrote:
>> As far as I can tell, that seems to be Model 6: the "Van Houwelingen bivariate" model as discussed in our paper.
>>
>> Best,
>> Wolfgang
>>
>>> -----Original Message-----
>>> From: Arthur Albuquerque [mailto:arthurcsirio using gmail.com]
>>> Sent: Monday, 18 October, 2021 19:24
>>> To: r-sig-meta-analysis using r-project.org; Viechtbauer, Wolfgang (SP); Michael Dewey
>>> Subject: Re: [R-meta] Bivariate generalized linear mixed model with {metafor}
>>>
>>> Dear Michael,
>>>
>>> I’m sorry, my bad.
>>>
>>> It’s a binomial model with the logit link, in which the average baseline and
>>> treatment risks are treated as fixed effects. Moreover, there are two study-
>>> specific parameters (random-effects), and these are assumed to follow a bivariate
>>> normal distribution with covariance matrix “E”. This matrix includes the between-
>>> study variances for the baseline and treatment odds + the correlation between
>>> the baseline and treatment risks in the logit scale.
>>>
>>> The authors then explain how to estimate marginal and conditional effects from
>>> this model using formulas. I am also not sure how to estimate these using
>>> metafor.
>>>
>>> They suggest using this model “to include the baseline risk and report the
>>> variation in the effect measure with baseline risks in addition to the marginal
>>> effect, regardless of the measure of choice”.
>>>
>>> Sorry for the confusion, it’s my first time asking here and it is a quite
>>> complicated topic (at least for me).
>>>
>>> Best,
>>>
>>> Arthur M. Albuquerque
>>>
>>> Medical student
>>> Universidade Federal do Rio de Janeiro, Brazil
>>>
>>> On Oct 18, 2021, 2:10 PM -0300, Michael Dewey <lists using dewey.myzen.co.uk>, wrote:
>>>
>>> Dear Arthur
>>>
>>> You might get more helpful replies if you summarise the model for us
>>> rather than relying on someone here to do that for you.
>>>
>>> Michael
>>>
>>> On 18/10/2021 17:51, Arthur Albuquerque wrote:
>>>
>>> Dear Wolfgang,
>>>
>>> Thank you for the super quick reply! I wasn’t aware of that article, yet I
>>> believe it does not include the model I mentioned.
>>>
>>> The model is thoroughly described at the end of this article, section "Appendix
>>> B. The bivariate generalized linear mixed model
>>> (BGLMM)”: https://doi.org/10.1016/j.jclinepi.2021.08.004
>>>
>>> Best,
>>>
>>> Arthur M. Albuquerque
>>>
>>> Medical student
>>> Universidade Federal do Rio de Janeiro, Brazil
>>>
>>> On Oct 18, 2021, 1:31 PM -0300, Viechtbauer, Wolfgang (SP)
>>> <wolfgang.viechtbauer using maastrichtuniversity.nl>, wrote:
>>>
>>> Dear Arthur,
>>>
>>> rma() does not fit generalized linear mixed models -- rma.glmm() does. I don't
>>> have the time right now to dig into those papers to figure out what specific
>>> model they are suggesting. In this context, many different models have been
>>> suggested; see, for example:
>>>
>>> Jackson, D., Law, M., Stijnen, T., Viechtbauer, W., & White, I. R. (2018). A
>>> comparison of seven random-effects models for meta-analyses that estimate the
>>> summary odds ratio. Statistics in Medicine, 37(7), 1059-1085.
>>> https://doi.org/10.1002/sim.7588
>>>
>>> (and this is not even an exhaustive list). The paper also indicates how these
>>> models can be fitted, either with metafor::rma.glmm() or one can do this directly
>>> with lme4::glmer().
>>>
>>> Best,
>>> Wolfgang
>>>
>>> -----Original Message-----
>>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>>> Behalf Of Arthur Albuquerque
>>> Sent: Monday, 18 October, 2021 18:15
>>> To: r-sig-meta-analysis using r-project.org
>>> Subject: [R-meta] Bivariate generalized linear mixed model with {metafor}
>>>
>>> Hi all,
>>>
>>> I need some help to figure out how to fit a bivariate generalized linear mixed
>>> model using metafor.
>>>
>>> In the past year, the Journal of Clinical Epidemiology has posted several
>>> articles on a controversy between using risk ratio or odds ratio in meta-
>>> analyses. Summary of the controversy here:
>>>
>>> George A. Wells , Commentary on Controversy and Debate 4 paper series:
>>> Questionable utility of the relative risk in clinical research, Journal of
>>> Clinical Epidemiology (2021), doi: https://doi.org/10.1016/j.jclinepi.2021.09.016
>>>
>>> One of the articles (https://doi.org/10.1016/j.jclinepi.2021.08.004) suggested
>>> fitting a bivariate generalized linear mixed model (BGLMM), which "obtains
>>> effect estimates conditioning on baseline risks with the estimated model
>>> parameters, including the correlation parameter.”
>>>
>>> They fitted this model using the PROC NLMIXED command in SAS. I would like to fit
>>> this model using metafor, could anyone help me by sending the appropriate code of
>>> this model with metafor::rma()?
>>>
>>> Kind regards,
>>>
>>> Arthur M. Albuquerque
>>>
>>> Medical student
>>> Universidade Federal do Rio de Janeiro, Brazil
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 18 Jan 2022 08:56:00 +0100
> From: David Pedrosa <david.pedrosa using staff.uni-marburg.de>
> To: r-sig-meta-analysis using r-project.org
> Subject: [R-meta] Question on effect sizes
> Message-ID: <704e977a-ba10-4fed-bee9-8b4fcc37844e using Spark>
> Content-Type: text/plain; charset="utf-8"
>
> Dear list members,
> I currently have a comprehension question and would like to ask the list for an assessment. We are doing a meta-analysis in which outcomes are reported in different forms, which I am trying to combine using effect sizes. For the pre-post controlled tests, the data look something like this:
>
> +-------+-----------+----------+-----------+----------+
> | Study |          Pre         |         Post         |
> |   #   |   Mean    |    SD    |   Mean    |    SD    |
> +-------+-----------+----------+-----------+----------+
> |   1   | Mean_x_y1 | SD_x_y1  | Mean_x_y2 | SD_x_y2  |
> |   2   | Mean_x1   | SD_x1    | Mean_x2   | SD_x2    |
> |   2   | Mean_y1   | SD_y1    | Mean_y2   | SD_y2    |
> +-------+-----------+----------+-----------+----------+
> My question would be: is it reasonable to assume that the pooled SD of the second study can somehow be estimated (unfortunately, neither pretest data nor a correlation between x and y is available)?
>
> And the other question is a simple one for which I could not find a definite answer: how do I deal with studies reporting a mean change score, i.e. how do I standardize Mean_change_x_y and SD_change_x_y in the scenario above when I don't have a baseline score?
>
> Best wishes,
>
>
> ------------------------------
>
> End of R-sig-meta-analysis Digest, Vol 56, Issue 16
> ***************************************************