[R-meta] Covariance-variance matrix when studies share multiple treatment x control comparison

James Pustejovsky jepusto at gmail.com
Thu Sep 26 23:39:36 CEST 2019


JU,

To your question about how to calculate the measure of precision: no,
there's no need to create a matrix. Just a vector with the measure of
precision, because it is that vector that will be used as a predictor in the
meta-regression model.
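
As a minimal sketch (with hypothetical column names, and assuming 'V' is the
variance-covariance matrix you have already constructed for the dependent
estimates), that could look something like:

library(metafor)

# the precision measure is just another column in the data, not a matrix
dat$prec <- sqrt(1 / dat$n1i + 1 / dat$n2i)

# precision enters the multilevel/multivariate model as an ordinary moderator;
# the test of its coefficient is the Egger-type test for small-study effects
res <- rma.mv(yi, V, mods = ~ prec, random = ~ 1 | study / esid, data = dat)
summary(res)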

James

On Thu, Sep 26, 2019 at 11:05 AM Ju Lee <juhyung2 using stanford.edu> wrote:

> Dear Wolfgang, James
>
> Thank you both for your considerate suggestions.
>
> First of all, I would like to clarify that I will be sending out another
> thread related to Wolfgang's comment about adding study ID to the random
> factors, as it has caused some major issues with my current analysis and I
> would really appreciate further feedback on this matter (in my very next e-mail).
>
> Related to James's suggestion, I will follow up on your newly published
> paper and apply this to my code. Since I am using a variance-covariance
> matrix instead of simple variances (to account for shared control/treatment
> groups) and trying to incorporate this into the modified Egger's test, I am
> wondering whether this means I should be creating a diagonal matrix made up
> of sqrt(1 / n1 + 1 / n2) values for all inter-dependent effect sizes?
>
> Best regards,
> JU
> ------------------------------
> *From:* James Pustejovsky <jepusto using gmail.com>
> *Sent:* Thursday, September 26, 2019 8:26 AM
> *To:* Viechtbauer, Wolfgang (SP) <
> wolfgang.viechtbauer using maastrichtuniversity.nl>
> *Cc:* Ju Lee <juhyung2 using stanford.edu>; r-sig-meta-analysis using r-project.org <
> r-sig-meta-analysis using r-project.org>
> *Subject:* Re: Covariance-variance matrix when studies share multiple
> treatment x control comparison
>
> Ju,
>
> Following up on Wolfgang's comment: yes, adding a measure of precision as
> a predictor in the multi-level/multi-variate meta-regression model should
> work. Dr. Belen Fernandez-Castilla has a recent paper that reports a
> simulation study evaluating this approach. See
>
> Fernández-Castilla, B., Declercq, L., Jamshidi, L., Beretvas, S. N.,
> Onghena, P., & Van den Noortgate, W. (2019). Detecting selection bias in
> meta-analyses with multiple outcomes: A simulation study. The Journal of
> Experimental Education, 1–20.
>
> However, for standardized mean differences based on simple between-group
> comparisons, it is better to use sqrt(1 / n1 + 1 / n2) as the measure of
> precision, rather than using the usual SE of d. The reason is that the SE
> of d is naturally correlated with d even in the absence of selective
> reporting, and so the type I error rate of Egger's regression test is
> artificially inflated if the SE is used as the predictor. Using the
> modified predictor as given above fixes this issue and yields a correctly
> calibrated test. For all the gory details, see Pustejovsky & Rodgers (2019;
> https://doi.org/10.1002/jrsm.1332).
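>
> As a rough sketch (hypothetical column names), the two candidate predictors
> would be computed as follows; only the second avoids the built-in correlation
> with d. In the dependent-effects case, the same moderator would simply go
> into the rma.mv() model along with the V matrix.
>
> # usual Egger predictor: the SE of d, which is correlated with d itself
> dat$sei <- sqrt(dat$vi)
>
> # modified predictor from Pustejovsky & Rodgers (2019), based on sample sizes only
> dat$prec <- sqrt(1 / dat$n1i + 1 / dat$n2i)
>
> res_se  <- rma(yi, vi, mods = ~ sei,  data = dat)  # inflated type I error
> res_mod <- rma(yi, vi, mods = ~ prec, data = dat)  # correctly calibrated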
>
> It's also possible to combine all of the above with robust variance
> estimation, or to use a simplified model plus robust variance estimation to
> account for dependency between effect sizes from the same study. Melissa
> Rodgers and I have a working paper showing that this approach works well
> for meta-analyses that include studies with multiple correlated outcomes.
> We will be posting a pre-print of the paper soon, and I can share it on the
> listserv when it's available.
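>
> As an illustration only (assuming a fitted rma.mv() model, say 'res', and a
> study identifier for clustering; the exact workflow in our paper may differ):
>
> # cluster-robust (sandwich) standard errors, clustering by study
> robust(res, cluster = dat$study)
>
> # or, with small-sample (CR2) corrections from the clubSandwich package
> library(clubSandwich)
> coef_test(res, vcov = "CR2", cluster = dat$study)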
>
> James
>
> On Thu, Sep 26, 2019 at 3:12 AM Viechtbauer, Wolfgang (SP) <
> wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>
> Hi Ju,
>
> Glad to hear that you are making progress. Construction of the V matrix
> can be a rather tedious process and often requires quite a bit of manual
> work.
>
> I have little interest in generalizing fsn() for cases where V is not
> diagonal, because fsn() is mostly of interest for historical reasons; it is
> not something I would generally use in applied work.
>
> However, the 'Egger regression' test can be easily generalized to rma.mv()
> models. Simply include a measure of the precision (e.g., the standard
> error) of the estimates in your model as a predictor/moderator and then you
> have essentially a multilevel/multivariate version thereof (you would then
> look at the test of the coefficient for the measure of precision, not the
> intercept).
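>
> For instance, a sketch with hypothetical column names ('V' being your
> variance-covariance matrix):
>
> dat$sei <- sqrt(dat$vi)  # standard error as the measure of precision
> res <- rma.mv(yi, V, mods = ~ sei, random = ~ 1 | study / esid, data = dat)
> res  # look at the coefficient (and test) for 'sei', not the intercept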
>
> I also recently heard a talk by Melissa Rodgers and James Pustejovsky (who
> is a frequent contributor to this mailing list) on some work in this area.
> Maybe he can chime in here.
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: Ju Lee [mailto:juhyung2 using stanford.edu]
> Sent: Thursday, 26 September, 2019 8:13
> To: Viechtbauer, Wolfgang (SP); r-sig-meta-analysis using r-project.org
> Subject: Re: Covariance-variance matrix when studies share multiple
> treatment x control comparison
>
> Dear Wolfgang,
>
> I deeply appreciate your time looking into this issue, and this has been
> immensely helpful.
> I was able to incorporate all possible inter-dependence among effect sizes
> by adding different layers of non-independence to our dataframe.
>
> I manually calculated Hedges' d based on Hedges and Olkin (1985), and it
> generates exactly the same values as Hedges' g from escalc() with measure =
> "SMD". So hopefully I am doing everything right using the equation we've
> discussed earlier.
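>
> For example, a sketch of that check (with hypothetical summary-statistic
> columns):
>
> library(metafor)
>
> # escalc() version: bias-corrected SMD, i.e., Hedges' g
> dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
>               m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)
>
> # manual version based on Hedges & Olkin (1985)
> sp <- sqrt(((dat$n1i - 1) * dat$sd1i^2 + (dat$n2i - 1) * dat$sd2i^2) /
>            (dat$n1i + dat$n2i - 2))
> J  <- 1 - 3 / (4 * (dat$n1i + dat$n2i - 2) - 1)   # small-sample correction
> g  <- J * (dat$m1i - dat$m2i) / sp                # should match dat$yi
>                                                   # (up to the approximation in J)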
>
> I have also been wondering whether it is possible to account for this
> variance-covariance structure that I've constructed when running
> publication bias analyses, for example when using the fsn() function or a
> modified Egger's regression test (looking at the intercept term of a
> residual ~ precision meta-regression using rma.mv()). I have had no luck so
> far finding information on this, and I would appreciate any suggestions you
> may have related to this.
>
> Thank you for all of your valuable help!
> Best regards,
> JU
>
>



