[R-meta] ci.ub and ci.lb Interpretation for Categorical Variables

Lukasz Stasielowicz lukasz.stasielowicz sending from uni-osnabrueck.de
Wed Mar 24 13:42:27 CET 2021


Dear Jake,

You have four methods, right? NIM, PR, R2I, and a fourth one (the reference category).
The intercept corresponds to the effect size of that unnamed reference method.

The coefficients for the other methods represent the difference between the reference method and the respective method. To illustrate, 0.61 + 0.26 = 0.87 would be the estimated effect size for NIM. Accordingly, the three confidence intervals also refer to these differences between effect sizes (e.g., 0.26), not to the actual effect sizes (e.g., 0.87). So the correct interpretation is your option (c), not (b).

If you want to get the confidence interval for the effect size of a particular method, you could refit the model with a different reference category, e.g., by using the relevel function within the rma call:

mods = ~ relevel(method, ref = "NIM")

In this case, the intercept corresponds to the effect size for NIM, and its confidence interval refers to that effect size.
You could then repeat this procedure with PR and R2I as the reference category.
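To make this concrete, here is a small sketch (the data frame dat and the columns yi, vi, and method are placeholders for your own objects; adapt the names to your data):

```r
library(metafor)

# Refit the model with NIM as the reference category.
# The intercept row of the output then gives the estimate,
# ci.lb, and ci.ub for the NIM effect size itself.
res.nim <- rma(yi, vi,
               mods = ~ relevel(factor(method), ref = "NIM"),
               data = dat)
summary(res.nim)
```

The other coefficients now represent differences from NIM, so only the intercept row should be read as an absolute effect size.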


Best,
Lukasz
--
Lukasz Stasielowicz
Osnabrück University
Institute for Psychology
Research methods, psychological assessment, and evaluation
Seminarstraße 20
49074 Osnabrück (Germany)


On 24.03.2021 at 12:00, r-sig-meta-analysis-request using r-project.org wrote:
> Send R-sig-meta-analysis mailing list submissions to
> 	r-sig-meta-analysis using r-project.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> 	https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
> or, via email, send a message with subject or body 'help' to
> 	r-sig-meta-analysis-request using r-project.org
>
> You can reach the person managing the list at
> 	r-sig-meta-analysis-owner using r-project.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of R-sig-meta-analysis digest..."
>
>
> Today's Topics:
>
>     1. ci.ub and ci.lb Interpretation for Categorical Variables
>        (Jake Downs)
>     2. Comparing Models Using Anova () Error (Jake Downs)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 23 Mar 2021 18:21:03 -0600
> From: Jake Downs <jake.downs using aggiemail.usu.edu>
> To: r-sig-meta-analysis using r-project.org
> Subject: [R-meta] ci.ub and ci.lb Interpretation for Categorical
> 	Variables
> Message-ID:
> 	<CA+GBsFtvk_vp7LurcX26zwEOujweJ6RdaAKZzr0izrGnwskYHw using mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> As a novice researcher, I learn so much through following this group.
> Thanks to everyone's contributions.
>
> This is almost a silly question, but I would rather double check here than
> be embarrassed somewhere else.
>
> How do you interpret the confidence interval output for categorical
> variables in metafor? See output below:
>
> ##            estimate    se   zval  pval  ci.lb  ci.ub
> ## intrcpt        0.61  0.29   2.09  0.04   0.04   1.18  *
> ## methodNIM      0.26  0.44   0.59  0.56  -0.61   1.12
> ## methodPR      -0.19  0.39  -0.48  0.63  -0.95   0.57
> ## methodR2I      0.11  0.42   0.25  0.80  -0.73   0.94
>
> Are the confidence intervals:
>
> in reference to the intercept?
>
> in reference to the categorical variable?
>
> true values?
>
>
> For example which is the correct interpretation for the 95% confidence
> interval for methodNIM?
>
>
> a) 0 - 1.73 (added/subtracted  from the intercept estimate)
>
> b) 0.26 - 1.99 (added/subtracted from the NIM estimate of b = 0.87)
>
> c) -0.61 - 1.12 (what is listed in the output)
>
> d)  Something else
>
>
> My guess is 'b', but I would appreciate someone double checking my work.
>
> Thank you in advance,
>
> Jake Downs
> PhD Student, Utah State University
>
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 23 Mar 2021 19:07:39 -0600
> From: Jake Downs <jake.downs using aggiemail.usu.edu>
> To: r-sig-meta-analysis using r-project.org
> Subject: [R-meta] Comparing Models Using Anova () Error
> Message-ID:
> 	<CA+GBsFubt=w2qLGpXQgE8pzfgzjqimshhPWkJma5EEtjb6q6Zw using mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Apparently I have two questions tonight.
>
> I keep receiving an error on two sets of models I'm trying to compare using
> LRT via Anova command. I am confused why these models won't compare when my
> others are.
>
> I'm conducting a 3 level analysis, and as part of my moderator analysis I
> am adding a single fixed variable to the original fit, and then comparing
> the two models with anova. The original models were fit using REML, but for
> comparison purposes I have refit with ML.
>
> There are two sets of models where I get the following error:
>
> Error in anova.rma(rq1.fit2.ml, rq2.age.ml) : Observed outcomes and/or
> sampling variances/covariances not equal in the full and reduced model.
>
> Error in anova.rma(rq1.fit2.ml, rq2.dose.h.ml) : Observed outcomes and/or
> sampling variances/covariances not equal in the full and reduced model.
>
> Why might this be the case? And how can I fix it?
>
> Best (again),
>
> Jake Downs
>
>
>
>
>
> ------------------------------
>
> End of R-sig-meta-analysis Digest, Vol 46, Issue 33
> ***************************************************



