[R-meta] How to interpret when the results from model-based standard errors and robust variance estimation do not corroborate each other
James Pustejovsky
jepusto at gmail.com
Mon Aug 13 16:20:57 CEST 2018
Aki,
These are very interesting questions. To answer them more fully, could you
tell us a bit more about the analysis you are conducting? Specifically:
- How many studies are included?
- Are you fitting a meta-regression model with a single predictor or a
joint model with multiple predictors?
- What are the characteristics of the predictors where there is a
discrepancy between model-based and robust SEs (as in, do they vary within
study, between study, or both, and how much of each type of variation is
there)?
- What are the degrees of freedom from the RVE tests where there is a
discrepancy?
I will comment a bit at a general level about your questions:
(1) When cluster robust variance estimation potentially increases Type II
error rates. Compared to model-based inference, RVE necessarily increases
Type II error rates, but this is because of the fundamental trade-off
between Type I and Type II errors: RVE has to accept higher Type II error
in order to control Type I error at a specified level. In general, it
doesn't really make sense to compare the Type II error rates of model-based
and robust inference except under conditions where you can ensure that
*both* approaches control Type I error. The small simulation sketched below
illustrates the trade-off.
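To make that concrete, here is a minimal simulation sketch (the
data-generating process is invented purely for illustration): effect sizes
are correlated within studies, but the working model treats them as
independent. The model-based test then rejects a true null far more often
than 5%, while the CR2/Satterthwaite test stays near the nominal level, at
the cost of some power when the null is false.

library(metafor)
library(clubSandwich)

one_rep <- function(k = 20, n_es = 3, icc = 0.6, v = 0.05) {
  study <- rep(seq_len(k), each = n_es)
  # shared study-level effects induce within-study correlation
  u <- rep(rnorm(k, 0, sqrt(icc * v)), each = n_es)
  e <- rnorm(k * n_es, 0, sqrt((1 - icc) * v))
  yi <- u + e                   # true overall effect is zero
  vi <- rep(v, k * n_es)
  fit <- rma(yi, vi)            # working model ignores the clustering
  rve <- coef_test(fit, vcov = "CR2", cluster = study)
  c(model = fit$pval < 0.05, robust = rve$p_Satt < 0.05)
}

set.seed(20180813)
rowMeans(replicate(1000, one_rep()))  # rejection rates under the null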
(2) How to interpret when the results from model-based standard errors and
robust variance estimation do not corroborate each other. Generally,
discrepancies arise because (a) the working model is mis-specified in some
meaningful way or (b) the data do not contain sufficient information to get
a good estimate of the robust SE.
Regarding (a), model mis-specification could arise for several reasons,
including:
- that you've got an inaccurate assumption about the degree of correlation
between ES from the same study
- that you've got some sort of heteroskedasticity across levels of the
moderator (typical random effects meta-regression models make strong
assumptions about homoskedasticity of the random effects, although these
can be weakened, as we've discussed on the mailing list before), or
- that you've got between-study heterogeneity in the moderator of interest.
If you can suss out how the model is mis-specified and fit a more
appropriate model (as in the sketch below), its results will likely align
with those of RVE.
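For instance, with metafor and clubSandwich, the first two possibilities
might be addressed along these lines (a hedged sketch, not your actual
analysis: dat, yi, vi, study, and moderator are placeholder names, the
moderator is assumed to be categorical, and dat is assumed to be sorted
by study):

library(metafor)
library(clubSandwich)

# Build a working covariance matrix under an assumed within-study
# correlation (here r = 0.6), rather than treating effect sizes from
# the same study as independent:
V <- impute_covariance_matrix(dat$vi, cluster = dat$study, r = 0.6)

# Let the between-study variance differ across moderator levels,
# relaxing the usual homoskedasticity assumption:
res_flex <- rma.mv(yi, V, mods = ~ moderator,
                   random = ~ moderator | study, struct = "DIAG",
                   data = dat)

# If the more flexible model's SEs line up better with the CR2 SEs,
# that points toward which working assumption was off:
coef_test(res_flex, vcov = "CR2", cluster = dat$study)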
Regarding (b), the degrees of freedom in RVE are diagnostic (and worth
reporting in write-ups, incidentally) in that they tell you how much
information is available to estimate the standard error for a given
moderating effect. Very roughly speaking, you can interpret them as one
less than the number of studies' worth of information that goes into
estimating a given standard error. If the degrees of freedom are quite
small, then the
implication is that there is not enough data available to support robust
inferences regarding that moderator. If you really trust the model you've
developed (e.g., you're willing to live with random effects
homoskedasticity assumptions), then go ahead and report model-based SEs and
CIs. Short of that, then the field needs to conduct more studies to
investigate that moderator.
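In clubSandwich, those degrees of freedom are part of the coef_test()
output (continuing the sketch above, with res a fitted rma.mv model and
dat$study the clustering variable):

coef_test(res, vcov = "CR2", cluster = dat$study, test = "Satterthwaite")
# The df column gives the Satterthwaite degrees of freedom for each
# coefficient. Values well below the number of studies (commonly, below
# about 4 or 5) signal that robust inference about that moderator rests
# on very little information.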
James
On Sun, Aug 12, 2018 at 11:25 PM Akifumi Yanagisawa <ayanagis at uwo.ca> wrote:
> Hello everyone,
>
> I would like to ask about meta-analysis with robust variance estimation. I
> am having difficulty interpreting predictor variables that are significant
> according to model-based standard errors but are not significant after
> applying robust variance estimation (RVE).
>
> I am fitting a three-level meta-regression to my dataset with the metafor
> package (with study as the clustering variable). In order to deal with the
> dependency of effect sizes within each study (i.e., the same participants
> tested repeatedly), I am applying RVE with the clubSandwich package (using
> the coef_test function with the CR2 estimator). [Thank you for the previous
> suggestions and guidance on robust variance estimation, Dr. Viechtbauer and
> Dr. Pustejovsky.]
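>
> In code form, the setup is roughly the following (a sketch; the variable
> names are placeholders for my actual data):
>
> library(metafor)
> library(clubSandwich)
>
> res <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)
> coef_test(res, vcov = "CR2", cluster = dat$study)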
>
> When conducting moderator analyses, I noticed that some of the moderator
> variables that are ‘significant’ according to model-based standard errors
> turn out to be ‘not significant’ after applying robust variance
> estimation.
>
> I do understand the conservative nature of robust variance estimation;
> however, some of the non-significant variables are factors that have been
> strongly supported by a large body of previous literature and are actually
> observed in most of the individual studies. So, in order to interpret the
> results carefully, I would like to know in which situations we have to be
> careful about a potential Type II error when using cluster robust variance
> estimation (e.g., is it potentially difficult to test within-study
> variables even when RVE is combined with a multilevel meta-analysis?).
>
> If a variable is not significant under RVE, does this just indicate that
> ‘the null hypothesis’ was not rejected? Or can we interpret this
> discrepancy between the RVE and model-based approaches in a more
> informative manner? For example, would it be possible to offer concrete
> suggestions to other researchers about what to focus on when further
> testing the potential effects of moderator variables that are not
> significant in the current meta-analysis?
>
> I would really appreciate it if someone would explain (1) when cluster
> robust variance estimation potentially increases Type II error rates, and
> (2) how to interpret when the results from model-based standard errors and
> robust variance estimation do not corroborate each other.
>
> Thank you very much for your time.
>
> Best regards,
> Aki
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>