[R-meta] How to interpret results when model-based standard errors and robust variance estimation do not corroborate each other

Akifumi Yanagisawa ayanagis at uwo.ca
Mon Aug 13 20:14:04 CEST 2018


Dear Dr. Pustejovsky,

Thank you very much for your quick and very informative reply. All of your comments are extremely helpful.

Please let me explain a little bit more about my analysis before answering your questions.
One of my moderator analyses examines whether specific types of intervention increase the effect of the intervention compared to an original intervention type. There are 8 levels of intervention type (original, type1, type2, type3…). Because the Wald_test function sometimes does not provide p-values, I am using a model selection approach to test whether this categorical variable is significant: I compare a model that includes only 2 covariates to another model that includes the 2 covariates plus intervention type. After confirming that the fuller model is an improvement based on AIC and a likelihood ratio test, I test whether each coefficient (i.e., each intervention type) is significantly different from the original intervention type (see the sketch below).
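For concreteness, here is a minimal sketch of this approach in R. The data frame dat and the column names yi, vi, study, es_id, cov1, cov2, and type are hypothetical placeholders, not my actual variables:

    library(metafor)

    # Reduced model: the two covariates only
    fit0 <- rma.mv(yi, vi, mods = ~ cov1 + cov2,
                   random = ~ 1 | study/es_id, data = dat, method = "ML")

    # Full model: the two covariates plus the 8-level intervention type
    fit1 <- rma.mv(yi, vi, mods = ~ cov1 + cov2 + type,
                   random = ~ 1 | study/es_id, data = dat, method = "ML")

    # AIC comparison and likelihood ratio test; ML rather than REML,
    # since the two models differ in their fixed effects
    anova(fit0, fit1)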

- How many studies are included?
25 studies (205 effect sizes)

- Are you fitting a meta-regression model with a single predictor or a joint model with multiple predictors?
A joint model with multiple predictors (with 2 other covariates)

- What are the characteristics of the predictors where there is a discrepancy between model-based and robust SEs (as in, do they vary within study, between study, or both, and how much of each type of variation is there)?
A categorical variable with 8 levels. This predictor varies both within and between studies.
The intervention type whose significance changes between the model-based method and RVE: 4 studies (12 effect sizes).
The original intervention type was reported in 17 studies (51 effect sizes).

- What are the degrees of freedom from the RVE tests where there is a discrepancy?
Wald_test(): df = -1.17 (no p-value was reported)
anova(): df = 7
One of the coefficients whose significance changes between model-based and RVE: df = 2.27
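For reference, these numbers come from calls along the following lines, continuing the hypothetical fit1 from the sketch above. (constrain_zero() is the interface in current versions of clubSandwich; older versions took a constraint matrix instead.)

    library(clubSandwich)

    # Satterthwaite degrees of freedom appear in the df column of the output
    # (in practice one might refit fit1 with REML before this step)
    coef_test(fit1, vcov = "CR2", cluster = dat$study)

    # Joint test that all seven non-reference intervention-type
    # coefficients are zero
    Wald_test(fit1, constraints = constrain_zero("type", reg_ex = TRUE),
              vcov = "CR2", cluster = dat$study)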

While writing the answers to your questions, I realized that the degrees of freedom are very small and that only 4 studies reported the specific intervention type I am focusing on here. So perhaps four studies are not enough to test this intervention type accurately?

Also, I am not feeding the model any information about the degree of correlation between effect sizes from the same study (I am aware that this is the ‘dirtiest way’, but none of the previous studies reported the correlations, so I cannot even guesstimate them). In light of your comment, it seems that my model-based SEs could be quite mis-specified.
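One workaround I understand is common, sketched below under the same hypothetical names: assume a value r for the within-study correlation, build a working covariance matrix with clubSandwich's impute_covariance_matrix(), and check that the robust results are stable across a range of plausible values. The r values here are purely illustrative:

    library(metafor)
    library(clubSandwich)

    for (r in c(0.2, 0.5, 0.8)) {
      # Working covariance matrix under an assumed within-study correlation r
      V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = r)
      fit_r <- rma.mv(yi, V, mods = ~ cov1 + cov2 + type,
                      random = ~ 1 | study/es_id, data = dat)
      # Conclusions should be checked for stability across the r values
      print(coef_test(fit_r, vcov = "CR2", cluster = dat$study))
    }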

Best regards,
Aki

On Aug 13, 2018, at 10:20 AM, James Pustejovsky <jepusto at gmail.com> wrote:

Aki,

These are very interesting questions. To answer them more fully, could you tell us a bit more about the analysis you are conducting? Specifically:
- How many studies are included?
- Are you fitting a meta-regression model with a single predictor or a joint model with multiple predictors?
- What are the characteristics of the predictors where there is a discrepancy between model-based and robust SEs (as in, do they vary within study, between study, or both, and how much of each type of variation is there)?
- What are the degrees of freedom from the RVE tests where there is a discrepancy?

I will comment a bit at a general level about your questions:

(1) When cluster robust variance estimation potentially increases Type II error rates. Compared to model-based inference, RVE necessarily increases Type II error rates. But this is because there is a fundamental trade-off between Type I and Type II errors, and so RVE has to increase Type II error in order to control Type I error at a specified level. In general, it doesn't really make sense to compare the Type II errors of model-based and robust inference except under conditions where you can ensure that *both* approaches control Type I error.

(2) How to interpret results when model-based standard errors and robust variance estimation do not corroborate each other. Generally, discrepancies arise because (a) the working model is mis-specified in some meaningful way or (b) the data do not contain sufficient information to get a good estimate of the robust SE.

Regarding (a), model mis-specification could arise for several reasons, including:
- that you've got an inaccurate assumption about the degree of correlation between ES from the same study
- that you've got some sort of heteroskedasticity across levels of the moderator (typical random effects meta-regression models make strong assumptions about homoskedasticity of the random effects, although these can be weakened, as we've discussed on the mailing list before; see the sketch after this list), or
- that you've got between-study heterogeneity in the moderator of interest.
If you can suss out how the model is mis-specified and fit a more appropriate model, its results will likely align with RVE.
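To illustrate the heteroskedasticity point, one way to weaken the homoskedasticity assumption in metafor is to allow a separate random-effects variance for each level of the moderator. A rough sketch with hypothetical variable names, simplifying away the effect-size level of the model:

    library(metafor)

    # struct = "DIAG" estimates a separate tau^2 for each level of the
    # moderator instead of one common variance component
    fit_het <- rma.mv(yi, vi, mods = ~ cov1 + cov2 + type,
                      random = ~ type | study, struct = "DIAG",
                      data = dat)
    fit_het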

Regarding (b), the degrees of freedom in RVE are diagnostic (and worth reporting in write-ups, incidentally) in that they tell you how much information is available to estimate the standard error for a given moderating effect. Very roughly speaking, you can interpret them as one less than the number of studies' worth of information that goes into estimating a given standard error. If this is quite small, then the implication is that there is not enough data available to support robust inferences regarding that moderator. If you really trust the model you've developed (e.g., you're willing to live with random-effects homoskedasticity assumptions), then go ahead and report model-based SEs and CIs. Short of that, the field needs to conduct more studies to investigate that moderator.

James

On Sun, Aug 12, 2018 at 11:25 PM Akifumi Yanagisawa <ayanagis at uwo.ca> wrote:
Hello everyone,

I would like to ask about meta-analysis with robust variance estimation. I am having difficulty interpreting the predictor variables that are significant by model-based standard errors but are not significant after applying robust variance estimation (RVE).

I am fitting a three-level meta-regression to my dataset with the metafor package (with study as the clustering variable). In order to deal with the dependency of effect sizes within each study (i.e., the same participants tested repeatedly), I am applying RVE with the clubSandwich package (using the coef_test function with the CR2 estimator); a sketch of this setup follows. [Thank you for the previous suggestions and guidance on robust variance estimation, Dr. Viechtbauer and Dr. Pustejovsky.]
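A minimal sketch of this setup, where the data frame dat and its columns yi, vi, study, es_id, and moderator are placeholder names rather than my actual data:

    library(metafor)
    library(clubSandwich)

    # Three-level model: effect sizes nested within studies
    res <- rma.mv(yi, vi, mods = ~ moderator,
                  random = ~ 1 | study/es_id, data = dat)

    summary(res)  # model-based standard errors

    # Cluster-robust (CR2) standard errors, clustering by study
    coef_test(res, vcov = "CR2", cluster = dat$study)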

When conducting moderator analysis, I realized that some of the moderator variables that are determined as ‘significant’ by model-based standard errors turn out to be ‘not significant’ after applying robust variance estimation.

I do understand the conservative nature of robust variance estimation; however, some of the non-significant variables are factors that have been strongly supported by a large body of previous literature and are actually observed in most of the individual studies. So, in order to interpret the results carefully, I would like to know in which situations we have to be careful about a potential Type II error when using cluster robust variance estimation (e.g., is it potentially difficult to test within-study variables even when combining RVE with multilevel meta-analysis?).

If a variable is not significant by RVE, does this just indicate that ‘the null hypothesis was not rejected’? Or can we interpret this discrepancy between the RVE and model-based approaches in a more informative manner? For example, would it be possible to offer concrete suggestions to other researchers about what to focus on when further testing the potential effect of moderator variables that are not significant in the current meta-analysis?

I would really appreciate it if someone would explain (1) when cluster robust variance estimation potentially increases Type II error rates, and (2) how to interpret results when model-based standard errors and robust variance estimation do not corroborate each other.

Thank you very much for your time.

Best regards,
Aki
_______________________________________________
R-sig-meta-analysis mailing list
R-sig-meta-analysis at r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis




