[R-meta] How to interpret when the results from model-based standard errors and robust variance estimation do not corroborate each other

Akifumi Yanagisawa ayanagis using uwo.ca
Tue Aug 14 05:57:03 CEST 2018


Dear Dr. Pustejovsky,

Yes, that is exactly the case; I am including both within-study comparisons and between-study comparisons. Now I understand that the discrepancy between the model-based results and RVE comes from the fact that I am not distinguishing within- and between-study comparisons.

As for my original model (i.e., the three-level meta-analysis with RVE), would it be appropriate to interpret it as indicating that the specific type of intervention is not significantly different from the original intervention when tested without distinguishing within- and between-study variance? Could I argue that, when comparing the average of this specific type of intervention to the average of the original intervention, there seems to be little difference (or that any difference may simply be undetectable given the small sample size)?

Also, thank you very much for your further suggestions on how to handle the within-study comparisons. I would like to try them. The second option sounds especially attractive, as I would not have to drop included studies. However, I am not quite following everything you said. I am sorry, but I am not familiar with the term “indicator variables”. Do you mean dummy-coded variables for each treatment type? Would it be possible to centre dummy-coded variables? Or are you suggesting that I compute the average of each indicator within each study and subtract it from the indicator for each intervention type?
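For example, is the idea something along the following lines? This is just my rough guess in R; the data frame and column names are placeholders for my actual data.

  # dummy-coded (indicator) variables for two of the non-original intervention types
  dat$typeA <- as.integer(dat$int_type == "typeA")
  dat$typeB <- as.integer(dat$int_type == "typeB")

  # centre each indicator within study by subtracting its study mean
  dat$typeA_c <- dat$typeA - ave(dat$typeA, dat$study)
  dat$typeB_c <- dat$typeB - ave(dat$typeB, dat$study)

  # ... and then use typeA_c, typeB_c, etc. as moderators in the meta-regression?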

Thank you very much for your time and support.

Best regards,
Aki



> Aki,
>
> Thanks for sharing this further information. Let me suggest one other
> thing. From your description, it sounds like you are interested in
> comparing the relative effectiveness of different approaches to
> modifying an "original" intervention--similar to a network
> meta-analysis. It also sounds like the regression model that you are
> fitting does not distinguish within-study comparisons between
> intervention types (e.g., a study that randomized participants to the
> original intervention, modification A, or modification B) from
> between-study comparisons (e.g., one study compared the original
> intervention to modification A, another study compared the original
> intervention to modification B).
>
> Fitting a model that compares intervention types without
> distinguishing within- from between- will result in estimates that
> pool across both types of variation. This might explain why your null
> findings are at odds with findings from previous single studies, which
> would only examine within-study variation. In a situation like this, I
> think a good thing to do would be to examine the within-study
> variation alone. Two somewhat different ways of doing this would be:
> 1. Calculate contrasts between pairs of intervention types within each
> study, i.e., calculate a new effect size for (B - original) - (A - original)
> for each study that includes the original intervention, modification A,
> and modification B. (And similarly for (C - original) - (A - original),
> (C - original) - (B - original), etc.) Then conduct univariate
> meta-analyses on each of these contrasts. The results will use only
> within-study variation in the intervention types. The downsides of this
> approach: you'll lose a lot of studies for each contrast of interest, and
> the summary meta-analysis for each contrast will be based on a potentially
> different set of studies. (A rough sketch of this approach appears after
> the list.)
> 2. Create indicator variables for each intervention type, then *center
> them by study* (i.e., subtract the study mean of each indicator). Run the
> meta-regression on the centered indicator variables. This will remove
> between-study variation in the intervention types. The downsides are
> similar to those of approach (1), but this approach lets you keep a
> slightly larger set of studies and conduct everything in one analysis,
> rather than having to run separate analyses for each contrast.
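>
> For concreteness, a rough sketch of approach (1) in R; the data frame and
> column names are hypothetical, and the contrast variance assumes a working
> correlation r between effect sizes from the same study:
>
>   library(metafor)
>
>   # 'wide' has one row per study that reports both modification A and
>   # modification B; g_A, v_A, g_B, v_B are their effect sizes and variances
>   r <- 0.6   # assumed within-study correlation (a guess, not from the data)
>
>   # (B - original) - (A - original) simplifies algebraically to g_B - g_A
>   wide$g_BA <- wide$g_B - wide$g_A
>   wide$v_BA <- wide$v_A + wide$v_B - 2 * r * sqrt(wide$v_A * wide$v_B)
>
>   # univariate random-effects meta-analysis of this within-study contrast
>   rma(yi = g_BA, vi = v_BA, data = wide)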
>
> Best, James


On Aug 13, 2018, at 2:14 PM, Akifumi Yanagisawa <ayanagis using uwo.ca> wrote:

Dear Dr. Pustejovsky,

Thank you very much for your quick and very informative reply. All of your comments are extremely helpful.

Please let me explain a little bit more about my analysis before answering your questions.
One of my moderator analyses examines whether specific types of intervention increase the effect of the intervention compared to an original intervention type. There are 8 levels of intervention type (original, type1, type2, type3…). As the Wald_test function sometimes does not provide p-values, I am using a model selection approach to test whether this categorical variable is significant, comparing a model that includes only 2 covariates to another model that includes the 2 covariates plus intervention type. After confirming that the model is improved, based on AIC and a likelihood ratio test, I test whether each coefficient (i.e., each intervention type) is significantly different from the original intervention type.
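In code, the comparison looks roughly like this (the moderator names are placeholders for my actual variables):

  library(metafor)

  # models fitted with ML rather than REML because their fixed effects differ
  fit0 <- rma.mv(yi, vi, mods = ~ cov1 + cov2,
                 random = ~ 1 | study/es_id, data = dat, method = "ML")
  fit1 <- rma.mv(yi, vi, mods = ~ cov1 + cov2 + int_type,
                 random = ~ 1 | study/es_id, data = dat, method = "ML")

  anova(fit1, fit0)   # likelihood ratio test plus AIC comparison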

- How many studies are included?
25 studies including 205 effect sizes

- Are you fitting a meta-regression model with a single predictor or a joint model with multiple predictors?
A joint model with multiple predictors (with 2 other covariates)

- What are the characteristics of the predictors where there is a discrepancy between model-based and robust SEs (as in, do they vary within study, between study, or both, and how much of each type of variation is there)?
A categorical variable with 8 levels. This predictor varies both within and between studies.
The intervention type whose significance changes between the model-based method and RVE: 4 studies, 12 effect sizes.
The original intervention type was reported in 17 studies (51 effect sizes).

- What are the degrees of freedom from the RVE tests where there is a discrepancy?
Wald_test(): d.f. = -1.17 (p-value was not reported)
anova(): df = 7
One of the coefficients that changes its significance between model-based and RVE: d.f. = 2.27

While writing the answers to your questions, I realized that the degrees of freedom are very small and that only 4 studies reported the specific intervention type that I am focusing on here. So four studies may not be enough to accurately test this intervention type?

Also, I am not feeding the model any information about the degree of correlation between effect sizes from the same study (I am aware that this is the ‘dirtiest way’, but none of the previous studies reported the degree of correlation, so I cannot even guesstimate it). Referring to your comment, it seems that my model-based SEs could be quite mis-specified.
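If I were to build in an assumed correlation, I gather the setup would be roughly as below; the value of r here is an arbitrary guess, not something reported in the primary studies:

  library(clubSandwich)
  library(metafor)

  # working covariance matrix assuming a constant correlation r within each study
  V_assumed <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.6)

  fit_V <- rma.mv(yi, V = V_assumed, mods = ~ cov1 + cov2 + int_type,
                  random = ~ 1 | study/es_id, data = dat)

As I understand it, the robust standard errors should still protect the inferences even if the assumed r is off.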

Best regards,
Aki

On Aug 13, 2018, at 10:20 AM, James Pustejovsky <jepusto using gmail.com> wrote:

Aki,

These are very interesting questions. To answer them more fully, could you tell us a bit more about the analysis you are conducting? Specifically:
- How many studies are included?
- Are you fitting a meta-regression model with a single predictor or a joint model with multiple predictors?
- What are the characteristics of the predictors where there is a discrepancy between model-based and robust SEs (as in, do they vary within study, between study, or both, and how much of each type of variation is there)?
- What are the degrees of freedom from the RVE tests where there is a discrepancy?

I will comment a bit at a general level about your questions:

(1) When cluster robust variance estimation potentially increases Type II error rates. Compared to model-based inference, RVE necessarily increases Type II error rates. But this is because there is a fundamental trade-off between Type I and Type II errors, and so RVE has to increase Type II error in order to control Type I error at a specified level. In general, it doesn't really make sense to compare Type II errors of model-based and robust inference unless it is under conditions where you can ensure that *both* approaches control Type I error.

(2) How to interpret when the results from model-based standard errors and robust variance estimation do not corroborate each other. Generally, discrepancies arise because (a) the working model is mis-specified in some meaningful way or (b) the data do not contain sufficient information to get a good estimate of the robust SE.

Regarding (a), model mis-specification could arise for several reasons, including:
- that you've got an inaccurate assumption about the degree of correlation between ES from the same study
- that you've got some sort of heteroskedasticity across levels of the moderator (typical random effects meta-regression models make strong assumptions about homoskedasticity of the random effects, although these can be weakened, as we've discussed on the mailing list before), or
- that you've got between-study heterogeneity in the moderator of interest.
If you can suss out how the model is mis-specified and fit a more appropriate model, its results will likely align with RVE.

Regarding (b), the degrees of freedom in RVE are diagnostic (and worth reporting in write-ups, incidentally) in that they tell you how much information is available to estimate the standard error for a given moderating effect. Very roughly speaking, you can interpret them as one less than the number of studies' worth of information that go into estimating a given standard error. If this is quite small, then the implication is that there is not enough data available to support robust inferences regarding that moderator. If you really trust the model you've developed (e.g., you're willing to live with random effects homoskedasticity assumptions), then go ahead and report model-based SEs and CIs. Short of that, the field needs to conduct more studies to investigate that moderator.

James

On Sun, Aug 12, 2018 at 11:25 PM Akifumi Yanagisawa <ayanagis using uwo.ca> wrote:
Hello everyone,

I would like to ask about meta-analysis with robust variance estimation. I am having difficulty interpreting the predictor variables that are significant by model-based standard errors but are not significant after applying robust variance estimation (RVE).

I am fitting a three-level meta-regression with the metafor package (with study as the clustering variable). In order to deal with the dependency of effect sizes within each study (i.e., the same participants tested repeatedly), I am applying RVE with the clubSandwich package (using the coef_test function with the CR2 estimator). [Thank you for the previous suggestions and guidance on robust variance estimation, Dr. Viechtbauer and Dr. Pustejovsky.]
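In code, the setup is essentially the following (the moderator names are placeholders for my actual variables):

  library(metafor)
  library(clubSandwich)

  # three-level model: effect sizes nested within studies
  fit <- rma.mv(yi, vi, mods = ~ cov1 + cov2 + int_type,
                random = ~ 1 | study/es_id, data = dat)

  # cluster-robust (CR2) standard errors with Satterthwaite-type tests,
  # clustering on study
  coef_test(fit, vcov = "CR2", cluster = dat$study)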

When conducting the moderator analyses, I realized that some of the moderator variables that are determined to be ‘significant’ by model-based standard errors turn out to be ‘not significant’ after applying robust variance estimation.

I do understand the conservative nature of robust variance estimation; however, some of the non-significant variables are factors that have been strongly supported by a large body of previous literature and have actually been observed in most of the individual studies. So, in order to interpret the results carefully, I would like to know in which situations we have to be careful about a potential Type II error when using cluster-robust variance estimation (e.g., is it potentially difficult to test within-study variables even when RVE is combined with a multilevel meta-analysis?).

If a variable is not significant by RVE, does this just indicate that the null hypothesis was not rejected? Or can we interpret this discrepancy between the RVE and model-based approaches in a more informative manner? For example, would it be possible to provide concrete suggestions for other researchers about what they should focus on when further testing the potential effects of moderator variables that are not significant in the current meta-analysis?

I would really appreciate it if someone would explain (1) when cluster robust variance estimation potentially increases Type II error rates, and (2) how to interpret when the results from model-based standard errors and robust variance estimation do not corroborate each other.

Thank you very much for your time.

Best regards,
Aki
_______________________________________________
R-sig-meta-analysis mailing list
R-sig-meta-analysis using r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis





