[R-meta] How to interpret when the results from model-based standard errors and robust variance estimation do not agree with each other

Akifumi Yanagisawa ayanagis at uwo.ca
Tue Aug 14 18:20:23 CEST 2018


Thank you very much for your swift reply in the early morning, Dr. Pustejovsky. 

I really appreciate your further explanation (and I am glad to hear that you are not calling my study dumb. ^^)
I had never thought of centring dummy variables! Recalling the group-mean centring technique I learned in a multilevel modelling class, it makes sense that this lets us focus on within-study differences. Thank you so much for such an amazing suggestion. I quickly tried it with my dataset and found that the results are fairly similar to those I obtained with the uncentred dummy variables. Furthermore, this time the results of the model-based method and RVE agree much more closely with each other.
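
In case it is useful to others reading the archive, here is roughly what I ran. This is only a sketch: the data frame and column names (dat, yi, vi, study, es_id, type) are simplified stand-ins for my actual dataset.

library(metafor)
library(clubSandwich)

# Dummy-code the treatment types (reference category = original intervention).
dat$typeA <- as.numeric(dat$type == "typeA")
dat$typeB <- as.numeric(dat$type == "typeB")

# Group-mean centre each dummy within study, so the coefficients
# reflect within-study contrasts with the original intervention.
dat$typeA_c <- with(dat, typeA - ave(typeA, study))
dat$typeB_c <- with(dat, typeB - ave(typeB, study))

# Three-level model: effect sizes (es_id) nested within studies.
res <- rma.mv(yi, vi, mods = ~ typeA_c + typeB_c,
              random = ~ 1 | study / es_id, data = dat)

# Model-based standard errors and tests ...
summary(res)

# ... and cluster-robust (RVE) tests, clustering at the study level.
coef_test(res, vcov = "CR2", cluster = dat$study)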

I also tried your first suggestion, calculating the differences between treatment types for each study [e.g., (typeA - original) - (typeB - original), and so on]. However, because some of the studies provided multiple effect sizes for each intervention type (e.g., scores measured with different test formats or at different time points), selecting a single effect size per treatment type per study proved quite difficult, and I would lose too many studies. I think I will stick with the centred indicator variable approach.
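
For anyone curious, the per-study contrast computation looked roughly like the following, under the simplifying assumption (which my data do not meet) of exactly one effect size per treatment type per study; the column names are again placeholders.

library(dplyr)
library(tidyr)

# Reshape to one row per study and one column per treatment type.
# This only works cleanly when each study contributes a single
# effect size per type.
study_contrasts <- dat %>%
  select(study, type, yi) %>%
  pivot_wider(names_from = type, values_from = yi) %>%
  mutate(dA = typeA - original,   # typeA vs. original
         dB = typeB - original)   # typeB vs. original

# Differences between treatment types, e.g. dA - dB, could then be
# meta-analyzed in a standard univariate model.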

Thank you again for sharing your knowledge; you have been extremely helpful. I now feel that I understand much better how to interpret the results from RVE. I will continue analyzing my dataset following your guidance.

Best regards,
Aki


> On Aug 14, 2018, at 10:13 AM, James Pustejovsky <jepusto at gmail.com> wrote:
> 
> Aki,
> 
> See below.
> 
> James
> 
> On Mon, Aug 13, 2018 at 10:57 PM Akifumi Yanagisawa <ayanagis at uwo.ca> wrote:
> 
> Dear Dr. Pustejovsky,
> 
> Yes, that is exactly the case; I am including both within-study and between-study comparisons. Now I understand that the discrepancy between the model-based method and RVE comes from the fact that I am not distinguishing within-study from between-study comparisons.
> 
> As to my original model (i.e., a three-level meta-analysis with RVE), would it be appropriate to interpret the model as indicating that the specific type of intervention is not significantly different from the original intervention when tested without distinguishing within- and between-study variance? Could I argue that, when comparing the average effect of this specific type of intervention to the average effect of the original intervention, there seems to be little difference (or that the difference cannot be detected due to the small sample size)?
> 
> JEP: Your parenthetical interpretation is the most accurate, I think: that differences between the average effects of the original intervention and variants of the intervention are not detectable. There might still be differences there, but the available data does not let you rule out the possibility of no differences.
>  
> 
> Also, thank you very much for your further suggestions on how to handle the within-study comparisons. I would like to try them. The second option sounds especially appealing, as I would not have to drop any included studies. However, I am not quite following everything you said. I am sorry, but I am not familiar with the term “indicator variables”. Do you mean dummy-coded variables for each treatment type?
> 
> JEP: Yes: indicator variable = dummy variable. I just didn't want you to get the impression that I was calling your study dumb. ;)
>  
> Would it be possible to centre a dummy-coded variable?
> 
> JEP: Yes. The dummy variables will then have values other than 0 or 1, but their interpretation is still the same---as a difference between the indicated category and a reference category. 
>  
> Or are you suggesting computing the average effect size for each study and subtracting it from the effect size for each intervention type?
> 
> JEP: This might work too, but I'm not sure.
>  
> 
> Thank you very much for your time and support. 
> 
> Best regards,
> Aki
