[R-sig-ME] MCMCglmm multivariate multilevel meta-analysis

Nico N nic43614 at gmail.com
Fri Apr 12 05:17:20 CEST 2013


Hi Jarrod,

Thank you very much for your help - and also for writing and sharing
this highly flexible and powerful package (OMG - it's so fast in
comparison to WinBUGS)!

The code with the suggested matrix for the variances works!

You are right, in my case, I do not have information on covariances
and only provide the known variances to the meta-analysis.
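
For readers following the thread, the final working call presumably looks
something like the sketch below, which combines the model1 formula from the
original post (quoted further down) with the mev vector Jarrod suggests.
Column names (measure1, measure2, var1, var2, predictor1, predictor2) and the
prior object are taken from that post, and the stacking order of the known
variances is assumed to follow the trait order.

model1 <- MCMCglmm(cbind(measure1, measure2) ~ -1 + trait + predictor1 + predictor2,
                   random = ~ us(trait):student + us(trait):school + idh(trait):country,
                   rcov   = ~ idh(trait):units,
                   mev    = c(dataset$var1, dataset$var2),  # known sampling variances, stacked by trait
                   prior  = prior,                          # prior as defined in the original post
                   data   = dataset,
                   family = c("gaussian", "gaussian"))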

Thanks too for the tips regarding the separate predictors for the
traits (this was very helpful information).
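
As a sketch only, Jarrod's tip (quoted below) translates into fixed-effect
formulas along these lines, using the predictor and measure names from the
example data:

# separate predictor effects for each trait (measure)
fixed_both   <- cbind(measure1, measure2) ~ -1 + trait + trait:predictor1 + trait:predictor2

# an effect of predictor1 for trait 1 only
fixed_trait1 <- cbind(measure1, measure2) ~ -1 + trait + at.level(trait, 1):predictor1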

Cheers
Nico



On Thu, Apr 11, 2013 at 3:59 PM, Jarrod Hadfield <j.hadfield at ed.ac.uk> wrote:
> Hi Nico,
>
> You should just be able to pass c(dataset$var1, dataset$var2) to mev. If you
> have sampling covariances between the measures then it's a bit more
> complicated, but I think you don't?
>
> There's also an in-built function for creating the dummy variables:
> at.level(trait,1):predictor1 fits an effect of predictor1 for trait 1; if
> you want to fit separate predictor1 effects for both traits you can just use
> trait:predictor1.
>
> Cheers,
>
> Jarrod
>
>
>
>
> Quoting Nico N <nic43614 at gmail.com> on Wed, 10 Apr 2013 21:47:09 +1000:
>
>> Hey everyone,
>>
>> Presently, I am trying to conduct a multivariate multilevel
>> meta-analysis using the MCMCglmm package.
>>
>> However, I encountered the following problem and I was hoping to
>> obtain some advice.
>>
>> The following example describes my situation. The dependent
>> variable is some performance measure, which is captured in two
>> different ways (two different measures for the same construct).
>>
>> Hence, I have two measures for each student, similar to the
>> multi-response model examples in the MCMCglmm tutorial notes.
>>
>> Moreover, I have a big data set with students nested in schools, and
>> schools nested in countries (3 additional levels).
>>
>> In sum, I require four levels (sorry, social science jargon, I think):
>> level 1 = measures, level 2 = students, level 3 = schools, and
>> level 4 = countries.
>>
>> I would only like to have random intercepts for each level above the
>> measure level.
>>
>>
>> Now I think there may be two ways. Originally, the data is in the
>> "wide" format. That is:
>>
>> Column headers are the following: student, school, country, measure1,
>> measure2, var1, var2, predictor1, predictor2, ... (for simplicity,
>> let's assume only two predictors, both entering as fixed effects
>> only).
>>
>>
>> Var1 and var2 are the known variances for the two measures of
>> interest. The variances need to be provided at the first level of the
>> model, while the estimated residual covariance matrix is fixed to 1.
>> As far as I understood, the "mev" argument does this.
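
A minimal univariate sketch of the kind of set-up described here, with
hypothetical column names (yi = effect size, vi = known sampling variance,
study = grouping identifier): the known variances go in through mev and the
residual variance is fixed via the prior.

prior_uni <- list(R = list(V = 1, fix = 1),
                  G = list(G1 = list(V = 1, nu = 0.002)))

m_uni <- MCMCglmm(yi ~ 1,
                  random = ~ study,
                  mev    = dataset$vi,  # known sampling variance for each effect size
                  prior  = prior_uni,
                  data   = dataset)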
>>
>> Now, given the "wide" format, I noticed that MCMCglmm seems to have a
>> convenient way to estimate measure-specific intercepts through the
>> reserved "trait" variable (i.e., without changing the data).
>>
>> If I am not mistaken, the code would be as follows.
>>
>>
>> prior <- list(R = list(V = diag(2), nu = 2, fix = 1),
>>               G = list(G1 = list(V = diag(2), nu = 2),
>>                        G2 = list(V = diag(2), nu = 2),
>>                        G3 = list(V = diag(2), nu = 2)))
>>
>> model1 <- MCMCglmm(cbind(measure1, measure2) ~ -1 + trait +
>>                        predictor1 +
>>                        predictor2,
>>                    random = ~ us(trait):student +
>>                        us(trait):school +
>>                        idh(trait):country,
>>                    rcov = ~ idh(trait):units,
>>                    prior = prior,
>>                    data = dataset,
>>                    mev = ?????,
>>                    family = c("gaussian", "gaussian"))
>>
>>
>> However, I was wondering how I can provide the variances to this model
>> with two measures. In my case, I would need a 2xn matrix and I am not
>> sure whether the "mev" argument can handle this. Has anyone ever tried
>> something like this?
>>
>> I guess there would be an alternative in case "mev" must be a 1xn
>> vector: I could create the stacked (or long) data myself before
>> running the model. Then, as Hox (2010) recommends, one can create
>> dummy variables indicating which measure the dependent variable
>> column (DV) refers to.
>>
>> In my case: MD1 for measure 1 and MD2 for measure 2. Then, each
>> predictor would have to be multiplied by these two dummies, so that
>> the data look like the following table (pred = predictor):
>>
>> DV    measure  var  MD1  MD2  student  school  country  pred1  pred2  pred1xMD1  pred1xMD2  pred2xMD1  pred2xMD2
>> 3.4   1        0.2  1    0    1        1       1        2      1.3    2          0          1.3        0
>> 5.6   2        0.4  0    1    1        1       1        3      1.4    0          3          0          1.4
>> 6.7   1        0.5  1    0    2        1       1        4      1.4    4          0          1.4        0
>> 4.5   2        0.3  0    1    2        1       1        4      1      0          4          0          1
>> 5.5   1        0.5  1    0    3        1       1        2      1.9    2          0          1.9        0
>> 4.5   2        0.7  0    1    3        1       1        4      1.6    0          4          0          1.6
>> 6.7   1        0.6  1    0    4        2       1        2      1.7    2          0          1.7        0
>> 4.5   2        0.6  0    1    4        2       1        4      1.3    0          4          0          1.3
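
A rough, untested sketch of how such a stacked data set could be built from
the wide format with base R's reshape(). Column names are assumed to match
the example (measure1, measure2, var1, var2 in the wide data; pred1 and pred2
for the predictors), and student IDs are assumed to be unique across schools.

long <- reshape(dataset,
                direction = "long",
                varying   = list(c("measure1", "measure2"), c("var1", "var2")),
                v.names   = c("DV", "var"),
                timevar   = "measure",
                times     = c(1, 2),
                idvar     = "student")

long$MD1 <- as.numeric(long$measure == 1)  # dummy for measure 1
long$MD2 <- as.numeric(long$measure == 2)  # dummy for measure 2

long$pred1xMD1 <- long$pred1 * long$MD1    # predictor 1, measure 1 rows only
long$pred1xMD2 <- long$pred1 * long$MD2    # predictor 1, measure 2 rows only
long$pred2xMD1 <- long$pred2 * long$MD1
long$pred2xMD2 <- long$pred2 * long$MD2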
>>
>>
>> Then, the following code should work (with the same prior):
>>
>>
>> model2 <- MCMCglmm(DV ~ -1 + MD1 + MD2 +
>>                        pred1xMD1 +
>>                        pred1xMD2 +
>>                        pred2xMD1 +
>>                        pred2xMD2,
>>                    random = ~ us(MD1 + MD2):student +
>>                        us(MD1 + MD2):school +
>>                        idh(MD1 + MD2):country,
>>                    rcov = ~ idh(MD1 + MD2):units,
>>                    prior = prior,
>>                    data = dataset,
>>                    mev = dataset$var,
>>                    family = "gaussian")
>>
>>
>>
>> Now I wanted to check whether this seems sensible within the Bayesian
>> framework. I would generally prefer a solution based on the
>> "model1" approach above, as I have many more predictors than in this
>> example.
>>
>> I would highly appreciate any comments! I can imagine that other people
>> may be interested in conducting a similar hierarchical multi-response
>> meta-analysis.
>>
>>
>> Best regards
>>
>> Nico
>>
>> PhD-candidate
>>
>>
>>
>
>
>


