[R-sig-ME] Meta-analysis in MCMCglmm
Jarrod Hadfield
j.hadfield at ed.ac.uk
Fri Jan 4 12:17:56 CET 2013
Hi Matt,
Your Z matrix after the svd is diagonal, with entries equal to the
square root of the standard error, i.e.
diag(sqrt(data$SE))
This is incorrect because it would imply the sampling variance is
equal to the standard error rather than to its square. This is
correct:
diag(data$SE)
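
For example, a minimal sketch of the corrected construction (assuming
data$SE holds the standard errors and the sampling errors are
independent):

# sampling VARIANCES (SE^2), not SEs, go on the diagonal of Rmat
Rmat <- matrix(0, nrow(data), nrow(data))
diag(Rmat) <- data$SE^2
# matrix square root via svd: the diagonal of Rsvd is then data$SE
Rsvd <- svd(Rmat)
Rsvd <- Rsvd$v %*% (t(Rsvd$u) * sqrt(Rsvd$d))
data$row <- factor(1:nrow(data))
Z <- model.matrix(~ row - 1, data) %*% Rsvd
data$Z <- Z

Because Rmat is diagonal here, the same Z could be obtained directly
with diag(data$SE); the svd route only becomes necessary if the
sampling errors are correlated.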
Note that with an idh structure nu=7 is very informative, because it
is equivalent to putting 7 degrees of freedom on each variance
separately.
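
If you want something much weaker on the residual variances, one
possible specification (only a sketch, using a commonly chosen small
nu; worth checking prior sensitivity on your own data) would be:

prior <- list(
  R = list(V = diag(8), nu = 0.002),  # ~0.002 df on each of the 8 residual variances
  G = list(
    G1 = list(V = diag(1), nu = 1, alpha.mu = rep(0, 1),
              alpha.V = diag(1) * 1000),  # parameter-expanded prior on the trait variance
    G2 = list(V = 1, fix = 1)))           # idv(Z) variance fixed at 1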
Multiplying the effect sizes and the SEs by a constant should make
little difference if the analysis is set up correctly, but because
sqrt(SE) was used rather than SE you should expect large differences
that do not make sense.
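
If you do rescale to improve mixing, a minimal sketch (estimate10 and
SE10 are just hypothetical names for the rescaled columns) would be:

scale <- 10
data$estimate10 <- data$estimate * scale  # rescaled effect sizes
data$SE10       <- data$SE * scale        # rescaled standard errors
# rebuild Z from data$SE10 and refit with estimate10 as the response;
# location effects come back multiplied by `scale`, variances by scale^2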
Cheers,
Jarrod
Quoting Matthew Robinson <matthew.r.robinson at sheffield.ac.uk> on Fri,
4 Jan 2013 10:38:23 +0000:
> Dear list,
> I am trying to run a random effects meta-analysis in MCMCglmm. I
> have a data set of 960 effect size estimates and their standard
> errors for eight traits (120 estimates per trait).
> What I want to know is whether there are any differences among traits
> in both the mean effect size and the variance of the effect sizes.
> So I have created a diagonal matrix of the standard errors (SE) and
> used singular value decomposition to create a model matrix (Z) which
> I have then fit using idv(Z) whilst fixing the variance to 1. I have
> also estimated an effect of trait as random, and then estimated a
> separate residual variance for each trait. My coding is below:
>
> Rmat<-matrix(0,nrow(data),nrow(data))
> diag(Rmat)<-data$SE
> Rsvd<-svd(Rmat)
> Rsvd<-Rsvd$v%*%(t(Rsvd$u)*sqrt(Rsvd$d))
> data$row<-1:nrow(data)
> data$row<-factor(data$row)
> Z<-model.matrix(~row-1, data)%*%Rsvd
> data$Z<-Z
> prior = list(R = list(V = diag(8), nu = 7),
>              G = list(G1 = list(V = diag(1), nu = 1, alpha.mu = rep(0, 1),
>                                 alpha.V = diag(1) * 1000),
>                       G2 = list(V = 1, fix = 1)))
> m1<-MCMCglmm(estimate ~ 1, random=~Trait + idv(Z), rcov=~
> idh(Trait):units, data=data, prior=prior, family="gaussian",
> pr=TRUE, burnin=40000, thin=100, nitt=140000)
>
>
> I first wanted to check if this seemed sensible to people?
>
> One issue is that all of the estimates are quite small numbers,
> ranging from 0 to 0.14. I have tried multiplying all of my data
> (effect sizes and their SE) by 10 and then re-running the exact same
> model and this seems to make the posterior traces look much better.
> So secondly I wanted to get an opinion on whether this is a sensible
> thing to do?
>
> Many thanks in advance for any advice anyone can provide.
>
> Best wishes,
> Matt
>
>
> ------------------------------------------------------
> Dr. Matt Robinson
> NERC Research Fellow
> Department of Animal and Plant Science
> University of Sheffield
> Alfred Denny Building, Western Bank
> Sheffield, S10 2TN, UK
>
> matthew.r.robinson at sheffield.ac.uk
>
> tel: +44 (0)114 222 4707
> fax: +44 (0)114 222 0002
> ------------------------------------------------------