[R-sig-ME] Parameter estimates and covariance structure in MCMCglmm
Jarrod Hadfield
j.hadfield at ed.ac.uk
Tue Aug 13 08:39:37 CEST 2013
Hi Adam,
Q1. The way the negative slopes are simulated is not consistent with
the model, so you shouldn't expect a particularly good fit. The way
they are generated seems so idiosyncratic that I think it would be
hard to come up with a general model that would better predict the
response in bands 6, 8 and 17.
You need to be careful about claiming that DIC supports slope
variation. First, DIC is focused at the wrong level for most
scientific inference, so it cannot be generally advocated (unless it
can be focused at a lower level, which is easy for Gaussian data but
not otherwise). Second, the test is also testing whether the band
variance is equal in the two treatments. By simulation they are equal
(before the manipulations of bands 6, 8 and 17), but the residual
variances are not (they are 25 and 49). The us(treatment):band
structure is capturing some of the differences in residual variance
(hence my advice to always fit residual structures of the form
idh(treatment):units in these types of model):
prior1b <- list(R = list(V = diag(2), nu = 0.002),
                G = list(G1 = list(V = diag(2), nu = 1.002)))
mcmc.test1b <- MCMCglmm(y ~ treatment, random = ~us(treatment):band,
                        rcov = ~idh(treatment):units, data = test,
                        prior = prior1b)

prior2b <- list(R = list(V = diag(2), nu = 0.002),
                G = list(G1 = list(V = 1, nu = 0.002)))
mcmc.test2b <- MCMCglmm(y ~ treatment, random = ~band,
                        rcov = ~idh(treatment):units, data = test,
                        prior = prior2b)
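If you still want a DIC comparison once the heterogeneous residual
structure is in place (bearing in mind the caveats about DIC above),
it would simply mirror the comparison in your code below:

summary(mcmc.test2b)$DIC - summary(mcmc.test1b)$DIC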
Q2: you cannot do this using MCMCglmm; you would need to use
double-hierarchical models. You would also need a great deal of data.
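As a purely descriptive check (not the double-hierarchical model
referred to above, just the raw within-individual spread), you could
look at something like:

# raw within-individual variance of y within each treatment; with only
# two replicates per band per treatment these numbers are very noisy
aggregate(y ~ band + treatment, data = test, FUN = var)

which also makes clear how little information there is per individual
with two replicates.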
Q3: you could just apply your function for the conditional variance to
the posterior and look at the credible intervals. It is not possible
to fix arbitrary elements of the covariance matrix to zero in
MCMCglmm. You can do that in asreml if you wish, but I believe the
covariance matrix estimate may not be positive definite if you do
(i.e. your conditional variance may be negative).
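For example, something along these lines would give you the posterior
of the conditional variance (a minimal sketch, assuming the 4x4
band-level covariance matrix occupies the first 16 columns of
mcmc.test3$VCV and that the element ordering matches the mtr matrix in
your code below, i.e. int.y, int.z, sl.y, sl.z):

cond.var <- apply(mcmc.test3$VCV[, 1:16], 1, function(v) {
  G <- matrix(v, 4, 4)           # band-level covariance matrix for this iteration
  G[3, 3] - G[3, 4]^2 / G[4, 4]  # var(sl.y) conditional on sl.z
})
posterior.mode(as.mcmc(cond.var))
HPDinterval(as.mcmc(cond.var))   # credible interval for the conditional variance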
Cheers,
Jarrod
Quoting Adam Lendvai <adamlendvai at freemail.hu> on Mon, 12 Aug 2013
18:47:49 -0400:
> Dear List members,
>
> I'm doing analyses in MCMCglmm to investigate individual differences
> in reaction to an experiment (i.e. differences in reaction norms). We
> measured multiple variables at the same time, so we're also interested
> in knowing whether there is a covariance between the reaction norms. I
> have three questions regarding parameter estimates and manipulating
> the variance-covariance matrix that can be found below within a
> minimal reproducible example.
>
> Your feedback would be greatly appreciated.
>
> Best,
> Adam Lendvai
>
> # set up test data: we have 20 individuals (identified by 'band'), two
> # treatments, and each individual received the control twice and the
> # other treatment twice during the study
> test <- expand.grid(band = factor(1:20), replicate = 1:2,
>                     treatment = factor(1:2, labels = c("ctrl", "treat")))
> # response variable:
> set.seed(22)
> test$y <- c(rnorm(40, 5, 5), rnorm(40, 15, 7))
> test$z <- c(rnorm(40, 30, 5), rnorm(40, 20, 5))
> # introduce more variance by defining negative slopes too
> test$y[test$band == 8] <- c(10,13, 0, 5)
> test$y[test$band == 17] <- c(8, 5, 1,4)
> test$y[test$band == 6] <- c(11, 7, 10,6)
>
>
> require(lattice); require(MCMCglmm)
> # Step1: I'd like to estimate the individual differences in the
> # reaction to the treatment:
> Prior1= list(R = list(V = diag(1), nu = 0.002), G = list(G1 = list(V =
> diag(2), nu = 1.002)))
> mcmc.test1 <- MCMCglmm(y~treatment, prior = Prior1, random =
> ~us(treatment):band, family = "gaussian", data = test, saveX = T,
> saveZ = T, verbose = T, pr = T)
> # plot the results
> W1 <- cBind(mcmc.test1$X, mcmc.test1$Z)
> pred1 <- W1%*%posterior.mode(mcmc.test1$Sol)
> xyplot(y+pred1@x~treatment|band, data=test, type = c("p", "a"), pch =
> 19, auto.key = list(text = c("Data", "Prediction"), lines = 1, points
> = F))
>
> # the model fit is OKish, although in some cases there is a mismatch
> # between the data and the prediction in the slope parameter
> # (e.g. individuals 6, 8, 17), in the intercept (e.g. 2), or both (e.g. 18)
>
> # Question 1: is it possible to improve model fit?
>
> # I'm interested in the between-individual variation in the slopes:
> summary(mcmc.test1)$Gcovariances[4,1:3]
>
> # Now test it against a model that does not include random slopes,
> # just random intercept:
> Prior2 = list(R=list(V=1, nu = 0.002), G=list(G1 = list(V = 1, nu = 0.002)))
> mcmc.test2 <- MCMCglmm(y~treatment, prior = Prior2, random = ~band,
> family = "gaussian", data = test, saveX = T, saveZ = T, verbose = T,
> pr = T)
> summary(mcmc.test2)
>
> summary(mcmc.test2)$DIC-summary(mcmc.test1)$DIC
>
> # A substantial difference in DIC supports the random slope model:
> # i.e. there are individual differences
> # in the reaction to the treatment
>
> # I would be interested in quantifying the within-individual variance
> # for each slope, i.e. how 'consistent' are the individuals in their
> # response? (e.g. individual 7 is consistent vs. individual 5, which is
> # less consistent). However, if I get this right, the residual error in
> # these models is calculated at the population level.
>
> # Question 2: how can I estimate within-individual variance
> # (consistency) for each individual?
>
>
> # Step2: I'd like to see if there is covariance in the slopes between
> # response 'y' and 'z'
>
> # let's standardize response values
> z = function(x) (x - mean(x, na.rm = T)) / sd(x, na.rm = T)
>
> Prior3 = list(R = list(V = diag(2), nu = 1.002), G = list(G1 = list(V
> =diag(4), nu = 3.002)))
>
> mcmc.test3 = MCMCglmm(cbind(z(y), z(z))~(trait-1) * treatment, prior =
> Prior3, random = ~us(trait:treatment):band, rcov = ~us(trait):units,
> family = c("gaussian", "gaussian"), data = test)
> summary(mcmc.test3)
> # variance-covariance matrix:
> (matrix(round(summary(mcmc.test3)$Gcov[,1], 2),4,4) -> mtr)
> mtrnames = c("int.y", "int.z", "sl.y", "sl.z"); dimnames(mtr) =
> list(mtrnames, mtrnames)
>
> # I'd like to see the conditional between-individual variance in 'y'
> # response which is independent from variation in 'z':
> (cbivy = mtr["sl.y", "sl.y"] - ((mtr["sl.z", "sl.y"])^2)/mtr["sl.z", "sl.z"])
>
> # Question 3: How can I test this? I can use the idh() structure to
> # constrain all covariances to 0, but that would not be appropriate,
> # because it would set all the other covariances to zero as well.
> # Is it possible to set just selected parts of the variance-covariance
> # matrix (here: the covariance of the slopes) to zero?
>
>
> sessionInfo()
> # R version 3.0.1 (2013-05-16)
> # Platform: x86_64-apple-darwin10.8.0 (64-bit)
> #
> # locale:
> # [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
> #
> # attached base packages:
> # [1] stats graphics grDevices utils datasets methods base
> #
> # other attached packages:
> # [1] MuMIn_1.9.5 MCMCglmm_2.17 corpcor_1.6.6 ape_3.0-9
> coda_0.16-1 Matrix_1.0-12 tensorA_0.36
> # [8] lattice_0.20-15
> #
> # loaded via a namespace (and not attached):
> # [1] grid_3.0.1 nlme_3.1-110 tools_3.0.1
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
>
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.