# [R-sig-ME] uninformative priors for a threshold model estimated with MCMCglmm?

Thu Jun 12 10:42:54 CEST 2014

```
Hi Malcolm,

It is of course hard to say how data and prior will interact to
generate a posterior (otherwise we wouldn't need MCMC).

However, if you look at the marginal properties of your four priors
(plotted by the code below), you can see that with nu = 2.02 in the
inverse-Wishart prior (particularly prior 1), small values of the
variance have very low prior density.

# Marginal prior density of a single variance under each prior; for a
# d = 2 inverse-Wishart the marginal of a diagonal element is
# inverse-gamma with shape (nu - d + 1)/2 = 1.02/2:
v <- seq(0, 1, length = 1000)
par(mfrow = c(2, 2))

plot(MCMCpack::dinvgamma(v, shape = 1.02/2, scale = (2.02*1)/2) ~ v,
     type = "l", ylab = "Density", xlab = "Variance", main = "Prior 1")
plot(MCMCpack::dinvgamma(v, shape = 1.02/2, scale = (2.02*0.1)/2) ~ v,
     type = "l", ylab = "Density", xlab = "Variance", main = "Prior 2")
# The parameter-expanded priors imply a scaled F distribution on the variance:
plot(df(v, df1 = 1, df2 = 1.02) ~ v, type = "l", ylab = "Density",
     xlab = "Variance", main = "Prior 3")
plot(df(v/25, df1 = 1, df2 = 1.02) ~ v, type = "l", ylab = "Density",
     xlab = "Variance", main = "Prior 4")

I tend to use parameter-expanded priors. I haven't come across any
papers exploring their properties in the multivariate case (i.e. for a
covariance matrix), but some noddy simulations I've done in the past
suggest they have better inferential properties than the
inverse-Wishart. If there are papers out there I would love to hear
about them.
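# A minimal sketch (editorial, univariate case) of what the
# parameter-expanded prior implies marginally, assuming the working
# construction v = alpha^2 * v_eta with alpha ~ N(alpha.mu, alpha.V)
# and v_eta ~ IG(nu/2, nu*V/2): the variance is then
# alpha.V * V * F(1, nu), a scaled F; with nu = 1 the standard
# deviation is half-Cauchy with scale sqrt(alpha.V).
set.seed(1)
n <- 20000; nu <- 1; V <- 1; alpha.V <- 25^2
alpha <- rnorm(n, 0, sqrt(alpha.V))      # working scale parameter
v.eta <- nu * V / rchisq(n, df = nu)     # draws from IG(nu/2, nu*V/2)
v <- alpha^2 * v.eta                     # implied prior draws for the variance
# Median of v/(alpha.V*V) should match the F(1, nu) median:
median(v / (alpha.V * V))                # close to qf(0.5, df1 = 1, df2 = nu)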

Cheers,

Jarrod

Quoting Malcolm Fairbrother <M.Fairbrother at bristol.ac.uk> on Mon, 9
Jun 2014 09:59:38 +0100:

> Dear all,
>
> I'm using MCMCglmm to estimate a two-level threshold model (for an ordinal
> outcome) with random intercepts plus a random slope for one covariate. I've
> tried using a variety of priors, and am finding they can make quite a
> difference to the variance of the posterior distribution for the
> coefficient on the variable whose slope I am allowing to vary.
>
> I would like to use a completely uninformative prior, and I'm wondering if
> I can get some advice on how to do so.
>
> The FOUR priors I've tried are:
>
> (1) list(R = list(V = 1, fix = 1), G = list(G1 = list(V = diag(2), nu =
> 2.02)))
>
> (2) list(R = list(V = 1, fix = 1), G = list(G1 = list(V = diag(2)*0.1, nu =
> 2.02)))
>
> (3) list(R = list(V = 1, fix = 1), G = list(G1 = list(V = diag(2), nu =
> 2.02, alpha.mu = rep(0, 2), alpha.V = diag(2))))
>
> (4) list(R = list(V = 1, fix = 1), G = list(G1 = list(V = diag(2), nu =
> 2.02, alpha.mu = rep(0, 2), alpha.V = diag(25^2, 2, 2))))
>
> With (1), the posterior mean for the random intercept variance is 0.145,
> random slope variance 0.074, and the CI on the relevant fixed effect
> coefficient is -0.059 to 0.142 ( ).
>
> With (2), the same things are 0.074, 0.011, and 0.006 to 0.086 (*).
>
> With (3), 0.070, 0.003, and 0.024 to 0.066 (***).
>
> With (4), 0.072, 0.003, and 0.025 to 0.067 (***).
>
> So, depending on the prior, the posterior distribution for the fixed effect
> coefficient may or may not overlap zero, though my impression is that the
> first prior doesn't make much sense because it suggests the two variances
> should each be around 1 when they're clearly both much smaller... And
> somehow this is affecting the distribution of the estimates of the fixed
> effect coefficient.
>
> The parameter-expanded priors give seemingly similar results regardless of
> what I assign as alpha.V. Am I safe/justified in reporting results using
> these priors, and ignoring prior (1)?
>
> Any comments would be appreciated. Maybe this is just a simple/obvious (to
> some) illustration of the superiority of parameter-expanded priors.
>
> - Malcolm
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

```