Hello,
I am attempting to fit a multinomial logit model (3 categories) with random
effects across states. I have tried to follow the advice in the course
notes, but I am a little uncertain about the reason for the priors on the
residuals, and about how these are used to calculate predicted
probabilities. I would like each covariate to have outcome-specific
effects -- cov1 predicting option3 versus option1, and cov1 predicting
option2 versus option1 -- rather than a single main effect.
Here is a simple version of my model:
j <- length(levels(data$char))
I <- diag(j - 1)                       # 2x2 identity matrix
J <- matrix(1, j - 1, j - 1)           # 2x2 unit (all-ones) matrix
IJ <- (1/j) * (I + J)                  # residual covariance matrix
prior <- list(R = list(V = IJ, fix = 1),
              G = list(G1 = list(V = diag(j - 1), nu = 0.002)))
# Why can't I use V = diag(2) for the R prior?
model <- MCMCglmm(char ~ -1 + trait * cov1,
                  random = ~ idh(trait):state,
                  rcov   = ~ us(trait):units,
                  data = data, family = "categorical",
                  prior = prior, verbose = TRUE)
The model converges, and the coefficients look similar to the maximum
likelihood estimates, but I would now like to predict the probability of
being in each category. Typically, I think this is done with
plogis(x/sqrt(1 + c2)), so why is it necessary to multiply by the delta
matrix (course notes p. 97)? Alternatively, if I simply use a 2x2 diagonal
matrix for the prior on R, shouldn't I be able to use the same
transformation, plogis(x/sqrt(1 + c2))? In short: (1) I am a little
confused about the IJ matrix and where it comes from -- is there a quick
answer, or another paper that explains it? And (2) is it reasonable to
predict probabilities from my model, at fixed values of cov1, using the
simple transformation above, plogis(x/sqrt(1 + c2))?
Many thanks.
Jackson