[R-sig-ME] ordinal regression with MCMCglmm
Malcolm Fairbrother
m.fairbrother at bristol.ac.uk
Tue Jan 8 01:35:24 CET 2013
Dear Jarrod,
Thanks a lot for this -- very helpful.
OK, so since I DO want to marginalise the random effects (I'm interested in predicted probabilities for a hypothetical/typical judge rather than an actually observed one), I think the code I want is:
diff(pnorm(c(-Inf, 0, colMeans(MC1$CP), Inf) - colMeans(MC1$Sol) %*% c(1,0,0), 0, sqrt(1+1))) # MCMCglmm, for tempcold and contactno
diff(pnorm(c(-Inf, fm2$Theta, Inf) - fm2$beta %*% c(0,0), 0, 1)) # clmm, for tempcold and contactno
And these two approaches do indeed yield very consistent results. I take it I need slightly different SDs in pnorm because clmm effectively treats the residual variance as 0, whereas in MCMCglmm (given the prior I used, as recommended in the CourseNotes) it is fixed at 1?
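A minimal sanity check along these lines (assuming MC1 and the probit refit of fm2 from your message are still in the workspace, and that the coefficient columns carry the names MCMCglmm and clmm2 usually give them): on the latent scale the MCMCglmm estimates should be roughly sqrt(2) times the clmm ones, precisely because MCMCglmm's residual variance is fixed at 1 rather than 0.

colMeans(MC1$Sol)[c("tempwarm", "contactyes")] / sqrt(2)  # should be close to fm2$beta
fm2$beta
(c(0, colMeans(MC1$CP)) - colMeans(MC1$Sol)["(Intercept)"]) / sqrt(2)  # should be close to fm2$Theta
fm2$Theta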
Much appreciated,
Malcolm
On 7 Jan 2013, at 21:07, Jarrod Hadfield wrote:
> Hi Malcolm,
>
> Your way of obtaining the predicted probability of falling into a class from the MCMCglmm output is correct. The method you use for the clmm output is not the expected value over the random effects (as you calculated for the MCMCglmm model), but the expected value for a random effect of zero. If you use a probit link in clmm and marginalise over the random effects, you will find the clmm and MCMCglmm estimates to be identical:
>
> summary(fm2 <- clmm2(rating ~ temp + contact, random=judge, data=wine, Hess=TRUE, nAGQ=10, link="probit"))
>
> diff(pnorm(c(-Inf, fm2$Theta, Inf) - fm2$beta %*% c(0,0), 0, sqrt(1+fm2$stDev^2))) # for tempcold and contactno
> diff(pnorm(c(-Inf, fm2$Theta, Inf) - fm2$beta %*% c(1,1), 0, sqrt(1+fm2$stDev^2))) # for tempwarm and contactyes
>
> Cheers,
>
> Jarrod
>
>
>
>
>
>
> Quoting Malcolm Fairbrother <m.fairbrother at bristol.ac.uk> on Mon, 7 Jan 2013 19:56:20 +0000:
>
>> Dear list,
>>
>> I'm picking up on a thread from 2010 (https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q2/003678.html), about ordinal mixed models, fitted with clmm (from the ordinal package) and MCMCglmm.
>>
>> I would like to fit an ordinal mixed model, with random slopes. Since clmm doesn't do random slopes, I'm trying with MCMCglmm, but I'm not sure I understand how MCMCglmm handles ordinal data.
>>
>> To illustrate, I'll use the "wine" data from the ordinal package, and a model with only random intercepts.
>>
>> library(ordinal)
>> library(MCMCglmm)
>>
>> data(wine)
>> str(wine)
>> summary(fm2 <- clmm2(rating ~ temp + contact, random=judge, data=wine, Hess=TRUE, nAGQ=10))
>>
>> # to get predicted probabilities for each of the five possible responses, for two combinations of covariates:
>> diff(plogis(c(-Inf,fm2$Theta, Inf) - fm2$beta %*% c(0,0))) # for tempcold and contactno
>> diff(plogis(c(-Inf,fm2$Theta, Inf) - fm2$beta %*% c(1,1))) # for tempwarm and contactyes
>>
>> # compare with the raw data (seems to make sense...):
>> table(wine$rating[wine$temp=="cold" & wine$contact=="no"])
>> table(wine$rating[wine$temp=="warm" & wine$contact=="yes"])
>>
>> # now fit the same model, using MCMCglmm:
>> prior1 <- list(R = list(V = 1, nu = 0.002, fix=1), G = list(G1 = list(V = 1, nu = 0.002)))
>> MC1 <- MCMCglmm(rating ~ temp + contact, random=~judge, data=wine, family="ordinal", prior=prior1, nitt=130000, thin=100, verbose=F)
>>
>> # and get predicted probabilities, using the results from MCMCglmm, and the "hunch" approach Jarrod mentioned in the thread above:
>> diff(pnorm(c(-Inf, 0, colMeans(MC1$CP), Inf)-colMeans(MC1$Sol) %*% c(1,0,0), 0, sqrt(1+sum(colMeans(MC1$VCV))))) # for tempcold and contactno
>> diff(pnorm(c(-Inf, 0, colMeans(MC1$CP), Inf)-colMeans(MC1$Sol) %*% c(1,1,1), 0, sqrt(1+sum(colMeans(MC1$VCV))))) # for tempwarm and contactyes
>>
>> The predicted probabilities are similar, but not similar enough that I'm sure this is right. Are the differences due to MCMC error, inappropriate priors, or the way I'm predicting probabilities based on one (or both?) model(s)?
>>
>> If anyone can offer any clarifications/suggestions, I'd be grateful. (Maybe Thierry figured this out?)
>>
>> Cheers,
>> Malcolm
>>
>>
>>
>>> Date: Wed, 14 Apr 2010 15:41:45 +0100
>>> From: Jarrod Hadfield <j.hadfield at ed.ac.uk>
>>> To: "ONKELINX, Thierry" <Thierry.ONKELINX at inbo.be>
>>> Cc: r-sig-mixed-models at r-project.org
>>> Subject: Re: [R-sig-ME] ordinal regression with MCMCglmm
>>>
>>> Dear Thierry,
>>>
>>> I think (but you had better check) that it would actually be something
>>> like
>>>
>>> pnorm(-Xb, 0, sqrt(1+v))
>>> pnorm(cp[1] - Xb, 0, sqrt(1+v)) - pnorm(-Xb, 0, sqrt(1+v))
>>> pnorm(cp[2] - Xb, 0, sqrt(1+v)) - pnorm(cp[1] - Xb, 0, sqrt(1+v))
>>> 1 - pnorm(cp[2] - Xb, 0, sqrt(1+v))
>>>
>>> where v is the sum of the variance components for the residual and
>>> random effects.
>>>
>>> Alternatively, if the residual variance is set to one and there are no
>>> random effects
>>>
>>> pnorm(-Xb, 0, sqrt(2))
>>> pnorm(cp[1] - Xb, 0, sqrt(2)) - pnorm(-Xb, 0, sqrt(2))
>>> pnorm(cp[2] - Xb, 0, sqrt(2)) - pnorm(cp[1] - Xb, 0, sqrt(2))
>>> 1 - pnorm(cp[2] - Xb, 0, sqrt(2))
>>>
>>> or if the prediction includes random effects:
>>>
>>> pnorm(-(Xb+Zu), 0, sqrt(2))
>>> pnorm(cp[1] - (Xb+Zu), 0, sqrt(2)) - pnorm(-(Xb+Zu), 0, sqrt(2))
>>> pnorm(cp[2] - (Xb+Zu), 0, sqrt(2)) - pnorm(cp[1] - (Xb+Zu), 0, sqrt(2))
>>> 1 - pnorm(cp[2] - (Xb+Zu), 0, sqrt(2))
>>>
>>> Please do not take this as gospel. I have not had time to check these
>>> results; they are a hunch.
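>>>
>>> As a quick illustration (purely made-up numbers, not taken from any fitted
>>> model), the four expressions above collapse into a single diff(pnorm())
>>> call over the ordered boundaries (-Inf, 0, cp, Inf):
>>>
>>> Xb <- 0.5      # hypothetical value of the linear predictor
>>> cp <- c(1, 2)  # hypothetical cut-points (the first cut-point, fixed at 0, is implicit)
>>> v  <- 1.3      # hypothetical sum of residual and random-effect variances
>>> diff(pnorm(c(-Inf, 0, cp, Inf) - Xb, 0, sqrt(1 + v)))  # the four category probabilities, summing to 1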
>>>
>>> Cheers,
>>>
>>> Jarrod
>>>
>>>
>>> On 14 Apr 2010, at 15:25, ONKELINX, Thierry wrote:
>>>
>>>> Dear Jarrod,
>>>>
>>>> I'm working on a similar problem. Does it make sense to calculate that
>>>> for the fixed effects only? Something like this:
>>>> pnorm(-Xb)
>>>> pnorm(cp[1] - Xb) - pnorm(-Xb)
>>>> pnorm(cp[2] - Xb) - pnorm(cp[1] - Xb)
>>>> 1 - pnorm(cp[2] - Xb)
>>>>
>>>> Best regards,
>>>>
>>>> Thierry
>>>> ----------------------------------------------------------------------------
>>>> ir. Thierry Onkelinx
>>>> Instituut voor natuur- en bosonderzoek
>>>> team Biometrie & Kwaliteitszorg
>>>> Gaverstraat 4
>>>> 9500 Geraardsbergen
>>>> Belgium
>>>>
>>>> Research Institute for Nature and Forest
>>>> team Biometrics & Quality Assurance
>>>> Gaverstraat 4
>>>> 9500 Geraardsbergen
>>>> Belgium
>>>>
>>>> tel. + 32 54/436 185
>>>> Thierry.Onkelinx at inbo.be
>>>> www.inbo.be
>>>>
>>>> To call in the statistician after the experiment is done may be no more
>>>> than asking him to perform a post-mortem examination: he may be able to
>>>> say what the experiment died of.
>>>> ~ Sir Ronald Aylmer Fisher
>>>>
>>>> The plural of anecdote is not data.
>>>> ~ Roger Brinner
>>>>
>>>> The combination of some data and an aching desire for an answer does not
>>>> ensure that a reasonable answer can be extracted from a given body of
>>>> data.
>>>> ~ John Tukey
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: r-sig-mixed-models-bounces at r-project.org
>>>>> [mailto:r-sig-mixed-models-bounces at r-project.org] On behalf of
>>>>> Jarrod Hadfield
>>>>> Sent: Wednesday, 14 April 2010 15:35
>>>>> To: Kari Ruohonen
>>>>> CC: Kari Ruohonen; r-sig-mixed-models at r-project.org
>>>>> Subject: Re: [R-sig-ME] ordinal regression with MCMCglmm
>>>>>
>>>>> Hi,
>>>>>
>>>>> Imagine a latent variable (l) that conforms to the standard
>>>>> linear model.
>>>>>
>>>>> l = Xb+Zu+e
>>>>>
>>>>> The probabilities of falling into each of the four categories are:
>>>>>
>>>>>
>>>>> pnorm(-l)
>>>>>
>>>>> pnorm(cp[1]-l)-pnorm(-l)
>>>>>
>>>>> pnorm(cp[2]-l)-pnorm(cp[1]-l)
>>>>>
>>>>> 1-pnorm(cp[2]-l)
>>>>>
>>>>> where cp is the vector of cut-points, with 2 elements. A 3 cut-point
>>>>> model would be over-parameterised (unless the intercept is zero, which I
>>>>> presume is what polr does?).
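>>>>>
>>>>> For example (a toy sketch with made-up numbers, not output from polr or
>>>>> MCMCglmm), shifting the MCMCglmm-style cut-points by the intercept gives
>>>>> polr-style cut-points (for a probit-scale polr), and both give the same
>>>>> category probabilities:
>>>>>
>>>>> b0   <- -0.4            # hypothetical MCMCglmm-style intercept
>>>>> cuts <- c(0, 0.8, 1.7)  # all three thresholds; the first is fixed at 0
>>>>> zeta <- cuts - b0       # the three polr-style cut-points (no intercept)
>>>>> Xb   <- 0.3             # remaining fixed effects (the same in both)
>>>>> diff(pnorm(c(-Inf, cuts, Inf) - (b0 + Xb)))  # MCMCglmm-style parameterisation
>>>>> diff(pnorm(c(-Inf, zeta, Inf) - Xb))         # polr-style: identical probabilities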
>>>>>
>>>>> The factors don't need to be ordered; the order is obtained from
>>>>> levels(resp). In the future, I may only allow ordered factors, to
>>>>> prevent any accidents.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Jarrod
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 14 Apr 2010, at 08:07, Kari Ruohonen wrote:
>>>>>
>>>>>> Hi and thanks for the answer. I tried exactly that model syntax before
>>>>>> posting but the output of the "fixed" part had an unexpected
>>>>>> parameterisation and I thought I misspecified the model somehow. The
>>>>>> parameters I got with the above model are
>>>>>> - two cutpoints
>>>>>> - intercept
>>>>>> - effect of group B
>>>>>>
>>>>>> I would have expected that instead of the intercept and two cutpoints I
>>>>>> would have had three cutpoints, as given by polr (MASS package), for
>>>>>> example. Can you explain to me the parameterisation in MCMCglmm and how
>>>>>> it connects to the one in polr, which uses J-1 ordered cutpoints
>>>>>> (J = number of score classes) without an intercept?
>>>>>>
>>>>>> Also, I am uncertain whether I need to convert "resp" to an ordered
>>>>>> factor (with "ordered") before calling MCMCglmm?
>>>>>>
>>>>>> Many thanks,
>>>>>>
>>>>>> Kari
>>>>>>
>>>>>> On Tue, 2010-04-13 at 17:41 +0100, Jarrod Hadfield wrote:
>>>>>>> Hi Kari,
>>>>>>>
>>>>>>> The simplest model is
>>>>>>>
>>>>>>>
>>>>>>> m1<-MCMCglmm(resp~treat, random=~group, family="ordinal",
>>>>>>> data=your.data, prior=prior)
>>>>>>>
>>>>>>> As with multinomial data with a single realisation, the residual
>>>>>>> variance cannot be estimated from the data. The best option is to fix
>>>>>>> it at some value. Most programs fix it at zero, but MCMCglmm will fail
>>>>>>> to mix if this is done, so I usually fix it at 1:
>>>>>>>
>>>>>>>
>>>>>>> prior=list(R=list(V=1, fix=1), G=list(G1=list(V=1, nu=0)))
>>>>>>>
>>>>>>> I have left the default prior for the fixed effects (not explicitly
>>>>>>> specified above), and the default prior random-effect variance
>>>>>>> structure (G), which has a zero degree-of-belief parameter. Often this
>>>>>>> requires some/more thought, especially if there are few groups or
>>>>>>> replication within groups is low. Sections 1.2, 1.5 & 8.2 in the
>>>>>>> CourseNotes cover priors for variances.
>>>>>>>
>>>>>>>
>>>>>>> Currently there is no option for specifying priors on the cut-points;
>>>>>>> the prior is flat and improper. The posterior will be proper in
>>>>>>> virtually all cases, though.
>>>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>>> Jarrod
>>>>>>>
>>>>>>> Quoting Kari Ruohonen <kari.ruohonen at utu.fi>:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>> I am trying to figure out how to fit an ordinal regression model
>>>>>>>> with MCMCglmm. The "MCMCglmm Course notes" has a section on
>>>>>>>> multinomial models but no example of ordinal models. Suppose I have
>>>>>>>> the following data:
>>>>>>>>
>>>>>>>>> data
>>>>>>>> resp treat group
>>>>>>>> 1 4 A 1
>>>>>>>> 2 4 A 1
>>>>>>>> 3 3 A 2
>>>>>>>> 4 4 A 2
>>>>>>>> 5 2 A 3
>>>>>>>> 6 4 A 3
>>>>>>>> 7 2 A 4
>>>>>>>> 8 2 A 4
>>>>>>>> 9 3 A 5
>>>>>>>> 10 2 A 5
>>>>>>>> 11 1 B 6
>>>>>>>> 12 1 B 6
>>>>>>>> 13 1 B 7
>>>>>>>> 14 2 B 7
>>>>>>>> 15 2 B 8
>>>>>>>> 16 3 B 8
>>>>>>>> 17 2 B 9
>>>>>>>> 18 1 B 9
>>>>>>>> 19 2 B 10
>>>>>>>> 20 2 B 10
>>>>>>>>
>>>>>>>> and the "resp" is an ordinal response, "treat" is a treatment and
>>>>>>>> "group" is membership to a group. Assume I would like to fit an
>>>>>>>> ordinal model between "resp" and "treat" by having
>>>>> "group" effects
>>>>>>>> as random effects. How would I specify such a model in
>>>>> MCMCglmm? And
>>>>>>>> how would I specify the prior distributions?
>>>>>>>>
>>>>>>>> All help is greatly appreciated.
>>>>>>>>
>>>>>>>> regards, Kari
>>