[R-sig-ME] Specifying a model with a link function in MCMCglmm

Sam H nirvana4lf at gmail.com
Thu Feb 18 18:57:55 CET 2016


Thanks for your response, Jarrod! Regarding the prior specification, I
forgot to mention why I set it up that way. Originally I specified the model
with no prior, but it kept failing just before the 1000th iteration with the
error message "ill-conditioned G/R structure". I wasn't sure what to do,
since I don't really have any estimates to use for a prior, so I figured I
would just use the estimates produced by lmer. Should I instead just set the
G-structure V to an identity matrix, or does it not really matter since nu
is high?
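
To make the question concrete, the alternative I have in mind is something
like the following (just a sketch; the parameter-expanded form is something I
have seen suggested for us() structures but haven't tried myself):

prior_id <- list(R = list(V = 1, nu = 0.002),
                 G = list(G1 = list(V = diag(4), nu = 5)))

# or, parameter expanded:
prior_px <- list(R = list(V = 1, nu = 0.002),
                 G = list(G1 = list(V = diag(4), nu = 4,
                                    alpha.mu = rep(0, 4),
                                    alpha.V  = diag(4) * 1000)))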

On Thu, Feb 18, 2016 at 12:48 AM, Jarrod Hadfield <j.hadfield at ed.ac.uk>
wrote:

> Hi Sam,
>
> Each distribution has a fixed link function in MCMCglmm, and an inverse
> link function for a Gaussian response would be hard to implement.
>
> Also, using a REML estimate as a prior, and then choosing quite a strong
> degree-of-belief parameter, is a bit odd. Many, including myself, would
> consider this double-dipping.
>
> Cheers,
>
> Jarrod
>
>
>
> On 17/02/2016 22:26, Sam H wrote:
>
>> Hello all,
>>
>>
>> Before jumping into my question, let me briefly explain my model to give
>> context. Here is how I have currently specified the model in MCMCglmm,
>> after first fitting a model in lmer and extracting the variance estimates
>> for my G prior:
>>
>> library(lme4)
>> library(MCMCglmm)
>>
>> full_lmer <- lmer(RT ~ AgeGroup*cue*cong + (cue + cong|PID), data = vsdataset)
>>
>> sigma2 <- sigma(full_lmer)^2          # residual variance
>> Lambda <- getME(full_lmer, "Lambda")  # relative covariance factor
>> Sigma  <- sigma2 * tcrossprod(Lambda) # covariance matrix of the random effects
>> Gmer   <- Sigma[1:4, 1:4]             # VCV block for one PID from the block-diagonal Sigma
>>
>> full_mcmc <- MCMCglmm(RT ~ AgeGroup*cue*cong, random = ~us(1 + cue + cong):PID,
>>                       thin = 10, nitt = 20000, data = vsdataset,
>>                       prior = list(G = list(G1 = list(V = diag(diag(Gmer)), nu = 5))))
>>
>> Note: I used V = diag(diag(Gmer)) because Sigma (and hence Gmer) was not
>> positive definite, which MCMCglmm would not accept.
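>>
>> An alternative to dropping the covariances might be to coerce Gmer to the
>> nearest positive-definite matrix, e.g. with Matrix::nearPD (just a sketch;
>> I haven't checked whether MCMCglmm accepts the result):
>>
>> Gmer_pd <- as.matrix(Matrix::nearPD(Gmer)$mat)  # nearest PD matrix to Gmer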
>>
>>
>> Quick explanation of the model terms:
>>
>> RT --> response time in ms; very positively skewed even after outlier
>> removal, though an inverse transform seems to center it
>>
>> AgeGroup --> 2-level factor (Old vs. Young, between-subjects)
>>
>> cue --> 3-level factor (within-subjects)
>>
>> cong --> 2-level factor (within-subjects)
>>
>> PID --> participant ID number
>>
>>
>> I want to specify this model with an inverse/reciprocal link function
>> (Gaussian family). However, I can't figure out how to specify the link
>> function. The help page for the MCMCglmm function mentions a
>> "linking.function" for the random-effects terms, but that doesn't seem to
>> have anything to do with specifying a link function for the response
>> variable. According to the course notes for the MCMCglmm package, "there
>> are many different types of distribution and link functions and those
>> supported by MCMCglmm can be found in Table 7.1." However, Table 7.1 seems
>> to just list the families and their PDFs; there is no column listing
>> "supported" link functions.
>>
>> So, how do you specify a link function in MCMCglmm? If you can't directly
>> specify one, is there something else I need to do, such as specifying the
>> prior a certain way, or is it valid to just declare the response as
>> gaussian and leave the model as I've specified it? After plotting the
>> model, I noticed that several of the parameter distributions were
>> extremely skewed (some left, some right).
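>>
>> For what it's worth, the only distribution-related argument I can find is
>> family, which takes a character string and doesn't seem to offer a link
>> option (as far as I can tell), e.g.:
>>
>> full_mcmc <- MCMCglmm(RT ~ AgeGroup*cue*cong, random = ~us(1 + cue + cong):PID,
>>                       family = "gaussian",  # the default; no place to ask for an inverse link
>>                       thin = 10, nitt = 20000, data = vsdataset,
>>                       prior = list(G = list(G1 = list(V = diag(diag(Gmer)), nu = 5))))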
>>
>>
>> As a side note, I originally tried two alternatives (both sketched in code
>> below):
>>
>> 1) using lmer with an inverse-transformed response
>>
>> 2) using glmer with family = gaussian(link = "inverse") and with family =
>> inverse.gaussian(link = "identity")
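>>
>> In code, the two alternatives were of roughly this form (a sketch; the
>> exact transform and any control settings are omitted):
>>
>> # 1) lmer on the inverse-transformed response
>> inv_lmer <- lmer(1/RT ~ AgeGroup*cue*cong + (cue + cong|PID), data = vsdataset)
>>
>> # 2) glmer with the two families mentioned above
>> #    (RT rescaled from ms to s for the inverse link, see below)
>> inv_glmer  <- glmer(RT/1000 ~ AgeGroup*cue*cong + (cue + cong|PID),
>>                     data = vsdataset, family = gaussian(link = "inverse"))
>> igau_glmer <- glmer(RT ~ AgeGroup*cue*cong + (cue + cong|PID),
>>                     data = vsdataset, family = inverse.gaussian(link = "identity"))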
>>
>>
>> #1 seems problematic because I need to convert the response variable back
>> to the original units, which not only flips any confidence intervals but
>> also makes them asymmetric. I'm not sure converting these CIs is even
>> appropriate, since they were computed on a different scale/distribution. I
>> also don't know of any way to validly convert the standard error back,
>> since it is certainly not valid after back-transformation.
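>>
>> To illustrate the flipping and asymmetry with made-up numbers on the 1/RT
>> scale:
>>
>> inv_est <- 1.25e-3              # hypothetical estimate on the 1/RT (per-ms) scale
>> inv_ci  <- c(1.10e-3, 1.40e-3)  # hypothetical symmetric 95% CI on that scale
>> 1/inv_est                       # 800 ms
>> rev(1/inv_ci)                   # c(714.3, 909.1) ms: the bound order flips and the
>>                                 # interval is no longer symmetric about 800 ms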
>>
>> #2 gave me some issues. First, when using gaussian(link = "inverse") I had
>> to scale RT down by a factor of 1000 (from ms to s), otherwise I would get
>> an error about the downdated VtV not being positive definite. After
>> dividing RT by 1000 the fit ran, but the model did not fully converge (I
>> think the max absolute gradient was approximately .02). I re-ran the model
>> after changing the contrasts on my variables from the default dummy coding
>> to effect coding (using contr.sum). The same thing happened, except this
>> time the max gradient was a little higher (about .0375), and in addition I
>> got the "model is nearly unidentifiable" warning due to a large
>> eigenvalue. When I ran the model with inverse.gaussian(link = "identity"),
>> it worked without scaling RT down by 1000, but I got a bunch of optimizer
>> warnings, so I scaled it down, and this time it wasn't able to converge
>> because the max absolute gradient was about .0247.
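>>
>> For reference, the contrast change was along these lines, and switching
>> the optimizer via glmerControl is something I could still try (sketch
>> only, not verified):
>>
>> contrasts(vsdataset$AgeGroup) <- contr.sum(2)
>> contrasts(vsdataset$cue)      <- contr.sum(3)
>> contrasts(vsdataset$cong)     <- contr.sum(2)
>>
>> inv_glmer2 <- glmer(RT/1000 ~ AgeGroup*cue*cong + (cue + cong|PID),
>>                     data = vsdataset, family = gaussian(link = "inverse"),
>>                     control = glmerControl(optimizer = "bobyqa",
>>                                            optCtrl = list(maxfun = 2e5)))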
>>
>>
>> Any help on this would be greatly appreciated!
>>
>>
>> - Sam
>>
>>
>>
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
>



