[R-sig-ME] Sampling methods for MCMCglmm using cengaussian family
Jarrod Hadfield
j.hadfield at ed.ac.uk
Wed Oct 3 13:25:49 CEST 2012
The censored gaussian latent variable is updated using MH steps
(although I'm not sure now why I didn't use Gibbs updates - perhaps I
wrote the truncated normal sampler after I'd implemented censored
gaussian and it was easier just to use the general MH machinery used
for other distributions). For multitrait models that include a
mixture of non-gaussian and censored gaussian traits, or multiple
censored gaussian traits, the more general MH updates would be
required in any case.
Candidate values for the latent variable are centered at the current
value and sampled from a normal with variance V. The candidate value
can then be reflected at the censoring points (multiple times if
necessary) and the symmetry required for detailed balance is
maintained (I think and hope). This means that a whole series of
equally spaced proposal distributions can give optimal acceptance
ratios, but usually the adaptive MH sampler settles onto one of these
quite quickly. You could get the variance printed to screen:
After
/***************/
/* Adaptive MH */
/***************/
on L1734-6 of MCMCglmm.cc you could put:
if(itt==burnin && AMtuneP[k]==1){  /* last iteration of the adaptation phase */
  cs_print(propC[k], FALSE);       /* print the adapted proposal (co)variance */
}
which would print the variance of the proposal distribution at the end
of adaptation. This could be passed to the tune argument in further
runs of MCMCglmm. I will include the adapted proposal distribution in
the output of a MCMCglmm run in future releases.
Cheers,
Jarrod
Quoting Joshua Wiley <jwiley.psych at gmail.com> on Sun, 30 Sep 2012
12:04:25 -0700:
> Hmm, that makes sense, but I am not sure how to go about doing it.
> Okay, I am sure because I can see the code where it is done in C++,
> but I do not know an easy way and really loathe the idea of hacking
> source code, recompiling, finding an error, and cycling through that
> process until it works. I could be missing something because I am not
> the strongest at the theory underpinning these models.
>
> I did edit the R MCMCglmm function so I could look at all the output
> from the call to .C, but at least in the test case I created to try to
> mimic your example, there did not seem to be anything useful there.
>
> The process sounds like what the MICE package does, but in a
> Bayesian framework.
>
> I know Jarrod is a busy fellow, but he usually periodically gets to
> emails here, and if he sees this, I am sure he would have a better
> answer/direction for you to take as there is still the real
> possibility I am missing something silly.
>
> Good luck,
>
> Josh
>
> On Sun, Sep 30, 2012 at 11:50 AM, Robin Jeffries <rjeffries at ucla.edu> wrote:
>> Hi Joshua,
>>
>> Thank you for your response. I do have those Course Notes, but only skimmed
>> the technical details because I don't have any RE. I'll look further into it.
>>
>> Thank you for looking into extracting the proposal variance, I don't have
>> enough knowledge to look into or understand the guts of most programs,
>> especially if they're in C. I know I can provide a proposal distribution,
>> that's the entire point. I want to run this model for enough iterations such
>> that the proposal distribution is "good" in that the acceptance rate is ~25%
>> or so. Then I want to know what that proposal distribution is, so I can
>> restart the model using this good proposal distribution with no burnin.
>>
>> This probably sounds strange, but this model is only a step in a larger
>> cyclical algorithm (Sequential Regression Multiple Imputation (Raghunathan
>> 2001)) that models multiple variables, one iteration at a time. Y1 is
>> modeled, its results fed into the model for Y2, both those results are fed
>> into Y3.... until the results from Y2-Yp are fed back into a model for Y1.
>> So I need to draw 1 iteration at a time using a constant proposal
>> distribution that does not adapt.
>>
>> I was just hoping to avoid those re-calculations this time and have MCMCglmm
>> just tell me what a good proposal variance was instead of having to figure
>> it out myself :)
>>
>> FYI, The small priors were not intentional, essentially a typo. Thank you
>> for pointing it out.
>>
>> Anyhow, thank you again for helping me figure out how I'm going to do what I
>> need to do. I appreciate the time you spent on it.
>>
>>
>> -Robin
>>
>>
>
>
>
> --
> Joshua Wiley
> Ph.D. Student, Health Psychology
> Programmer Analyst II, Statistical Consulting Group
> University of California, Los Angeles
> https://joshuawiley.com/
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
>
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.