[R-sig-ME] Combining MCMCglmm estimates
Paul.Johnson at glasgow.ac.uk
Wed Oct 10 12:56:52 CEST 2012
> Why are different starting values important?
> Shouldn't burn-in make the 10 chains independent enough?
> The idea that different starting points are needed would, if I
> understand the rationale correctly, imply that the chains are
> better in the end than in the beginning. Is that the point?
I think it's a precaution. Assuming that you have ended up with homogeneous-looking samples from different runs, you'll be more confident (though never certain, of course) that they've converged if they started from different points in parameter space. For example, there might be local optima where chains could get stuck, and this problem would be much more likely to be discovered when starting from different values. I don't know how likely local optima are in practice with a typical MCMCglmm model.
However, I don't see that 10 samples of (effective) size 1000 from the same starting values, with sufficient burn-in, are any worse than 10,000 samples from a single run. So my feeling is that using different starting values is always worthwhile (given how easy it is), but not strictly essential.
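To illustrate why pooling is harmless once the chains are good, here is a small base-R sketch. The "chains" are simulated stand-ins for real MCMCglmm output (the target mean and standard deviation are made up for the example):

```r
# Ten hypothetical chains of 1,000 draws each from the same posterior;
# in practice these would come from ten repeated MCMCglmm() runs.
set.seed(1)
chains <- replicate(10, rnorm(1000, mean = 2, sd = 0.5), simplify = FALSE)

# Pooling is just concatenation: ten samples of size 1,000
# become one sample of size 10,000.
pooled <- unlist(chains)
length(pooled)                      # 10000

# Posterior summaries from the pooled sample.
mean(pooled)                        # posterior mean estimate
quantile(pooled, c(0.025, 0.975))   # 95% credible interval
```

If each run has really converged to the same posterior, summaries from the pooled sample are the same as those from one long run of equivalent size.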
From: r-sig-mixed-models-bounces at r-project.org [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf Of Hans Ekbrand
Sent: 10 October 2012 11:19
To: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] Combining MCMCglmm estimates
On Mon, Oct 08, 2012 at 10:45:55AM +0100, Paul Johnson wrote:
> Hi Davina,
> I haven't actually merged runs, so the following isn't based on experience. I'm also not aware of the methods for combining SEs from different imputed data sets that you mention. However I have the feeling that you don't need them if you have MCMC output from each data set.
> Leaving aside imputation for the moment...
> Let's say you've run the same model 10 times from the same data set, giving 10 sets of MCMC output, where each output is a sample from the joint posterior distribution of the model parameters. If these are "good" samples from the posterior, then you can combine them and treat them as a single MCMC sample. By "good" I mean they have started from different starting value sets and burned in for long enough to forget these values, they are large enough (e.g. >=1000 independent samples), and they have converged (you can check this visually by plotting the chains over each other - see plot(mcmc.list(...)) in the coda package).
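The visual check mentioned above can be sketched with coda alone. The chains below are simulated stand-ins for the $Sol (fixed-effect) components of real MCMCglmm fits, and the parameter names are hypothetical:

```r
library(coda)

# Two hypothetical chains sampled from the same posterior; in practice
# each would be m$Sol from a separate MCMCglmm() run.
set.seed(42)
chain1 <- mcmc(matrix(rnorm(2000), ncol = 2,
                      dimnames = list(NULL, c("(Intercept)", "x"))))
chain2 <- mcmc(matrix(rnorm(2000), ncol = 2,
                      dimnames = list(NULL, c("(Intercept)", "x"))))

chains <- mcmc.list(chain1, chain2)

plot(chains)          # overlaid trace and density plots, one colour per chain
summary(chains)       # pooled posterior summaries across chains
effectiveSize(chains) # combined effective sample size per parameter
```

If the traces overlap and mix well, the chains can be treated as one sample; gelman.diag(chains) gives a numerical version of the same check.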
Why are different starting values important? Shouldn't burn-in make the 10 chains independent enough?
The reason I'm asking is that I want to know what you _have to_ do in order to use chains that originate from running MCMC on clusters or multicore processors.
Different starting points are not very hard to provide, but it would be good to know whether they are important or not.
The idea that different starting points are needed would, if I understand the rationale correctly, imply that the chains are better in the end than in the beginning. Is that the point?
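For the multicore case, running one chain per core is straightforward with the base 'parallel' package. The sketch below uses a toy random-walk Metropolis sampler targeting N(0,1) as a stand-in for a real model fit; the starting values, iteration counts, and sampler are illustrative only (note that mclapply forks and so needs mc.cores = 1 on Windows):

```r
library(parallel)

# Toy stand-in for a model fit: each worker runs its own chain from a
# different starting value and returns the post-burn-in draws.
run_chain <- function(start, n = 5000, burnin = 1000) {
  draws <- numeric(n)
  x <- start
  for (i in seq_len(n)) {
    # Random-walk Metropolis step targeting a standard normal.
    prop <- x + rnorm(1)
    if (log(runif(1)) < dnorm(prop, log = TRUE) - dnorm(x, log = TRUE))
      x <- prop
    draws[i] <- x
  }
  draws[-seq_len(burnin)]   # discard burn-in
}

starts <- c(-10, -1, 1, 10)   # deliberately over-dispersed starting points
chains <- mclapply(starts, run_chain, mc.cores = 2)

# After checking convergence, treat the runs as one big sample.
pooled <- unlist(chains)
```

With a real MCMCglmm fit, run_chain would instead call MCMCglmm() (different starting values can be passed via its 'start' argument) and return the stored chains for pooling.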
Hans Ekbrand (http://sociologi.cjb.net) <hans at sociologi.cjb.net>
R-sig-mixed-models at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
The University of Glasgow, charity number SC004401