[R-sig-ME] Can an uninformative prior be too diffuse?

Iain Stott iainmstott at gmail.com
Wed May 14 13:37:11 CEST 2014


Jarrod, you hit the nail on the head. I had a random term with only a
couple of levels (taxonomic group), which I was planning to replace
with a phylogeny anyway. As soon as I took that variable out of the
models, they started running brilliantly.
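
In case it helps anyone hitting the same thing: the check that found
the culprit was just counting the levels of each candidate grouping
factor (a sketch; dat and the column names are placeholders for my
data):

    ## levels per grouping factor; a factor with only 2-3 levels gives
    ## the prior huge leverage over that variance component
    sapply(dat[c("study", "species", "taxon")],
           function(x) nlevels(factor(x)))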

Thanks for your help, and to the others who emailed me with similar advice!

Iain

- - - - - - - - - - - -
Dr. Iain Stott
Environment and Sustainability Institute
University of Exeter, Cornwall Campus
Tremough, Treliever Road
Penryn, Cornwall, TR10 9FE, UK.
- - - - - - - - - - - -
http://www.exeter.ac.uk/esi/
http://biosciences.exeter.ac.uk/cec/



On 12 May 2014 16:09, Jarrod Hadfield <j.hadfield at ed.ac.uk> wrote:
> Hi Iain,
>
> It's a bit hard to diagnose without seeing the data. Would it be possible to
> post them? Is it possible you have some random terms with very few levels?
>
> Cheers,
>
> Jarrod
>
>
> Quoting Iain Stott <iainmstott at gmail.com> on Mon, 12 May 2014 11:54:50
> +0100:
>
>> Hi R-users
>>
>> I'm having an interesting problem using MCMCglmm for a meta-analysis.
>>
>> I run the models as I normally would, with diffuse priors on the
>> fixed and random effects (fixed: mu=0, V=10e8; random: V=1, nu=0.001;
>> Gaussian model), and the posteriors I'm getting out of the models are
>> unlike anything I've seen before. The range is very wide but the
>> variance is very low, so the fixed-effect posteriors are a spike
>> around the mean and the random-effect posteriors are heavily
>> truncated at 0. A handful of coefficient sets take extreme values
>> that seem to make no sense at all.
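>>
>> Concretely, the model and priors look something like the sketch
>> below (dat, y and group are placeholder names, and vi stands for the
>> known sampling variances of the effect sizes):
>>
>>     library(MCMCglmm)
>>     ## diffuse priors: flat-ish normal on the fixed effects, weak
>>     ## inverse-Wishart on the variance components
>>     prior <- list(B = list(mu = 0, V = 10e8),
>>                   G = list(G1 = list(V = 1, nu = 0.001)),
>>                   R = list(V = 1, nu = 0.001))
>>     m <- MCMCglmm(y ~ 1, random = ~ group, data = dat,
>>                   mev = dat$vi,  # sampling variances for the meta-analysis
>>                   prior = prior, family = "gaussian")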
>>
>> I wonder why this is: running for longer does not fix the problem,
>> and the chains aren't autocorrelated. Parameter expansion does not
>> help the random coefficients. I'm always seeing a handful of samples
>> that are orders of magnitude larger than the data themselves (which
>> are real weighted mean differences, roughly -2 to 2), while the vast
>> majority are drawn from a much more sensible range.
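>>
>> (For concreteness, the checks are along these lines, with m the
>> fitted model; the parameter-expanded prior is a sketch, and
>> alpha.V = 1000 is just an illustrative value:)
>>
>>     autocorr.diag(m$Sol)  # autocorrelation of the fixed-effect chains
>>     autocorr.diag(m$VCV)  # autocorrelation of the variance components
>>     plot(m$VCV)           # trace plots show the occasional huge spikes
>>
>>     ## parameter expansion on the random-effect variance:
>>     priorPX <- list(G = list(G1 = list(V = 1, nu = 1,
>>                                        alpha.mu = 0, alpha.V = 1000)),
>>                     R = list(V = 1, nu = 0.001))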
>>
>> It seems the only way to get better posteriors is to make the priors
>> less diffuse (decrease V for the fixed effects, increase nu for the
>> random effects). This makes sense, but I'm not comfortable doing it
>> when the posteriors are so sensitive to the prior variances, and I
>> don't know what it would mean for the interpretation of the model.
>> I'm not convinced that a prior can be "too" diffuse, and I'm not
>> sure why these extreme samples are being accepted. But then, perhaps
>> allowing the model to sample from a range so much larger than the
>> data just doesn't make sense... although, as I say, I would have
>> expected these extreme values to be rejected on the basis of the
>> likelihood.
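>>
>> (The "less diffuse" version that behaves is something like the
>> following; the exact values are illustrative:)
>>
>>     prior2 <- list(B = list(mu = 0, V = 100),            # tighter fixed effects
>>                    G = list(G1 = list(V = 1, nu = 1)),   # larger nu
>>                    R = list(V = 1, nu = 0.001))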
>>
>> If anyone can shed some light, it would be greatly appreciated. For
>> now I'll have to forgo some of the random effects and forge ahead
>> with gls...
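>>
>> (By gls I mean something like the following, again with placeholder
>> names; varFixed(~ vi) makes the residual variance proportional to
>> the known sampling variances vi:)
>>
>>     library(nlme)
>>     g <- gls(y ~ 1, data = dat, weights = varFixed(~ vi))
>>     summary(g)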
>>
>>
>> Iain
>>
>> - - - - - - - - - - - -
>> Dr. Iain Stott
>> Environment and Sustainability Institute
>> University of Exeter, Cornwall Campus
>> Tremough, Treliever Road
>> Penryn, Cornwall, TR10 9FE, UK.
>> - - - - - - - - - - - -
>> http://www.exeter.ac.uk/esi/
>> http://biosciences.exeter.ac.uk/cec/
>>
>> _______________________________________________
>> R-sig-mixed-models at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>
>>
>
>
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
>


