[R-sig-ME] lmer: LRT and mcmcpvalue for fixed effects
rhbc at imm.dtu.dk
Tue Jul 15 10:33:18 CEST 2008
2008/7/15 Simon Blomberg <s.blomberg1 at uq.edu.au>:
> On Tue, 2008-07-15 at 13:43 +0800, Julie Marsh wrote:
>> Given that I am using 2 different tests for two different hypotheses I
>> still would have expected these p-values to be more similar.
> Well, as Pinheiro and Bates say in their book (worth reading!), the LRT
> for mixed effects models is anti-conservative. So your LRT p-value is
> almost certainly too small. The posterior p-value might be more
> accurate, if you accept the usual caveats re: priors and convergence
> etc. Also, when calculating p-values by hand using pchisq, you should
> probably use pchisq(..., lower.tail=FALSE) instead of 1-pchisq(...),
> which is inaccurate. The log.p option might also be useful if you really
> need to compare small probabilities. And why were you using pchisq with
> 0 df (which always == 1)? I don't understand that at all.
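[A small self-contained example of Simon's numerical point, not part of the original message: for large test statistics, 1 - pchisq() suffers catastrophic cancellation and returns exactly 0, while lower.tail = FALSE computes the upper tail directly.]

```r
## Tail probability for a large chi-square statistic on 1 df.
x <- 150

## Catastrophic cancellation: pchisq(x, 1) rounds to 1 in double
## precision, so subtracting it from 1 gives exactly 0.
1 - pchisq(x, df = 1)                    # 0

## Computing the upper tail directly keeps full accuracy.
pchisq(x, df = 1, lower.tail = FALSE)    # tiny but nonzero

## log.p = TRUE is useful when comparing very small probabilities.
pchisq(x, df = 1, lower.tail = FALSE, log.p = TRUE)
```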
Regarding the chi-square mixture distribution: that is appropriate when
testing a single variance component. There the ordinary test on one df
is conservative, because the null value lies on the boundary of the
parameter space. But Julie is not testing a variance component, so the
mixture is not appropriate here.
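For reference, in the one-variance-component case the boundary correction
amounts to halving the naive chi^2(1) p-value (a 50:50 mixture of chi^2(0)
and chi^2(1)). The statistic below is made up for illustration:

```r
## Hypothetical LRT statistic from dropping a single variance component.
lrt <- 2.71

## Naive chi-square(1) p-value -- conservative on the boundary:
p.naive <- pchisq(lrt, df = 1, lower.tail = FALSE)
p.naive                 # ~ 0.10

## 50:50 mixture of chi^2(0) and chi^2(1): halve the p-value.
0.5 * p.naive           # ~ 0.05
```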
I can think of three reasons why the p-values Julie obtains from the
likelihood ratio test and from the posterior sampling differ.
1) the extensive use of control parameters indicates that convergence
may be an issue. If one or both of the models have not converged, the
likelihood ratio test is obviously based on the wrong likelihoods and
will be misleading. I suppose the MCMC sampling will also be
inappropriate in this situation. The need for control parameters could
also indicate problems related to the data structure, such as severe
imbalance. Perhaps there is too little information on some parameters
and convergence is hard to achieve?
If the models converged nicely and problems with data or models are
unlikely, I would be inclined to trust the likelihood ratio test. It
is a test on one df with 3000+ observations, so as far as I know,
there should be no problems.
2) the MCMC sampling could be influenced by the priors or the chain
could get stuck in a specific region.
3) which version of lmer are you using? Douglas Bates recently posted
a message to this list reporting problems with MCMCsamp in the most
recent version of lme4.
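On the MCMC side, the posterior p-value of the subject line can be sketched
in base R. The mcmcpvalue() below follows one common definition (twice the
smaller tail proportion relative to zero) and the draws are fake normal
samples standing in for real mcmcsamp() output:

```r
## Two-sided posterior p-value for a single fixed-effect coefficient:
## twice the smaller of the two tail proportions around zero.
mcmcpvalue <- function(samp) {
  2 * min(mean(samp > 0), mean(samp < 0))
}

## Fake 'posterior' draws (stand-in for mcmcsamp() output): a
## coefficient centred at 1 with posterior sd 0.5.
set.seed(1)
samp <- rnorm(10000, mean = 1, sd = 0.5)
mcmcpvalue(samp)   # small: nearly all mass is above zero
```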
>> I am so
>> sorry that I can't post the data and the lmer output but I am bound by
>> confidentiality. <big sigh> I understand completely if it is not
>> possible to provide any help given this lack of further information.
>> I have eagerly read and re-read the rwiki help page .........
>> ....... but still am unable to explain why the results should be so
>> different. Much as I would love to argue against the reliance on
>> p-values I'm afraid I am a resigned pragmatist when it comes to trying
>> to get anything published. <sorry!> Needless to say I will swamp the
>> article with far more informative plots and CI's.
>> Any help would be very much appreciated.
>> kindest regards, julie marsh.
>> R-sig-mixed-models at r-project.org mailing list
> Simon Blomberg, BSc (Hons), PhD, MAppStat.
> Lecturer and Consultant Statistician
> Faculty of Biological and Chemical Sciences
> The University of Queensland
> St. Lucia Queensland 4072
> Room 320 Goddard Building (8)
> T: +61 7 3365 2506
> email: S.Blomberg1_at_uq.edu.au
> 1. I will NOT analyse your data for you.
> 2. Your deadline is your problem.
> The combination of some data and an aching desire for
> an answer does not ensure that a reasonable answer can
> be extracted from a given body of data. - John Tukey.
Rune Haubo Bojesen Christensen
Master Student, M.Sc. Eng.
Phone: (+45) 30 26 45 54
Mail: rhbc at imm.dtu.dk, rune.haubo at gmail.com
DTU Informatics, Section for Statistics
Technical University of Denmark, Build.321, DK-2800 Kgs. Lyngby, Denmark