[R-sig-ME] Version of lme4 on R-forge

Greg Snow Greg.Snow at imail.org
Thu Apr 24 17:45:01 CEST 2008


Doug,

I've been thinking about this for a while and have a possible suggestion
for the p-value issue (maybe you have already considered what I am about
to suggest and decided against it; if so, feel free to ignore it).
This is more along the lines of making the p-value addicts happy (or at
least getting them off your back) than a suggestion of best statistical
practice.

For those of us who started with statistics before computers were
common, we ended up with p-values for t-tests given only as somewhere in
a range (e.g. 0.01 < p < 0.05).  Since you don't have a way to come up
with a p-value for the fixed effects that you are happy with, maybe
there is a way to come up with a range that you would be comfortable
with (my initial thought was 0 <= p <= 1, so that at least no one could
complain that nothing was printed).  If I remember correctly, you have
said that the commonly used (SAS) degrees of freedom result in a p-value
that is anti-conservative, so that could be your lower bound.  One
possibility for the upper bound would be treating all the random effects
like fixed effects (subtracting 1 from the degrees of freedom for each
random-effects group).  That could handle the df problem for the
conservative estimate if you are happy with the F approximation.
However, I think I remember you stating that the F does not fit well in
many cases, so that may not be conservative enough; in that case I would
expect a bound based on Chebyshev's inequality to be conservative (and
then those who are not convinced, without a p-value, that a t ratio of
10 is unlikely to be due to chance would have something to point at).
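To make that concrete, here is a rough sketch of how the bracket might
be computed (nothing that exists in lme4; the function name and the df
arguments are made up for illustration):

## Hypothetical illustration only -- not part of lme4.
## df_anticons: the (anti-conservative, SAS-style) denominator df;
## df_cons: a conservative df, after subtracting 1 for each
## random-effects group as if the random effects were fixed.
p_value_range <- function(tval, df_anticons, df_cons) {
  ## Lower end of the range: two-sided p from the anti-conservative df.
  p_lo <- 2 * pt(abs(tval), df = df_anticons, lower.tail = FALSE)
  ## Upper end: two-sided p from the conservative df.
  p_hi <- 2 * pt(abs(tval), df = df_cons, lower.tail = FALSE)
  ## Distribution-free fallback from Chebyshev's inequality:
  ## P(|T| >= t) <= 1/t^2 for any distribution with finite variance.
  p_cheb <- min(1, 1 / tval^2)
  c(lower = p_lo, upper = p_hi, chebyshev = p_cheb)
}

## Example: a t ratio of 10 with candidate df of 200 and 20
p_value_range(10, df_anticons = 200, df_cons = 20)

So even the Chebyshev bound says a t ratio of 10 has p <= 0.01, no
matter what the true reference distribution is.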

So if the summary method returned, instead of a single p-value, a range
that the p-value is likely to be in (still an approximation), then a
range like 0.001 < p < 0.01 would imply significance, 0.2 < p < 0.5
would imply non-significance, and 0.02 < p < 0.2 would tell the user
that they need to look further for a better test ("hey, what is this
mcmcsamp thing?" ...).
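A printout along those lines could be produced with something like the
following (again purely hypothetical, not an actual lme4 method):

## Hypothetical helper that turns a (lower, upper) p-value bracket into
## the kind of verdict described above.
report_p_range <- function(p_lower, p_upper, alpha = 0.05) {
  verdict <- if (p_upper < alpha) {
    "significant"
  } else if (p_lower > alpha) {
    "not significant"
  } else {
    "inconclusive -- look further (e.g. mcmcsamp)"
  }
  sprintf("%.3g < p < %.3g (%s)", p_lower, p_upper, verdict)
}

report_p_range(0.001, 0.01)   # "0.001 < p < 0.01 (significant)"
report_p_range(0.02, 0.2)     # straddles 0.05, so inconclusive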

And if people still complain, you can blame it on me :-)

Just a thought, hope it helps,

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
(801) 408-8111
 
 

> -----Original Message-----
> From: r-sig-mixed-models-bounces at r-project.org 
> [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf 
> Of Douglas Bates
> Sent: Thursday, April 24, 2008 7:18 AM
> To: R Mixed Models
> Subject: [R-sig-ME] Version of lme4 on R-forge
> 
> There has been some discussion on this list about what is the 
> "most recent" version of the lme4 package.  The version on 
> CRAN is comparatively old.  Several months ago I made some 
> radical changes in the internal representation of the model 
> and I am still working on providing all the earlier 
> capabilities under this new representation.
> This is the version on R-forge.  It is much more advanced 
> than the version on CRAN in the design and even in the theory 
> but there are still areas where its functionality is 
> incomplete.  In particular, the mcmcsamp function in the 
> R-forge version doesn't work well for models where some of 
> the variance components are near zero.  I think I have a way 
> out of that but it will involve more development and coding 
> and testing.
> 
> If I release the R-forge version to CRAN some of the code 
> that is documented in books like Harald Baayen's "Analyzing 
> Linguistic Data"
> and Gelman and Hill's "Data Analysis Using Regression and 
> Multilevel/Hierarchical Models" will cease to function or, 
> worse, produce incorrect results.
> 
> In general I think that users are better off with the version 
> on R-forge except for the long-standing problem of how to 
> come up with p-values on fixed-effects terms.  This is why I 
> get frustrated with this issue of p-values and degrees of 
> freedom.  There are many good things that could be done with 
> the version of lme4 on R-forge.  In particular, the ability 
> to fit models to large data sets with crossed or partially 
> crossed factors for random effects is revolutionary.
> Other software can't do that.  But that doesn't matter.  The 
> only important issue is being able to produce the "correct" 
> degrees of freedom on some tinker-toy textbook example.
> 
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list 
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
> 



