[R-sig-ME] Fwd: same old question - lme4 and p-values
Jonathan Baron
baron at psych.upenn.edu
Sat Apr 5 13:21:19 CEST 2008
On 04/05/08 12:10, Reinhold Kliegl wrote:
> Here is a section that worked in Kliegl, Risse, & Laubrock (2007, J
> Exp Psychol:Human Perception and Performance, 33, 1250-1251).
This is extremely helpful.
> In perspective, I think the p-value problem will
> simply go away.
I'm not sure what you mean here. If you mean to replace them with
confidence intervals, I have no problem with that. But, as a journal
editor, I am afraid that I will continue to insist on some sort of
evidence that effects are real. This can be done in many ways. But
too many authors submit articles in which the claimed effects can
result from random variation, either in subjects ("participants*") or
items, and they don't correctly reject such alternative explanations
of a difference in means.
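(For instance, and only as a sketch with invented data and variable
names: a likelihood-ratio test between nested lmer fits is one way to
show that an effect is more than by-subject or by-item variation. I am
assuming a data frame d with columns rt, condition, subject, and item.

library(lme4)

## Crossed random intercepts for subjects and items, as in Baayen,
## Davidson & Bates; fit with ML (REML = FALSE) so the two models can
## be compared by a likelihood-ratio test.
m1 <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = d, REML = FALSE)
m0 <- lmer(rt ~ 1         + (1 | subject) + (1 | item), data = d, REML = FALSE)

## Likelihood-ratio test of the fixed effect of condition.
anova(m0, m1)

That comparison at least says whether the condition effect improves the
fit beyond what subject and item variation alone can explain.)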
I have noticed a kind of split among those who comment on this issue.
On the one side are those who are familiar with fields such as
epidemiology or economics (excluding experimental economics), where
the claim is often made that "the null hypothesis is always false
anyway, so why bother rejecting it?" These are the ones interested in
effect sizes, variance accounted for, and the like. They are correct
for that kind of research, but not all research is of that kind.
On the other side are those from (e.g.) experimental psychology,
where the name of the game is to design experiments that are so well
controlled that the null hypothesis will be true if the effect of
interest is absent. As a member of this group, when I read people
from the first group, I find it very discouraging. It is almost as if
they are saying that what I work so hard to try to do is impossible.
To get a little specific, although I found Gelman and Hill's book very
helpful on many points (and it does not deny the existence of people
like me), it is written largely for members of the first group. By
contrast, Baayen's book is written for people like me, as is the
Baayen, Davidson, and Bates article, "Mixed effects modeling with
crossed random effects for subjects and items."
I'm afraid we do need significance tests, or confidence intervals, or
something.
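(Again only a sketch, assuming a version of lme4 whose fitted models
have a confint() method: profile or bootstrap intervals on the fixed
effects of the model above are the confidence-interval version of the
same evidence.

## Profile confidence intervals for the fixed effects only.
confint(m1, parm = "beta_", method = "profile")

## Or a parametric bootstrap (slower):
## confint(m1, method = "boot", nsim = 1000)

If the interval for the condition effect excludes zero, that is the
kind of evidence I mean.)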
Jon
* On "participants" vs. "subjects" see:
http://www.psychologicalscience.org/observer/getArticle.cfm?id=1549
--
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page: http://www.sas.upenn.edu/~baron
Editor: Judgment and Decision Making (http://journal.sjdm.org)