[R-sig-ME] post hocs for LMMs / GLMMs
Kay Cecil Cichini
Kay.Cichini at uibk.ac.at
Wed May 19 11:23:36 CEST 2010
Hello Adam,
thanks for your explanations.
Say I had a full model with f1 + f2 + f1:f2 plus a random term, and I was
interested only in the differences of level 1 vs. 2 of f2 within each level
of f1, that is f2.1 vs. f2.2 within f1.A, within f1.B, etc. Would tests on
such pre-specified contrasts not yield about the same results as the
corresponding t-/z-tests on f2.1 vs. f2.2 with each of f1.A, f1.B, etc. in
turn at the intercept?
Like this, testing the effect of f2 within f1.A (intercept = f1.A, f2.1):

Fixed effects:
                    Estimate  z value  Pr(>|z|)
(Intercept) [A, 1]    0.5687    4.283   < 0.001 ***
B                     1.1225    6.875   < 0.001 ***
C                     1.3807    8.622   < 0.001 ***
D                     1.7949    8.301   < 0.001 ***
2                     0.0482    0.309   0.75724      <- f2.1 vs. f2.2 within A
B:2                  -0.4950   -2.656   0.00790 **
C:2                  -0.5767   -3.191   0.00142 **
D:2                  -0.4922   -2.235   0.02542 *
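(For concreteness, a minimal sketch of the reparameterization I mean; dat,
y, and group are placeholder names for my data, response, and random
grouping factor:)

library(lme4)
m1 <- glmer(y ~ f1 * f2 + (1 | group), data = dat, family = binomial)
summary(m1)  # intercept = (f1.A, f2.1); row "f22" tests f2.1 vs. f2.2 within A
dat$f1 <- relevel(dat$f1, ref = "B")  # put B at the intercept instead
m2 <- update(m1)  # now row "f22" tests f2.1 vs. f2.2 within B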
Yours,
Kay
Hi Kay,
The general way to do what you want is to pre-define the comparisons of
interest. For example, if you thought A would be higher than B, C, and D,
then you would attach a contrast to the factor f1 in your data.frame that
compares A to B, C, and D. If there is a contrasts() attribute on f1, then
when you fit your model (say, using lmer from the lme4 package), R will
automatically estimate and test that contrast specifically.
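For example (a minimal sketch; dat, y, and group are placeholder names, and
the weights are just one sensible choice):

## compare A against the average of B, C, and D
contrasts(dat$f1) <- cbind(AvsBCD = c(3, -1, -1, -1))
## R fills in the remaining columns of the contrast matrix itself
library(lme4)
m <- lmer(y ~ f1 * f2 + (1 | group), data = dat)
summary(m)  # the f1AvsBCD coefficient estimates and tests that contrast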
If you don't do this, most functions will use the default treatment
contrasts, which compare each level of a factor to the first level (B vs.
A, C vs. A, D vs. A in the above case).
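You can inspect those defaults directly:

contrasts(factor(LETTERS[1:4]))  # treatment contrasts: each level vs. the first (A)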
...but if you don't have any idea or theory of how A, B, C, and D differ, I
would recommend that you treat your analysis as exploratory: just look at
the differences without testing them, see what's there, try to come up with
a theory, and then go collect more data. When you're just "comparing
levels" or looking at effects, there's a lot more going on than you'd at
first think. In this case, the comparisons implicitly on offer for A alone
are A vs. B, A vs. C, A vs. D, A vs. B and C, A vs. B and D, A vs. C and D,
A vs. B, C, and D, and A vs. 0 -- 8 comparisons. The same are available for
B, C, and D, resulting in 32 comparisons. That's a lot! With an
interpretation alpha of .05, you may well get a couple of false positives.
That is why the "reparameterization" approach is ill advised: it greatly
inflates your likelihood of finding something by chance alone.
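A back-of-the-envelope illustration (assuming, roughly, independent tests
at alpha = .05):

1 - (1 - 0.05)^32  # ~0.81: better than an 80% chance of at least one false positive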
So really, the best thing to do here is to encode the things you hope to
find and test those; if you see anything else, call it a theoretically
useful fluke. The effect is positive/negative, but you can't say it's
significant, and at that point you have to replicate it anyway, so a
precise p-value isn't super useful.
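(If you do pre-specify just the four f2.1 vs. f2.2 comparisons, glht() from
the multcomp package can test exactly those with a multiplicity adjustment.
A sketch, with dat, y, and group again as placeholder names; fitting on the
combined factor keeps the contrasts easy to write:)

library(lme4)
library(multcomp)
dat$f12 <- interaction(dat$f1, dat$f2)  # levels A.1, B.1, ..., D.2
m <- glmer(y ~ f12 + (1 | group), data = dat, family = binomial)
summary(glht(m, linfct = mcp(f12 = c("A.1 - A.2 = 0",
                                     "B.1 - B.2 = 0",
                                     "C.1 - C.2 = 0",
                                     "D.1 - D.2 = 0"))))
## adjusted p-values now cover only these 4 pre-specified tests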
--Adam
On Wed, 19 May 2010, Kay Cecil Cichini wrote:
> p.s.: ..."different parameterizations" may be the wrong term, as the
> parameters actually stay the same and I only change the intercept level.
>
Kay Cecil Cichini wrote:
>> Hello,
>>
>> I have several LMMs and GLMMs with two nominal fixed factors, f1: A, B,
>> C, D and f2: 1, 2. Now I need inference on the differences of level 1
>> vs. 2 of f2 within each level of f1, or, vice versa, on the differences
>> A/B, A/C, A/D, B/C, etc. within each level of f2.
>>
>> Before I try glht(): isn't it justified to examine the model's t-tests
>> with re-ordered levels of the nominal variables, by which each of these
>> comparisons can be obtained from a different parameterization? This
>> seems to be the most convenient way, and so far I have found no one to
>> explain to me why it may or may not be valid.
>>
>> Best regards,
>> Kay