[R-sig-ME] F and P values for random factors
Alex Fine
abfine at gmail.com
Sat Feb 6 01:05:17 CET 2016
Hi Mahendra,
You can assess the significance of both fixed and random factors using
likelihood ratio tests. Say you want to test the significance of predictor
A in a model with predictors A and B and random slopes for A. You can do:
full_model = lmer(y ~ A + B + (1 + A | random_thing), REML = FALSE)
model_without_A = lmer(y ~ B + (1 + A | random_thing), REML = FALSE)
anova(full_model, model_without_A)

(Note that lmer() always needs at least one random-effects term, and models
compared on their fixed effects should be fit by maximum likelihood,
REML = FALSE; anova() will refit them with ML for you if you forget.)
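For instance, with the sleepstudy data that ships with lme4 (here Days plays the role of A; the dataset and variable names are just illustrative):

```r
library(lme4)

## Fit both models by maximum likelihood (REML = FALSE) so the
## likelihood ratio test of the fixed effect is valid.
full_model <- lmer(Reaction ~ Days + (1 + Days | Subject),
                   data = sleepstudy, REML = FALSE)
model_without_days <- lmer(Reaction ~ 1 + (1 + Days | Subject),
                           data = sleepstudy, REML = FALSE)

## Likelihood ratio test for the fixed effect of Days
anova(full_model, model_without_days)
```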
The anova() function in this case will return a chi-squared statistic (df =
the difference in the number of parameters between the two models) and a
p-value. The same procedure can be used for random effects, e.g.:
full_model_2 = lmer(y ~ A + B + (1 + A | random_thing))
model_without_random_slope_for_A = lmer(y ~ A + B + (1 | random_thing))
anova(full_model_2, model_without_random_slope_for_A)
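The same comparison for a random slope, again sketched with lme4's sleepstudy data (REML fits are fine here because the fixed effects are identical in both models):

```r
library(lme4)

full_model_2 <- lmer(Reaction ~ Days + (1 + Days | Subject),
                     data = sleepstudy)
model_without_random_slope <- lmer(Reaction ~ Days + (1 | Subject),
                                   data = sleepstudy)

## refit = FALSE keeps the REML fits rather than refitting with ML,
## which is appropriate when only the random effects differ.
anova(full_model_2, model_without_random_slope, refit = FALSE)
```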
This works because twice the difference in log-likelihoods of nested models
asymptotically follows a chi-squared distribution. (One caveat for the random-
effects case: the null hypothesis puts the variance on the boundary of the
parameter space, so this test tends to be conservative.)
See: https://en.wikipedia.org/wiki/Likelihood-ratio_test
Or maybe that wasn't what you were asking at all.
Also I think you forgot to attach the file.
Hope that helps!
Alex
On Thu, Feb 4, 2016 at 8:32 PM, Mahendra Dia <diamahendra at gmail.com> wrote:
> Hi.
>
> I am reaching out to you to learn how to compute F ratio and P values for my
> experiment where all the factors are treated as random factors.
> Please see the attached file where I explained my treatments and sample
> data.
>
> I thank you in advance.
>
> Sincerely,
> Mahendra-
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
--
Alex Fine
Ph. (336) 302-3251
web: abfine.github.io/