# [R-sig-ME] linear mixed models explained variances

Paul Johnson paul.johnson at glasgow.ac.uk
Wed Aug 16 17:20:14 CEST 2017

```Hi Lorin,

> I have made models, which are of the structure:
>
>           fit18=lmer(recru$observed~recru$predicted*total$MMIdata+(1|total$nursery))

A side point: always use the data argument, so that the fitting function takes all the data from the same data frame:

fit18=lmer(observed~predicted*MMIdata+(1|nursery), data = recru.total.merge)

This is less error prone than what you’ve done.

> First of all, I would like to determine the significance of the model. So far I have only been able to determine the significance of the separate factors. But I have read that p-values don’t really work with linear mixed models. So how can I find the significance of the model?

What do you mean by the significance of the model? This implies that you want to calculate a p-value for some null hypothesis. Which null hypothesis? The null hypothesis that all the fixed-effect parameters are zero? You can get this from
anova(fit18, update(fit18, . ~ (1|nursery)))  # the models will be automatically refitted using ML
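
Written out in full (a sketch, assuming the single-data-frame fit above), this is a likelihood-ratio test against an explicit null model:

fit18.null <- update(fit18, . ~ (1|nursery))  # intercept-only fixed part, same random effect
anova(fit18, fit18.null)  # anova on merMod objects refits both models with ML before comparing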

> Second of all, I would like to determine the variance explained by the separate factors, I have so far:
>
>  1.  Using r.squaredGLMM(fit18) from the MuMIn package, you get a conditional and a marginal R². I have taken the conditional as the variance explained by my whole model. And I have taken the marginal as the variance explained by the fixed effects. Is this correct or did I make false assumptions?

R2c is frequently interpreted as the variance explained by both the fixed and random effects, although I prefer to think of the random effect as a residual (unexplained) variance at a higher level (here unexplained variation between nurseries).

>  2.  Can I assume that the variance explained by the random effects is just the subtraction of conditional and marginal?

Yes, that’s the proportion of variance explained by the random effects.
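
As a sketch (the column names assume a recent version of MuMIn, where r.squaredGLMM() returns a matrix with columns R2m and R2c):

r2 <- MuMIn::r.squaredGLMM(fit18)
r2[, "R2c"] - r2[, "R2m"]  # proportion of variance attributed to the random effects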

>  3.  As I have three fixed factors in my model (two + interaction effect), I would like to see how much each of the fixed variables explains. I have followed a method I found online, but I am not that sure about the validity of this method. What I have used is:
>                      fixedvariance1 = whole variance*fvariance1/(rvariance+fvariance1+fvariance2)

I don’t understand the terms in this equation. To me, an intuitive way of gauging the contribution of a fixed effect is to fit the model with and without that effect and compare (subtract) either the marginal R-squared values or the fixed-effect variances. The total variance of the fixed effects can be calculated as:

var(model.matrix(fit) %*% fixef(fit))

This should give the same result as

var(predict(fit, re.form = ~ 0))

[Although strictly I think the model sums of squares should be compared instead, which will just be var(…) * (n - 1)?]
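
For example, to gauge the contribution of the interaction term (a sketch, again assuming the single-data-frame fit above):

fit18.noint <- update(fit18, . ~ . - predicted:MMIdata)  # drop the interaction
fe.var.full <- var(model.matrix(fit18) %*% fixef(fit18))
fe.var.red  <- var(model.matrix(fit18.noint) %*% fixef(fit18.noint))
fe.var.full - fe.var.red  # fixed-effect variance attributable to the interaction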

Good luck,
Paul

```