[R-sig-ME] understanding log-likelihood/model fit

Daniel Ezra Johnson danielezrajohnson at gmail.com
Thu Aug 21 14:35:57 CEST 2008


Thank you all for your help.

I'm now referring back to the discussion in Chapter 2 of Pinheiro and
Bates and understanding this much better.
Well, a little better.

In the figures on pp. 73-74, the middle panels (log-residual norm)
seem to illustrate what Douglas Bates has described here as

> "the penalty depend[ing] on the (unconditional) variance
> covariance matrix of the random effects.  When the variances are small
> there is a large penalty.  When the variances are large there is a
> small penalty on the size of the random effects."

And the bottom panels (log-determinant ratio) seem to illustrate

> The measure of model
> complexity, which is related to the determinant of the conditional
> variance of the random effects, given the data, [which] has the opposite
> behavior.  When the variance of the random effects is small the model
> is considered simpler.  The simplest possible model on this scale is
> one without any random effects at all, corresponding to a variance of
> zero.

In these plots, as you move all the way to the right, Delta and theta
grow toward their upper limit, which I believe means the random-effect
variance goes to zero (relative to the residual variance).
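
To make that limit concrete, here is a minimal sketch using a recent
version of lme4 (my own illustration, not the book's code). Note that
lme4's theta is the *relative standard deviation* of the random
effects, roughly 1/Delta, so Delta going to infinity here corresponds
to lme4's theta going to zero:

library(lme4)

## Profiled ML deviance as a function of theta (the relative SD of
## the random intercepts):
dd <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy,
           REML = FALSE, devFunOnly = TRUE)

## At theta = 0 the random effects are pinned to zero, and the
## deviance agrees with the fixed-effects-only fit:
dd(0)
-2 * logLik(lm(Reaction ~ Days, sleepstudy))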

As you move to the left, model complexity increases, but model
fidelity improves for a time; the maximum of the log-likelihood (top
panels) falls where the two effects balance.
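
Continuing the same sketch, evaluating dd() on a grid of theta values
traces out that trade-off, with the peak of the profiled
log-likelihood at an intermediate value:

theta.grid <- seq(0.01, 3, length.out = 101)
ll <- -vapply(theta.grid, dd, numeric(1)) / 2   # deviance = -2*logLik

## The peak is the ML estimate of theta; to its right (large theta,
## hence small Delta) the complexity term keeps growing, while to its
## left (small theta, large Delta) the penalized residuals grow:
theta.grid[which.max(ll)]
plot(theta.grid, ll, type = "l",
     xlab = "theta (relative SD)", ylab = "profiled log-likelihood")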

If theta going to infinity represents zero random effects, could you
say that theta going to zero represents random effects that are no
longer distinguishable from fixed effects?
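
The intuition behind that question can be checked in the balanced
one-way case, where the conditional mode (BLUP) of each group effect
is the group's least-squares deviation scaled by n / (n + Delta^2)
(a textbook identity, not anything specific to the book's examples).
As Delta -> 0 the factor approaches 1 and the "random" effects
coincide with unshrunken fixed-effect estimates; as Delta -> Inf they
are shrunk entirely to zero:

## Shrinkage factor for a balanced one-way layout, n obs per group,
## with Delta^2 = sigma^2 / sigma_b^2 (P&B's relative precision):
shrink <- function(Delta, n) n / (n + Delta^2)
shrink(Delta = c(0.01, 1, 100), n = 10)
## ~1 (fixed-effect-like)    0.909    ~0 (no random effects)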

D



