[R-sig-ME] Binomial GLMM vs GLM question (Ken Beath)

Andrew Robinson A.Robinson at ms.unimelb.edu.au
Sun May 18 06:14:46 CEST 2008


Hi Austin,

On Sat, May 17, 2008 at 01:31:55PM -0400, Austin Frank wrote:
> On Fri, May 16 2008, Andrew Robinson wrote:
> 
> > For example, if you make the effects random then you're effectively
> > marginalizing them, whereas if you make them fixed you're forced to
> > condition on them
> 
> Hello all!
> 
> I'm cherry-picking this one line from a recent discussion because it's
> the most recent example of people discussing fixed and random effects in
> terms of conditioning and marginalization.  The use of this terminology
> on this list seems to have increased in the past year or so (or, more
> likely, I've just started noticing it), and it's time for me to confess
> that I'm not sure I understand it.
> 
> If we have a model
> 
> #v+
> y ~ 1 + x + (1 | z)
> #v-
> 
> the suggested correspondences between fixed effects and conditioning,
> and between random effects and marginalization, suggest to me that at
> some point we are interested in
> 
> #v+
> \sum_{z} P(y | x, z) P(z)
> #v-
> 
> This is my guess at the correspondence suggested by the quote above, but
> it's based solely on the fact that I think I know what conditional
> probabilities and marginalization are.  It could be 100% off base.

> I guess I have four questions:
> 
> 1.  Is this the correct understanding of how fixed and random effects
>     translate into conditioning and marginalizing?

I think that you're close.  Actually, the more general case is that
we are interested in

P(y | x)

without necessarily having a specific strategy for getting rid of z -
the main options are to estimate it somehow (which, I suppose, quietly
conditions on it) or to integrate it out.  Often z appears within V,
let's say, which is the covariance matrix of y conditional on the
design matrices:

y | X, Z ~ N( X \beta, Z D Z' + \Sigma ),

where D is the covariance matrix of the random effects and \Sigma is
the residual covariance.
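
To make that concrete, here is a minimal R sketch (my own toy
notation and values, not lme4's internals) that builds Z D Z' + \Sigma
for a random-intercept model with three groups of two observations:

#v+
## Toy construction of V = Z D Z' + Sigma for a random-intercept model.
## All names and numbers here are illustrative assumptions.
grp     <- factor(rep(1:3, each = 2))
Z       <- model.matrix(~ grp - 1)       # maps observations to groups
sigma_b <- 1.5                           # assumed random-intercept SD
sigma_e <- 0.8                           # assumed residual SD
D       <- diag(sigma_b^2, nlevels(grp)) # covariance of random effects
Sigma   <- diag(sigma_e^2, length(grp))  # residual covariance
V       <- Z %*% D %*% t(Z) + Sigma      # covariance of y given X, Z
V                                        # block-diagonal by group
#v-

Observations sharing a group get off-diagonal covariance sigma_b^2,
which is exactly where z has been absorbed into V.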

> 2.  In mixed logit models, we are modeling probabilities (or log odds
>     of probabilities), so this specification maybe makes some sense to
>     me.  But how does it fit into a linear mixed model?

The same way.  We're still interested in making inferences about the
distribution of y conditional on x; it's just that the distribution
is normal rather than binomial.
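
For instance, here is a small simulated example with lme4 (the data
and names are mine, purely for illustration):

#v+
## Simulate a Gaussian response with a random intercept per group z,
## then fit the model from the original question.
library(lme4)
set.seed(1)
z <- factor(rep(1:20, each = 5))           # grouping factor
x <- rnorm(100)
b <- rnorm(20, sd = 1.5)                   # group-level intercepts
y <- 2 + 3 * x + b[z] + rnorm(100, sd = 1)
fit <- lmer(y ~ 1 + x + (1 | z))
summary(fit)  # beta-hat plus the two estimated variance components
#v-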

> 3.  What role does this probability play in fitting the model?

It depends on the fitting algorithm.  If it is maximum likelihood,
then the probability is used to construct the likelihood function.  If
the algorithm is penalized least squares, then the probability is not
really used at all, although it is arguably present, in that certain
kinds of penalized least squares yield fixed-effect estimates that are
identical to those from certain kinds of maximum likelihood
approaches.
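
As a sanity check on the maximum likelihood side, the marginal normal
density above can be evaluated directly and compared with lme4's
reported log-likelihood.  A sketch, continuing my simulated example
(REML = FALSE requests ML rather than the default REML):

#v+
## Plug the lmer ML estimates into the marginal normal log-likelihood
## -0.5 * (n log(2 pi) + log|V| + r' V^{-1} r), compare with logLik().
fit_ml <- lmer(y ~ 1 + x + (1 | z), REML = FALSE)
X  <- model.matrix(~ 1 + x)
Zm <- model.matrix(~ z - 1)
vc <- as.data.frame(VarCorr(fit_ml))  # row 1: intercept var; row 2: residual
V  <- vc$vcov[1] * Zm %*% t(Zm) + vc$vcov[2] * diag(length(y))
r  <- y - X %*% fixef(fit_ml)
ll <- -0.5 * (length(y) * log(2 * pi) +
              determinant(V, logarithm = TRUE)$modulus +
              t(r) %*% solve(V, r))
c(direct = as.numeric(ll), lme4 = as.numeric(logLik(fit_ml)))  # should agree
#v-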

> 4.  Do the coefficients for fixed effects from the fitted model have an
>     interpretation in terms of the above probability model?

They are the parameter estimates that describe the nature of the
conditional relationship between x and y.
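
Concretely, in my simulated example above:

#v+
fixef(fit)  # beta-hat: intercept and slope of E(y | x), averaging over
            # the z-level intercepts; should land near the simulated 2 and 3
#v-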

I hope that this helps,

Andrew

-- 
Andrew Robinson  
Department of Mathematics and Statistics            Tel: +61-3-8344-6410
University of Melbourne, VIC 3010 Australia         Fax: +61-3-8344-4599
http://www.ms.unimelb.edu.au/~andrewpr
http://blogs.mbs.edu/fishing-in-the-bay/



