[R-sig-ME] 3-level binomial model
Doran, Harold
HDoran at air.org
Thu Apr 17 13:59:17 CEST 2008
I haven't really followed this thread, but I'd disagree and say that the
variance components have a very meaningful interpretation. If the fixed
effects are the log-odds of success, then the variance components describe
the variability in those log-odds across whatever units are of interest.
On the issue of the ICC for the GLMM, to me this is all hocus-pocus.
This is a meaningful statistic in the world of linear models because the
within-person variance (or your level 1 variance) is assumed
homoskedastic. But this is not true for generalized linear models.
Now, you can compute it as you did, by fixing the level 1 variance at
pi^2/3 (about 3.29) on the logistic scale, and you can give reviewers
whatever they want, but this doesn't make it meaningful. So, waving a
magic wand to make GLMM
estimates look like linear estimates is neat, but I think the better
path is to show your reviewers why this isn't a meaningful statistic.
On the other hand, if your job is to get past the journal guardians for
tenure, do whatever they ask.
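
For what it's worth, the computation being discussed looks roughly like
this in lmer terms. This is only a minimal sketch: the model call, data,
and grouping-factor names are hypothetical, and pi^2/3 is the assumed
level 1 logistic variance, not an estimate.

library(lme4)

## Hypothetical 3-level binomial fit (pupils assumed to have unique IDs):
## fit <- lmer(resp ~ 1 + (1 | pupil) + (1 | school),
##             family = binomial, data = dat)

var_school <- VarCorr(fit)$school[1, 1]   # school intercept variance
var_pupil  <- VarCorr(fit)$pupil[1, 1]    # pupil intercept variance
var_e      <- pi^2 / 3                    # assumed logistic level 1 variance (~3.29)

vpc_school <- var_school / (var_school + var_pupil + var_e)
vpc_pupil  <- var_pupil  / (var_school + var_pupil + var_e)
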
> -----Original Message-----
> From: r-sig-mixed-models-bounces at r-project.org
> [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf
> Of Andrew Robinson
> Sent: Wednesday, April 16, 2008 6:40 PM
> To: Iasonas Lamprianou
> Cc: r-sig-mixed-models at r-project.org
> Subject: Re: [R-sig-ME] 3-level binomial model
>
> Hi Iasonas,
>
> my interpretation of what you are doing by computing those
> quantities is that you are estimating the proportion of
> variance explained in the linear predictor.
>
> A complication with that strategy is that the non-linearity
> in the relationship between the linear predictor and the
> probability estimate induces an interaction between the
> components of variance in terms of their effect upon the
> probability. Also, the linear predictor is commonly
> interpreted in the context of odds ratios (via
> exponentiation), which again doesn't line up with these
> variance components because of the non-linearity in the function.
>
> So, it's not clear to me that the variance components have a
> direct useful interpretation in this model, although I may be
> mistaken.
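>
> As a tiny illustration of that interaction (purely a sketch; the
> variance values below are made up, not taken from any fitted model):
>
>   set.seed(1)
>   var_school <- 0.5                        # made-up school variance
>   u_school   <- rnorm(2000, 0, sqrt(var_school))
>
>   ## With no pupil-level variation, the between-school variance of the
>   ## success probabilities is simply:
>   var(plogis(u_school))
>
>   ## Add pupil-level variation and recompute the between-school variance
>   ## of the pupil-averaged probabilities:
>   var_pupil <- 2                           # made-up pupil variance
>   p_bar <- sapply(u_school, function(u)
>              mean(plogis(u + rnorm(500, 0, sqrt(var_pupil)))))
>   var(p_bar)
>
>   ## The two numbers differ even though var_school is unchanged: on the
>   ## probability scale, the contribution of the school component depends
>   ## on the pupil component, i.e. the components interact through the
>   ## inverse link.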
>
> I seem to recall that Gelman and Hill say sensible things
> about what to do either in this case or in a similar case,
> although again I may be mistaken. I don't have my copy here.
>
> So it seems to me that the reviewers are right to be
> cautious, and you might take a look in G&H.
>
> I hope that this helps.
>
> Andrew
>
>
> On Wed, Apr 16, 2008 at 05:51:07AM -0700, Iasonas Lamprianou wrote:
> > Thank you all for your suggestions. My question, however, is how to
> > compute the % of the variance at the level of the school and at the
> > level of the pupils. In other words, does the concept of intraclass
> > correlation hold in my context? If yes, then how can this be computed
> > for the pupils and the schools? Is the decomposition below reasonable?
> > Prof. Bates, maybe you could suggest something using lmer?
> >
> > VPCschool = VARschool/(VARschool+VARpupil+3.29) and
> > VPCpupil = VARpupil/(VARschool+VARpupil+3.29)
> >
> > Dr. Iasonas Lamprianou
> > Department of Education
> > The University of Manchester
> > Oxford Road, Manchester M13 9PL, UK
> > Tel. 0044 161 275 3485
> > iasonas.lamprianou at manchester.ac.uk
> >
> >
> >
> > On 16/04/2008, at 12:11 PM, David Duffy wrote:
> >
> > >> I computed the school-level and the pupil-level variance like that
> > >> (as described for 2-level models in the MLwiN manual): I assumed
> > >> that my dependent variable is based on a continuous unobserved
> > >> variable (perfectly valid according to my theoretical model).
> > >> Therefore, e_ijk follows a logistic distribution with variance
> > >> pi^2/3 = 3.29. So,
> > >
> > >> VPCschool = VARschool/(VARschool+3.29) = 0.17577/(0.17577+3.29) = 6.4%
> > >> and VPCpupil = VARpupil/(VARpupil+3.29) = 0.19977/(0.19977+3.29) = 7.3%.
> > >
> > >> The reviewers of my paper are not sure if this is the best way to
> > >> do it. They may reject my paper and I worry because I have spent
> > >> 3 months!!!! writing it. Any ideas to support my method or to use a
> > >> better one?
> > >
> > > Would an IRT model for seven "items" be more to their taste? I
> > > don't think the substantive conclusions would be much different.
> > >
> >
> > Multi-level IRT is more appropriate, as this allows for the nesting
> > within schools. There is a package, mlirt, that fits these models in a
> > Bayesian framework, but I haven't tried it. There are commercial
> > programs which will fit these: Mplus is advertised to, and Latent Gold
> > with the Syntax module will, at least for a unidimensional latent
> > variable.
> >
> > What is more worrying is the assumption of a single latent variable to
> > model the correlation between tests.
> >
> > Ken
> >
>
> --
> Andrew Robinson
> Department of Mathematics and Statistics      Tel: +61-3-8344-6410
> University of Melbourne, VIC 3010 Australia   Fax: +61-3-8344-4599
> http://www.ms.unimelb.edu.au/~andrewpr
> http://blogs.mbs.edu/fishing-in-the-bay/