[R-sig-ME] (lme4) Calculating variance component and random-effect shrinkage in a logistic random-intercept model

Ben Bolker bbolker at gmail.com
Sun Jun 9 23:56:48 CEST 2019


 Without going into detail at all: one important point is that
logit-transforming the responses is not at all how GL(M)Ms work.
Instead, the model computes expected values on the logit scale,
transforms them to the response scale, and does its calculations with
weights based on variances computed from the variance function of the
specified family.
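
  For concreteness, a toy illustration (mine, not lme4's actual
internals) of how the variance function enters for the binomial family
with a logit link: the working weights are proportional to mu*(1-mu),
evaluated on the response scale.

eta <- seq(-3, 3, by = 1)  # linear predictor (logit scale)
mu  <- plogis(eta)         # expected proportions (response scale)
w   <- mu * (1 - mu)       # variance function: largest near mu = 0.5
cbind(eta, mu, w)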

  If you're willing to take a step backward, it might be (much) easier
to understand the basics of shrinkage and SD estimation using a *linear*
mixed model example ...
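
  For example, with a random-intercept LMM you can see shrinkage
directly by comparing per-group raw means with the model-based
predictions -- a sketch using lme4's built-in sleepstudy data:

library(lme4)
fm  <- lmer(Reaction ~ 1 + (1 | Subject), sleepstudy)
raw <- with(sleepstudy, tapply(Reaction, Subject, mean))  # raw per-subject means
shrunk <- fixef(fm)[1] + ranef(fm)$Subject[, 1]           # model-based, shrunken
cbind(raw = raw, shrunk = shrunk)  # shrunken values pulled toward the grand mean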

  Ben Bolker

On 2019-06-09 12:48 p.m., Juho Kristian Ruohonen wrote:
> Hi,
> 
> I'm working with a simple, pedagogical dataset, trying to teach myself how
> the random-effect SD is estimated and how the individual random effects are
> shrunk towards zero.
> 
> The dataset is based on Agresti 2018 (*Introduction to Categorical Data
> Analysis*, 3rd ed., pp. 278-281), but I've modified it to get rid of all 0%
> and 100% proportions, since those would yield infinite logits and
> complicate the calculations. There are just 20 binomial observations,
> representing clusters with sample sizes varying from 5 to 20. And there are
> only two parameters: a fixed intercept and a random intercept:
> 
> set.seed(2019)
> d <- data.frame(id = letters[1:20], TrueLogit = rnorm(20), y = NA,
>                 n = round(runif(20, min = 5, max = 20)))
> # create cluster-specific success counts based on the "true" logits
> d$y <- round(plogis(d$TrueLogit) * d$n)
> 
> Each cluster's sample proportion (on the logit scale) should be helpful in
> estimating its random intercept:
> d$EmpiricalLogit <- qlogis(d$y / d$n)
> 
> The fixed intercept should likewise be easy to calculate -- it is the
> overall sample proportion on the logit scale, no? That is:
> d$CommonLogit <- qlogis(sum(d$y) / sum(d$n))
> 
> The empirical, unshrunken "random effects" should be easy to calculate as
> differences between the overall sample proportion and the cluster-specific
> ones, all on the logit scale:
> d$Difference <- d$EmpiricalLogit - d$CommonLogit
> 
> The empirical SD of the (as yet) unshrunken random effects should be easy
> to calculate as the SD of the cluster-specific differences from the common
> intercept, each weighted by the cluster size:
> sd(rep(d$Difference, times = d$n))  # 1.028238
> 
> This SD equals 1.03 logits. However, lme4 estimates it at only 0.77:
> require(lme4)
> (mix <- glmer(y/n ~ (1 | id), weights = n, family = binomial, data = d))
> 
> I thought shrinkage was only applied to the individual random effects, not
> to their estimated variance. But the output of lme4 suggests that even the
> estimated variance undergoes some shrinkage. How?
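> 
> For concreteness: is it because glmer estimates the SD by maximizing
> the *marginal* likelihood, with the random intercepts integrated out,
> rather than by matching the empirical spread of the cluster logits? A
> sketch of what I mean (assuming u_i ~ N(0, sigma^2) on the logit
> scale; all names are mine):
> 
> negLogLik <- function(par, y, n) {
>   b0 <- par[1]; sigma <- exp(par[2])  # log-parameterized so sigma > 0
>   -sum(mapply(function(yi, ni) {
>     log(integrate(function(u)
>       dbinom(yi, ni, plogis(b0 + u)) * dnorm(u, sd = sigma),
>       -Inf, Inf)$value)
>   }, y, n))
> }
> opt <- optim(c(0, 0), negLogLik, y = d$y, n = d$n)
> # if so, these should come out close to glmer's estimates (up to its
> # Laplace approximation)
> c(intercept = opt$par[1], sd = exp(opt$par[2]))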
> 
> Secondly, Agresti (2018: 279), whose examples indeed use lme4, states that
> in an example like ours, the fitted value of each cluster is "a weighted
> average of the sample proportion for that area (cluster) alone and the
> sample proportion after pooling all n (20) samples." (parentheticals mine).
> If this is so, then at least the fitted probabilities should be easy to
> calculate manually by using the aforementioned weighted-average approach:
> 
> (d$fitted.manual <- sapply(1:nrow(d), function(i) {
>   weighted.mean(c(d$y[i] / d$n[i], sum(d$y) / sum(d$n)),
>                 w = c(d$n[i], sum(d$n)))
> }))
> 
> Yet this misses the mark as well. The manually fitted values are way more
> shrunken than the lme4-fitted values:
> 
> d$fitted.lme4 <- fitted(mix)
> cbind(ManualFitted = d$fitted.manual, SampleProp = d$y / d$n,
>       lme4Fitted = d$fitted.lme4)
> 
> Trying to calculate the random effects manually, as weighted averages
> between 0 and the cluster-specific Difference from the fixed intercept,
> fails equally. The manual calculation results in much more shrinkage than
> that seen in the output of ranef() (not shown).
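> 
> And is the right manual recipe for the random effects to take each one
> as the mode of its conditional density, given the data and glmer's
> estimated intercept and SD? Another sketch (names mine):
> 
> b0    <- fixef(mix)[1]
> sdhat <- as.data.frame(VarCorr(mix))$sdcor[1]
> u.manual <- mapply(function(yi, ni) {
>   # mode of dbinom(yi, ni, plogis(b0 + u)) * dnorm(u, sd = sdhat) over u
>   optimize(function(u) dbinom(yi, ni, plogis(b0 + u)) * dnorm(u, sd = sdhat),
>            c(-5, 5), maximum = TRUE)$maximum
> }, d$y, d$n)
> cbind(manual = u.manual, lme4 = ranef(mix)$id[, 1])  # do these agree?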
> 
> So, even though the principle behind the calculation of the variance
> component and the individual intercepts in a GLMM sounds straightforward, I
> cannot replicate even one step of the procedure, not even with a maximally
> simple example dataset. This is demoralizing to someone who is trying to
> gain a solid understanding of the method. Any pointers?
> 
> Best,
> 
> Juho


