[R-sig-ME] Confidence interval for sum of coefficients

Michael Cone coanil at posteo.org
Fri Sep 26 07:37:48 CEST 2014


Dear Ben, Nathan,

thank you for the suggestions and explanations, that makes perfect 
sense. I come from a non-statistical background, and, lacking the 
basics, sometimes hit sort of a brick wall in my understanding.

Kind regards,
Michael




On 25.09.2014 16:26, Ben Bolker wrote:
> On 14-09-25 10:17 AM, Doogan, Nathan wrote:
>> Is there an issue with using the variance sum law and the var-covar
>> matrix  to sum two parameters and estimate the variance of the sum?
>> i.e., add their variances and covariances as expressed in the
>> variance covariance matrix of the parameter estimates, probably
>> obtained with vcov(modelObj).
>> 
>> Or is this too simplistic for a mixed model?
>> 
>> -Nate
>> 
>> --
>> Nathan J. Doogan, Ph.D.  | College of Public Health
>> Post-Doctoral Researcher | The Ohio State University
>> 
> 
> 
>   That was exactly what I was going to suggest (but hadn't gotten
> around to it).  It's slightly less accurate than parametric
> bootstrapping or likelihood profiling (the former is computationally
> straightforward, the latter would have to be implemented more or
> less from scratch), but should be fine in many cases.
> 
>  To be more specific, if you have a linear combination of parameters
> in mind (e.g. lincomb <- c(1,1,1) for adding all three parameters),
> you want
> 
> lincomb %*% vcov(fitted_model) %*% lincomb
> 
> (R should take care of the transposition where necessary, I think)
> to get the variance.
> 
> By the way, I don't think it makes any sense at all to add confidence
> intervals; as one example, imagine that two quantities have estimated
> values of 1 and 2 with confidence intervals {-1,3} and {1,3}; should
> the net confidence interval actually be {0,6} ... ?  Or add many
> values with lower bounds at zero -- should the joint lower bound
> really be zero?  If you want to add something, add *variances* and
> convert to std errors and from there to CIs ...
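
The recipe above can be sketched end-to-end in R. This is a minimal
sketch, not code from the thread: `fitted_model` is a placeholder for
any fitted lmer model with three fixed effects (e.g. fm1 below), the
linear combination c(1, 1, 0) picks out intercept + MachineB, and the
1.96 multiplier assumes a Wald/normal approximation rather than the
profile intervals that confint() computes by default.

```r
library(lme4)  # for fixef() and the vcov() method on merMod objects

## 'fitted_model' is a placeholder for any fitted lmer model with
## three fixed-effect parameters, e.g. the Machines fits in this thread.
lincomb <- c(1, 1, 0)  # intercept + MachineB = overall score of Machine B

est <- as.numeric(lincomb %*% fixef(fitted_model))             # point estimate of the sum
v   <- as.numeric(lincomb %*% vcov(fitted_model) %*% lincomb)  # variance via the variance sum law
se  <- sqrt(v)                                                 # standard error of the sum

est + c(-1.96, 1.96) * se  # approximate 95% Wald CI for the sum
```

Because vcov() contributes the covariance terms, this interval is
generally narrower than what naive addition of the two parameters'
CI bounds would give.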
> 
> 
>> 
>> 
>> -----Original Message-----
>> From: r-sig-mixed-models-bounces at r-project.org
>> [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf Of Michael Cone
>> Sent: Thursday, September 25, 2014 9:26 AM
>> To: r-sig-mixed-models at r-project.org
>> Subject: Re: [R-sig-ME] Confidence interval for sum of coefficients
>> 
>> Dear list,
>> 
>> Lorenz has pointed out to me Ben's suggestion to bootstrap the sums
>> (or any linear combination) of coefficients I'm interested in. This
>> may be the general approach, but I struggle to see why it would be
>> illegitimate to simply change the reference level for the treatment
>> contrast coding, fit the model again, and run confint() a second time
>> (and do so again for MachineC):
>> 
>>> Machines$Machine <- relevel(Machines$Machine, 'B')
>>> fm2 <- lmer(score ~ Machine + (Machine | Worker), data = Machines)
>>> summary(fm2)
>> Fixed effects:
>>             Estimate Std. Error t value
>> (Intercept)   60.322      3.529  17.096
>> MachineA      -7.967      2.421  -3.291
>> MachineC       5.950      2.446   2.432
>>> confint(fm2)
>>                   2.5 %     97.5 %
>> (Intercept)  52.8500103 67.7944456
>> MachineA    -13.0931710 -2.8401544
>> MachineC      0.7692323 11.1307757
>> 
>> Now the CI of the intercept is the confidence interval for the
>> overall score of MachineB. Adding the lower and upper bounds from
>> fm1 would have given similar, but somewhat wider intervals. (I
>> probably lack some understanding of how CIs are calculated. Is
>> there an intuitive explanation for why the bounds don't add?)
>> 
>>> Machines$Machine <- relevel(Machines$Machine, 'C')
>>> fm3 <- lmer(score ~ Machine + (Machine | Worker), data = Machines)
>>> summary(fm3)
>> Fixed effects:
>>             Estimate Std. Error t value
>> (Intercept)   66.272      1.806   36.69
>> MachineB      -5.950      2.446   -2.43
>> MachineA     -13.917      1.540   -9.04
>>> confint(fm3)
>>                   2.5 %      97.5 %
>> (Intercept)  62.4471752  70.0972752
>> MachineB    -11.1307677  -0.7692243
>> MachineA    -17.1780524 -10.6552759
>> 
>> Thanks, and best wishes,
>> Michael
>> 
>> On 25.09.2014 14:11, Michael Cone wrote:
>>> Hello,
>>> 
>>> I suspect this to be simple, but I can't figure it out.
>>> 
>>>> library(lme4)
>>>> data(Machines)
>>>> fm1 <- lmer(score ~ Machine + (Machine | Worker), data = Machines)
>>>> summary(fm1)
>>> Fixed effects:
>>>             Estimate Std. Error t value
>>> (Intercept)   52.356      1.681  31.151
>>> MachineB       7.967      2.421   3.291
>>> MachineC      13.917      1.540   9.036
>>>> confint(fm1)
>>>                  2.5 %     97.5 %
>>> [...]
>>> (Intercept) 48.7964047 55.9147119
>>> MachineB     2.8401623 13.0931789
>>> MachineC    10.6552809 17.1780575
>>> 
>>> [and 14 warnings, but it's just an example:
>>> In optwrap(optimizer, par = start, fn = function(x) dd(mkpar(npar1, ... :
>>>   convergence code 1 from bobyqa: bobyqa -- maximum number of
>>>   function evaluations exceeded
>>> ...
>>> In profile.merMod(object, signames = oldNames, ...) :
>>>   non-monotonic profile]
>>> 
>>> I'd like to have confidence intervals for the overall score of
>>> MachineA, MachineB, and MachineC. MachineA is easy (CI of the
>>> intercept), but how do I combine the CI of the intercept with the
>>> CI of the MachineB parameter, and likewise the CI of the intercept
>>> with the parameter of MachineC? Can I simply add the lower and
>>> upper bounds of the two intervals or is this naive?
>>> 
>>> Thank you for your time,
>>> 
>>> Michael
>>> 
>>> _______________________________________________
>>> R-sig-mixed-models at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>> 
>> 
> 


