[R-sig-ME] Why se.fit differ in predict.glm and predict.glmmadmb?
Xandre
alex at iim.csic.es
Wed May 4 15:59:52 CEST 2016
I am aware of that, sorry. Hopefully, with the following link,
https://www.dropbox.com/s/pptl595chabjtly/datos.csv?dl=0 , the example
will be completely reproducible.
Regards,
Alex
El 04/05/2016 a las 15:47, Phillip Alday escribió:
> Hi Alex,
>
> Attachments aren't passed through to the mailing list; you need to put them
> on a publicly available share.
>
> But thanks for putting forth a (soon-to-be) reproducible example!
>
> Best,
> Phillip
>
> On Wed, 2016-05-04 at 10:49 +0200, Xandre wrote:
>> Thanks for the interest,
>>
>> Just to rule out problems with my code, I tried again with a much
>> simpler example. I made a subset of my original database (see the
>> linked .csv) and ran a much simpler model as follows:
>>
>> > M1 <- glm(response ~ explanatory,
>> +           data = datos,
>> +           family = "binomial")
>> > M2 <- glmmadmb(response ~ explanatory,
>> +               data = datos,
>> +               family = "binomial")
>> >
>> > summary(M1)
>>
>> Call:
>> glm(formula = response ~ explanatory, family = "binomial", data = datos)
>>
>> Deviance Residuals:
>> Min 1Q Median 3Q Max
>> -1.272 -1.226 1.089 1.128 1.549
>>
>> Coefficients:
>> Estimate Std. Error z value Pr(>|z|)
>> (Intercept) 2.537e-01 6.638e-02 3.822 0.000132 ***
>> explanatory -1.660e-04 5.214e-05 -3.183 0.001456 **
>> ---
>> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>>
>> (Dispersion parameter for binomial family taken to be 1)
>>
>> Null deviance: 3461.2 on 2499 degrees of freedom
>> Residual deviance: 3450.9 on 2498 degrees of freedom
>> AIC: 3454.9
>>
>> Number of Fisher Scoring iterations: 3
>>
>> > summary(M2)
>>
>> Call:
>> glmmadmb(formula = response ~ explanatory, data = datos, family =
>> "binomial")
>>
>> AIC: 3454.9
>>
>> Coefficients:
>> Estimate Std. Error z value Pr(>|z|)
>> (Intercept) 2.54e-01 6.64e-02 3.82 0.00013 ***
>> explanatory -1.66e-04 5.21e-05 -3.18 0.00146 **
>> ---
>> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>>
>> Number of observations: total=2500
>>
>> Log-likelihood: -1725.45
>> >
>> > newdatos <- data.frame(explanatory = seq(min(datos$explanatory),
>> +                                          max(datos$explanatory),
>> +                                          length.out = 10))
>> >
>> > pred1 <- predict(M1, newdatos, type = "link", se.fit = TRUE)
>> > pred2 <- predict(M2, newdatos, type = "link", se.fit = TRUE)
>> >
>> > cbind(pred1$fit, pred2$fit)
>> [,1] [,2]
>> 1 0.22051737 0.22051798
>> 2 0.09972924 0.09972896
>> 3 -0.02105888 -0.02106007
>> 4 -0.14184701 -0.14184909
>> 5 -0.26263513 -0.26263811
>> 6 -0.38342326 -0.38342713
>> 7 -0.50421139 -0.50421615
>> 8 -0.62499951 -0.62500518
>> 9 -0.74578764 -0.74579420
>> 10 -0.86657576 -0.86658322
>> > cbind(pred1$se.fit, pred2$se.fit)
>> [,1] [,2]
>> 1 0.05841106 0.06724456
>> 2 0.04037125 0.08232769
>> 3 0.05222381 0.10914997
>> 4 0.08187989 0.14117163
>> 5 0.11645089 0.17557048
>> 6 0.15263291 0.21118808
>> 7 0.18950541 0.24749882
>> 8 0.22673178 0.28423718
>> 9 0.26416245 0.32125649
>> 10 0.30172140 0.35846971
>>
>> Although the differences are now smaller, I think they are still
>> quite substantial.
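[Editor's note: one way to cross-check which set of standard errors is right is to compute the link-scale se.fit by hand, since for a GLM it is sqrt(diag(X V X')), where X is the model matrix for the new data and V = vcov(fit). A minimal sketch below uses simulated stand-in data (the real datos.csv is only at the Dropbox link above); the variable names `datos2`, `nd`, and `se_manual` are illustrative, not from the thread.]

```r
## Simulated stand-in for datos.csv: a binary response with a weak
## negative effect of a continuous explanatory variable.
set.seed(1)
datos2 <- data.frame(explanatory = runif(500, 0, 3000))
datos2$response <- rbinom(500, 1,
                          plogis(0.25 - 1e-4 * datos2$explanatory))

M <- glm(response ~ explanatory, data = datos2, family = "binomial")
nd <- data.frame(explanatory = seq(0, 3000, length.out = 5))

## predict.glm's link-scale se.fit ...
pred <- predict(M, nd, type = "link", se.fit = TRUE)

## ... reproduced by hand as sqrt(diag(X %*% vcov(M) %*% t(X)))
X <- model.matrix(~ explanatory, data = nd)
se_manual <- sqrt(diag(X %*% vcov(M) %*% t(X)))

all.equal(unname(pred$se.fit), unname(se_manual))  # TRUE
```

Comparing the same hand computation (with vcov of the glmmADMB fit) against predict.glmmadmb's se.fit would show whether the discrepancy lies in the covariance matrix or in how the prediction SEs are assembled.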
>>
>> This is my sessionInfo():
>>
>> R version 3.2.4 Revised (2016-03-16 r70336)
>> Platform: x86_64-w64-mingw32/x64 (64-bit)
>> Running under: Windows 7 x64 Service Pack 1
>>
>> locale:
>> [1] LC_COLLATE=Spanish_Spain.1252 LC_CTYPE=Spanish_Spain.1252
>> LC_MONETARY=Spanish_Spain.1252
>> [4] LC_NUMERIC=C LC_TIME=Spanish_Spain.1252
>>
>> attached base packages:
>> [1] stats graphics grDevices utils datasets methods base
>>
>> other attached packages:
>> [1] glmmADMB_0.8.3.3 MASS_7.3-45
>>
>> loaded via a namespace (and not attached):
>> [1] Matrix_1.2-4 plyr_1.8.3 magrittr_1.5 tools_3.2.4
>> coda_0.18-1 Rcpp_0.12.4 stringi_1.0-1
>> [8] nlme_3.1-126 grid_3.2.4 stringr_1.0.0 R2admb_0.7.13
>> lattice_0.20-33
>>
>>
>> Hopefully all this info will be helpful.
>>
>> Thanks in advance for your time.
>>
>> Regards,
>>
>> Alex
>>
>>
>> El 04/05/2016 a las 0:28, Ben Bolker escribió:
>>> Not sure, this will be worth looking into ...
>>>
>>> On 16-05-03 04:51 PM, Xandre wrote:
>>>> Dear list,
>>>>
>>>> I am running a GLM (family="binomial") without random effects using both
>>>> glm and glmmadmb.
>>>>
>>>> The summaries are almost identical; however, when I used the predict
>>>> function as follows:
>>>>
>>>> predict(glm1,newdatos1,type="link",se.fit =T)
>>>>
>>>> predict(admb1,newdatos1,type="link",se.fit =T)
>>>>
>>>> I realized that the se.fit values differ a lot between them; the
>>>> glmmADMB se.fit values are much higher (the fitted values are almost
>>>> identical). This is just an example of what I found:
>>>>
>>>> glm1$se.fit admb1$se.fit
>>>> 0.04290869 0.2676562
>>>> 0.04435600 0.2733130
>>>> 0.04095631 0.2728592
>>>> 0.03402992 0.2718389
>>>> 0.03000669 0.2713617
>>>> 0.03633637 0.2722059
>>>>
>>>> Maybe I'm missing something or I am making a big mistake. Any help with
>>>> this?
>>>>
>>>>
>>>> Many thanks,
>>>>
>>>> Alexandre Alonso
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> R-sig-mixed-models at r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>>>