[R-sig-ME] blme optimizer warnings

Sijia Huang huangsjcc at gmail.com
Thu May 14 04:57:54 CEST 2020


Thank you so much, Vincent!

On Wed, May 13, 2020 at 7:54 PM Vincent Dorie <vdorie at gmail.com> wrote:

> A couple of guesses here, in addition to what Ben mentioned: you
> likely don't want a prior on the covariance of the random effects, and
> the weights should be on the scale of inverse variances (precisions),
> not raw variances. The following replicates the numbers for the CCREM
> column from Table 1:
>
> blmer(g ~ 1 + (1 | Study) + (1 | Subscale) + (1 | Outcome:Study:Subscale),
>       data = meta, weights = Precision,
>       control = lmerControl(optimizer = "bobyqa"),
>       resid.prior = point(1), cov.prior = NULL)
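>
> (A minimal follow-up sketch for checking the replication; the object
> name `fit.ccrem` is mine, and the accessors are standard lme4 ones
> that blmerMod objects inherit:)
>
> library(lme4)
> library(blme)
> fit.ccrem <- blmer(
>   g ~ 1 + (1 | Study) + (1 | Subscale) + (1 | Outcome:Study:Subscale),
>   data = meta, weights = Precision,
>   control = lmerControl(optimizer = "bobyqa"),
>   resid.prior = point(1), cov.prior = NULL
> )
> VarCorr(fit.ccrem)  # random-effect SDs/variances, to compare with Table 1
> fixef(fit.ccrem)    # the overall mean effect size (the intercept)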
>
>
> On Wed, May 13, 2020 at 10:04 PM Sijia Huang <huangsjcc at gmail.com> wrote:
> >
> > Here it is. Thanks!
> >
> > A demonstration and evaluation of the use of cross-classified
> > random-effects models for meta-analysis
> >
> > On Wed, May 13, 2020 at 6:57 PM Ben Bolker <bbolker at gmail.com> wrote:
> >
> > >
> > >   Can you give a more specific reference? I can't immediately guess
> > > from Fernández-Castilla's Google Scholar page which article it is ...
> > > On 5/13/20 9:36 PM, Sijia Huang wrote:
> > >
> > > Thanks for the quick reply, Ben!
> > >
> > > I am replicating the Fernández-Castilla et al. (2018) article. Below
> > > are the data they have in the article. Is there anything I can do to
> > > resolve the issue? Thanks!
> > >
> > > > meta
> > >    Study Outcome Subscale      g Variance Precision
> > > 1      1       1        1 -0.251    0.024    41.455
> > > 2      2       1        1 -0.069    0.001  1361.067
> > > 3      3       1        5  0.138    0.001   957.620
> > > 4      4       1        1 -0.754    0.085    11.809
> > > 5      5       1        1 -0.228    0.020    49.598
> > > 6      6       1        6 -0.212    0.004   246.180
> > > 7      6       2        7  0.219    0.004   246.095
> > > 8      7       1        1  0.000    0.012    83.367
> > > 9      8       1        2 -0.103    0.006   162.778
> > > 10     8       2        3  0.138    0.006   162.612
> > > 11     8       3        4 -0.387    0.006   160.133
> > > 12     9       1        1 -0.032    0.023    44.415
> > > 13    10       1        5 -0.020    0.058    17.110
> > > 14    11       1        1  0.128    0.017    59.999
> > > 15    12       1        1 -0.262    0.032    31.505
> > > 16    13       1        1 -0.046    0.071    14.080
> > > 17    14       1        6 -0.324    0.003   381.620
> > > 18    14       2        6 -0.409    0.003   378.611
> > > 19    14       3        7  0.080    0.003   385.319
> > > 20    14       4        7 -0.140    0.003   385.542
> > > 21    15       1        1  0.311    0.005   185.364
> > > 22    16       1        1  0.036    0.005   205.063
> > > 23    17       1        6 -0.259    0.001   925.643
> > > 24    17       2        7  0.196    0.001   928.897
> > > 25    18       1        1  0.157    0.013    74.094
> > > 26    19       1        1  0.000    0.056    17.985
> > > 27    20       1        1  0.000    0.074    13.600
> > > 28    21       1        6 -0.013    0.039    25.425
> > > 29    21       2        7 -0.004    0.039    25.426
> > > 30    22       1        1 -0.202    0.001  1487.992
> > > 31    23       1        1  0.000    0.086    11.628
> > > 32    24       1        1 -0.221    0.001   713.110
> > > 33    25       1        1 -0.099    0.001   749.964
> > > 34    26       1        5 -0.165    0.000  6505.024
> > > 35    27       1        1 -0.523    0.063    15.856
> > > 36    28       1        1  0.000    0.001  1611.801
> > > 37    29       1        6  0.377    0.045    22.045
> > > 38    29       2        7  0.575    0.046    21.677
> > > 39    30       1        1  0.590    0.074    13.477
> > > 40    31       1        1  0.020    0.001  1335.991
> > > 41    32       1        1  0.121    0.043    23.489
> > > 42    33       1        1 -0.101    0.003   363.163
> > > 43    34       1        1 -0.101    0.003   369.507
> > > 44    35       1        1 -0.104    0.004   255.507
> > > 45    36       1        1 -0.270    0.003   340.761
> > > 46    37       1        1  0.179    0.150     6.645
> > > 47    38       1        2  0.468    0.020    51.255
> > > 48    38       2        4 -0.479    0.020    51.193
> > > 49    39       1        5 -0.081    0.024    42.536
> > > 50    40       1        1 -0.071    0.043    23.519
> > > 51    41       1        1  0.201    0.077    13.036
> > > 52    42       1        6 -0.070    0.006   180.844
> > > 53    42       2        7  0.190    0.006   180.168
> > > 54    43       1        1  0.277    0.013    79.220
> > > 55    44       1        5 -0.086    0.001   903.924
> > > 56    45       1        5 -0.338    0.002   469.260
> > > 57    46       1        1  0.262    0.003   290.330
> > > 58    47       1        5  0.000    0.003   304.959
> > > 59    48       1        1 -0.645    0.055    18.192
> > > 60    49       1        5 -0.120    0.002   461.802
> > > 61    50       1        5 -0.286    0.009   106.189
> > > 62    51       1        1 -0.124    0.006   172.261
> > > 63    52       1        1  0.023    0.028    35.941
> > > 64    53       1        5 -0.064    0.001   944.600
> > > 65    54       1        1  0.000    0.043    23.010
> > > 66    55       1        1  0.000    0.014    72.723
> > > 67    56       1        5  0.000    0.012    85.832
> > > 68    57       1        1  0.000    0.012    85.832
> > >
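> > > (For anyone replicating: the printed data frame above can be read back
> > > into R with something like the sketch below; "meta.txt" is a
> > > hypothetical file containing the table exactly as printed. Because the
> > > header row has one fewer field than the data rows, read.table treats
> > > the first column as row names:)
> > >
> > > meta <- read.table("meta.txt", header = TRUE)
> > > str(meta)  # Study, Outcome, Subscale, g, Variance, Precision
> > >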
> > >
> > > On Wed, May 13, 2020 at 6:00 PM Ben Bolker <bbolker at gmail.com> wrote:
> > >
> > >>     Without looking very carefully at this:
> > >>
> > >> * unless your response variable is somehow already centered at zero by
> > >> design, a model with no intercept at all is going to be
> > >> weird/problematic (random effects are always zero-centered by
> > >> definition).
> > >>
> > >> * is it really OK to have an infinite scale in your wishart prior? (It
> > >> may be fine; I'm not immediately familiar with the blme
> > >> parameterizations, it just looks weird.)
> > >>
> > >> * the fact that your standard deviations are all exactly 1 suggests
> > >> that the optimizer bailed out before actually doing anything (these
> > >> are the default starting values); see the diagnostic sketch below.
> > >>
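> > >> (A quick way to check that last point; a sketch using standard lme4
> > >> accessors, which blmerMod objects inherit, applied to your fitted
> > >> object `meta.example`:)
> > >>
> > >> getME(meta.example, "theta")   # relative SDs; the default start is all 1s
> > >> meta.example@optinfo$warnings  # messages captured from the optimizer
> > >>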
> > >>    Can you provide a reproducible example?
> > >>
> > >> On 5/13/20 8:53 PM, Sijia Huang wrote:
> > >> > Hi everyone,
> > >> > I am fitting a cross-classified model with blme, but I am getting
> > >> > one optimizer warning. The code and output are shown below. Any
> > >> > suggestions for fixing the estimation issue? Thanks!
> > >> >
> > >> >
> > >> >> meta.example <- blmer(g~0+(1|Study)+(1|Subscale)+
> > >> > +                         (1|Outcome:Study:Subscale),
> > >> > +                       data=meta, weights = Variance,
> > >> > +                       resid.prior = point(1),
> > >> > +                       control = lmerControl(optimizer="bobyqa"))
> > >> >
> > >> >> meta.example
> > >> > Cov prior  : Outcome:Study:Subscale ~ wishart(df = 3.5, scale = Inf,
> > >> >                posterior.scale = cov, common.scale = TRUE)
> > >> >             : Study ~ wishart(df = 3.5, scale = Inf,
> > >> >                posterior.scale = cov, common.scale = TRUE)
> > >> >             : Subscale ~ wishart(df = 3.5, scale = Inf,
> > >> >                posterior.scale = cov, common.scale = TRUE)
> > >> > Resid prior: point(value = 1)
> > >> > Prior dev  : NaN
> > >> >
> > >> > Linear mixed model fit by maximum likelihood  ['blmerMod']
> > >> > Formula: g ~ 0 + (1 | Study) + (1 | Subscale) +
> > >> >              (1 | Outcome:Study:Subscale)
> > >> >     Data: meta
> > >> > Weights: Variance
> > >> >       AIC      BIC   logLik deviance df.resid
> > >> >       Inf      Inf     -Inf      Inf       64
> > >> > Random effects:
> > >> >   Groups                 Name        Std.Dev.
> > >> >   Outcome:Study:Subscale (Intercept) 1
> > >> >   Study                  (Intercept) 1
> > >> >   Subscale               (Intercept) 1
> > >> >   Residual                           1
> > >> > Number of obs: 68, groups:  Outcome:Study:Subscale, 68; Study, 57;
> > >> > Subscale, 7
> > >> > No fixed effect coefficients
> > >> > convergence code 0; 1 optimizer warnings; 0 lme4 warnings
> > >> >
> > >> >
> > >> >
> > >> >
> > >> > Best,
> > >> > Sijia
> > >> >