[R-sig-ME] Anomalous results with glmer().

Ben Bolker bbolker at gmail.com
Wed May 28 04:14:56 CEST 2014

  I will try to have a look.
  I agree that the positive variance estimates seem (much) more sensible
in a data set of this size ...

  Obviously it will take me a little while to get all the fits done on a
data set of this non-trivial size, but here are some preliminary thoughts:

 * I will try out the 'allFit.R' code mentioned previously on the mailing
list, just to see how the results from 5 or 6 different optimizers compare
 * I will try lme4.0, which I hope will reproduce the results from lme4
0.999999-0
 * I will evaluate the deviances of both the old and new fits to see
which fit is actually better, and use bbmle::slicetrans to look at the
shape of the likelihood surface between the two points
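A rough sketch of the optimizer/deviance comparison described above (the
data here are a simulated stand-in, not the linguistics data set; the
formula and variable names are made up for illustration):

```r
library(lme4)

## Simulated stand-in data (the real data set is on Rolf's web page)
set.seed(101)
d <- data.frame(x = rnorm(500),
                g = factor(rep(1:50, each = 10)))
d$y <- rbinom(500, 1, plogis(0.5 * d$x + rnorm(50)[d$g]))

## Fit the same GLMM under two different optimizers ...
fit_bobyqa <- glmer(y ~ x + (1 | g), data = d, family = binomial,
                    control = glmerControl(optimizer = "bobyqa"))
fit_NM     <- glmer(y ~ x + (1 | g), data = d, family = binomial,
                    control = glmerControl(optimizer = "Nelder_Mead"))

## ... and compare deviances: the fit with the lower deviance is the
## better local optimum, whatever the convergence warnings say
c(bobyqa = deviance(fit_bobyqa), Nelder_Mead = deviance(fit_NM))
```

If the two deviances differ appreciably, that is evidence that at least
one optimizer stopped short of the MLE.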

  As for explaining the 'crazy' result, if it actually turns out to be
(close to) the MLE for this data set: I would look at pictures of the
data, predictions, etc., and try to see if there's some sort of
confounder/Simpson's paradox thing going on here where the marginal
effect (= raw tabulation) is in fact very different from the conditional
effect ...
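For concreteness, a tiny self-contained illustration (entirely made-up
numbers, nothing to do with the phoneme data) of how a marginal
tabulation can point in the opposite direction from the conditional
effect:

```r
## Two groups, A and B: within every stratum B has the higher success
## probability, but the strata are unbalanced, so the pooled (marginal)
## table shows A doing better -- the classic Simpson's paradox pattern.
d <- data.frame(
  grp     = rep(c("A", "A", "B", "B"), times = c(90, 10, 10, 90)),
  stratum = rep(c("easy", "hard", "easy", "hard"),
                times = c(90, 10, 10, 90)),
  success = c(rep(1, 81), rep(0, 9),   # A/easy: 90% success
              rep(1, 1),  rep(0, 9),   # A/hard: 10% success
              rep(1, 10),              # B/easy: 100% success
              rep(1, 18), rep(0, 72))  # B/hard: 20% success
)

## Marginal (raw) tabulation: A looks better (0.82 vs 0.28)
tapply(d$success, d$grp, mean)

## Conditional on stratum: B is better in *both* strata
tapply(d$success, list(d$grp, d$stratum), mean)
```

So a raw tabulation contradicting a fitted coefficient is not, by
itself, proof that the coefficient is wrong.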

  Ben Bolker

On 14-05-27 09:59 PM, Rolf Turner wrote:
> Some months back I sent an inquiry to this list concerning the analysis
> of some linguistics data with which I am involved.  I am *still*
> (psigh!!!) struggling with these data and am getting results which are
> making no sense to me.
> Basically if I fit a (reasonably sensible) model using an old version of
> lme4 (0.999999-0) I get sensible looking estimates for the fixed effect
> coefficients, but the estimates of the variances of the random effects
> are essentially zero.  Which is silly.
> If I fit the same model using lme4 version 1.1-7 (and ignore the warning
> about failure to converge) I get sensible looking estimates of the
> variances of the random effects, but an impossibly wrong estimate
> of at least one of the fixed effect coefficients.  (The estimate says
> that the success probability is larger for phoneme type "Mclus" than it
> is for the baseline type "Fclus".  However a raw tabulation shows that
> the success probability for Mclus is much, much smaller than for Fclus.)
> I have included more detail in the attached file notesME.txt for those
> who are interested.  This file includes an explicit specification of the
> model that I used. The results of the fit using version 0.999999-0 are
> in the file oldLme4Rslts.txt; the results from version 1.1-7 are in
> newLme4Rslts.txt.
> The data set is a bit too big to attach; it has 62601 records.  I have
> therefore made it available (as a *.csv file) on my web page:
>     https://www.stat.auckland.ac.nz/~rolf
> Click on "Linguistics data for R-SIG-ME".
> I am really being driven nuts by this weirdness and would appreciate
> some avuncular advice from the knowledgeable.  (Ben???)
> cheers,
> Rolf
> -- 
> Rolf Turner
> Technical Editor ANZJS
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
