[R] [OT] good reference for mixed models and EM algorithm

Ravi Varadhan rvaradhan at jhmi.edu
Mon Feb 11 17:49:06 CET 2008


Hi Doug & Ted,

The multivariate Aitken accelerator suggested by Ted is numerically
ill-conditioned.  I have written a globally-convergent, general-purpose EM
accelerator that works well. It is quite simple to implement for any EM-type
algorithm (e.g. ECM, ECME, which are all monotone in likelihood).  My paper
on that should be coming out soon in Scandinavian J of Stats.  I would be
interested in helping with its implementation for EM acceleration in large
data sets with non-nested random effects.  
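The post does not give the details of the accelerator, but the general shape of a squared-extrapolation step for a fixed-point (EM-type) map can be sketched in R. The function name, steplength rule, and structure below are illustrative assumptions, not taken from the post or the paper it mentions:

```r
# Illustrative squared-extrapolation step for accelerating a
# fixed-point (EM-type) map `fp`.  The steplength rule here is one
# common choice, assumed for illustration only.
squarem_step <- function(fp, x0) {
  x1 <- fp(x0)
  x2 <- fp(x1)
  r  <- x1 - x0                                # first-order change
  v  <- (x2 - x1) - r                          # second-order change
  alpha <- -sqrt(sum(r^2)) / sqrt(sum(v^2))    # steplength
  x_acc <- x0 - 2 * alpha * r + alpha^2 * v    # extrapolated point
  fp(x_acc)                                    # stabilise with one more EM step
}

# For the linear map x -> 0.5 * x + 1 (fixed point 2), one step
# lands on the fixed point exactly:
squarem_step(function(x) 0.5 * x + 1, 0)  # 2
```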

Best,
Ravi.

-----------------------------------------------------------------------------------

Ravi Varadhan, Ph.D.

Assistant Professor, The Center on Aging and Health

Division of Geriatric Medicine and Gerontology 

Johns Hopkins University

Ph: (410) 502-2619

Fax: (410) 614-9625

Email: rvaradhan at jhmi.edu

Webpage:  http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html

 

------------------------------------------------------------------------------------

-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On
Behalf Of Ted Harding
Sent: Monday, February 11, 2008 11:19 AM
To: r-help at r-project.org
Subject: Re: [R] [OT] good reference for mixed models and EM algorithm

On 11-Feb-08 15:07:37, Douglas Bates wrote:
> [...]
> Except that Doug Bates doesn't use the EM algorithm for fitting mixed
> models any more.  The lme4 package previously had an option for
> starting with EM (actually ECME, which is a variant of EM) iterations
> but I have since removed it.  For large data sets and especially for
> models with non-nested random effects, the EM iterations just slowed
> things down relative to direct optimisation of the log-likelihood.
> [...]

The "raw" EM Algorithm can be slow. I have had good success
using Aitken Acceleration for it.

The basic principle is that, once an iterative algorithm
gets to a stage where (approximately)

[A]  (X[n+1] - X) = k*(X[n] - X)

where X[n] is the result at the n-th iteration, -1 < k < 1, and
X is the limit, then you can use recent results to predict the
limit. Taking the above equation literally, along with its
analogue for the next step:

[B]  (X[n+2] - X) = k*(X[n+1] - X)

from which k = (X[n+2] - X[n+1])/(X[n+1] - X[n])

and then

[C]  X = X[n] + (X[n+1] - X[n])/(1 - k).
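The scalar extrapolation can be sketched in R (a minimal illustration; the function name is mine):

```r
# Aitken (delta-squared) extrapolation of the limit from three
# successive iterates x0, x1, x2 of a fixed-point iteration.
aitken <- function(x0, x1, x2) {
  k <- (x2 - x1) / (x1 - x0)     # estimated contraction ratio
  x0 + (x1 - x0) / (1 - k)       # predicted limit
}

# The iteration x[n+1] = 0.5 * x[n] + 1 converges to 2; since it is
# exactly linear, Aitken recovers the limit from any three iterates:
x0 <- 0; x1 <- 0.5 * x0 + 1; x2 <- 0.5 * x1 + 1
aitken(x0, x1, x2)  # 2
```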

If X is multidimensional (say dimension = p), then k is a
pxp matrix, and you want all its eigenvalues to be less than 1
in modulus. Then you use the matrix analogues of the above
equations, basing them on the (p+2) successive iterations
X[n], X[n+1], ... , X[n+p+1], i.e. on the (p+1) successive
difference vectors

  c(X[n+1]-X[n], X[n+2]-X[n+1], ... , X[n+p+1]-X[n+p])

I have had good experience with this too!
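One way to set this up in R, estimating the matrix k from the successive differences (the function name and the details of the linear solves are my own):

```r
# Multivariate Aitken extrapolation for a p-dimensional iteration.
# `iters` holds p + 2 successive iterates X[n], ..., X[n+p+1] as
# columns; the p x p matrix K is estimated from the differences.
aitken_mv <- function(iters) {
  p <- nrow(iters)
  stopifnot(ncol(iters) == p + 2)
  d  <- diff(t(iters))                    # p + 1 difference vectors (rows)
  D1 <- t(d[1:p, , drop = FALSE])         # d[n], ..., d[n+p-1] as columns
  D2 <- t(d[2:(p + 1), , drop = FALSE])   # d[n+1], ..., d[n+p] as columns
  K  <- D2 %*% solve(D1)                  # solves D2 = K %*% D1
  iters[, 1] + solve(diag(p) - K, iters[, 2] - iters[, 1])
}

# A linear test iteration X[n+1] = A X[n] + b has fixed point
# solve(I - A, b); the extrapolation recovers it from 4 iterates:
A <- matrix(c(0.5, 0.1, 0.2, 0.3), 2, 2)
b <- c(1, 1)
X <- matrix(0, 2, 4)
for (i in 2:4) X[, i] <- A %*% X[, i - 1] + b
aitken_mv(X)  # equals solve(diag(2) - A, b)
```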

The best method of proceeding is:

Stage 1: Monitor the sequence {X[n]} until it seems that
  equation [A] is beginning to be approximately true;

Stage 2: Apply equations [A], [B], [C] to estimate X.

Stage 3: Starting at this X, run a few more iterations
  so that you get a better (later) estimate of k, and
  then apply [A], [B], [C] again to re-estimate X.

Repeat stage 3 until happy (or bored).
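The three stages might be sketched as follows, in a toy scalar version where the map `fp` stands in for one EM update; the stopping rules and tolerances are mine:

```r
# Sketch of the staged scheme: iterate, extrapolate, restart.
# `fp` stands in for one EM update; tolerances are illustrative.
accelerate <- function(fp, x, tol = 1e-10, max_outer = 50) {
  for (outer in seq_len(max_outer)) {
    # Stages 1-2: take two plain iterations, then extrapolate
    x0 <- x; x1 <- fp(x0)
    if (abs(x1 - x0) < tol) return(x1)   # already converged
    x2 <- fp(x1)
    k  <- (x2 - x1) / (x1 - x0)          # current ratio estimate
    x  <- x0 + (x1 - x0) / (1 - k)       # extrapolated estimate of X
    # Stage 3: the loop restarts the iteration from the new estimate
  }
  x
}

# Fixed point of cos(x) (about 0.739085):
accelerate(cos, 1)
```

For a scalar map this repeated extrapolate-and-restart scheme is essentially Steffensen's method.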

The EM Algorithm, in most cases, falls into the class
of procedures to which Aitken Acceleration is applicable.

Best wishes to all,
Ted.

--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding at manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 11-Feb-08                                       Time: 16:18:45
------------------------------ XFMail ------------------------------

______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


