# [R] LME: internal workings of QR factorization

Douglas Bates bates at stat.wisc.edu
Fri Apr 13 00:50:41 CEST 2007

```
On 4/12/07, Izmirlian, Grant (NIH/NCI) [E] <izmirlig at mail.nih.gov> wrote:
> Hi:
>
> I've been reading "Computational Methods for Multilevel Modeling" by Pinheiro and Bates, with the idea of embedding the technique in my own C-level code. The basic idea is to rewrite the joint density in a form that mimics a single least-squares problem conditional on the variance parameters.  The paper is fairly clear, except that some important details are missing. For instance, when we first meet Q_(i):
>
> /                    \                  /                                 \
> | Z_i     X_i   y_i  |                  | R_11(i)     R_10(i)     c_1(i)  |
> |                    | =         Q_(i)  |                                 |
> | Delta   0     0    |                  |   0         R_00(i)     c_0(i)  |
> \                    /                  \                                 /
>
> the text indicates that the QR factorization is limited to the first q columns of the augmented matrix on the left.
> If one plunks the first q columns of that augmented matrix into a QR factorization, one obtains an orthogonal matrix Q that is (n_i + q) x q and a nonsingular upper triangular matrix R that is q x q.
> While the text describes R as a nonsingular upper triangular q x q matrix, the matrix Q_(i) is described as a square (n_i + q) x (n_i + q) orthogonal matrix.
> The remaining columns of the matrix on the right are defined by applying transpose(Q_(i)) to both sides.
> The question is how to augment my Q, which is an orthogonal (n_i + q) x q matrix, with the missing (n_i + q) x n_i portion to produce the square orthogonal matrix mentioned in the text.
> I tried appending the n_i x n_i identity matrix to the block diagonal, but this doesn't work, as the resulting likelihood is insensitive to the variance parameters.
>
> Grant Izmirlian

The QR decomposition of an n by p matrix (n > p) can be written as the
product of an orthogonal n by n matrix Q and an n by p matrix R which
is zero below the main diagonal.  Because the rows of R beyond the pth
are zero, there is no need to store them.  For some purposes it is
more convenient to write the decomposition as the product of Q1, an n
by p matrix with orthonormal columns, and R1, a p by p upper
triangular matrix.

If you are going to be incorporating calculations like this in your
own code I would recommend looking at the "Implementation" vignette in
the lme4 package.  It describes the computational approach used in the
latest version of lmer (currently called lmer2 but to become lmer at
some point) which allows for multiple non-nested grouping factors.
The techniques that Jose and I describe in the paper you mention only
handle nested grouping factors cleanly.

That vignette has been updated since the last release of the lme4
package.  You can get the expanded version from the SVN repository, or
wait until R-2.5.0 is released and we release new versions of the
Matrix and lme4 packages for R-2.5.0.

```
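A concrete way to see the full-versus-thin distinction described above (and why padding the thin Q with an identity block fails) is the following NumPy sketch. It is my illustration, not code from the paper or from lme4; the matrix names follow the paper, but the dimensions, the random data, and the triangular Delta are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_i, q, p = 5, 2, 3          # rows of Z_i, cols of Z_i, cols of [X_i  y_i]

Zi = rng.standard_normal((n_i, q))
Xy = rng.standard_normal((n_i, p))            # [X_i, y_i] stacked side by side
Delta = np.triu(rng.standard_normal((q, q)))  # illustrative stand-in for the
                                              # relative precision factor

# Augmented matrix from the paper:  [ Z_i    X_i  y_i ]
#                                   [ Delta   0    0  ]
A = np.block([[Zi, Xy],
              [Delta, np.zeros((q, p))]])

# Factor only the first q columns, but ask for the FULL (square) Q:
Q, R = np.linalg.qr(A[:, :q], mode='complete')  # Q is (n_i+q) x (n_i+q)
R11 = R[:q, :q]                                 # q x q upper triangular

# Applying Q^T to the remaining columns fills in the rest of the array:
rest = Q.T @ A[:, q:]
R10_c1 = rest[:q, :]    # top q rows:      [ R_10(i)  c_1(i) ]
R00_c0 = rest[q:, :]    # bottom n_i rows: [ R_00(i)  c_0(i) ]

# The thin ('reduced') factorization gives only the first q columns of Q:
Q1, R1 = np.linalg.qr(A[:, :q], mode='reduced')  # Q1 is (n_i+q) x q
```

The trailing n_i columns of the full Q are an orthonormal completion of Q1's columns; because the completed Q is orthogonal, Q^T preserves the column norms that the profiled likelihood is built from, which is what an identity-block padding fails to do.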