lm.fit {stats} R Documentation

## Fitter Functions for Linear Models

### Description

These are the basic computing engines called by lm used to fit linear models. They should usually not be called directly, except by experienced users. .lm.fit() is a bare-bones wrapper to the innermost QR-based C code, on which glm.fit and lsfit are also based; it is intended for even more experienced users.

### Usage

```r
lm.fit(x, y, offset = NULL, method = "qr", tol = 1e-7,
       singular.ok = TRUE, ...)

lm.wfit(x, y, w, offset = NULL, method = "qr", tol = 1e-7,
        singular.ok = TRUE, ...)

.lm.fit(x, y, tol = 1e-7)
```


### Arguments

- `x`: design matrix of dimension n * p.
- `y`: vector of observations of length n, or a matrix with n rows.
- `w`: vector of weights (length n) to be used in the fitting process for the wfit functions. Weighted least squares is used with weights w, i.e., sum(w * e^2) is minimized.
- `offset`: (numeric of length n). This can be used to specify an a priori known component to be included in the linear predictor during fitting.
- `method`: currently, only method = "qr" is supported.
- `tol`: tolerance for the qr decomposition. Default is 1e-7.
- `singular.ok`: logical. If FALSE, a singular model is an error.
- `...`: currently disregarded.
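The weighted criterion above can be checked against lm() directly. A minimal sketch (not part of the help page; the names fit1 and fit2 are illustrative): lm.wfit() with weights w should agree with lm() on a no-intercept formula using the same weights.

```r
## Sketch: lm.wfit() minimizes sum(w * e^2), matching lm(..., weights = w)
set.seed(1)
n <- 20; p <- 3
x <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
w <- runif(n)                      # positive weights

fit1 <- lm.wfit(x, y, w)           # low-level weighted fit
fit2 <- lm(y ~ x - 1, weights = w) # same model via the formula interface

## coefficients agree up to names
all.equal(unname(fit1$coefficients), unname(coef(fit2)))
```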

### Details

If y is a matrix, offset can be a numeric matrix of the same dimensions, in which case each column is applied to the corresponding column of y.
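A short sketch of the matrix-offset behaviour described above (variable names are illustrative; the equivalence shown rests on the offset being subtracted from y before fitting):

```r
## Sketch: a matrix offset is applied column-wise to a matrix response
set.seed(2)
n   <- 10
x   <- cbind(1, rnorm(n))          # design with an intercept column
Y   <- cbind(rnorm(n), rnorm(n))   # n x 2 response matrix
off <- matrix(0.5, n, 2)           # offset with the same dimensions as Y

f1 <- lm.fit(x, Y, offset = off)
f2 <- lm.fit(x, Y - off)           # offsetting by hand, column by column

all.equal(f1$coefficients, f2$coefficients)
```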

### Value

a list with components (for lm.fit and lm.wfit)

- `coefficients`: p vector.
- `residuals`: n vector or matrix.
- `fitted.values`: n vector or matrix.
- `effects`: n vector of orthogonal single-df effects. The first rank of them correspond to non-aliased coefficients, and are named accordingly.
- `weights`: n vector (only for the *wfit* functions).
- `rank`: integer, giving the rank.
- `df.residual`: degrees of freedom of residuals.
- `qr`: the QR decomposition; see qr.

Fits without any columns or non-zero weights do not have the effects and qr components.

.lm.fit() returns a subset of the above, with the qr part unwrapped, plus a logical component pivoted indicating whether the underlying QR algorithm pivoted.

### See Also

lm, which you should use for linear least squares regression unless you know better.

### Examples

```r
require(utils)
set.seed(129)

n <- 7 ; p <- 2
X <- matrix(rnorm(n * p), n, p) # no intercept!
y <- rnorm(n)
w <- rnorm(n)^2

str(lmw <- lm.wfit(x = X, y = y, w = w))

str(lm. <- lm.fit(x = X, y = y))

## fits w/o intercept:
all.equal(unname(coef(lm(y ~ X - 1))),
          unname(coef(lm.fit(X, y))))
all.equal(unname(coef(lm.fit(X, y))),
          coef(.lm.fit(X, y)))

if(require("microbenchmark")) {
  mb <- microbenchmark(lm(y ~ X - 1), lm.fit(X, y), .lm.fit(X, y))
  print(mb)
  boxplot(mb, notch = TRUE)
}
```



[Package stats version 4.4.0 Index]