[Rd] Addendum to wishlist bug report #10931 (factanal) (PR#12754)

ulrich.keller at uni.lu
Tue Sep 9 13:55:11 CEST 2008


Hi,

On March 10 I filed a wishlist bug report asking for the inclusion of
some changes to factanal() and the associated print method. The changes
were originally proposed by John Fox in 2005; they make print.factanal()
display factor correlations if factanal() is called with rotation =
"promax". Since I got no replies, and I am really tired of my R-curious
social science colleagues asking "What, it can't even display factor
correlations?!", I made the changes myself. I would be very grateful if
they'd find their way into a release.

I corrected a small error in John Fox's code and made another change
that enables the display of factor correlations not only for promax,
but also for the rotation methods in package GPArotation.
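
To illustrate, here is a minimal sketch of the intended behaviour
(assuming the patched factanal() and print.factanal(); the data set,
the number of factors and the choice of rotation are just for
illustration):

    fa <- factanal(covmat = ability.cov, factors = 2, rotation = "promax")
    print(fa) # with the patch, this also prints a "Factor Correlations:" block

    # oblique rotations from package GPArotation should behave the same, e.g.
    # library(GPArotation)
    # factanal(covmat = ability.cov, factors = 2, rotation = "oblimin")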

The changes are against R-devel as downloaded on September 9th, 2008.
The changed lines are marked by comments from John Fox and me. I also
changed factanal.Rd accordingly; those changes are commented as well.

My bug report is at
http://bugs.r-project.org/cgi-bin/R/wishlist?id=10931;user=guest

John Fox's original post is at
http://tolstoy.newcastle.edu.au/R/devel/05/06/1414.html

The changed files factanal.R and factanal.Rd are attached. If there is
anything else I can do to help these changes make it into R, please let
me know.

Thanks and best regards,

Uli

-- 
Ulrich Keller
Université du Luxembourg
EMACS research unit
B.P. 2
L-7201 Walferdange
Luxembourg
Mail ulrich.keller at uni.lu
Phone +352 46 66 44 9 278

--- attachment: factanal.R ---

#  File src/library/stats/R/factanal.R
#  Part of the R package, http://www.R-project.org
#
#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.
#
#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.
#
#  A copy of the GNU General Public License is available at
#  http://www.r-project.org/Licenses/

## Hmm, MM thinks diag(.) needs checking { diag(vec) when length(vec)==1 !}
## However, MM does not understand that factor analysis
##   is a *multi*variate technique!
factanal <-
    function (x, factors, data = NULL, covmat = NULL, n.obs = NA,
              subset, na.action, start = NULL,
              scores = c("none", "regression", "Bartlett"),
              rotation = "varimax",
              control = NULL, ...)
{
    sortLoadings <- function(Lambda)
    {
        cn <- colnames(Lambda)
        Phi <- attr(Lambda, "covariance")
        ssq <- apply(Lambda, 2, function(x) -sum(x^2))
        Lambda <- Lambda[, order(ssq), drop = FALSE]
        colnames(Lambda) <- cn
        neg <- colSums(Lambda) < 0
        Lambda[, neg] <- -Lambda[, neg]
        if(!is.null(Phi)) {
            unit <- ifelse(neg, -1, 1)
            ## sign-flip the corresponding rows and columns of the factor
            ## correlation matrix as well
            attr(Lambda, "covariance") <-
                outer(unit, unit) * Phi[order(ssq), order(ssq)]
        }
        Lambda
    }
    cl <- match.call()
    na.act <- NULL
    if (is.list(covmat)) {
        if (any(is.na(match(c("cov", "n.obs"), names(covmat)))))
            stop("'covmat' is not a valid covariance list")
        cv <- covmat$cov
        n.obs <- covmat$n.obs
        have.x <- FALSE
    }
    else if (is.matrix(covmat)) {
        cv <- covmat
        have.x <- FALSE
    }
    else if (is.null(covmat)) {
        if(missing(x)) stop("neither 'x' nor 'covmat' supplied")
        have.x <- TRUE
        if(inherits(x, "formula")) {
            ## this is not a `standard' model-fitting function,
            ## so no need to consider contrasts or levels
            mt <- terms(x, data = data)
            if(attr(mt, "response") > 0)
                stop("response not allowed in formula")
            attr(mt, "intercept") <- 0
            mf <- match.call(expand.dots = FALSE)
            names(mf)[names(mf) == "x"] <- "formula"
            mf$factors <- mf$covmat <- mf$scores <- mf$start <-
                mf$rotation <- mf$control <- mf$... <- NULL
            mf[[1]] <- as.name("model.frame")
            mf <- eval.parent(mf)
            na.act <- attr(mf, "na.action")
            if (.check_vars_numeric(mf))
                stop("factor analysis applies only to numerical variables")
            z <- model.matrix(mt, mf)
        } else {
            z <- as.matrix(x)
            if(!is.numeric(z))
                stop("factor analysis applies only to numerical variables")
            if(!missing(subset)) z <- z[subset, , drop = FALSE]
        }
        covmat <- cov.wt(z)
        cv <- covmat$cov
        n.obs <- covmat$n.obs
    }
    else stop("'covmat' is of unknown type")
    scores <- match.arg(scores)
    if(scores != "none" && !have.x)
        stop("requested scores without an 'x' matrix")
    p <- ncol(cv)
    if(p < 3) stop("factor analysis requires at least three variables")
    dof <- 0.5 * ((p - factors)^2 - p - factors)
    if(dof < 0)
        stop(gettextf("%d factors is too many for %d variables", factors, p),
             domain = NA)
    sds <- sqrt(diag(cv))
    cv <- cv/(sds %o% sds)

    cn <- list(nstart = 1, trace = FALSE, lower = 0.005)
    cn[names(control)] <- control
    more <- list(...)[c("nstart", "trace", "lower", "opt", "rotate")]
    if(length(more)) cn[names(more)] <- more

    if(is.null(start)) {
        start <- (1 - 0.5*factors/p)/diag(solve(cv))
        if((ns <- cn$nstart) > 1)
            start <- cbind(start, matrix(runif(ns-1), p, ns-1, byrow=TRUE))
    }
    start <- as.matrix(start)
    if(nrow(start) != p)
        stop(gettextf("'start' must have %d rows", p), domain = NA)
    nc <- ncol(start)
    if(nc < 1) stop("no starting values supplied")

    best <- Inf
    for (i in 1:nc) {
        nfit <- factanal.fit.mle(cv, factors, start[, i],
                                 max(cn$lower, 0), cn$opt)
        if(cn$trace)
            cat("start", i, "value:", format(nfit$criteria[1]),
                "uniqs:", format(as.vector(round(nfit$uniquenesses, 4))), "\n")
        if(nfit$converged && nfit$criteria[1] < best) {
            fit <- nfit
            best <- fit$criteria[1]
        }
    }
    if(best == Inf) stop("unable to optimize from these starting value(s)")
    load <- fit$loadings
    if(rotation != "none") {
        rot <- do.call(rotation, c(list(load), cn$rotate))
        ## the following lines modified by J. Fox, 26 June 2005
        if(is.list(rot)) {
            load <- rot$loadings
            ## the following lines changed by Ulrich Keller, 9 Sept 2008:
            ## GPArotation objects store the rotation matrix as $Th; storing
            ## t(solve(Th)) in 'rotmat' makes it play the same role as the
            ## rotation matrix returned by varimax()/promax(), so that
            ## print.factanal() can recover the factor correlations from it
            fit$rotmat <-
                if(inherits(rot, "GPArotation")) {
                    t(solve(rot$Th))
                } else {
                    rot$rotmat
                }
            ## end changes Ulrich Keller, 9 Sept 2008
        }
        else load <- rot
        ## end modifications J. Fox, 26 June 2005
    }
    fit$loadings <- sortLoadings(load)
    class(fit$loadings) <- "loadings"
    fit$na.action <- na.act # not used currently
    if(have.x && scores != "none") {
        Lambda <- fit$loadings
        zz <- scale(z, TRUE, TRUE)
        switch(scores,
               regression = {
                   sc <- zz %*% solve(cv, Lambda)
                   if(!is.null(Phi <- attr(Lambda, "covariance")))
                       sc <- sc %*% Phi
               },
               Bartlett = {
                   d <- 1/fit$uniquenesses
                   tmp <- t(Lambda * d)
                   sc <- t(solve(tmp %*% Lambda, tmp %*% t(zz)))
               })
        rownames(sc) <- rownames(z)
        colnames(sc) <- colnames(Lambda)
        if(!is.null(na.act)) sc <- napredict(na.act, sc)
        fit$scores <- sc
    }
    if(!is.na(n.obs) && dof > 0) {
        fit$STATISTIC <- (n.obs - 1 - (2 * p + 5)/6 -
                     (2 * factors)/3) * fit$criteria["objective"]
        fit$PVAL <- pchisq(fit$STATISTIC, dof, lower.tail = FALSE)
    }
    fit$n.obs <- n.obs
    fit$call <- cl
    fit
}

factanal.fit.mle <-
    function(cmat, factors, start=NULL, lower = 0.005, control = NULL, ...)
{
    FAout <- function(Psi, S, q)
    {
        sc <- diag(1/sqrt(Psi))
        Sstar <- sc %*% S %*% sc
        E <- eigen(Sstar, symmetric = TRUE)
        L <- E$vectors[, 1:q, drop = FALSE]
        load <- L %*% diag(sqrt(pmax(E$values[1:q] - 1, 0)), q)
        diag(sqrt(Psi)) %*% load
    }
    FAfn <- function(Psi, S, q)
    {
        sc <- diag(1/sqrt(Psi))
        Sstar <- sc %*% S %*% sc
        E <- eigen(Sstar, symmetric = TRUE, only.values = TRUE)
        e <- E$values[-(1:q)]
        e <- sum(log(e) - e) - q + nrow(S)
##        print(round(c(Psi, -e), 5))  # for tracing
        -e
    }
    FAgr <- function(Psi, S, q)
    {
        sc <- diag(1/sqrt(Psi))
        Sstar <- sc %*% S %*% sc
        E <- eigen(Sstar, symmetric = TRUE)
        L <- E$vectors[, 1:q, drop = FALSE]
        load <- L %*% diag(sqrt(pmax(E$values[1:q] - 1, 0)), q)
        load <- diag(sqrt(Psi)) %*% load
        g <- load %*% t(load) + diag(Psi) - S
        diag(g)/Psi^2
    }
    p <- ncol(cmat)
    if(is.null(start))
        start <- (1 - 0.5*factors/p)/diag(solve(cmat))
    res <- optim(start, FAfn, FAgr, method = "L-BFGS-B",
                 lower = lower, upper = 1,
                 control = c(list(fnscale=1,
                 parscale = rep(0.01, length(start))), control),
                 q = factors, S = cmat)
    Lambda <- FAout(res$par, cmat, factors)
    dimnames(Lambda) <- list(dimnames(cmat)[[1]],
                             paste("Factor", 1:factors, sep = ""))
    p <- ncol(cmat)
    dof <- 0.5 * ((p - factors)^2 - p - factors)
    un <- res$par
    names(un) <- colnames(cmat)
    class(Lambda) <- "loadings"
    ans <- list(converged = res$convergence == 0,
                loadings = Lambda, uniquenesses = un,
                correlation = cmat,
                criteria = c(objective = res$value, counts = res$counts),
                factors = factors, dof = dof, method = "mle")
    class(ans) <- "factanal"
    ans
}

print.loadings <- function(x, digits = 3, cutoff = 0.1, sort = FALSE, ...)
{
    Lambda <- unclass(x)
    p <- nrow(Lambda)
    factors <- ncol(Lambda)
    if (sort) {
        mx <- max.col(abs(Lambda))
        ind <- cbind(1:p, mx)
        mx[abs(Lambda[ind]) < 0.5] <- factors + 1
        Lambda <- Lambda[order(mx, 1:p),]
    }
    cat("\nLoadings:\n")
    fx <- format(round(Lambda, digits))
    names(fx) <- NULL
    nc <- nchar(fx[1], type="c")
    fx[abs(Lambda) < cutoff] <- paste(rep(" ", nc), collapse = "")
    print(fx, quote = FALSE, ...)
    vx <- colSums(x^2)
    varex <- rbind("SS loadings" = vx)
    if(is.null(attr(x, "covariance"))) {
        varex <- rbind(varex, "Proportion Var" = vx/p)
        if(factors > 1)
            varex <- rbind(varex, "Cumulative Var" = cumsum(vx/p))
    }
    cat("\n")
    print(round(varex, digits))
    invisible(x)
}

print.factanal <- function(x, digits = 3, ...)
{
    cat("\nCall:\n", deparse(x$call), "\n\n", sep = "")
    cat("Uniquenesses:\n")
    print(round(x$uniquenesses, digits), ...)
    print(x$loadings, digits = digits, ...)
    ## the following lines added by J. Fox, 26 June 2005
    if(!is.null(x$rotmat)) {
        ## the factor correlation matrix implied by the (possibly oblique)
        ## rotation: solve(rotmat) %*% t(solve(rotmat))
        tmat <- solve(x$rotmat)
        R <- tmat %*% t(tmat)
        factors <- x$factors
        rownames(R) <- colnames(R) <- paste("Factor", 1:factors, sep = "")
        ## the following line changed by Ulrich Keller, 9 Sept 2008:
        ## print the correlations only if the rotation was not orthogonal
        if(!isTRUE(all.equal(c(R), c(diag(factors))))) {
            cat("\nFactor Correlations:\n")
            print(R, digits = digits, ...)
        }
    }
    ## end additions J. Fox, 26 June 2005
    if(!is.null(x$STATISTIC)) {
        factors <- x$factors
        cat("\nTest of the hypothesis that", factors, if(factors == 1)
            "factor is" else "factors are", "sufficient.\n")
        cat("The chi square statistic is", round(x$STATISTIC, 2), "on", x$dof,
            if(x$dof == 1) "degree" else "degrees",
            "of freedom.\nThe p-value is", signif(x$PVAL, 3), "\n")
    } else {
        cat(paste("\nThe degrees of freedom for the model is",
                  x$dof, "and the fit was", round(x$criteria["objective"], 4),
                  "\n"))
    }
    invisible(x)
}

varimax <- function(x, normalize = TRUE, eps = 1e-5)
{
    nc <- ncol(x)
    if(nc < 2) return(x)
    if(normalize) {
        sc <- sqrt(drop(apply(x, 1, function(x) sum(x^2))))
        x <- x/sc
    }
    p <- nrow(x)
    TT <- diag(nc)
    d <- 0
    for(i in 1:1000) {
        z <- x %*% TT
        B  <- t(x) %*% (z^3 - z %*% diag(drop(rep(1, p) %*% z^2))/p)
        sB <- La.svd(B)
        TT <- sB$u %*% sB$vt
        dpast <- d
        d <- sum(sB$d)
        if(d < dpast * (1 + eps)) break
    }
    z <- x %*% TT
    if(normalize) z <- z * sc
    dimnames(z) <- dimnames(x)
    class(z) <- "loadings"
    list(loadings = z, rotmat = TT)
}

promax <- function(x, m = 4)
{
    if(ncol(x) < 2) return(x)
    dn <- dimnames(x)
    xx <- varimax(x)
    x <- xx$loadings
    Q <- x * abs(x)^(m-1)
    U <- lm.fit(x, Q)$coefficients
    d <- diag(solve(t(U) %*% U))
    U <- U %*% diag(sqrt(d))
    dimnames(U) <- NULL
    z <- x %*% U
    U <- xx$rotmat %*% U
    dimnames(z) <- dn
    class(z) <- "loadings"
    list(loadings = z, rotmat = U)
}

--- attachment: factanal.Rd ---

% File src/library/stats/man/factanal.Rd
% Part of the R package, http://www.R-project.org
% Copyright 1995-2007 R Core Development Team
% Distributed under GPL 2 or later

\name{factanal}
\alias{factanal}
%\alias{factanal.fit.mle}
\encoding{latin1}
\title{Factor Analysis}
\description{
  Perform maximum-likelihood factor analysis on a covariance matrix or
  data matrix.
}
\usage{
factanal(x, factors, data = NULL, covmat = NULL, n.obs = NA,
         subset, na.action, start = NULL,
         scores = c("none", "regression", "Bartlett"),
         rotation = "varimax", control = NULL, \dots)
}
\arguments{
  \item{x}{A formula or a numeric matrix or an object that can be
    coerced to a numeric matrix.}
  \item{factors}{The number of factors to be fitted.}
  \item{data}{An optional data frame (or similar: see
    \code{\link{model.frame}}), used only if \code{x} is a formula.  By
    default the variables are taken from \code{environment(formula)}.}
  \item{covmat}{A covariance matrix, or a covariance list as returned by
    \code{\link{cov.wt}}.  Of course, correlation matrices are covariance
    matrices.}
  \item{n.obs}{The number of observations, used if \code{covmat} is a
    covariance matrix.}
  \item{subset}{A specification of the cases to be used, if \code{x} is
    used as a matrix or formula.}
  \item{na.action}{The \code{na.action} to be used if \code{x} is
    used as a formula.}
  \item{start}{\code{NULL} or a matrix of starting values, each column
    giving an initial set of uniquenesses.}
  \item{scores}{Type of scores to produce, if any.  The default is none;
    \code{"regression"} gives Thomson's scores, and \code{"Bartlett"} gives
    Bartlett's weighted least-squares scores.  Partial matching allows
    these names to be abbreviated.}
  \item{rotation}{character. \code{"none"} or the name of a function
    to be used to rotate the factors: it will be called with first
    argument the loadings matrix, and should return a list with component
    \code{loadings} giving the rotated loadings, or just the rotated loadings.}
  \item{control}{A list of control values,
    \describe{
      \item{nstart}{The number of starting values to be tried if
	\code{start = NULL}. Default 1.}
      \item{trace}{logical. Output tracing information? Default \code{FALSE}.}
      \item{lower}{The lower bound for uniquenesses during
	optimization. Should be > 0. Default 0.005.}
      \item{opt}{A list of control values to be passed to
	\code{\link{optim}}'s \code{control} argument.}
      \item{rotate}{a list of additional arguments for the rotation function.}
    }
  }
  \item{\dots}{Components of \code{control} can also be supplied as
    named arguments to \code{factanal}.}
}
\details{
  The factor analysis model is
  \deqn{x = \Lambda f + e}{ x = Lambda f + e}
  for a \eqn{p}--element row-vector \eqn{x}, a \eqn{p \times k}{p x k}
  matrix of \emph{loadings}, a \eqn{k}--element vector of \emph{scores}
  and a \eqn{p}--element vector of errors.  None of the components
  other than \eqn{x} is observed, but the major restriction is that the
  scores be uncorrelated and of unit variance, and that the errors be
  independent with variances \eqn{\Psi}{Psi}, the
  \emph{uniquenesses}.  Thus factor analysis is in essence a model for
  the covariance matrix of \eqn{x},
  \deqn{\Sigma = \Lambda^\prime\Lambda + \Psi}{Sigma = Lambda'Lambda + Psi}
  There is still some indeterminacy in the model for it is unchanged
  if \eqn{\Lambda}{Lambda} is replaced by \eqn{G\Lambda}{G Lambda} for
  any orthogonal matrix \eqn{G}.  Such matrices \eqn{G} are known as
  \emph{rotations} (although the term is applied also to non-orthogonal
  invertible matrices).
  
  If \code{covmat} is supplied it is used.  Otherwise \code{x} is used if
  it is a matrix, or a formula \code{x} is used with \code{data} to
  construct a model matrix, and that is used to construct a covariance
  matrix.  (It makes no sense for the formula to have a response,
  and all the variables must be numeric.)  Once a covariance matrix is found or
  calculated from \code{x}, it is converted to a correlation matrix for
  analysis.  The correlation matrix is returned as component
  \code{correlation} of the result.

  The fit is done by optimizing the log likelihood assuming multivariate
  normality over the uniquenesses.  (The maximizing loadings for given
  uniquenesses can be found analytically: Lawley & Maxwell (1971,
  p. 27).)  All the starting values supplied in \code{start} are tried
  in turn and the best fit obtained is used.  If \code{start = NULL}
  then the first fit is started at the value suggested by
  \enc{Jöreskog}{Joreskog} (1963) and given by Lawley & Maxwell
  (1971, p. 31), and then \code{control$nstart - 1} other values are
  tried, randomly selected as equal values of the uniquenesses.

  The uniquenesses are technically constrained to lie in \eqn{[0, 1]},
  but near-zero values are problematical, and the optimization is
  done with a lower bound of \code{control$lower}, default 0.005
  (Lawley & Maxwell, 1971, p. 32).

  Scores can only be produced if a data matrix is supplied and used.
  The first method is the regression method of Thomson (1951), the
  second the weighted least-squares method of Bartlett (1937, 1938).
  Both are estimates of the unobserved scores \eqn{f}.  Thomson's method
  regresses (in the population) the unknown \eqn{f} on \eqn{x} to yield
  \deqn{\hat f = \Lambda^\prime \Sigma^{-1} x}{hat f = Lambda' Sigma^-1 x}
  and then substitutes the sample estimates of the quantities on the
  right-hand side.  Bartlett's method minimizes the sum of squares of
  standardized errors over the choice of \eqn{f}, given (the fitted)
  \eqn{\Lambda}{Lambda}.

  If \code{x} is a formula then the standard NA-handling is applied to
  the scores (if requested): see \code{\link{napredict}}.
}
\value{
  An object of class \code{"factanal"} with components
  \item{loadings}{A matrix of loadings, one column for each factor.  The
    factors are ordered in decreasing order of sums of squares of
    loadings, and given the sign that will make the sum of the loadings
    positive.}
  \item{uniquenesses}{The uniquenesses computed.}
  \item{correlation}{The correlation matrix used.}
  \item{criteria}{The results of the optimization: the value of the
    negative log-likelihood and information on the iterations used.}
  \item{factors}{The argument \code{factors}.}
  \item{dof}{The number of degrees of freedom of the factor analysis model.}
  \item{method}{The method: always \code{"mle"}.}
  %Modification by Ulrich Keller, Sept 9 2008
  \item{rotmat}{The rotation matrix if relevant.}
  %End modification by Ulrich Keller, Sept 9 2008
  \item{scores}{If requested, a matrix of scores.  \code{napredict} is
    applied to handle the treatment of values omitted by the \code{na.action}.}
  \item{n.obs}{The number of observations if available, or \code{NA}.}
  \item{call}{The matched call.}
  \item{na.action}{If relevant.}
  \item{STATISTIC, PVAL}{The significance-test statistic and P value, if
    they can be computed.}
}

\note{
  There are so many variations on factor analysis that it is hard to
  compare output from different programs.  Further, the optimization in
  maximum likelihood factor analysis is hard, and many other examples we
  compared had less good fits than produced by this function.  In
  particular, solutions which are Heywood cases (with one or more
  uniquenesses essentially zero) are much more common than most texts
  and some other programs would lead one to believe.
}

\references{
  Bartlett, M. S. (1937) The statistical conception of mental factors.
  \emph{British Journal of Psychology}, \bold{28}, 97--104.

  Bartlett, M. S. (1938) Methods of estimating mental
  factors. \emph{Nature}, \bold{141}, 609--610.
  
  \enc{Jöreskog}{Joreskog}, K. G. (1963)
  \emph{Statistical Estimation in Factor Analysis.}  Almqvist and Wicksell.
  
  Lawley, D. N. and Maxwell, A. E. (1971) \emph{Factor Analysis as a
    Statistical Method.} Second edition. Butterworths.

  Thomson, G. H. (1951) \emph{The Factorial Analysis of Human Ability.}
  London University Press.
}

\seealso{
  \code{\link{print.loadings}},
  \code{\link{varimax}}, \code{\link{princomp}},
  \code{\link[datasets]{ability.cov}}, \code{\link[datasets]{Harman23.cor}},
  \code{\link[datasets]{Harman74.cor}}
}

\examples{
# A little demonstration, v2 is just v1 with noise,
# and same for v4 vs. v3 and v6 vs. v5
# Last four cases are there to add noise
# and introduce a positive manifold (g factor)
v1 <- c(1,1,1,1,1,1,1,1,1,1,3,3,3,3,3,4,5,6)
v2 <- c(1,2,1,1,1,1,2,1,2,1,3,4,3,3,3,4,6,5)
v3 <- c(3,3,3,3,3,1,1,1,1,1,1,1,1,1,1,5,4,6)
v4 <- c(3,3,4,3,3,1,1,2,1,1,1,1,2,1,1,5,6,4)
v5 <- c(1,1,1,1,1,3,3,3,3,3,1,1,1,1,1,6,4,5)
v6 <- c(1,1,1,2,1,3,3,3,4,3,1,1,1,2,1,6,5,4)
m1 <- cbind(v1,v2,v3,v4,v5,v6)
cor(m1)
factanal(m1, factors=3) # varimax is the default
factanal(m1, factors=3, rotation="promax")
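# with the proposed changes to print.factanal(), the promax fit above
# also displays the factor correlation matrix implied by the rotation
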
# The following shows the g factor as PC1
prcomp(m1)

## formula interface
factanal(~v1+v2+v3+v4+v5+v6, factors = 3,
         scores = "Bartlett")$scores

## a realistic example from Bartholomew (1987, pp. 61-65)
utils::example(ability.cov)
}
\keyword{multivariate}
