[R] Multivariate regression in R
(Ted Harding)
Ted.Harding at nessie.mcc.ac.uk
Thu Jan 16 21:37:03 CET 2003
On 16-Jan-03 Huntsinger, Reid wrote:
> From the equation you write, I don't see why "lsfit" wouldn't work.
> To estimate the covariance matrix of e you could use the sample
> covariance matrix of the residuals. If desired, use its cholesky
> decomposition to transform to make the error approximately
> uncorrelated, then refit (and back-transform the coefficient matrix).
>
> Stacking the columns of Y and replicating X won't do what you write;
> it forces each univariate regression to have the same coefficients.
> To get what you wrote you would replicate X in blocks of a block-diagonal
> matrix.
>
> I'm not sure I understand the role of W. If what you want is to fit
> equations like
>
> y(i) = a(w(i)) + xB(i) + e(i)
>
> for each observation and each response y(1),...,y(p), then I guess you
> could just fit each response separately, treating W as a factor. Then
> estimate the covariance matrix of e via the residuals as before, etc.
Thanks Reid! To clarify, let me spell out a simple example:
      Y               X          W
 y11 y12 y13      x11 x12     1 2 3
 y21 y22 y23      x21 x22     1 3 2
 y31 y32 y33      x31 x32     2 1 3
 y41 y42 y43      x41 x42     2 1 3
 y51 y52 y53      x51 x52     3 1 2
     ....           ....       ...
is a matrix representation of the data. Here, Y is the matrix of N
3-variate responses, X is an Nx2 matrix of quantitative covariates,
W is an Nx3 matrix of levels of the 3-level qualitative factor W
where Wij shows which level of W is associated with the jth variate
of Yi (the ith row of Y). After allowing for the effects of X and W,
it is expected that the residuals
      e  =   e11 e12 e13
             e21 e22 e23
             e31 e32 e33
             e41 e42 e43
             e51 e52 e53
              ....
will be correlated between columns (in fact reflecting a real mechanism
in the process generating the data), and for modelling it is supposed
that (e1,e2,e3) have a joint Normal distribution with covariance matrix S.
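(For what it's worth, residuals with exactly this structure are easy to
mimic in R via the Cholesky factor of S -- the values of N and S below are
made up purely for illustration:

  set.seed(1)
  N <- 100
  S <- matrix(c(1.0, 0.5, 0.3,
                0.5, 1.0, 0.4,
                0.3, 0.4, 1.0), 3, 3)        # illustrative covariance matrix
  e <- matrix(rnorm(N * 3), N, 3) %*% chol(S)  # rows of e are ~ N(0, S)
  cov(e)                                       # should be close to S

but it is the estimation, not the simulation, that is the problem.)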
The additive X component of the model can be written as X*B where
      B  =   b11 b12 b13
             b21 b22 b23
and the bij are the regression coefficients of Y on the quantitative
covariates (X1,X2).
The additive W component could be written in matrix form (for the above
"data") as:
           W1                 W2                 W3
  w1 *   1 0 0   +   w2 *   0 1 0   +   w3 *   0 0 1
         1 0 0              0 0 1              0 1 0
         0 1 0              1 0 0              0 0 1
         0 1 0              1 0 0              0 0 1
         0 1 0              0 0 1              1 0 0
          ...                ...                ...
where
col j of W1 is 1 where the jth variate in Y has level 1 of W (j = 1,2,3),
col j of W2 is 1 where the jth variate in Y has level 2 of W (j = 1,2,3),
col j of W3 is 1 where the jth variate in Y has level 3 of W (j = 1,2,3)
and w1, w2, w3 are the scalar magnitudes of the "effects" of levels
1, 2, 3 of W.
Hence the multivariate regression model for the data could be written in
matrix form as
Y = X*B + w1*W1 + w2*W2 + w3*W3 + e
where e is 3-dim N(0,S), and B, w1, w2, w3 and S are to be estimated.
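In R terms, if W is held as an Nx3 matrix of level numbers as in the example
above, the indicator matrices are easy to construct (a small sketch, using
names of my own choosing):

  W1 <- (W == 1) + 0   # 1 where the jth variate of Yi has level 1 of W
  W2 <- (W == 2) + 0
  W3 <- (W == 3) + 0
  ## systematic part of the model:  X %*% B + w1*W1 + w2*W2 + w3*W3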
What I can't make out how to do in R is to give some function (which
function?) a model specification of the form
Y ~ X + W1 + W2 + W3
but in such a way that it will fit a 2x3 matrix B of coefficients for X,
but scalar coefficients w1, w2, w3 for W1, W2, W3 (and also come up with
the estimated covariance matrix for the residuals e, though this, as you
point out, could be obtained after the fit from the Nx3 residuals;
however, it should be available from the function since it would have
to be used in fitting the model).
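(The unrestricted part on its own is not the difficulty: lm() with a matrix
response fits a separate column of coefficients for each Y-variate, e.g.
(assuming Y and X are held as matrices, as above)

  fit0 <- lm(Y ~ X)   # "mlm" fit: one coefficient column per response
  coef(fit0)          # (intercept + 2 X-coefficients) x 3 matrix

-- it is the common scalar coefficients w1, w2, w3, and the role of S in
the fitting, that I can't see how to express.)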
Analytically, the log-likelihood can be written (summing over r)
(-N/2)*log(det(S)) - 0.5*SUM[ e_r' * S^(-1) * e_r ] (e_r = rth row)
where e = Y - X*B - w1*W1 - w2*W2 - w3*W3. After differentiation and
algebra, one could implement the resulting matrix equations in octave
(or matlab) and proceed to a solution. One could even do this, as a
numerical procedure, in R -- but I'd rather not! Indeed, R's richness
in model-fitting resources tempts one to think that this problem may
be solvable using these -- it's just that I can't seem to put my hand
on what's needed.
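For concreteness, that direct numerical route might look roughly like the
following sketch (the parameter packing is my own choice; for fixed B and
w1, w2, w3 the MLE of S is crossprod(E)/N, so the profiled negative
log-likelihood is, up to a constant, (N/2)*log(det(E'E))):

  negll <- function(theta, Y, X, W1, W2, W3) {
    B <- matrix(theta[1:6], 2, 3)            # 2x3 coefficient matrix
    w <- theta[7:9]                          # scalar effects w1, w2, w3
    E <- Y - X %*% B - w[1]*W1 - w[2]*W2 - w[3]*W3
    (nrow(Y) / 2) * log(det(crossprod(E)))   # profiled -log-likelihood
  }
  fit <- optim(rep(0, 9), negll, Y = Y, X = X, W1 = W1, W2 = W2, W3 = W3,
               method = "BFGS")
  B.hat <- matrix(fit$par[1:6], 2, 3)
  w.hat <- fit$par[7:9]
  E.hat <- Y - X %*% B.hat - w.hat[1]*W1 - w.hat[2]*W2 - w.hat[3]*W3
  S.hat <- crossprod(E.hat) / nrow(Y)        # MLE of S

But that is exactly the "do it by hand" approach I was hoping to avoid.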
Your response suggests that the way to go is to fit covariates and
factors to each Y-variate in turn, take the residuals from these fits,
and estimate the covariance matrix S. But thereafter you have to get
into considering them jointly, in order to improve the fit iteratively,
since S is involved in the joint weighting to be applied.
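For the record, the equation-by-equation first pass might be sketched like
this (assuming Y, X, W are held as matrices, as above):

  p <- ncol(Y)
  res <- matrix(NA, nrow(Y), p)
  for (j in 1:p) {
    ## jth response on the covariates, with the jth column of W as a factor
    fj <- lm(Y[, j] ~ X + factor(W[, j]))
    res[, j] <- resid(fj)
  }
  S.hat <- cov(res)   # first-pass estimate of S from the residuals

but this neither constrains the level effects to be the same scalars
w1, w2, w3 across the Y-variates, nor uses S in the fitting.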
So I'm still not seeing how to do this in R ...
Many thanks for the reply!
Best wishes,
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding at nessie.mcc.ac.uk>
Fax-to-email: +44 (0)870 167 1972
Date: 16-Jan-03 Time: 20:29:01
------------------------------ XFMail ------------------------------