# Using the MVLM Package

#### 2017-03-26

Many research questions involve the collection of multivariate outcome data. A common goal in these scenarios is the identification of predictors $$\left(\mathbf{X}_{n \times p}\right)$$ that are associated with the multivariate outcome $$\left(\mathbf{Y}_{n \times q}\right)$$. Methods such as multivariate multiple regression (MMR) and multivariate analysis of variance (MANOVA) are typical approaches to accomplishing this goal.

This package allows users to conduct MMR and MANOVA in conjunction with the analytic $$p$$-values proposed by McArtor et al. (under review), in lieu of classical $$p$$-values based on statistics such as Wilks’ Lambda and Pillai’s Trace.

The remainder of this vignette illustrates the use of the MVLM package, which is straightforward to use. For further technical details of the methods being implemented, please refer to McArtor et al. (under review).

## 1. Fitting the Model

To illustrate this package, we include the toy dataset mvlmdata, which comprises five continuous outcome variables $$\mathbf{Y}$$ and three predictors $$\mathbf{X}$$ measured on 200 subjects. All outcome variables are standardized to have unit variance.

The first predictor is continuous, the second is an unordered categorical variable (variable type factor) with three levels, and the third is an ordered categorical variable (variable type ordered) with four levels. By default, mvlm() uses sum-to-zero contrast ("sum") coding to test the effects of unordered categorical variables and polynomial contrasts to test ordered categorical variables.
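These default coding schemes come from base R, so the contrast matrices themselves can be inspected without any MVLM functionality. As a sketch, assuming the factor structure described above (three unordered levels, four ordered levels):

```r
# Sum-to-zero contrasts for a 3-level unordered factor: each column
# compares one level to the grand mean (the last level receives -1's)
contr.sum(3)

# Orthogonal polynomial contrasts for a 4-level ordered factor:
# columns correspond to linear, quadratic, and cubic trends
contr.poly(4)
```

Base R lets alternative schemes be set globally via options(contrasts = ...) or per-factor via contrasts(x) <-; whether a given override is picked up by mvlm() should be verified against the package documentation.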

The function mvlm() can be used to test the association of $$\mathbf{X}$$ and $$\mathbf{Y}$$. Consider the main-effects only model and the model containing all two-way interaction terms:

library(MVLM)
data(mvlmdata)
Y <- as.matrix(Y.mvlm)

# Main effects
mvlm.res <- mvlm(Y ~ Cont + Cat + Ord, data = X.mvlm)
summary(mvlm.res)
##                 Statistic Numer.DF Pseudo.R.Square    p.value
## Omnibus Effect     9.6204        6         0.23022 9.7466e-13 ***
## (Intercept)       54.5358        1                    < 1e-12 ***
## Cont              51.0061        1         0.20344    < 1e-14 ***
## Cat                0.7998        2         0.00638    0.49033
## Ord                2.4970        3         0.02988   0.031381 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Two-Way Interactions
mvlm.res.int <- mvlm(Y ~ .^2, data = X.mvlm)
summary(mvlm.res.int)
##                 Statistic Numer.DF Pseudo.R.Square  p.value
## Omnibus Effect    20.7815       17        0.659995  < 1e-20 ***
## (Intercept)       94.0068        1                  < 1e-20 ***
## Cont              54.0874        1        0.101044  < 1e-14 ***
## Cat                1.7948        2        0.006706  0.13116
## Ord                2.4615        3        0.013795 0.033529 *
## Cont:Cat          89.5030        2        0.334412  < 1e-20 ***
## Cont:Ord           0.4837        3        0.002711  0.81433
## Cat:Ord            0.9440        6        0.010581  0.48113
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The results of the second model illustrate that the ordered categorical variable has a small but significant conditional main effect on the outcome variables. Perhaps more interestingly, the continuous and unordered categorical variables show a strong two-way interaction effect on the outcome variables.

Note that the tests of the main effects of the categorical predictors use two and three numerator degrees of freedom, reflecting the three levels of the unordered factor and the four levels of the ordered factor, respectively. Similarly, the interaction terms use the same number of numerator degrees of freedom as in the standard linear model.
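This degrees-of-freedom bookkeeping can be checked with base R alone. The factor values below are hypothetical stand-ins with the same structure as Cat and Ord:

```r
# A factor's main effect uses (number of levels - 1) numerator DF
Cat <- factor(c("a", "b", "c"))        # 3 levels -> 2 DF
Ord <- ordered(c("w", "x", "y", "z"))  # 4 levels -> 3 DF
nlevels(Cat) - 1
nlevels(Ord) - 1

# An interaction uses the product of its components' DF,
# matching the 6 DF reported for Cat:Ord above
(nlevels(Cat) - 1) * (nlevels(Ord) - 1)
```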

If a user is interested in testing the effects of specific dummy/contrast codes of a categorical predictor separately, the following approach can be used. First, compute a design matrix that includes the specific contrasts of interest. Second, transform that contrast matrix into a data frame where all entries are numeric. Third, regress $$\mathbf{Y}$$ onto that matrix directly. This approach is illustrated in the following code, which uses the R defaults of dummy coding for Cat and polynomial coding for Ord.

# --- Main Effects Only --- #

# Get dummy-codes for the categorical predictor using the first group as a
# reference category, and then remove the leading column of 1's
X.dum <- model.matrix(~., data = X.mvlm)[,-1]
X.dum <- as.data.frame(X.dum)

# Use these reformatted data in MMR
mvlm.res.dum <- mvlm(Y ~ ., data = X.dum)
summary(mvlm.res.dum)
##                 Statistic Numer.DF Pseudo.R.Square    p.value
## Omnibus Effect     9.6204        6       0.2302239 9.7466e-13 ***
## (Intercept)       17.8136        1                 1.9039e-06 ***
## Cont              51.0061        1       0.2034365    < 1e-14 ***
## Cat2               0.8653        1       0.0034511     0.3807
## Cat3               0.1842        1       0.0007346    0.89722
## Ord.L              6.0172        1       0.0239993  0.0058359 **
## Ord.Q              0.5791        1       0.0023097    0.53639
## Ord.C              0.7867        1       0.0031377    0.41628
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# --- Interaction Model --- #

# Get dummy/polynomial codes and all two-way interaction terms,
# and then remove the leading column of 1's
X.dum.int <- model.matrix(~.^2, data = X.mvlm)[,-1]
X.dum.int <- as.data.frame(X.dum.int)

# Use these reformatted data in MMR
mvlm.res.dum.int <- mvlm(Y ~ ., data = X.dum.int)
summary(mvlm.res.dum.int)
##                 Statistic Numer.DF Pseudo.R.Square    p.value
## Omnibus Effect    20.7815       17       0.6599948    < 1e-20 ***
## (Intercept)       34.5739        1                 5.7345e-11 ***
## Cont             116.4608        1       0.2175675    < 1e-20 ***
## Cat2               1.6394        1       0.0030627    0.17761
## Cat3               0.5671        1       0.0010593    0.54485
## Ord.L              1.2576        1       0.0023495    0.25358
## Ord.Q              0.6198        1       0.0011579    0.50938
## Ord.C              0.9097        1       0.0016995    0.36254
## Cont:Cat2          2.9931        1       0.0055916    0.05714 .
## Cont:Cat3        139.6784        1       0.2609416    < 1e-20 ***
## Cont:Ord.L         0.3475        1       0.0006492    0.73083
## Cont:Ord.Q         0.2189        1       0.0004090    0.86249
## Cont:Ord.C         0.8693        1       0.0016240    0.37907
## Cat2:Ord.L         0.7130        1       0.0013320    0.45418
## Cat3:Ord.L         0.9899        1       0.0018492    0.33256
## Cat2:Ord.Q         0.6793        1       0.0012690    0.47312
## Cat3:Ord.Q         1.4397        1       0.0026896    0.21324
## Cat2:Ord.C         1.4569        1       0.0027217    0.20985
## Cat3:Ord.C         0.7666        1       0.0014322    0.42621
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Note that, appropriately, the overall fit (i.e., omnibus effect) of these models mirrors the fit of the models originally fit to the data. This is because the original multiple-DF tests simultaneously tested the set of contrasts that combine to form the corresponding omnibus effect. For example, in the main-effects models, the original three-DF effect of the ordered predictor amounts to a simultaneous test of its linear, quadratic, and cubic effects.

## 2. Fitted Values, Residuals, and Predictions

This package also provides methods for computing fitted values, residuals, and predictions for new data. Fitted values and residuals are straightforward:

Y.hat <- fitted(mvlm.res)
Y.resid <- residuals(mvlm.res)
round(cor(X.dum, Y.hat), 4)
##           Y.1     Y.2     Y.3     Y.4     Y.5
## Cont   0.8760  0.8898  0.9302  0.9333  0.9392
## Cat2   0.2125  0.3095  0.0041 -0.1065  0.2266
## Cat3  -0.0879 -0.1704 -0.0107  0.1830 -0.1335
## Ord.L  0.3014  0.2019  0.1724 -0.0947  0.1172
## Ord.Q -0.0421  0.0441  0.0278 -0.1004  0.0078
## Ord.C  0.1696  0.1794  0.0970  0.3852  0.0254
round(cor(X.dum, Y.resid), 4)
##       Y.1 Y.2 Y.3 Y.4 Y.5
## Cont    0   0   0   0   0
## Cat2    0   0   0   0   0
## Cat3    0   0   0   0   0
## Ord.L   0   0   0   0   0
## Ord.Q   0   0   0   0   0
## Ord.C   0   0   0   0   0

Predictions can also be made on new data:

# Split the data: one part to build the model, the other to predict
X.train <- X.mvlm[1:150,]
Y.train <- as.matrix(Y.mvlm[1:150,])
X.test <- X.mvlm[151:200,]
Y.test <- as.matrix(Y.mvlm[151:200,])

# Fit the model to the training set
mvlm.res.pred <- mvlm(Y.train ~ ., data = X.train)

# Get predictions
Y.test.pred <- predict(mvlm.res.pred, newdata = X.test)

# Plot the predicted values vs. the true values in the test set
par(mfrow = c(2, 3))
for (k in 1:5) {
  plot(Y.test.pred[, k], Y.test[, k], main = paste("Outcome Variable", k),
       xlab = 'Predicted Value', ylab = 'Observed Value')
}
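Beyond visual inspection, predictive accuracy can be summarized numerically. The sketch below continues the example above (it assumes Y.test and Y.test.pred already exist), computing the out-of-sample correlation and root mean squared error for each outcome:

```r
# Out-of-sample correlation and RMSE, one value per outcome variable
pred.cor <- sapply(1:5, function(k) cor(Y.test[, k], Y.test.pred[, k]))
pred.rmse <- sapply(1:5, function(k) {
  sqrt(mean((Y.test[, k] - Y.test.pred[, k])^2))
})
round(rbind(Correlation = pred.cor, RMSE = pred.rmse), 4)
```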

## 3. References

Davies, R. B. (1980). The Distribution of a Linear Combination of chi-square Random Variables. Journal of the Royal Statistical Society. Series C (Applied Statistics), 29(3), 323–333.

Duchesne, P., & De Micheaux, P.L. (2010). Computing the distribution of quadratic forms: Further comparisons between the Liu-Tang-Zhang approximation and exact methods. Computational Statistics and Data Analysis, 54(4), 858–862.

McArtor, D. B., Grasman, R. P. P. P., Lubke, G. H., & Bergeman, C. S. (submitted). A new approach to conducting linear model hypothesis tests with a multivariate outcome.