[R] Test for equality of coefficients in multivariate multiple regression

Ulrich Keller uhkeller at web.de
Wed Jul 19 12:09:36 CEST 2006


Hello and thank you for your answers, Andrew and Berwin. If I'm not 
mistaken, the mixed-model version of Berwin's approach would be:

#My stuff: simulate 100 units with two binary predictors; both responses
#share the unit effect tmp, the true x1 effect is .5 for both y1 and y2,
#and the x2 effects differ (.3 vs .7):
DF<-data.frame(x1=rep(c(0,1),each=50),x2=rep(c(0,1),50))
tmp<-rnorm(100)
DF$y1<-tmp+DF$x1*.5+DF$x2*.3+rnorm(100,0,.5)
DF$y2<-tmp+DF$x1*.5+DF$x2*.7+rnorm(100,0,.5)
#Multivariate multiple regression of both responses on x1 and x2:
x.mlm<-lm(cbind(y1,y2)~x1+x2,data=DF)
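(Aside: if I understand the car package's interface for multivariate
models correctly, the same question can be put to x.mlm directly; the
linearHypothesis() call and its P argument below are only my reading of
that interface, not something from your replies, so please correct me
if I have it wrong.)

#Hypothetical check with car: test L %*% B %*% P = 0, where "x1 = 0"
#picks the x1 row of the coefficient matrix and P = (1,-1)' contrasts
#the two responses, i.e. tests b_x1,y1 - b_x1,y2 = 0 (my assumption
#about car's interface):
library(car)
linearHypothesis(x.mlm, "x1 = 0", P=matrix(c(1,-1), ncol=1))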

#1st part of Andrew's suggestion: stack the two responses into one long
#vector, give each response its own copy of the predictors (x11/x21 for
#y1, x12/x22 for y2), and let wh indicate which response a row belongs to:
DF2 <- with(DF, data.frame(y=c(y1,y2)))
DF2$x11 <- with(DF, c(x1, rep(0,100)))
DF2$x21 <- with(DF, c(x2, rep(0,100)))
DF2$x12 <- with(DF, c(rep(0,100), x1))
DF2$x22 <- with(DF, c(rep(0,100), x2))
DF2$x1 <- with(DF, c(x1, x1))  #common x1 column, shared by both responses
DF2$wh <- rep(c(0,1), each=100)
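(For comparison, here is what I take to be the fixed-effects version of
the same comparison on the stacked data, i.e. my guess at Berwin's
approach; the residual df of 195 and 194 are the ones Andrew mentions
below:)

#Plain lm() on the stacked data, mirroring the mm1/mm2 formulas below:
fm1 <- lm(y~wh + x11 + x21 + x12 + x22, data=DF2)  #194 residual df
fm2 <- lm(y~wh + x1 + x12 + x22, data=DF2)         #195 residual df
anova(fm2, fm1)  #F test of the same restriction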

#Mixed-model version of the two models:
 > DF2$unit <- rep(c(1:100), 2)
 > library(nlme)
 > mm1 <- lme(y~wh + x11 + x21 + x12 + x22, random= ~1 | unit, DF2, method="ML")
 > fixef(mm1)
(Intercept)          wh         x11         x21         x12         x22
 0.07800993  0.15234579  0.52936947  0.13853332  0.37285132  0.46048418
 > coef(x.mlm)
                    y1        y2
(Intercept) 0.07800993 0.2303557
x1          0.52936947 0.3728513
x2          0.13853332 0.4604842
 > mm2 <- update(mm1, y~wh + x1 + x12 + x22)
 > anova(mm1, mm2)
    Model df      AIC      BIC    logLik   Test  L.Ratio p-value
mm1     1  8 523.6284 550.0149 -253.8142                       
mm2     2  7 522.0173 545.1055 -254.0086 1 vs 2 0.388908  0.5329
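(As a sanity check of my own, the per-response coefficients can be
rebuilt from fixef(mm1); with this balanced setup they match
coef(x.mlm) above:)

#y1 uses the baseline intercept and the "*1" slopes; y2 adds wh to the
#intercept and uses the "*2" slopes:
b <- fixef(mm1)
cbind(y1=c(b["(Intercept)"], b["x11"], b["x21"]),
      y2=c(b["(Intercept)"]+b["wh"], b["x12"], b["x22"]))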

The anova comparison seems to be correct. If I read the anova() output
right, it says there is no evidence that the effect of x1 differs between
y1 and y2. What I don't understand, then, is why the coefficients for x12
and x22 differ so much between mm1 and mm2:

 > fixef(mm2)
(Intercept)          wh          x1         x12         x22
  0.1472766   0.1384474   0.5293695  -0.1565182   0.3497476
 > fixef(mm1)
(Intercept)          wh         x11         x21         x12         x22
 0.07800993  0.15234579  0.52936947  0.13853332  0.37285132  0.46048418

Sorry for being a bit slow here; I'm (obviously) not a statistician.
Thanks again,

Uli

Andrew Robinson wrote:
> G'day Berwin,
>
> my major reason for preferring the mixed-effect approach is that, as
> you can see below, the residual df for your models are 195 and 194,
> respectively.  The 100 units are each contributing two degrees of
> freedom to your inference, and presumably to your estimate of the
> variance.  I feel a little squeamish about that, because it's not
> clear to me that they can be assumed to be independent.  I worry about
> the effects on the size of the test.  With a mixed-effects model each
> unit could be a cluster of two observations, and I would guess the
> size would be closer to nominal, if not nominal.
>
> Cheers
>
> Andrew


