[R] Re: testing slopes different than a given value

Vito Ricci vito_ricci at yahoo.com
Fri Feb 11 09:29:57 CET 2005


Hi,

We know that a regression coefficient b_hat estimated from sample
data (under the usual linear model assumptions) has mean b and
standard error se(b_hat); (b_hat - b)/se(b_hat) follows Student's t
distribution with df = n - p, where p is the number of estimated
coefficients (here df = 100 - 3 = 97). So you can test h0: b = b0
against hA: b != b0 with a t test (for large samples the t
distribution is practically the same as the normal distribution):

## simulate data with known slopes 0.6 and 0.3, then fit the model
x1 <- rnorm(100)
x2 <- rnorm(100)
e  <- rnorm(100)
y  <- 3 + 0.6*x1 + 0.3*x2 + e
fm <- lm(y ~ x1 + x2)

> summary(fm)

Call:
lm(formula = y ~ x1 + x2)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.17610 -0.65146 -0.09532  0.54848  2.41966 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  3.04924    0.09661  31.562  < 2e-16 ***
x1           0.55124    0.09930   5.551 2.47e-07 ***
x2           0.23477    0.10534   2.229   0.0281 *  
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1 

Residual standard error: 0.9492 on 97 degrees of freedom
Multiple R-Squared: 0.2687,     Adjusted R-squared: 0.2536 
F-statistic: 17.82 on 2 and 97 DF,  p-value: 2.561e-07
> b<-coef(fm)
> b
(Intercept)          x1          x2 
  3.0492374   0.5512398   0.2347682
You get the standard errors of b_hat from summary(fm):

se<-c(0.09661,0.09930,0.10534)
> se
[1] 0.09661 0.09930 0.10534
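
Rather than copying the numbers by hand, you can also pull the
standard errors straight from the fitted object; a small sketch
(coef(summary(fm)) and vcov(fm) are the standard accessors for lm
fits):

## extract the standard errors programmatically
se <- coef(summary(fm))[, "Std. Error"]
## equivalently: se <- sqrt(diag(vcov(fm)))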

ttest<-(b[2]-0.6)/se[2]  ## t statistic for h0: b1 = 0.6

> ttest
        x1 
-0.4910391
> 2*pt(-abs(ttest),df=97) ## two-sided p-value; with df this large
## the normal approximation gives practically the same result
      x1 
0.624508 

The p-value is large, so we do not reject h0: b1 = 0.6.
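
The same recipe works for the x2 coefficient against 0.3, and if you
want both restrictions tested at once, the car package offers a
linear-hypothesis F test. A small sketch (the car call is optional,
and the function was called linear.hypothesis() in older versions of
the package):

## same two-sided test for the x2 coefficient against 0.3
ttest2 <- (b[3]-0.3)/se[3]
2*pt(-abs(ttest2), df=fm$df.residual)  ## two-sided p-value

## optional: joint F test of both restrictions (requires car)
## library(car)
## linearHypothesis(fm, c("x1 = 0.6", "x2 = 0.3"))

You can also look at confint(fm): if 0.6 lies inside the confidence
interval for x1, you would not reject h0 at the corresponding level.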

Hope this helps.
Best regards,
Vito

You wrote:
In a multiple linear regression with two independent
variables is there any function in R to test for the
coefficients being different than some given values?
Example:
x1<-rnorm(100)
x2<-rnorm(100)
y<-3+0.6*x1+0.3*x2 
fm<-lm(y~x1+x2)
Obtain a test for the coefficients for x1 being 
different than 0.6 and for x2 different than 0.3
Thanks
Manuel


