[R] bootstrapping in regression
Thomas Mang
Thomas.Mang at fiwi.at
Thu Jan 29 17:43:54 CET 2009
Hi,
Please forgive me if my questions sound somewhat 'stupid' to the trained
and experienced statisticians among you. I am also not sure whether I have
used all terms correctly; if not, corrections are welcome.
I have asked myself the following question regarding bootstrapping in
regression:
Say that, for whatever reason, one does not want to take the p-values for
the regression coefficients from the established test-statistic
distributions (t-distribution for individual coefficients, F-distribution
for whole-model comparisons), but instead wants to apply a more robust
approach by bootstrapping.
In the simple linear regression case, one possibility is to randomly
rearrange the pairing between the X and Y values (i.e., permute one
variable while keeping the other as observed), estimate the model and take
the beta1 coefficient. Doing this many, many times yields a null
distribution for beta1, against which the beta1 estimated from the
observed data can finally be compared.
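In code, the procedure I have in mind looks roughly like this (a minimal
sketch only; the data frame dat and its columns y and x1 are placeholder
names):

set.seed(1)
n_perm    <- 2000
beta1_obs <- coef(lm(y ~ x1, data = dat))["x1"]

beta1_null <- replicate(n_perm, {
  dat_perm    <- dat
  dat_perm$x1 <- sample(dat_perm$x1)   # break the X/Y pairing
  coef(lm(y ~ x1, data = dat_perm))["x1"]
})

## two-sided p-value from the null distribution
p_val <- mean(abs(beta1_null) >= abs(beta1_obs))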
What I now wonder is what the situation looks like in the multiple
regression case. Assume there are two predictors, X1 and X2. Is it then
possible to do the same, but rearrange the values of only one predictor
(the one of interest) at a time? Say I again want to test beta1. Is it
then valid to randomly rearrange the X1 data many times (keeping Y and X2
as observed), fit the model each time, take the beta1 coefficient, and
finally compare the beta1 of the observed data against the distribution of
these beta1s?
For X2, do the same: randomly rearrange X2 each time while keeping Y and
X1 as observed, and so on.
Is this valid? (A code sketch of what I mean follows below.)
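Something like this (again assuming a data frame dat, here with columns y,
x1 and x2; all names are just placeholders):

set.seed(1)
n_perm    <- 2000
beta1_obs <- coef(lm(y ~ x1 + x2, data = dat))["x1"]

beta1_null <- replicate(n_perm, {
  dat_perm    <- dat
  dat_perm$x1 <- sample(dat_perm$x1)   # permute X1 only; Y and X2 stay as observed
  coef(lm(y ~ x1 + x2, data = dat_perm))["x1"]
})

p_val <- mean(abs(beta1_null) >= abs(beta1_obs))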
Second, if this is valid for the 'normal', fixed-effects-only regression,
is it also valid to derive null distributions for the regression
coefficients of the fixed effects in a mixed model in the same way? Or
does the quite different parameter-estimation procedure forbid this
approach (forbid in the sense of producing bogus outcomes)?
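What I have in mind there would look roughly like the following, e.g. with
lmer() from the lme4 package (the grouping factor 'group' is again just a
placeholder):

library(lme4)
set.seed(1)
n_perm    <- 500                      # lmer fits are slower, so fewer permutations here
beta1_obs <- fixef(lmer(y ~ x1 + x2 + (1 | group), data = dat))["x1"]

beta1_null <- replicate(n_perm, {
  dat_perm    <- dat
  dat_perm$x1 <- sample(dat_perm$x1)  # permute X1 only, as in the fixed-effects case
  fixef(lmer(y ~ x1 + x2 + (1 | group), data = dat_perm))["x1"]
})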
Thanks, Thomas