[R] Validation / Training - test data
Sam
Sam_Smith at me.com
Wed Sep 29 15:03:56 CEST 2010
Thanks for this,
I had used
> validate(model0, method = "boot", B = 200)
to get an index.corrected Brier score.
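(A rough sketch of that, with placeholder formula and data, assuming model0 is an lrm fit from the rms package:)

library(rms)
# validate() needs the design matrix and response kept in the fit object
model0 <- lrm(y ~ x1 + x2, data = d, x = TRUE, y = TRUE)
v <- validate(model0, method = "boot", B = 200)
v["B", "index.corrected"]   # "B" row = Brier score, optimism-corrected column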
However, I also want to bootstrap the predicted probabilities returned by predict(model1, type = "response") to get an idea of their uncertainty. Or am I better off just using se.fit = TRUE and calculating the 95% CI from the standard errors? Does what I want to do make sense?
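For concreteness, the two options look roughly like this (a sketch only; model1 is assumed to be a binomial glm, and dat / newdat are placeholder names for the fitting data and the values to predict for):

# (a) Wald-type intervals: take se.fit on the link scale and back-transform,
#     which keeps the interval inside (0, 1)
pr    <- predict(model1, newdata = newdat, type = "link", se.fit = TRUE)
lower <- plogis(pr$fit - 1.96 * pr$se.fit)
upper <- plogis(pr$fit + 1.96 * pr$se.fit)

# (b) Bootstrap: refit on resampled rows, collect the predicted probabilities,
#     then take percentile intervals for each row of newdat
B <- 200
boot.p <- replicate(B, {
    i     <- sample(nrow(dat), replace = TRUE)
    fit.b <- update(model1, data = dat[i, ])
    predict(fit.b, newdata = newdat, type = "response")
})
ci <- apply(boot.p, 1, quantile, probs = c(0.025, 0.975))   # 2 x nrow(newdat)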
Thanks
On 29 Sep 2010, at 13:38, Frank Harrell wrote:
Split-sample validation is highly unstable with your sample size.
The rms package can help with bootstrapping or cross-validation, assuming
you have all modeling steps repeated for each resample.
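For example (a rough sketch; the formula, data frame, and B values below are placeholders):

library(rms)
# Keep the design matrix and response in the fit so validate() can resample
f <- lrm(y ~ x1 + x2, data = d, x = TRUE, y = TRUE)

validate(f, method = "boot", B = 200)             # bootstrap validation
validate(f, method = "crossvalidation", B = 10)   # 10-fold cross-validation

# If variable selection is part of the modeling, bw = TRUE repeats the
# backward stepdown inside every resample
validate(f, method = "boot", B = 200, bw = TRUE)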
Frank
-----
Frank Harrell
Department of Biostatistics, Vanderbilt University