[R] Jackknife and rpart

Liaw, Andy andy_liaw at merck.com
Thu Apr 17 14:21:12 CEST 2003

That's essentially leave-one-out cross-validation.
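The loop sketched in the question below maps directly onto R; here is a minimal sketch, using iris and `Species ~ .` as stand-ins for your own data and formula:

```r
library(rpart)

# Leave-one-out CV for a single classification tree: refit the tree n
# times, each time predicting the one held-out row, then tally the
# held-out predictions into a confusion matrix.
loo_confusion <- function(formula, data) {
  response <- all.vars(formula)[1]
  pred <- character(nrow(data))
  for (i in seq_len(nrow(data))) {
    fit <- rpart(formula, data = data[-i, ], method = "class")
    pred[i] <- as.character(predict(fit, newdata = data[i, , drop = FALSE],
                                    type = "class"))
  }
  table(observed = data[[response]], predicted = pred)
}

cm <- loo_confusion(Species ~ ., iris)
cm
```

Each row of the resulting table counts observations by their true class against the class predicted by a tree that never saw them.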

In addition to Frank's suggestion, you might want to check out the
errorest() function in the ipred package.  You can do k-fold CV or the
.632+ bootstrap with it.
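A sketch of errorest() for a single rpart tree (iris is a stand-in dataset; the wrapper function is needed because errorest() expects predicted classes, while predict.rpart returns class probabilities by default):

```r
library(ipred)
library(rpart)

# Wrapper so errorest() gets predicted classes, not probabilities
rpart_class <- function(object, newdata)
  predict(object, newdata = newdata, type = "class")

set.seed(1)

# 10-fold cross-validation estimate of the misclassification error
err_cv <- errorest(Species ~ ., data = iris, model = rpart,
                   predict = rpart_class,
                   estimator = "cv", est.para = control.errorest(k = 10))

# .632+ bootstrap estimate for the same single-tree classifier
err_632 <- errorest(Species ~ ., data = iris, model = rpart,
                    predict = rpart_class, estimator = "632plus")
```

Both calls return an object whose $error component holds the estimated misclassification rate.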


> -----Original Message-----
> From: chumpmonkey at hushmail.com [mailto:chumpmonkey at hushmail.com]
> Sent: Wednesday, April 16, 2003 1:28 PM
> To: R-help at stat.math.ethz.ch
> Subject: [R] Jackknife and rpart
> Hi,
> First, thanks to those who helped me see my gross misunderstanding of
> randomForest. I worked through a bagging tutorial and now understand
> the "many tree" approach. However, it is not what I want to do! My
> bagged errors are acceptable but I need to use the actual tree and
> need a single-tree application.
> I am using rpart for a classification tree but am interested in a less
> biased estimate of my tree's error. I lack sufficient data to train
> and test the tree, and I'm hoping to bootstrap, or rather jackknife,
> an error estimate.
> I do not think an rpart object can be passed to the jackknife function
> in the bootstrap package, but can I do something as simple as:
> for (i in 1:number_of_samples) {
>   # remove i from the data
>   # run the tree
>   # compare sample[i] to the tree using predict
>   # create an error matrix
> }
> This would give me a confusion matrix for data not included in the
> tree's construction.
> Am I being obtuse again?
> Thanks, CM
> ______________________________________________
> R-help at stat.math.ethz.ch mailing list
> https://www.stat.math.ethz.ch/mailman/listinfo/r-help
