[R-sig-Epi] Sensitivity, specificity, and predictive values

dcm2104 at columbia.edu
Wed Mar 5 14:04:35 CET 2008


Hi,

A good way to circumvent many of the aforementioned limitations is to
resort to non-parametric ordinary bootstrapping, whereby you re-sample
your dataset B times (B is typically greater than 5000 and rarely
smaller than 1000, unless your original dataset is very small or
computation is too expensive). You then calculate the sensitivity,
specificity, PPV, and NPV for each re-sampled dataset. Finally, you
estimate the mean and confidence interval from the bootstrap-generated
distributions of sensitivity, specificity, PPV, and NPV.
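
A minimal sketch of that procedure in base R, assuming a data frame
`d` with logical columns `test` (test result) and `disease` (true
status from the reference standard); the column names and B = 2000
are illustrative:

diag_stats <- function(dat) {
  # 2x2 counts for one (re)sampled dataset
  tp <- sum(dat$test & dat$disease)
  fp <- sum(dat$test & !dat$disease)
  fn <- sum(!dat$test & dat$disease)
  tn <- sum(!dat$test & !dat$disease)
  c(sens = tp / (tp + fn),
    spec = tn / (tn + fp),
    ppv  = tp / (tp + fp),
    npv  = tn / (tn + fn))
}

set.seed(123)
B <- 2000
boot_est <- t(replicate(B, {
  idx <- sample(nrow(d), replace = TRUE)   # resample rows with replacement
  diag_stats(d[idx, ])
}))

## bootstrap means and 95% percentile confidence intervals
colMeans(boot_est)
apply(boot_est, 2, quantile, probs = c(0.025, 0.975))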

If applicable, you can use these distributions to compare two or more
diagnostic tests. For example, you can sample the sensitivity
distributions of two diagnostic tests (via, e.g., bootstrap again or
permutation), compute their differences, and then test (t-test)
whether the resulting distribution has zero mean.
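
A sketch of that comparison, assuming both tests were applied to the
same subjects and stored in a hypothetical data frame `d2` with
logical columns `testA`, `testB`, and `disease`:

sens <- function(test, disease) sum(test & disease) / sum(disease)

set.seed(123)
B <- 2000
diff_sens <- replicate(B, {
  idx <- sample(nrow(d2), replace = TRUE)   # same resample for both tests
  s <- d2[idx, ]
  sens(s$testA, s$disease) - sens(s$testB, s$disease)
})

## test whether the bootstrap differences are centred on zero;
## a percentile CI of the difference is a common alternative
t.test(diff_sens, mu = 0)
quantile(diff_sens, probs = c(0.025, 0.975))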

The same procedure applies to other estimates (e.g., specificity, PPV,
etc.), and further tests along the same lines can be constructed. You
can load library(boot) and type "?boot" at the R prompt for further
information.
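
For reference, the same bootstrap can be run through the boot package
(see ?boot and ?boot.ci); this again assumes the hypothetical data
frame `d` from the sketch above:

library(boot)

sens_stat <- function(dat, idx) {
  s <- dat[idx, ]
  sum(s$test & s$disease) / sum(s$disease)   # sensitivity of the resample
}

set.seed(123)
b <- boot(d, statistic = sens_stat, R = 2000)
boot.ci(b, type = c("perc", "bca"))          # percentile and BCa intervals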

If neither test is a "gold standard," the estimation of  
prevalence-dependent PPV and NPV is considerably more complicated.

Daniel


