[R] Fw: Logistic regression - Interpreting (SENS) and (SPEC)
Gad Abraham
gabraham at csse.unimelb.edu.au
Thu Oct 16 01:48:00 CEST 2008
Frank E Harrell Jr wrote:
> Gad Abraham wrote:
>>> This approach leaves much to be desired. I hope that its
>>> practitioners start gauging it by the mean squared error of predicted
>>> probabilities.
>>
>> Is the logic here that a low MSE of predicted probabilities equals a
>> better calibrated model? What about discrimination? Perfect calibration
>
> Almost. I was addressing more the wish for the use of strategies that
> maximize precision while keeping bias to a minimum.
>
>> implies perfect discrimination, but I often find that you can have two
>
> That doesn't follow. You can have perfect calibration in the large with
> no discrimination.
I'm not sure I understand: if you have perfect calibration, so that you
correctly assign the probability Pr(y=1|x) to each x, doesn't it follow
that the x's will also be ranked in the correct order of probability, which is
what the AUC is measuring?
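
For concreteness, here is a small base-R sketch (my own, not from the
thread) of what I take "calibration in the large with no discrimination"
to mean: a model that assigns every observation the overall event rate is
perfectly calibrated in the large, yet its AUC is 0.5:

set.seed(1)
y <- rbinom(200, 1, 0.3)        # binary outcomes, roughly 30% prevalence
p <- rep(mean(y), length(y))    # every observation gets the overall event rate
mean(p) - mean(y)               # calibration in the large: difference is 0
# AUC via the rank (Mann-Whitney) formula; with a constant p it is 0.5
n1 <- sum(y == 1); n0 <- sum(y == 0)
(sum(rank(p)[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)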
>
>> competing models, the first with higher discrimination (AUC) and worse
>> calibration, and the second the other way round. Which one is the
>> better model?
>
> I judge models on the basis of both discrimination (best measured with
> log likelihood measures, 2nd best AUC) and calibration. It's a
> two-dimensional issue and we don't always know how to weigh the two. For
> many purposes calibration is a must. In those cases we don't look at
> discrimination until calibration-in-the-small is verified at high
> resolution.
By "log likelihood measures" do you mean likelihood-ratio tests?
--
Gad Abraham
Dept. CSSE and NICTA
The University of Melbourne
Parkville 3010, Victoria, Australia
email: gabraham at csse.unimelb.edu.au
web: http://www.csse.unimelb.edu.au/~gabraham