# [R] scaling of nonbinROC penalties - accurate classification with random data?

Jonathan Williams drjhw at live.co.uk
Thu Jan 24 16:01:03 CET 2013

Dear R Helpers

I am having difficulty understanding how to use the penalty matrix for the ordROC function in package 'nonbinROC'.

The documentation says that the entries of the penalty matrix encode a penalty function L[i,j], with 0 <= L[i,j] <= 1 for j > i, but gives no other constraints. As an example, it suggests that with an ordered response of 4 categories one might wish to penalise larger misclassifications more heavily: 0 penalty for a correct classification, 0.25 for misclassifying by one category, 0.5 for misclassifying by two categories, and 1.0 for misclassifying by three categories.
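To make sure I have read the documentation correctly, here is a sketch of how I understand that example matrix to be built (my own construction, not code from the package):

```r
# Build the documentation's example penalty matrix for 4 ordered categories:
# penalty 0 for a correct classification, 0.25 for missing by one category,
# 0.5 for missing by two, and 1.0 for missing by three.
steps <- abs(outer(1:4, 1:4, "-"))                 # distance between true and assigned category
L <- matrix(c(0, 0.25, 0.5, 1)[steps + 1], nrow = 4)
L[lower.tri(L)] <- 0                               # keep only the j > i entries, as in my matrices below
L
```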

I wanted to use a penalty matrix with equal distances between the 4 categories (0, 1/3, 2/3, 1). However, I found that if I simply re-scale the penalty matrix, while maintaining the equal spacing between categories, the estimate of overall accuracy increases. In fact, **even with random data** one can achieve any value of accuracy, including unity, by re-scaling the penalty matrix. So I would like to ask: what, if any, are the constraints on the scaling?

Here is working code that illustrates my difficulty:

```r
library(nonbinROC); set.seed(1)

# create an ordinal random gold standard with 4 categories
gldstd = round(runif(5000) * 4); gldstd[gldstd == 0] = 4; gldstd = ordered(gldstd)

# create a random predictor, uncorrelated with the gold standard
pred0 = rnorm(5000, mean = 1); boxplot(pred0 ~ gldstd)

# define penalty matrices (column-major; upper triangle holds the penalties)
ordered_penalty  = matrix(c(0,0,0,0, 1/3,0,0,0, 2/3,1/3,0,0, 1,2/3,1/3,0), nrow = 4)
constant_penalty = matrix(c(0,0,0,0, 1,0,0,0, 1,1,0,0, 1,1,1,0), nrow = 4)

# default penalty matrix accurately shows no association
ordROC(gldstd, pred0, penalty = constant_penalty)

# but, reducing penalties in default penalty matrix while maintaining constant values indicates association
ordROC(gldstd, pred0, penalty = constant_penalty / 2)

# ordered penalty matrix shows association
ordROC(gldstd, pred0, penalty = ordered_penalty)

# reducing penalties in ordered penalty matrix, while maintaining constant proportions, indicates stronger association
ordROC(gldstd, pred0, penalty = ordered_penalty / 2)
```

Here is the output:

```
> # default penalty matrix accurately shows no association
> ordROC(gldstd, pred0, penalty = constant_penalty)
$`Pairwise Accuracy`
    Pair  Estimate Standard.Error
1 1 vs 2 0.5027981     0.01161765
2 1 vs 3 0.5029039     0.01171770
3 1 vs 4 0.5210819     0.01138566
4 2 vs 3 0.5006777     0.01171804
5 2 vs 4 0.5177406     0.01141367
6 3 vs 4 0.5171959     0.01151508

$`Penalty Matrix`
  1 2 3 4
1 0 1 1 1
2 0 0 1 1
3 0 0 0 1
4 0 0 0 0

$`Overall Accuracy`
   Estimate Standard.Error
1 0.5107522    0.005994222

> # but, reducing penalties in default penalty matrix while maintaining constant values indicates association
> ordROC(gldstd, pred0, penalty = constant_penalty / 2)
$`Pairwise Accuracy`
    Pair  Estimate Standard.Error
1 1 vs 2 0.5027981     0.01161765
2 1 vs 3 0.5029039     0.01171770
3 1 vs 4 0.5210819     0.01138566
4 2 vs 3 0.5006777     0.01171804
5 2 vs 4 0.5177406     0.01141367
6 3 vs 4 0.5171959     0.01151508

$`Penalty Matrix`
  1   2   3   4
1 0 0.5 0.5 0.5
2 0 0.0 0.5 0.5
3 0 0.0 0.0 0.5
4 0 0.0 0.0 0.0

$`Overall Accuracy`
   Estimate Standard.Error
1 0.7446239    0.002997111

> # ordered penalty matrix shows association
> ordROC(gldstd, pred0, penalty = ordered_penalty)
$`Pairwise Accuracy`
    Pair  Estimate Standard.Error
1 1 vs 2 0.5027981     0.01161765
2 1 vs 3 0.5029039     0.01171770
3 1 vs 4 0.5210819     0.01138566
4 2 vs 3 0.5006777     0.01171804
5 2 vs 4 0.5177406     0.01141367
6 3 vs 4 0.5171959     0.01151508

$`Penalty Matrix`
  1         2         3         4
1 0 0.3333333 0.6666667 1.0000000
2 0 0.0000000 0.3333333 0.6666667
3 0 0.0000000 0.0000000 0.3333333
4 0 0.0000000 0.0000000 0.0000000

$`Overall Accuracy`
   Estimate Standard.Error
1 0.7118917    0.004060646

> # reducing penalties in ordered penalty matrix, while maintaining constant proportions, indicates stronger association
> ordROC(gldstd, pred0, penalty = ordered_penalty / 2)
$`Pairwise Accuracy`
    Pair  Estimate Standard.Error
1 1 vs 2 0.5027981     0.01161765
2 1 vs 3 0.5029039     0.01171770
3 1 vs 4 0.5210819     0.01138566
4 2 vs 3 0.5006777     0.01171804
5 2 vs 4 0.5177406     0.01141367
6 3 vs 4 0.5171959     0.01151508

$`Penalty Matrix`
  1         2         3         4
1 0 0.1666667 0.3333333 0.5000000
2 0 0.0000000 0.1666667 0.3333333
3 0 0.0000000 0.0000000 0.1666667
4 0 0.0000000 0.0000000 0.0000000

$`Overall Accuracy`
   Estimate Standard.Error
1 0.8559458    0.002030323
```

I cannot see why re-scaling the penalties for differences between categories should suggest that a random predictor has significant "Overall Accuracy". If I use a constant penalty matrix with all (off-diagonal) values very close to zero, the overall accuracy approaches 1. It seems counter-intuitive to me that the estimate of overall accuracy for an ordinal gold standard should depend on the absolute values of the penalty matrix rather than only on their relative sizes.
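My guess (and it is only a guess; I have not seen the estimator's formula) is that the overall estimate has the form 1 minus an average incurred penalty, which would make it linear in the penalty matrix, so that multiplying the matrix by s gives theta_s = 1 - s * (1 - theta_1). The ordered-penalty numbers above are consistent with this:

```r
# Assumption: if theta_s = 1 - s * (1 - theta_1) when the penalty matrix is
# scaled by s, the ordered-penalty results above should be reproducible.
theta_1 <- 0.7118917          # overall accuracy with ordered_penalty
s <- 0.5
theta_s <- 1 - s * (1 - theta_1)
theta_s                       # approximately 0.8559458, the value reported for ordered_penalty/2
# and as s -> 0 the estimate tends to 1 whatever the data, which would
# explain why near-zero penalties give accuracy close to 1.
```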

So, I would like to ask: ought there to be some constraint on the values of the penalty matrix? For example, (a) should the penalty matrix always contain at least one penalty equal to 1; (b) should there be some constraint on the sum of the penalties in the matrix (e.g. should the matrix sum to some multiple of the number of categories); or (c) is one free to use arbitrarily-scaled penalty matrices?
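If constraint (a) turns out to be the right one, enforcing it would be trivial; this is just an illustration of option (a) on my own matrix, not anything the package documents:

```r
# Rescale a candidate penalty matrix so that its largest penalty is exactly 1
# (hypothetical helper illustrating constraint (a); not part of nonbinROC).
normalise_penalty <- function(L) L / max(L)

ordered_penalty <- matrix(c(0,0,0,0, 1/3,0,0,0, 2/3,1/3,0,0, 1,2/3,1/3,0), nrow = 4)
max(normalise_penalty(ordered_penalty / 2))   # 1, regardless of the original scale
```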

I apologise if I am wasting your time by making an obvious mistake. I am a clinician, not a statistician, so I do not understand the mathematics.