[R] Inter-rater reliability for 3 unique raters with binary outcome
Lubo Larsson
larssonlubo sending from gmail.com
Wed Sep 11 02:23:27 CEST 2019
Hello,
I would like to know if there is an R utility for computing a
measure of inter-rater reliability/agreement for three raters (columns),
where each rating is a binary assessment. Further, the three raters
are fixed (each column contains the same rater's assessments
throughout), so ideally the statistic would account for this.
I see there was a package "concord" that had a variety of utilities
for computing kappa statistics, but it appears to be no longer
available.
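One common statistic for this setup is Light's kappa, i.e. the mean of the pairwise Cohen's kappas across the three fixed raters. Below is a minimal base-R sketch; the data and the function names (`cohen_kappa`, `light_kappa`) are purely illustrative, not from any particular package.

```r
# Cohen's kappa for two raters with binary (0/1) ratings.
# Note: returns NaN if chance agreement pe equals 1 (degenerate data).
cohen_kappa <- function(a, b) {
  tab <- table(factor(a, levels = 0:1), factor(b, levels = 0:1))
  n  <- sum(tab)
  po <- sum(diag(tab)) / n                      # observed agreement
  pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # chance agreement
  (po - pe) / (1 - pe)
}

# Hypothetical ratings: 10 subjects (rows), 3 fixed raters (columns).
ratings <- cbind(r1 = c(1, 1, 0, 1, 0, 1, 0, 1, 0, 1),
                 r2 = c(1, 1, 0, 0, 0, 1, 0, 1, 1, 1),
                 r3 = c(1, 0, 0, 1, 1, 1, 0, 0, 0, 1))

# Light's kappa: average Cohen's kappa over all rater pairs.
pairs <- combn(ncol(ratings), 2)
light_kappa <- mean(apply(pairs, 2, function(p)
  cohen_kappa(ratings[, p[1]], ratings[, p[2]])))
light_kappa
```

Because each pairwise kappa conditions on a specific pair of raters, averaging them respects the fact that the columns are distinct, fixed raters (unlike Fleiss' kappa, which treats raters as interchangeable).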
More information about the R-help mailing list