[R-meta] inter-rater agreement for multiple raters
jepusto at gmail.com
Tue Aug 1 17:50:18 CEST 2017
This question pertains to a common issue in systematic reviews (although
not meta-analysis per se).
I have a group of 5 raters who screened abstracts for possible inclusion in
a systematic review. Each abstract was scored (dichotomously, as "not
relevant" or "possibly relevant") by at least two raters. However, only a
small percentage of abstracts were scored by all 5 raters.
Does anyone know of an R package that will compute inter-rater agreement
statistics (e.g., Fleiss's kappa or Light's kappa or something better that
I don't know about) for such a design? The functions that I know of (e.g.,
from the irr package) do listwise deletion, which means that the
statistics are computed based on a very small subset of the sample. I need
something that uses pairwise deletion.
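One possibility, offered as a sketch rather than a definitive answer: Krippendorff's alpha is defined for incomplete designs like this one, since it is computed from all pairable values rather than from complete cases. The irr package's kripp.alpha() accepts a raters-by-subjects matrix with NAs for unscored abstracts (the data below are made up for illustration; whether alpha is an acceptable substitute for a kappa-type statistic in your review is a judgment call):

```r
library(irr)

# Hypothetical screening data: 5 raters x 8 abstracts,
# 1 = "possibly relevant", 0 = "not relevant",
# NA = abstract not screened by that rater.
ratings <- rbind(
  r1 = c(1,  0, NA,  1,  0, NA,  1,  0),
  r2 = c(1,  0,  0,  1, NA,  1, NA,  0),
  r3 = c(NA, 1,  0,  1,  0,  1,  1, NA),
  r4 = c(1, NA,  0, NA,  0,  1,  1,  0),
  r5 = c(NA, NA, 1,  1,  0, NA,  1,  0)
)

# kripp.alpha() expects raters in rows and subjects in columns;
# unlike kappam.fleiss(), it uses every pairable rating instead of
# dropping abstracts that were not scored by all 5 raters.
kripp.alpha(ratings, method = "nominal")
```

If you specifically need a kappa, another workaround in the same spirit is a pairwise-deletion version of Light's kappa: run kappa2() on the complete observations for each of the 10 rater pairs and average the resulting estimates, keeping in mind that each pairwise kappa is then based on a different subset of abstracts.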