[R-sig-ME] Cross-Classified MLM for reliability study
Jack Wilkinson
jack.wilkinson at manchester.ac.uk
Wed Aug 21 12:47:22 CEST 2013
Thank you for your helpful comments, Stuart.
My data are actually unbalanced, due to a number of missing observations (some patients have only 1 or 2 measurements from a given rater), so the ANOVA approach is out the window. Even if the data were balanced, the fact that each rater takes repeated measurements on each patient complicates the issue.
The ratings are continuous measurements of aortic stiffness (indeed, 'rater' was a poor choice of wording on my part, as these are measurements rather than ordinal ratings). Apologies for the inadequate description of the scenario.
The model I am interested in has been described by Eliasziw et al:
Eliasziw, M., Young, S. L., Woodbury, M. G., & Fryday-Field, K. (1994). Statistical methodology for the concurrent assessment of interrater and intrarater reliability: using goniometric measurements as an example. Physical Therapy, 74(8), 777–788. http://www.ncbi.nlm.nih.gov/pubmed/8047565
The bit I am struggling with is specifying the model using lmer (actually, even if my data were balanced, it is unclear to me how I would go about specifying the corresponding random-effects ANOVA in R).
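For concreteness, here is a sketch of what I have so far. This assumes a long-format data frame reliability with columns pwv, id and rater, and treats rater as fixed since there are only two; the precise forms of the reliability coefficients would come from Eliasziw et al., so the final line is only an illustration of assembling a variance partition coefficient:

library(lme4)

## Random patient intercept, fixed rater effect, and a random
## patient-by-rater interaction (the interrater error); the level-1
## residual is then the intrarater error.
fixed <- lmer(pwv ~ rater + (1 | id) + (1 | id:rater), data = reliability)

## Extract the variance components to build the coefficients from.
vc <- as.data.frame(VarCorr(fixed))
sigma2.id    <- vc$vcov[vc$grp == "id"]        # between-patient
sigma2.inter <- vc$vcov[vc$grp == "id:rater"]  # interrater error
sigma2.intra <- vc$vcov[vc$grp == "Residual"]  # intrarater error

## Illustrative variance partition coefficient:
sigma2.id / (sigma2.id + sigma2.inter + sigma2.intra)

Does this look like a sensible way to get at the variance components, even if the coefficient definitions need adjusting?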
Thanks again for taking the time to respond,
Jack.
Jack Wilkinson, Medical Statistician, Research and Development,
Salford Royal NHS Foundation Trust, +44(0) 161 206 8125.
-----Original Message-----
From: r-sig-mixed-models-bounces at r-project.org [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf Of Stuart Luppescu
Sent: 20 August 2013 17:01
To: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] Cross-Classified MLM for reliability study
On Tue, 2013-08-20 at 13:02 +0000, Jack Wilkinson wrote:
> Dear list members,
>
> I am hoping to use a multilevel model to analyse data from a rater reliability study. Interest lies in assessing interrater and intrarater reliability.
> I have 20 patients. Four measurements were taken in succession on each patient by each of two raters (i.e. each patient was measured 8 times).
> I believe this to be an example of a cross-classified MLM, with observations nested both within patients and within raters. I am interested in fitting a model that contains an intercept, a random patient term, a rater term and a random patient-by-rater interaction term. The interaction term represents the interrater error, and the level-1 residual represents the intrarater error. I intend to use the variance estimates from the model to calculate variance partition coefficients representing inter- and intrarater reliability. The question is whether or not I have specified the model correctly in lme4.
>
> As I only have two raters, I understand that I should treat rater as a fixed effect. My attempt to specify the model is as follows.
>
> fixed <- lmer(pwv ~ 1 + (1 | id) + rater + (1 | id:rater), data = reliability)
>
> In fact, in a subsequent experiment, I would be interested in looking at a larger number of raters, treating rater as a random effect. Then I would specify the model as:
>
> random <- lmer(pwv ~ 1 + (1 | id) + (1 | rater) + (1 | id:rater), data = reliability)
>
> Would these examples specify my models of interest? As a new user of lme4 and multilevel models, I apologise in advance for any naivety on my part.
I have done something similar in an analysis of observations of teacher performance. I'm by no means an expert in this area, but I have a couple of comments:
1) If you only have two raters and both raters evaluated all the patients, a mixed random-effects model is probably overkill. Why not just use a two-way ANOVA, or calculate a kappa statistic? (There is a sketch of the ANOVA route at the end of this message.)
2) The ratings the observers are giving are probably not really numbers. Is the difference between 2 and 1 the same as between 4 and 3? I doubt it. So, if you are set on this type of analysis, you should model the outcome as ordered categorical. I have found that MCMCglmm works best for this. Here is the model I use (tid is the teacher ID, obsid is the observer ID, and comp.f is the evaluation framework component being rated):
library(MCMCglmm)

glme4 <- MCMCglmm(rating.o ~ comp.f,
                  random = ~ tid + obsid,
                  family = "ordinal",
                  ## residual variance fixed at 1 (required for ordinal)
                  prior = list(R = list(V = 1, fix = 1),
                               G = list(G1 = list(V = 1, nu = 0),
                                        G2 = list(V = 1, nu = 0))),
                  nitt = 10000,
                  data = ratings)
Then, to calculate the reliability, I get the ICC like this:

tid.var   <- summary(glme4)$Gcovariances[, 1][1]  # teacher variance (posterior mean)
obsid.var <- summary(glme4)$Gcovariances[, 1][2]  # observer variance (posterior mean)
ICC <- tid.var / (tid.var + obsid.var + 1)  # the 1 is the residual variance fixed above
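Regarding point 1), if you did want the simple two-way ANOVA route, something along these lines might do. This is only a sketch, assuming the psych package and your variable names (pwv, id, rater); note that averaging each rater's repeated measurements throws away the intrarater information:

library(psych)

## Collapse to one mean measurement per patient per rater, then reshape
## to one row per patient and one column per rater.
means <- aggregate(pwv ~ id + rater, data = reliability, FUN = mean)
wide  <- reshape(means, idvar = "id", timevar = "rater", direction = "wide")

## Shrout-Fleiss two-way ANOVA intraclass correlations.
ICC(wide[, -1])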
--
Stuart Luppescu -=-=- slu <AT> ccsr <DOT> uchicago <DOT> edu CCSR at U of C ,.;-*^*-;., ccsr.uchicago.edu
(^_^)/ Father of 才文 and 智奈美
[Crash programs] fail because they are based on the theory that, with nine women pregnant, you can get a baby a month.
-- Wernher von Braun