[R-sig-ME] Reliability via mixed effects modelling

Mike Lawrence Mike.Lawrence at dal.ca
Thu Apr 7 15:43:48 CEST 2011


Thanks, I'll definitely check out those refs.

In the meantime, I played with the method I coded a bit more and I'm
less enthusiastic about it. Across a few different data sets I find
that I sometimes get identical estimates of reliability for the
Intercept and for the effect of condition, and when I add another
fixed effect whose reliability needs to be estimated, the order in
which I enter the terms in the lmer formula can matter (though only
for certain data sets).

It's strange, though, because at other times the code produces
estimates of reliability that match up pretty well with what I get
from more brute-force bootstrapping of the effect variances by hand.
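
For reference, the kind of brute-force check I mean is roughly the
following sketch (with `dat` standing in for a data frame of trials
with columns dv, condition, and participant, and condition having two
levels):

    # per-participant condition effect: difference of condition means
    effect_by_id = function(d) {
        m = tapply(d$dv, list(d$participant, d$condition), mean)
        m[, 2] - m[, 1]
    }

    obs_effects = effect_by_id(dat)

    # bootstrap: resample trials with replacement within each
    # participant-by-condition cell and recompute the effects
    boot_effects = replicate(1000, {
        idx = unlist(lapply(
            split(seq_len(nrow(dat)), list(dat$participant, dat$condition)),
            function(i) i[sample.int(length(i), replace = TRUE)]
        ))
        effect_by_id(dat[idx, ])
    })

    # expected within-participant variance of the effect: average of
    # the per-participant bootstrap variances
    within_variance = mean(apply(boot_effects, 1, var))

    # between-participant variance of the effect (the observed spread
    # also contains within-participant noise, so one might prefer to
    # subtract within_variance here)
    between_variance = var(obs_effects)

    r = 1 / (1 + within_variance / between_variance)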


On Thu, Apr 7, 2011 at 4:44 AM, Viechtbauer Wolfgang (STAT)
<wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
> Hi Mike,
>
> I haven't read your mail in detail, but I know that some work has been done on that issue. See, for example:
>
> Laenen, A. et al. (2006). Generalized reliability estimation using repeated measurements. British Journal of Mathematical and Statistical Psychology, 59, 113-131. http://onlinelibrary.wiley.com/doi/10.1348/000711005X66068/full
>
> Laenen, A. et al. (2007). A measure for the reliability of a rating scale based on longitudinal clinical trial data. Psychometrika, 72, 443-448. http://www.springerlink.com/content/em0034pw7u55547q/
>
> Molenberghs, G. et al. (2007). Estimating reliability and generalizability from hierarchical biomedical data. Journal of Biopharmaceutical Statistics, 17, 595-627. http://www.informaworld.com/smpp/content~db=all~content=a780211328
>
> Laenen, A. et al. (2009). Reliability of a longitudinal sequence of scale ratings. Psychometrika, 74, 49-64. http://www.springerlink.com/content/v237801006831302/
>
> Best,
>
> --
> Wolfgang Viechtbauer
> Department of Psychiatry and Neuropsychology
> School for Mental Health and Neuroscience
> Maastricht University, P.O. Box 616
> 6200 MD Maastricht, The Netherlands
> Tel: +31 (43) 368-5248
> Fax: +31 (43) 368-8689
> Web: http://www.wvbauer.com
>
>> -----Original Message-----
>> From: r-sig-mixed-models-bounces at r-project.org [mailto:r-sig-mixed-models-
>> bounces at r-project.org] On Behalf Of Mike Lawrence
>> Sent: Thursday, April 07, 2011 01:25
>> To: r-sig-mixed-models at r-project.org
>> Subject: [R-sig-ME] Reliability via mixed effects modelling
>>
>> Hi folks,
>>
>> In my research I typically have human participants play simple video
>> games and measure the speed and accuracy of their responses to certain
>> stimuli. This usually yields many observations per condition of
>> interest per participant, so mixed effects modelling (specifying
>> participant as a random effect and condition as a fixed effect)
>> becomes rather useful.
>>
>> I'm wondering, however, if I might gain even more utility from mixed
>> effects models by getting them to help me compute the reliability of
>> the fixed effects I'm measuring. That is, reliability would normally
>> be measured by something like test-retest, where you run your
>> participants through the experiment once, compute a condition effect
>> for each participant, then repeat the experiment and see how well the
>> first and second estimated condition effects correlate across
>> participants.
>> Alternatively, one could employ a "split-half" procedure whereby only
>> one session is conducted, after which each of the multiple observations
>> from each participant in each condition is randomly assigned to "A" or
>> "B"; one can then compute the condition effects within the A trials and
>> the B trials separately for each participant, and finally compute the
>> correlation between the A and B condition effects across participants.
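>>
>> A minimal R sketch of that split-half procedure might look like the
>> following, assuming a data frame `dat` (a placeholder name) with
>> columns dv, condition, and participant, where condition has two
>> levels:
>>
>>     # randomly assign each trial to half 0 ("A") or 1 ("B") within
>>     # each participant-by-condition cell, keeping the halves balanced
>>     dat$half = ave(
>>         numeric(nrow(dat)), dat$participant, dat$condition,
>>         FUN = function(x) sample(rep(0:1, length.out = length(x)))
>>     )
>>
>>     # per-participant condition effect: difference of condition means
>>     effect_by_id = function(d) {
>>         m = tapply(d$dv, list(d$participant, d$condition), mean)
>>         m[, 2] - m[, 1]
>>     }
>>
>>     effect_A = effect_by_id(dat[dat$half == 0, ])
>>     effect_B = effect_by_id(dat[dat$half == 1, ])
>>
>>     # split-half reliability: correlate the two sets of effects
>>     split_half_r = cor(effect_A, effect_B)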
>>
>> Finally, I'm fairly certain that if one were to obtain an estimate of
>> the expected within-participant variance of the condition effect and
>> an estimate of the expected between-participant variance of the
>> condition effect, the formula:
>>
>> r  = 1/(1+within_variance/between_variance)
>>
>> will yield an estimate of reliability that does not rely on
>> correlation. (I believe this latter approach may be somehow
>> mathematically related to intra-class correlation, but I have not been
>> able to see precisely how)
>>
>> With the latter approach in mind, I notice that if I permit a mixed
>> effects model to estimate unique condition effects within each
>> participant, as in:
>>
>> fit = lmer(
>>     dv ~ condition + (condition | participant),
>>     data = dat  # placeholder name for the trial-level data frame
>> )
>>
>> then ranef( fit , postVar=TRUE ) will return information that strikes
>> me as potentially useful for estimating the reliability of the effect
>> of condition. I've coded a function that I believe computes the variances
>> needed for the above non-correlational computation of reliability and
>> then bootstraps confidence intervals on these variances and the
>> resulting reliability estimate:
>>
>> https://gist.github.com/906741
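>>
>> Roughly, one way to pull the two variances out of such a fit (with
>> `dat` again standing in for the trial-level data frame) would be:
>>
>>     library(lme4)
>>
>>     fit = lmer(dv ~ condition + (condition | participant), data = dat)
>>
>>     # between-participant variance of the condition effect: the
>>     # variance of the condition random slope (the second random
>>     # effect here, after the intercept)
>>     between_variance = VarCorr(fit)$participant[2, 2]
>>
>>     # expected within-participant variance: mean of the conditional
>>     # ("posterior") variances of the per-participant condition
>>     # effects; more recent lme4 versions call this condVar
>>     re = ranef(fit, postVar = TRUE)$participant
>>     pv = attr(re, "postVar")   # 2 x 2 x n_participants array
>>     within_variance = mean(pv[2, 2, ])
>>
>>     r = 1 / (1 + within_variance / between_variance)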
>>
>> Does this make sense at all? Or should I go back to computing
>> reliability the traditional, correlation-based way?
>>
>>
>> Mike
>>
>> --
>> Mike Lawrence
>> Graduate Student
>> Department of Psychology
>> Dalhousie University
>>
>> Looking to arrange a meeting? Check my public calendar:
>> http://tr.im/mikes_public_calendar
>>
>> ~ Certainty is folly... I think. ~
>>



