[R-sig-ME] Accounting for dv score's validity using weights - metafor or lme4?
itzikf at outlook.com
Sat Jan 20 12:34:45 CET 2018
I have a dataset of 964 data points clustered within 250 participants. Each participant gave several open responses to a specific question, and these responses were then coded by additional participants (~25 per response) on a scale from 0-100 (representing probability, though that doesn't seem to matter for the question here). The simplest way to analyze this would be to take the median (or winsorized mean) of the different ratings for each response and use that as the DV. However, because some responses produced more disagreement between raters than others, I thought it might be wise to account for this 'uncertainty' in some way.
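A toy version of the aggregation step I describe, in base R: take the median rating per response and record the between-rater variance as a measure of disagreement (all object and column names below are made up for illustration; my real data have ~25 raters per response):

```r
## Toy data: 2 participants, 4 responses, 3 raters each (real data: ~25)
ratings <- data.frame(
  participant = rep(1:2, each = 6),
  response_id = rep(1:4, each = 3),
  rating      = c(40, 55, 60, 10, 15, 20, 70, 90, 95, 30, 35, 50)
)

## One row per response: median rating plus rater disagreement
resp <- do.call(rbind, lapply(split(ratings, ratings$response_id), function(d) {
  data.frame(
    participant = d$participant[1],
    response_id = d$response_id[1],
    med_rating  = median(d$rating),  # the candidate DV
    rater_var   = var(d$rating),     # between-rater variance for this response
    n_raters    = nrow(d)
  )
}))
```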
I've considered two approaches. First, one might frame the problem meta-analytically: each median rating in fact represents a 'sample estimate' drawn from a distribution whose variance corresponds to the variance between raters, just as in a random-effects meta-analysis each study's effect size is weighted by its precision because it is assumed to be sampled around that study's population effect size. If this sounds reasonable, I could use the metafor package to specify a multilevel model. The problem is that I don't know of any study using such an approach.
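A minimal sketch of what I have in mind with metafor, assuming a data frame `resp` with one row per open response (all names are made up for illustration). Here the sampling variance of each aggregate rating is approximated by the between-rater variance divided by the number of raters, and responses are nested within participants:

```r
library(metafor)

## Assumed per-response data: median rating + between-rater variance
resp <- data.frame(
  participant = c(1, 1, 2, 2),
  response_id = 1:4,
  med_rating  = c(55, 15, 90, 35),
  rater_var   = c(108, 25, 178, 108),
  n_raters    = 25
)

res <- rma.mv(
  yi     = med_rating,
  V      = rater_var / n_raters,            # treated as known sampling variances
  random = ~ 1 | participant / response_id, # responses nested in participants
  data   = resp
)
summary(res)
```

(With so few toy rows the variance components won't be meaningful; this only shows the model specification.)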
Second, I know there is a weights argument in lme4, but I'm not exactly sure how it works, or how its results would relate to the first solution.
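For comparison, a sketch of the lme4 alternative on the same assumed `resp` data frame. As I understand it, lmer's `weights` are prior weights: the residual variance for observation i is sigma^2 / w_i, so the weights rescale a single estimated residual variance rather than fixing per-observation variances as known, which is one way the two approaches would differ:

```r
library(lme4)

## Same assumed per-response data as above (names are made up)
resp <- data.frame(
  participant = c(1, 1, 2, 2),
  response_id = 1:4,
  med_rating  = c(55, 15, 90, 35),
  rater_var   = c(108, 25, 178, 108),
  n_raters    = 25
)

fit <- lmer(
  med_rating ~ 1 + (1 | participant),
  data    = resp,
  weights = n_raters / rater_var  # inverse of the estimated sampling variance
)
summary(fit)
```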
Any advice or relevant reference would be highly appreciated!