[R-sig-ME] Testing differences in measurement variance
Ben Bolker
bbolker at gmail.com
Thu Aug 12 23:48:10 CEST 2010
I wrote too quickly and was a bit confused (and hence confusing).
When you said "variability of measurement" I read it as "measurement
variability", so I was assuming a model of the form

    Var(group i) = Var(process error) + Var(measurement error, group i)
I may have the details/syntax wrong, you'll have to check it (of
course).
Ben
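[Editorial note: a minimal sketch of the model above using nlme, with
hedged, made-up names -- the treatment factor `ttt`, individual
identifier `id`, and the simulated data are all hypothetical. The idea
is to fit a shared between-individual (process) variance plus a
per-treatment residual (measurement) variance, then compare against the
homogeneous-variance fit by likelihood ratio.]

```r
## Sketch: random intercept per individual captures process error;
## varIdent() allows a separate residual (measurement) SD per treatment.
## 'id' and 'ttt' are hypothetical names; data are simulated.
library(nlme)

set.seed(1)
d <- data.frame(
  id  = factor(rep(1:20, each = 6)),         # 20 individuals, 6 measurements each
  ttt = factor(rep(c("A", "B"), each = 60))  # between-individuals treatment
)
d$y <- 10 + rnorm(20, sd = 1)[as.integer(d$id)] +    # process error, shared SD
  rnorm(nrow(d), sd = ifelse(d$ttt == "A", 1, 2))    # measurement error, SD differs

## homogeneous vs. per-treatment measurement variance
fit_hom <- lme(y ~ ttt, random = ~ 1 | id, data = d, method = "REML")
fit_het <- update(fit_hom, weights = varIdent(form = ~ 1 | ttt))

anova(fit_hom, fit_het)        # LRT: does measurement variance differ by ttt?
fit_het$modelStruct$varStruct  # estimated per-treatment SD ratios
```

Because the two fits share the same fixed effects and differ only in the
variance model, a REML-based likelihood-ratio comparison is appropriate
here.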
On 10-08-12 05:44 PM, Mike Lawrence wrote:
> Thanks for the reply, but I'm a little confused; if I'm interested in
> estimating and comparing variances, how is it useful to "assume that
> the underlying variability is the same"?
>
> Mike
>
> On Thu, Aug 12, 2010 at 6:15 PM, Ben Bolker <bbolker at gmail.com> wrote:
>
>> On 10-08-12 05:11 PM, Mike Lawrence wrote:
>>
>>> Hi folks,
>>>
>>> Can mixed effects modelling be used to compare the variability of
>>> measurement between treatments? That is, in the conventional ANOVA
>>> world, if I were interested in studying the effect of a treatment
>>> (within or between groups) on the variability of measurement, I would
>>> (1) measure multiple individuals each multiple times, obtain an SD of
>>> measurement per individual, then (2) submit these SD scores as a
>>> dependent variable to an ANOVA. I wonder if this traditional two-stage
>>> process could be replaced with a single mixed effects analysis, which
>>> presumably would permit increased power through things like shrinkage,
>>> accounting for measurement confidence (by taking into account
>>> different numbers of observations within each individual), etc.
>>>
>>> If this is possible, how would I structure the lmer() call to achieve
>>> such estimation and comparison of measurement variance?
>>>
>>> Mike
>>>
>> If you're willing to assume that the underlying variability is the same, I
>> think you can do this in lme() [not lmer()] by specifying
>> weights = varIdent(form = ~ 1 | ttt)
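[Editorial note: for comparison, the traditional two-stage procedure
Mike describes above -- one SD of measurement per individual, then an
ANOVA on those SDs -- might look like the following sketch. Variable
names (`id`, `ttt`) and the simulated data are hypothetical.]

```r
## Traditional two-stage version: (1) compute an SD of measurement per
## individual, (2) submit the SDs as the dependent variable to an ANOVA.
## 'id', 'ttt', and the data are made up for illustration.
set.seed(2)
d <- data.frame(
  id  = factor(rep(1:20, each = 6)),
  ttt = factor(rep(c("A", "B"), each = 60))
)
d$y <- 10 + rnorm(20)[as.integer(d$id)] +
  rnorm(nrow(d), sd = ifelse(d$ttt == "A", 1, 2))

sds <- aggregate(y ~ id + ttt, data = d, FUN = sd)  # stage 1: per-individual SDs
summary(aov(y ~ ttt, data = sds))                   # stage 2: ANOVA on the SDs
```

Unlike the single-stage lme() fit, this route treats every
per-individual SD as equally precise, so it cannot account for unequal
numbers of observations per individual.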