[R-sig-ME] Why am I getting a Variance of 0 for my random effect

John Maindonald john.maindonald at anu.edu.au
Thu Aug 12 01:29:53 CEST 2010


Surely you do want to treat nurses as a random effect, by analysing 
summary data at the nurse level, if not in a multi-level model.
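
For instance, a minimal sketch of a nurse-level summary analysis, using the
tmp1 data frame of nurse-by-algorithm means that your transcript below
computes (columns RN, Assignment, Mean):

  ## one row per nurse, with the mean under each algorithm side by side
  wide <- reshape(tmp1, idvar = "RN", timevar = "Assignment",
                  direction = "wide")
  ## paired comparison of the two algorithms at the nurse level
  t.test(wide$Mean.E, wide$Mean.N, paired = TRUE)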

The zero variance may (if it really would prefer to be negative) be telling 
you that there is a systematic difference between the two times, for which 
your model needs to account.  Maybe there is a learning effect -- the 2nd time 
is systematically different from the first.  Did your model account for such 
an effect?
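
If an Occasion factor (first vs second run of each algorithm) can be
reconstructed from the original records -- it is not in the data1 shown
below, so this is only a sketch -- checking is a matter of adding it as a
fixed effect:

  library(lme4)
  ## Occasion is hypothetical: 1st vs 2nd run within each nurse-by-algorithm
  ## combination, which would have to be recovered from the raw records
  lmer(AUChr ~ Assignment + Occasion + (1 | RN), data = data1, REML = FALSE)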

Or (this requires more thought to model), perhaps those who do badly the first
time learn rather more from the experience than those who did moderately well,
and so do better than average the next time.  It appears that the data have
the information needed to get insight on these questions.
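
One rough graphical check on that idea, again assuming the two runs can be
labelled -- say a hypothetical data frame runs with one row per
nurse-by-algorithm cell and columns RN, Assignment, run1, run2 (first and
second AUChr):

  plot(run2 ~ run1, data = runs)
  abline(0, 1, lty = 2)                 # no-change reference line
  abline(lm(run2 ~ run1, data = runs))  # a slope well below 1 is consistent
                                        # with weak first performers gaining
                                        # most, though regression to the mean
                                        # also pulls the slope below 1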

The most insightful approach might well be separate regressions for the
2-1 differences and the 2+1 averages (second run minus first, and the
average of the two).  I'd do those analyses whatever else you do.
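
Sketched with the same hypothetical runs data frame, and taking the
algorithm as the explanatory variable:

  runs$diff <- runs$run2 - runs$run1         # the 2 - 1 differences
  runs$avg  <- (runs$run1 + runs$run2) / 2   # the 2 + 1 averages
  ## does the change from run 1 to run 2 differ between algorithms?
  summary(lm(diff ~ Assignment, data = runs))
  ## does the overall level differ between algorithms?
  summary(lm(avg ~ Assignment, data = runs))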

John Maindonald             email: john.maindonald at anu.edu.au
phone : +61 2 (6125)3473    fax  : +61 2(6125)5549
Centre for Mathematics & Its Applications, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.
http://www.maths.anu.edu.au/~johnm

On 12/08/2010, at 4:24 AM, Kevin E. Thorpe wrote:

> On 08/11/2010 02:16 PM, Daniel Ezra Johnson wrote:
>> Try data1$RN <- as.factor(data1$RN).
> 
> Thanks, but that has no effect.  That is, I get the same results.
> 
>> 
>> On Wed, Aug 11, 2010 at 2:13 PM, Kevin E. Thorpe
>> <kevin.thorpe at utoronto.ca>  wrote:
>>> Hello.
>>> 
>>> I'm getting a variance of 0 on a random effect and I don't know why.
>>> I suspect I've not set the model up correctly.  My transcript is below
>>> with my own comments sprinkled in from time to time.
>>> 
>>> A little bit about the data (which I will provide off-list if requested).
>>>  We have nurses managing an aspect of patient care
>>> according to different algorithms.  Interest focuses on whether the
>>> algorithms result in different outcomes.  I have restricted this
>>> to only nurses who did each algorithm twice (in case my problem
>>> was being caused by some nurses doing only one algorithm, possibly
>>> only one time).
>>> 
>>> I figured that since I have multiple observations per nurse, I
>>> should treat nurse as a random effect, but maybe I confused myself
>>> again.
>>> 
>>> 
>>> R version 2.11.1 Patched (2010-07-21 r52598)
>>> Copyright (C) 2010 The R Foundation for Statistical Computing
>>> ISBN 3-900051-07-0
>>> 
>>> R is free software and comes with ABSOLUTELY NO WARRANTY.
>>> You are welcome to redistribute it under certain conditions.
>>> Type 'license()' or 'licence()' for distribution details.
>>> 
>>>  Natural language support but running in an English locale
>>> 
>>> R is a collaborative project with many contributors.
>>> Type 'contributors()' for more information and
>>> 'citation()' on how to cite R or R packages in publications.
>>> 
>>> Type 'demo()' for some demos, 'help()' for on-line help, or
>>> 'help.start()' for an HTML browser interface to help.
>>> Type 'q()' to quit R.
>>> 
>>>> library(lattice)
>>>> library(lme4)
>>> 
>>>> str(data1)
>>> 'data.frame':   72 obs. of  3 variables:
>>>  $ RN        : int  1 1 2 3 7 7 9 9 15 15 ...
>>>  $ Assignment: Factor w/ 2 levels "E","N": 1 1 1 1 1 1 1 1 1 1 ...
>>>  $ AUChr     : num  12.26 7.23 9.26 4.04 10.31 ...
>>>> tmp1<- with(data1,aggregate(AUChr,list(RN=RN,Assignment=Assignment),mean))
>>>> names(tmp1)[3]<- "Mean"
>>>> 
>>>> tmp2<- with(data1,aggregate(AUChr,list(RN=RN,Assignment=Assignment),var))
>>>> names(tmp2)[3]<- "Variance"
>>>> 
>>>> meanvar<- merge(tmp1,tmp2)
>>> 
>>> The point of this is to show that the means are not all the same,
>>> nor are the variances.
>>> 
>>>> meanvar
>>>   RN Assignment   Mean  Variance
>>> 1   1          E  9.745  12.65045
>>> 2   1          N  7.185   1.36125
>>> 3  15          E 10.605  15.07005
>>> 4  15          N 10.385   4.41045
>>> 5  16          E  8.175   0.00845
>>> 6  16          N  8.420   1.03680
>>> 7   2          E  7.300   7.68320
>>> 8   2          N  6.950   1.00820
>>> 9  21          E  9.670   9.41780
>>> 10 21          N 10.535   2.44205
>>> 11 22          E  7.720   2.04020
>>> 12 22          N  7.930   1.21680
>>> 13 24          E  9.555  10.35125
>>> 14 24          N  9.330   0.38720
>>> 15 25          E  8.240   0.92480
>>> 16 25          N  9.485   0.00125
>>> 17 27          E  8.635   0.08405
>>> 18 27          N  7.745   3.72645
>>> 19 28          E  9.635   8.61125
>>> 20 28          N  8.315  10.35125
>>> 21  3          E  6.005   7.72245
>>> 22  3          N 11.435  55.44045
>>> 23 31          E  9.590   9.94580
>>> 24 31          N 10.570  16.70420
>>> 25 35          E  9.055   0.32805
>>> 26 35          N  9.925  14.41845
>>> 27 36          E  9.040   2.08080
>>> 28 36          N  7.395   1.14005
>>> 29  5          E  8.430   3.38000
>>> 30  5          N 17.385 139.94645
>>> 31  6          E  6.930   0.24500
>>> 32  6          N  8.330   1.72980
>>> 33  7          E 10.650   0.23120
>>> 34  7          N  7.375   0.09245
>>> 35  9          E  8.885   7.56605
>>> 36  9          N  8.405   0.73205
>>> 
>>> Model with "Assignment" (algorithm).
>>> 
>>>> lmer(AUChr~Assignment+(1|RN),data=data1,REML=FALSE)
>>> Linear mixed model fit by maximum likelihood
>>> Formula: AUChr ~ Assignment + (1 | RN)
>>>   Data: data1
>>>   AIC   BIC logLik deviance REMLdev
>>>  365.7 374.8 -178.8    357.7   356.9
>>> Random effects:
>>>  Groups   Name        Variance Std.Dev.
>>>  RN       (Intercept) 0.0000   0.0000
>>>  Residual             8.4152   2.9009
>>> Number of obs: 72, groups: RN, 18
>>> 
>>> Fixed effects:
>>>            Estimate Std. Error t value
>>> (Intercept)   8.7703     0.4835   18.14
>>> AssignmentN   0.5131     0.6837    0.75
>>> 
>>> Correlation of Fixed Effects:
>>>            (Intr)
>>> AssignmentN -0.707
>>> 
>>> 
>>> Model without the algorithm variable.
>>> 
>>>> lmer(AUChr~(1|RN),data=data1,REML=FALSE)
>>> Linear mixed model fit by maximum likelihood
>>> Formula: AUChr ~ (1 | RN)
>>>   Data: data1
>>>   AIC   BIC logLik deviance REMLdev
>>>  364.3 371.1 -179.1    358.3   358.5
>>> Random effects:
>>>  Groups   Name        Variance Std.Dev.
>>>  RN       (Intercept) 0.000    0.0000
>>>  Residual             8.481    2.9122
>>> Number of obs: 72, groups: RN, 18
>>> 
>>> Fixed effects:
>>>            Estimate Std. Error t value
>>> (Intercept)   9.0268     0.3432    26.3
>>>> 
>>>> sessionInfo()
>>> R version 2.11.1 Patched (2010-07-21 r52598)
>>> Platform: i686-pc-linux-gnu (32-bit)
>>> 
>>> locale:
>>>  [1] LC_CTYPE=en_US       LC_NUMERIC=C         LC_TIME=en_US
>>>  [4] LC_COLLATE=C         LC_MONETARY=C        LC_MESSAGES=en_US
>>>  [7] LC_PAPER=en_US       LC_NAME=C            LC_ADDRESS=C
>>> [10] LC_TELEPHONE=C       LC_MEASUREMENT=en_US LC_IDENTIFICATION=C
>>> 
>>> attached base packages:
>>> [1] stats     graphics  grDevices utils     datasets  methods   base
>>> 
>>> other attached packages:
>>> [1] lme4_0.999375-34   Matrix_0.999375-42 lattice_0.18-8
>>> 
>>> loaded via a namespace (and not attached):
>>> [1] grid_2.11.1   nlme_3.1-96   stats4_2.11.1
>>>> 
>>>> proc.time()
>>>   user  system elapsed
>>>  3.488   0.056   3.536
> 
> 
> -- 
> Kevin E. Thorpe
> Biostatistician/Trialist, Knowledge Translation Program
> Assistant Professor, Dalla Lana School of Public Health
> University of Toronto
> email: kevin.thorpe at utoronto.ca  Tel: 416.864.5776  Fax: 416.864.3016
> 
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models



