[R-sig-ME] Different random effects variances for outcomes and groups

Sarah Barry sarah at stats.gla.ac.uk
Wed Sep 26 11:57:44 CEST 2007


David,

Perhaps that's actually what I was fitting before:

(2) 
lmer2(y~-1+coord1+coord2+coord1:(time+group+time:group)+coord2:(time+group+time:group) 
+ (0+coord1+coord2+coord1:group+coord2:group|ID), data=simdata)

with a separate group effect for each coordinate.  An alternative with 
the same group effect across coordinates would be:

(3) 
lmer2(y~-1+coord1+coord2+coord1:(time+group+time:group)+coord2:(time+group+time:group) 
+ (0+coord1+coord2+group|ID), data=simdata)

Model (2) gives the same loglik as model (1) (the desired model with 
stratified variances), but a higher AIC.  Model (3) gives a slightly 
lower loglik, with the same AIC as model (2).  BIC varies a bit more
across the models, being lowest for model (1) and highest for model (2).
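
For reference, the criteria can be pulled straight out of the fitted
objects along these lines (an untested sketch; fit1, fit2 and fit3 are
placeholder names for the fits of models (1)-(3)):

crit <- sapply(list(m1 = fit1, m2 = fit2, m3 = fit3),
               function(m) c(logLik = as.numeric(logLik(m)), AIC = AIC(m)))
round(t(crit), 2)
summary(fit1)  # the summary printout also shows AIC, BIC and logLik together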

Sarah

David Afshartous wrote:

>One additional comment below ...
>
>
>On 9/25/07 10:03 AM, "David Afshartous" <dafshartous at med.miami.edu> wrote:
>
>  
>
>>Hi Sarah,
>>A few comments below.
>>David
>>
>>
>>On 9/25/07 9:05 AM, "Sarah Barry" <sarah at stats.gla.ac.uk> wrote:
>>
>>    
>>
>>>Hi David,
>>>
>>>Thanks for your reply.  I comment on your various questions below.
>>>
>>>David Afshartous wrote:
>>>
>>>      
>>>
>>>>Hi Sarah,
>>>>
>>>>Did you receive any replies to your post?  Some comments below, perhaps
>>>>tangential to your main question but possibly of interest.
>>>>
>>>>First let me make sure I understand your data structure.  You have 100
>>>>children in each of 2 groups, and for each child you take 3 measurements at
>>>>coordinate 1 and coordinate 2.
>>>>Hence there are 100 x 2 x 3 x 3 = 1200 observations.  Moreover, the children
>>>>in the two groups are different children, hence the different IDs in the
>>>>data (as opposed to the same children being in both groups e.g. when each
>>>>child receives both treatment and placebo; this affects the number of random
>>>>effects).
>>>>
>>>> 
>>>>
>>>>        
>>>>
>>>Yes, you are right here (apart from a typo - there are 100 x 2 x 3 x 2 =
>>>1200 observations).  The groups do contain different children.
>>>
>>>      
>>>
>>>>In your model equation:
>>>> 
>>>>
>>>>        
>>>>
>>>>>y_{ir}(t) = \beta_{0r} + \beta_{1r} group_i + \beta_{2r} t + \beta_{3r}
>>>>>group_i : t  + b_{ir0} + group_i * b_{ir1} + \epsilon_{ir}(t),
>>>>>
>>>>>   
>>>>>
>>>>>          
>>>>>
>>>>You have a random intercept model, where the intercept is broken down
>>>>according to fixed effects for coordinate and group, and similarly for the
>>>>slope.  I assume you want the random effect for the intercept stratified
>>>>according to both group and coordinate.  I'm not sure how the terms above
>>>>reflect this; perhaps all you need is the term b_{irp} ~ N(0,
>>>>\sigma_{rp}^2), for p=0,1; r = 1,2.
>>>>
>>>> 
>>>>
>>>>        
>>>>
>>>I think you're right here and this corresponds with what you say below.
>>>I think my model should be instead written as
>>>
>>>y_{ir}(t) = \beta_{0r} + \beta_{1r} group_i + \beta_{2r} t + \beta_{3r}
>>>group_i : t  + group_i * b_{irp} + \epsilon_{ir}(t),
>>>
>>>where p=0,1 represents the value of 'group_i' and is distributed as you
>>>suggest.  This seems to make more sense than everyone having a random
>>>intercept from one distribution and group 1 having an additional one.
>>>This way they are completely separate.
>>>
>>>      
>>>
>>>>RE your simulation code to generate the random effects, I assume you have
>>>>broken "randeff" into blocks of 300 ( = 1200/4) because the group x
>>>>coordinate stratification yields 4 distinct combinations.
>>>>However, I have a question RE the way in which these random effects are
>>>>simulated.  For instance, consider patient 1 in group 1.  According to your
>>>>simulation code, two separate random normals will be generated to reflect
>>>>the random effect of this patient at coordinate 1 and coordinate 2, with
>>>>random normal variance equal to the sum of the group random effect variance
>>>>and the coordinate random effect variance.  However, I don't think this
>>>>reflects the nesting appropriately.  Perhaps the group component should only
>>>>be generated once as it is the same child in the same group; and the
>>>>coordinate component is the part that needs to be generated twice. This of
>>>>course will increase the correlation between the two realized values.
>>>>(BTW, did you choose the standard deviation values of 15, 10, 25, and 20 to
>>>>reflect the aforementioned sub-component structure, and if so what were the
>>>>standard deviations of the sub-components?).
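>>>>
>>>>In code, that structure would look something like the following (with
>>>>sd.group and sd.coord standing in for hypothetical sub-component
>>>>standard deviations):
>>>>
>>>>b.group <- rnorm(n.subj, 0, sd.group)  # one draw per child, shared across coordinates
>>>>b.coord <- matrix(rnorm(2*n.subj, 0, sd.coord), ncol=2)  # one draw per coordinate
>>>>b <- b.group + b.coord  # n.subj x 2; columns correlated via the shared group draw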
>>>>
>>>> 
>>>>
>>>>        
>>>>
>>>Yes, I should have considered this more carefully.  Certainly the random
>>>effects for coordinates 1 and 2 for a particular individual should have
>>>been simulated from a multivariate normal distribution rather than from
>>>two separate normals.  I don't think that there is so much a group
>>>component and a coordinate component, as a coordinate component that
>>>differs depending on which group the individual comes from.  So, for
>>>example, instead of the randeff statement below we could have (repeated
>>>across times and inserted in the correct positions for each group):
>>>
>>>library(MASS)  # for mvrnorm
>>>randeff.gp0 <- mvrnorm(n.subj/2, c(0,0), matrix(c(100,50,50,400),
>>>nc=2))  # coordinates 1 and 2 (respectively) for individuals in group 0 #
>>>randeff.gp1 <- mvrnorm(n.subj/2, c(0,0), matrix(c(225,70,70,625),
>>>nc=2))  # coordinates 1 and 2 (respectively) for individuals in group 1 #
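>>>
>>>A rough, untested sketch of that "repeat and insert" step, assuming the
>>>same row ordering as in the reproducible code in my original message
>>>below (group 1 then group 0 within each coordinate block):
>>>
>>>randeff <- c(rep(randeff.gp1[,1], each=n.times),  # group 1, coordinate 1
>>>rep(randeff.gp0[,1], each=n.times),  # group 0, coordinate 1
>>>rep(randeff.gp1[,2], each=n.times),  # group 1, coordinate 2
>>>rep(randeff.gp0[,2], each=n.times))  # group 0, coordinate 2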
>>>
>>>I think the group itself should only be a fixed effect because, while
>>>individuals are randomly sampled from two populations of cleft-lip
>>>patients and healthy controls, the groups are fixed (not sampled from a
>>>population of groups).  What do you think?  This has been one point that
>>>I have wondered long and hard about.
>>>
>>>      
>>>
>>I agree that group should be a fixed effect based on your description. In
>>your current model, group is a fixed effect, and the random effect on the
>>intercept is for child ID, although this depends on both the group and
>>coordinate for the child.  You do not have a random effect for group per se.
>>This can be contrasted w/ instead employing separate random effects for the
>>nesting levels (Pinheiro & Bates p.40 is a good example of a nesting
>>structure where random effects are used for each level of the nesting.)  It
>>would be interesting to observe the difference in the results.
>>
>>    
>>
>
>Moreover, allowing a random effect for group is definitely not what you want
>as this would not model the differential variability in your data.  Each
>child would have a single group random effect (distributed the same for each
>group), and there would be a child random effect that depends on the
>coordinate, but they would not be distributed differently across the groups
>(you would get 1 list instead of two lists for the random effects in your
>second model statement).  Having said that, I estimated both cases for some
>simple repeated measures data that has a treatment factor.  Specifically:
>1) Stratified RE variance: fixed effects for treatment, RE for each subject,
>stratified per treatment group.
>2) RE for treatment group: fixed effects for treatment, RE for each subject
>(not stratified per treatment), plus RE for treatment group.
>Although I really don't want option 2), I was curious about the differences
>in model fit.  Both models were extremely similar for AIC/BIC/LogLik, which
>is surprising since 2) doesn't model at all the differential growth curve
>variability per treatment in my data. I didn't try this comparison for your
>data as I was unsure of how to modify your lmer2 statement (perhaps mine was
>much simpler due to the single factor of treatment). If you are able to do
>the appropriate modification I'd be interested in the results.
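>
>Roughly, and with made-up names (a data frame dat containing a response
>y, continuous time, a 0/1 treatment indicator trt, its complement ctrl =
>1 - trt, a factor version trt.f, and a subject identifier subj), the two
>options could be coded along these lines:
>
># 1) stratified RE variance: one subject effect, separate variance per treatment group
>fit.strat <- lmer2(y ~ trt*time + (0 + ctrl|subj) + (0 + trt|subj), data = dat)
># 2) common subject RE plus an RE for the treatment group itself
>fit.group <- lmer2(y ~ trt*time + (1|subj) + (1|trt.f), data = dat)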
>
>  
>
>>>>With respect to random effects, I assume your model will generate 400 unique
>>>>random effects estimates, i.e., two (for each coordinate) for each of the
>>>>200 children.  And each of these may be viewed as the sum of the
>>>>sub-components of coordinate and group.  Running your first lmer2 model
>>>>statement yields a 200 x 4 matrix for the estimated random effects, w/ each
>>>>row being a patient and the columns corresponding apparently to the
>>>>aforementioned subcomponents:
>>>>
>>>>An object of class "ranef.lmer"
>>>>[[1]]
>>>>          coord1       coord2  coord1:group coord2:group
>>>>1    -0.502182860   7.98888012  -4.590717867    5.6973871
>>>>2    -0.190673717   3.38674017  -1.849503513    2.4098210
>>>>3    -0.981561080  12.80952815  -8.127985669    9.1788197
>>>>Etc
>>>>
>>>>However, I would think some of these cells need to be 0, e.g., each patient
>>>>is only in 1 group and thus shouldn't have a random effect estimate from
>>>>both groups?  Or am I reading the table completely wrong?
>>>>
>>>>Now, when I ran your second lmer2 model statement and checked the estimated
>>>>random effects (too messy to copy here), I got two lists of 2 random effects
>>>>per child (1 for each coordinate), where it appears that the two lists
>>>>correspond to the two groups, and apparently there are 0's for children that
>>>>were not in the respective group. Based on the estimated random effects
>>>>produced by the two model statements, I think that the second is more
>>>>representative of what you're trying to do.  Have a look at the random
>>>>effects for the second model statement and let me know if you agree.
>>>>
>>>>Cheers,
>>>>David
>>>> 
>>>>
>>>>        
>>>>
>>>I do agree and I'm not sure now what model the first lmer2 model
>>>statement is actually fitting, because I would have expected at least
>>>some of the coord1:group and coord2:group random effects to be zero.
>>>The second lmer2 statement gives more sensible answers and corresponds
>>>with the rewritten model above.  Also when I check the estimated random
>>>effects variances and covariances against the actual values, there is
>>>good correspondence so I think I will proceed with this parameterisation.
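>>>
>>>That comparison can be done along these lines (fit2 being a placeholder
>>>name for the fit of the second lmer2 statement):
>>>
>>>VarCorr(fit2)  # estimated RE variances/covariances vs. the values used to simulate
>>>ranef(fit2)    # per-child effects; zeros where a child is not in that group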
>>>
>>>Many thanks for your help,
>>>Sarah
>>>
>>>      
>>>
>>>>
>>>>
>>>>On 9/19/07 8:28 AM, "Sarah Barry" <sarah at stats.gla.ac.uk> wrote:
>>>>
>>>> 
>>>>
>>>>        
>>>>
>>>>>Dear all,
>>>>>
>>>>>I wonder if someone could give me some insight on coding lmer.  I have
>>>>>facial shape data on children in two groups at four time points (3,6,12
>>>>>and 24 months).  Each child has a set of coordinate positions measured
>>>>>on their face at each time point (the set of coordinates is the same
>>>>>across individuals and times).  Take coordinates 1 and 2 only for now
>>>>>(reproducible code at the bottom of this email for simulated data).
>>>>>
>>>>>If I plot the trends for coordinates 1 and 2 for each individual over
>>>>>time, there is a different amount of variance amongst the individuals
>>>>>(at least in the intercept, and maybe in the slope) for the two
>>>>>coordinates and also within the two groups, with group 1 (cleft) having
>>>>>higher variation than group 0 (control).  I want to allow for these
>>>>>sources of variation in the model.  The other thing is that I would
>>>>>expect coordinate positions within an individual to be correlated so I
>>>>>also want to allow for this.  The model, therefore, would be (for
>>>>>coordinate r=1,2 measured on individual i at time t, group_i an
>>>>>indicator variable taking value one for group 1 and zero otherwise):
>>>>>
>>>>>y_{ir}(t) = \beta_{0r} + \beta_{1r} group_i + \beta_{2r} t + \beta_{3r}
>>>>>group_i : t  + b_{ir0} + group_i * b_{ir1} + \epsilon_{ir}(t),
>>>>>
>>>>>where \epsilon_{ir}(t) ~ N(0, \sigma^2) and the random effect b_{irp} ~
>>>>>N(0, \sigma_{rp}^2), for p=0,1.   I think the following code is
>>>>>appropriate (model 1):
>>>>>
>>>>>
>>>>>          
>>>>>
>>>>>lmer2(y~-1+coord1+coord2+coord1:(time+group+time:group)+coord2:(time+group+time:group)
>>>>>+ (0+coord1+coord2+coord1:group+coord2:group|ID), data=simdata)
>>>>>
>>>>>where coord1 and coord2 are indicator variables for coordinates 1 and 2,
>>>>>respectively, time is continuous and group is an indicator variable
>>>>>taking value one for the cleft group and zero for the controls.  Does
>>>>>the random effects part make sense?  I'm especially unsure about
>>>>>allowing correlations between all of the random effects terms, although
>>>>>I think it's appropriate under this parameterisation because each
>>>>>person has a value for both coordinates 1 and 2, and the group effect is
>>>>>additional. 
>>>>>
>>>>>
>>>>>An alternative parameterisation is (model 2):
>>>>>
>>>>>
>>>>>          
>>>>>
>>>>>lmer2(y~-1+coord1+coord2+coord1:(time+group+time:group)+coord2:(time+group+time:group)
>>>>>+ (0+gp0:coord1+gp0:coord2|ID)+(0+coord1:group+coord2:group|ID),
>>>>>data=simdata)
>>>>>
>>>>>where gp0 is an indicator variable taking value one if the individual is
>>>>>in group 0 and zero otherwise.  It seems to me that this should be
>>>>>equivalent to model 1, but it doesn't appear to be (perhaps this just
>>>>>comes down to fewer correlations estimated in model 2).
>>>>>
>>>>>If a correlation between random effects is estimated to be 1 or -1, is
>>>>>this generally because the model is over-parameterised?
>>>>>
>>>>>Reproducible code is below.
>>>>>
>>>>>set.seed(100)
>>>>>n.subj <- 200
>>>>>n.times <- 3
>>>>>n.coords <- 2
>>>>>simdata <- data.frame(coord1=c(rep(1,n.subj*n.times),
>>>>>rep(0,n.subj*n.times)))
>>>>>simdata$coord2 <- c(rep(0,n.subj*n.times), rep(1,n.subj*n.times))
>>>>>simdata$coord <- ifelse(simdata$coord1==1, 1, 2)
>>>>>simdata$ID <- rep(rep(1:n.subj, each=n.times),2)
>>>>>simdata$time <- rep(1:n.times, n.subj*n.coords)
>>>>>simdata$group <- rep(c(1,0,1,0), each=n.subj*n.times/2)
>>>>>simdata$gp0 <- 1-simdata$group
>>>>>simdata$y <- rep(NA, dim(simdata)[1])
>>>>>randeff <- c(rep(rnorm(n.subj/2, 0, 15),each=n.times),
>>>>>rep(rnorm(n.subj/2, 0, 10), each=n.times), rep(rnorm(n.subj/2, 0, 25),
>>>>>each=n.times), rep(rnorm(n.subj/2, 0, 20), each=n.times))
>>>>>for (i in 1:dim(simdata)[1]) simdata$y[i] <- rnorm(1,
>>>>>randeff[i]+simdata$time[i]+simdata$coord[i]+10*simdata$group[i], 16)
>>>>>
>>>>>
>>>>>          
>>>>>
>>>>>lmer2(y~-1+coord1+coord2+coord1:(time+group+time:group)+coord2:(time+group+time:group)
>>>>>+ (0+coord1+coord2+coord1:group+coord2:group|ID), data=simdata)
>>>>>
>>>>>          
>>>>>
>>>>>lmer2(y~-1+coord1+coord2+coord1:(time+group+time:group)+coord2:(time+group+time:group)
>>>>>+ (0+gp0:coord1+gp0:coord2|ID)+(0+coord1:group+coord2:group|ID),
>>>>>data=simdata)
>>>>>
>>>>>Many thanks for any help.
>>>>>
>>>>>Best regards,
>>>>>Sarah
>>>>>   
>>>>>
>>>>>          
>>>>>
>>>> 
>>>>
>>>>        
>>>>

-- 
Sarah Barry, MSc
Department of Statistics
University of Glasgow
Tel: +44 (0)141 330 2474
Fax: +44 (0)141 330 4814
www.stats.gla.ac.uk/~sarah



