[R-sig-ME] "varFunc" classes

dan jackson dan.jackson at mrc-bsu.cam.ac.uk
Thu Dec 15 18:39:09 CET 2016


Hi Ben, thanks for this. I am still at the playing stage (and I am new to all this, so I am also at the bewildered-and-learning stage), but I am trying to fit an individual patient data meta-analysis with continuous data for the first time.

In the paper "A general framework for the use of logistic regression models in meta-analysis", Simmonds and Higgins suggest fitting the model for binary outcome data 

glmer(cbind(event, n - event) ~ factor(study) + factor(treat) + (treat - 1 | study), data = HRT2, family = binomial(link = "logit"))

(see their appendix), so I thought that

fm10 <- lme(OUT ~ factor(study) + treat, data = DD,
            random = ~ treat - 1 | study,
            weights = varIdent(form = ~ 1 | study),
            control = list(maxIter = 500, msMaxIter = 500))

would just be a natural continuous version of this (also allowing different residual variances in each study, which is usually considered important, and increasing the maximum iterations to get convergence). In Simmonds and Higgins, treat is binary, so +treat and +factor(treat) are the same thing in the fixed-effect part of the model. The intuition, as I understand it, is that you want a different baseline average in each study (hence factor(study)), because this is bound to vary, and you also want the treatment effect to differ randomly from one study to the next, to relax the strong assumption that the treatment effect is the same in every study.

I hope that helps explain my motivation; essentially I am "just" trying to "copy" what Simmonds and Higgins did for binary data in the continuous case.
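
For the inference side, this is roughly what I have in mind once fm10 has converged (just a sketch using normal approximations; the "treat" row name depends on how treat is coded in DD):

tt  <- summary(fm10)$tTable                ## fixed-effect estimates, SEs, DF
est <- tt["treat", "Value"]
se  <- tt["treat", "Std.Error"]

## normal-approximation 95% confidence interval for the treatment effect,
## sidestepping the denominator-DF question
est + c(-1, 1) * qnorm(0.975) * se

## estimated per-study residual SD ratios from the varIdent structure
coef(fm10$modelStruct$varStruct, unconstrained = FALSE, allCoef = TRUE)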

Dan


-----Original Message-----
From: Ben Bolker [mailto:bbolker at gmail.com] 
Sent: 15 December 2016 16:59
To: Dan Jackson
Cc: Dan Jackson; r-sig-mixed-models at r-project.org; daniel.jackson at mrc-bsu.cam.ac.uk
Subject: Re: [R-sig-ME] "varFunc" classes

  It doesn't generally make sense to include a factor ("study" in your case) as both a random effect and a fixed effect in the same model. I appreciate that you're trying to include only the variability among treatments within studies, but this is a little bit tricky from both a practical and a conceptual point of view.  Is there a reason that you're using

 OUT ~ factor(study)+treat, data = DD, random = ~ treat-1|study

and not

 OUT ~ treat, data=DD, random= ~treat|study

or

  OUT ~ treat, data=DD, random=~1|study/treat

?  The latter estimates the variation of treatments within studies as compound symmetric, which is a little bit easier computationally.
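
Written out in full (just a sketch, keeping your varIdent weights and iteration settings; the object names are arbitrary), the two alternatives would look something like

fm_alt1 <- lme(OUT ~ treat, data = DD,
               random = ~ treat | study,
               weights = varIdent(form = ~ 1 | study),
               control = lmeControl(maxIter = 500, msMaxIter = 500))

fm_alt2 <- lme(OUT ~ treat, data = DD,
               random = ~ 1 | study/treat,
               weights = varIdent(form = ~ 1 | study),
               control = lmeControl(maxIter = 500, msMaxIter = 500))

fm_alt1 gives each study its own random intercept and random treatment effect (with an estimated correlation between them); fm_alt2 nests treatment within study, which is what imposes the compound-symmetric structure.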



On Thu, Dec 15, 2016 at 6:42 AM, Dan Jackson <dan.jackson at mrc-bsu.cam.ac.uk> wrote:
> Thanks to everyone for their help, Ben and Wolfgang in particular. I
> managed to get some results for a toy warm-up example using
>
> fm10 <- lme(OUT ~ factor(study) + treat, data = DD,
>             random = ~ treat - 1 | study,
>             weights = varIdent(form = ~ 1 | study),
>             control = list(maxIter = 500, msMaxIter = 500))
>
> Here DD is a dataframe containing simulated data: 10,000 rows of OUT
> ("outcome", just normally distributed data), treat (0/1 with
> probability 0.5 for "success"), and study, which contains the study
> number from 1 to 10: 1 for the first 1000 observations, 2 for the
> second 1000, and so on. For fun I made the first and last studies have
> larger variances to see whether varIdent would pick this up -- which
> it clearly did, which is wonderful.
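>
> Roughly, the simulation looked like this (a quick sketch; the exact
> means and SDs here are just illustrative):
>
> set.seed(1)
> study <- rep(1:10, each = 1000)                       ## 10 studies x 1000
> treat <- rbinom(length(study), size = 1, prob = 0.5)  ## 0/1 treatment
> ## give the first and last studies a larger residual SD
> sd_study <- ifelse(study %in% c(1, 10), 3, 1)
> OUT <- 0.5 * treat + rnorm(length(study), sd = sd_study)
> DD <- data.frame(OUT = OUT, treat = treat, study = study)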
>
> I would not have gotten this far without your help, Ben and Wolfgang
> in particular. My only slight residual (no pun intended?!) concern is
> that I seem to be getting some 0's for the DOF, which is a bit
> perplexing (for the factor(study) terms in the model). I have done
> some googling and this DOF issue seems to be a special issue in its
> own right. Since I am happy to use normal approximations I can get my
> inferences, but this is something that might be worth thinking about;
> DOF = 0 would appear to be just plain wrong to me!?!
>
> I am confident I will be able to fit this type of model to my real 
> data in the new year. Thanks all, Merry Christmas, Dan
>
>
> On 14/12/2016 19:02, Ben Bolker wrote:
>>
>>    I'm too lazy to see if anyone has answered this one already, so ...
>>
>> On 16-12-13 03:48 AM, Dan Jackson wrote:
>>>
>>> Dear lme4 authors,
>>>
>>> I am sure you are very busy, so I will just ask my question very
>>> quickly. I was reading the book "Mixed-Effects Models in S and
>>> S-PLUS" by Pinheiro and Bates. At the top of page 208 of this book,
>>> Table 5.1 lists the various "varFunc" classes. One of these classes
>>> would seem to be what I need for my data: varIdent -- different
>>> variances per stratum. I do know that different subsets in my data
>>> have very different variances, you see, so I would need to include
>>> this.
>>>
>>> However, this book relates to S-PLUS and I am not sure whether this
>>> has been implemented in R, in the glmer package. My data are
>>> continuous, so I would just need this for lmer (and not glmer). If
>>> it has not been implemented, is there any "workaround"?
>>
>>    It has been implemented in R, but in the nlme package rather than
>> the lme4 package (which contains lmer and glmer).
>>
>>     Historical note: nlme is the package associated with Pinheiro and 
>> Bates's 2000 book. R's first "stable beta" version (according to
>> Wikipedia) was released in 2000.  Doug Bates has been involved in R 
>> since the beginning (or almost?).
>>
>>    If you need to do this in lme4::lmer, you can do it in a sort of 
>> clunky way by setting up dummy variables for group differences and 
>> adding an individual-level random effect, e.g.
>>
>>
>> library(lme4)
>> data(Orthodont, package = "nlme")
>>
>> ## add an observation-level identifier so we can attach a
>> ## per-observation random effect
>> O2 <- transform(Orthodont, obs = seq(nrow(Orthodont)))
>>
>> ## since Female var < Male var, we have to use a dummy for Male
>> ## to add extra variance for males (won't work the other way
>> ## because we can't have a negative variance)
>> m1 <- lmer(distance ~ age + (age | Subject) +
>>              (0 + dummy(Sex, "Male") | obs),
>>            data = O2,
>>            control = lmerControl(check.nobs.vs.nlev = "ignore",
>>                                  check.nobs.vs.nRE = "ignore"))
>>
>> library(nlme)
>> m2 <- lme(distance ~ age, random = ~ age | Subject,
>>           data = Orthodont,
>>           weights = varIdent(form = ~ 1 | Sex))
>> summary(m2)
>>
>> all.equal(c(logLik(m1)), c(logLik(m2)))
>> all.equal(c(fixef(m1)), c(fixef(m2)), tolerance = 1e-6)
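>>
>> As a rough consistency check (again a sketch; depending on which sex
>> ends up as the varIdent reference level you may need the reciprocal of
>> the ratio), the male:female residual SD ratio implied by m1 should
>> approximately match the varIdent coefficient reported for m2:
>>
>> vc  <- as.data.frame(VarCorr(m1))
>> sig <- vc$sdcor[vc$grp == "Residual"]  ## residual (female) SD in m1
>> tau <- vc$sdcor[vc$grp == "obs"]       ## extra SD added for males
>> sqrt(sig^2 + tau^2) / sig              ## implied Male/Female SD ratio
>> coef(m2$modelStruct$varStruct, unconstrained = FALSE, allCoef = TRUE)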
>>
>>
>>
>>
>>> Thanks in advance for any advice, Dan Jackson
>>>
>>>
>>>
>>>
>


