[R] Performance problem

Spencer Graves spencer.graves at pdf.com
Tue Jul 20 16:59:55 CEST 2004


      1.  Are you using "nlme" or "lme4"?  If the former, try the 
latter.  I'm told it is quite a bit different and potentially 
substantially faster, depending on the model. 

      2.  Have you tried what people would have done before "lme", 
namely computing, e.g., 350 separate analyses, summarizing the 
results in a data.frame, and then analyzing those coefficients as 
data?  If you want to fix certain parameters across all 350, have 
you considered fitting a reduced model, e.g., using "lm", then 
including some of the estimated parameters in the 350 individual fits 
using "offset"?  This may be good enough for some purposes.  In my 
experience, after I've done things like this, I've often found that the 
big model I could not estimate in "lme" was not appropriate, anyway. 
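Point 2 above can be sketched in a few lines of base R. The toy data and
column names (group, dye, treat, response) below are made up for
illustration; substitute your own:

```r
## Toy data standing in for the real data set (hypothetical columns).
set.seed(1)
longdata <- data.frame(
    group    = rep(1:5, each = 20),
    dye      = runif(100),
    treat    = factor(rep(1:4, 25)),
    response = rnorm(100)
)

## Step 1: fit a reduced pooled model to pin down the shared dye effect.
pooled <- lm(response ~ dye, data = longdata)

## Step 2: fit each group separately, holding the dye effect fixed
## by passing the pooled prediction through offset().
fits <- lapply(split(longdata, longdata$group), function(d)
    lm(response ~ treat + offset(predict(pooled, newdata = d)), data = d))

## Step 3: collect the per-group coefficients and analyze them as data.
coefs <- as.data.frame(do.call(rbind, lapply(fits, coef)))
```

The per-group fits are plain "lm" calls, so 350 (or 10000) of them run far
faster than one large mixed model.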

      hope this helps.  spencer graves
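P.S.  On the by() question below: by() is essentially split() plus one
function call per group, so replacing it with lapply() over split() saves
only a little overhead; most of the time is spent inside lme() itself.
Still, the split/lapply form is easier to profile and to parallelize
later.  A self-contained sketch (toy data; the column names gen, slide,
dye, treat, response follow the original call):

```r
library(nlme)  # lme(); nlme ships with R as a recommended package

## Toy data mimicking the structure of the original call (hypothetical).
set.seed(1)
longdata <- data.frame(
    gen      = rep(1:10, each = 12),
    slide    = factor(rep(rep(1:4, each = 3), 10)),
    dye      = runif(120),
    treat    = factor(rep(1:4, 30)),
    response = rnorm(120)
)

## Equivalent of by(longdata, gen, ...) via split() + lapply().
groups <- split(longdata, longdata$gen)
gfit <- lapply(groups, function(x)
    lme(fixed = response ~ dye + C(treat, base = 4),
        data = x, random = ~ 1 | slide))
```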

Stephan Moratti wrote:

>>From: gerhard.krennrich at basf-ag.de
>>Date: Tue, 20 Jul 2004 11:23:25 +0200
>>Subject: [R] Performance problem
>>
>>Dear all,
>>I have a performance problem in terms of computing time.
>>I estimate mixed models on a fairly large number of subgroups (10000) using
>>lme(.) within the by(.) function and it takes hours to do the calculation
>>on a fast notebook under Windows.
>>I suspect by(.) to be a poor implementation for doing individual analysis
>>on subgroups.
>>Is there an alternative, more efficient way of doing by-group
>>processing with lme(.)?
>>
>>Here is some code to give you a glimpse:
>>
>>gfit <- by(longdata, gen, function(x)
>>    lme(fixed = response ~ dye + C(treat, base = 4),
>>        data = x, random = ~ 1 | slide))
>>
>>Thanks in advance & regards
>>Gerhard Krennrich
>>
>
>Sorry that I can't contribute to a solution, but I have a similar problem,
>doing lme's on 350 source estimations of MEG brain data. So if somebody
>knows of an improvement, please let me know!
>
>Stephan Moratti
>
>
>
>-----------------------------
>Dipl. Psych. Stephan Moratti
>Dept. of Psychology
>University of Konstanz
>P.O Box D25
>Phone: +49 (0)7531 882385
>Fax: +49 (0)7531 884601
>D-78457 Konstanz, Germany
>
>e-mail: Stephan.Moratti at uni-konstanz.de
>http://www.clinical-psychology.uni-konstanz.de/
>
>______________________________________________
>R-help at stat.math.ethz.ch mailing list
>https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>  
>



