[R] classification for huge datasets: SVM yields memory troubles
wolf.privat at gmx.de
Mon Dec 13 21:56:22 CET 2004
I'm a beginner with the SVM module, but I have seen that there is a parameter called
cachesize (cache memory in MB, default 40).
Please let me know whether this parameter solved your problem; I might have a
similar number of samples in the near future.
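For what it's worth, a minimal sketch of raising the cache (assuming the e1071 package; the matrix dimensions, the 512 MB value, and the linear kernel are just placeholders, not a tested recipe for your data):

```r
library(e1071)  # provides svm(); cachesize is one of its tuning arguments

## toy stand-in for a 30 x 30000 two-group dataset
set.seed(1)
x <- matrix(rnorm(30 * 30000), nrow = 30)
y <- factor(rep(c("a", "b"), each = 15))

## raise the kernel cache from the 40 MB default to, e.g., 512 MB;
## a linear kernel is a common first choice when variables >> observations
fit <- svm(x, y, kernel = "linear", cachesize = 512)
```

Note the cache only bounds the kernel matrix cache, so it may not help if the allocation failure happens before training starts.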
"Christoph Lehmann" <christoph.lehmann at gmx.ch> schrieb im Newsbeitrag
news:41BD8A9F.4040509 at gmx.ch...
> I have a matrix with 30 observations and roughly 30000 variables; each
> observation belongs to one of two groups. With svm and slda I run into memory
> trouble ('cannot allocate vector of size' roughly 2G). PCA followed by LDA runs
> fine. Is there any way to work around the memory issue with SVMs? Or can you
> recommend another classification method for such huge datasets?
> P.S. I run suse 9.1 on a 2G RAM PIV machine.
> thanks for a hint
> R-help at stat.math.ethz.ch mailing list