[R] building a formula for glm() with 30,000 independent variables
Roger Koenker
roger at ysidro.econ.uiuc.edu
Sun Nov 10 23:31:02 CET 2002
If the design matrix for such a model is dense then this sounds
quite hopeless, but if (as I suspect) it is quite sparse, then glm
could be adapted along the lines of the slm.fit.csr code in the
SparseM package submitted a few weeks ago. I've run penalized L1
estimation problems of this size, say n = 120,000 by p = 30,000,
on my now-antiquated Ultra 2 with half a gig of memory. The trick,
of course, is that only the non-zero elements of X are stored, and
there are only about 500,000 of these in my applications.
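
A rough sketch of what I have in mind, at toy sizes and untested (the
slm.fit.csr() call and the names of its return components are from memory;
a glm version would have to wrap a fit like this inside the IRLS loop):

    library(SparseM)

    n <- 1000; p <- 200
    X <- matrix(0, n, p)                        # toy design, mostly zeros
    ij <- cbind(sample(1:n, 3000, replace = TRUE),
                sample(1:p, 3000, replace = TRUE))
    X[ij] <- rnorm(nrow(ij))                    # roughly 3000 non-zero entries
    y <- rnorm(n)

    Xs  <- as.matrix.csr(X)                     # keep only the non-zero elements
    fit <- slm.fit.csr(Xs, y)                   # sparse least-squares fit
    length(fit$coefficients)                    # one coefficient per column

In a real application one would of course build the csr structure directly
rather than converting a dense matrix, which is the whole point.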
url:   http://www.econ.uiuc.edu           Roger Koenker
email: roger at ysidro.econ.uiuc.edu      Department of Economics
vox:   217-333-4558                       University of Illinois
fax:   217-244-6678                       Champaign, IL 61820
On Sun, 10 Nov 2002 ripley at stats.ox.ac.uk wrote:
> Well, the theory of perceptrons says you will find perfect discrimination
> with high probability even if there is no structure unless n is well in
> excess of 2p. So you do have 100,000 units? If so you have many
> gigabytes of data and no R implementation I know of will do this for you.
> Also, the QR decomposition would take a very long time.
>
> You could call glm.fit directly if you could form the design matrix
> somehow but I doubt if this would run in an acceptable time.
>
> On Sun, 10 Nov 2002, Ben Liblit wrote:
>
> > I would like to use R to perform a logistic regression with about
> > 30,000 independent variables. That's right, thirty thousand. Most
> > will be irrelevant: the intent is to use the regression to identify
> > the few that actually matter.
> >
> > Among other things, this calls for giving glm() a colossal "y ~ ..."
> > formula with thirty thousand summed terms on its right hand side. I
> > build up the formula as a string and then call as.formula() to convert
> > it. Unfortunately, the conversion fails. The parser reports that it
> > has overflowed its stack. :-(
> >
> > Is there any way to pull this off in R? Can anyone suggest
> > alternatives to glm() or to R itself that might be capable of handling
> > a problem of this size? Or am I insane to even be considering an
> > analysis like this?
>
> --
> Brian D. Ripley,                  ripley at stats.ox.ac.uk
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford,             Tel: +44 1865 272861 (self)
> 1 South Parks Road,                    +44 1865 272860 (secr)
> Oxford OX1 3TG, UK                Fax: +44 1865 272595
>