[R] memory problem for R
Spencer Graves
spencer.graves at pdf.com
Fri Jan 30 20:36:50 CET 2004
Was your 10% sample contiguous or randomly selected from the
entire file? If contiguous, you might get something from, say,
processing the file in 100 contiguous blocks, computing something like
the mean of each 1% block (or summarizing each block in some other
way), then combining the summaries and doing the regression on the
block summaries; see the sketch below.
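
A minimal sketch of that block-summary idea, assuming a whitespace-delimited
file of 70 numeric columns with no header; the file name, block size, and
choice of response column are placeholders, not taken from your post:

  n.blocks  <- 100
  block.len <- 6000                       # roughly 600k rows / 100 blocks
  con <- file("weblog.dat", open = "r")
  block.means <- matrix(NA, nrow = n.blocks, ncol = 70)
  for (i in 1:n.blocks) {
    blk <- matrix(scan(con, nlines = block.len, quiet = TRUE),
                  ncol = 70, byrow = TRUE)
    block.means[i, ] <- colMeans(blk)     # summarize each 1% block
  }
  close(con)
  block.means <- as.data.frame(block.means)
  fit <- lm(V1 ~ ., data = block.means)   # regress on the block summaries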
If it was an honest random sample (e.g., selecting approximately
10% from each 10%), then the block averaging won't work: You have an
inherent singularity in the structure of the data that will likely not
permit you to estimate everything you want to estimate. You need to
understand that singularity / lack of estimability and decide what to do
about it.
In either case, "lm(..., singular.ok=TRUE)" will at least give you an
answer even when the model is not fully estimable.
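
For example (the formula and the data frame name "weblog" are placeholders,
not from your post):

  fit <- lm(y ~ ., data = weblog, singular.ok = TRUE)
  summary(fit)   # coefficients that cannot be estimated are reported as NA
  alias(fit)     # shows which terms are linear combinations of the others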
hope this helps.
spencer graves
Yun-Fang Juan wrote:
>Please see the comments below.
>
>
>>>Here is the exact error I got
>>>----------------------
>>>Read 73 items
>>>Error: cannot allocate vector of size 1953 Kb
>>>Execution halted
>>>-----------------------
>>>I am running R on FreeBSD 4.3
>>>with double CPU and 2 GB memory
>>>Is that sufficient?
>>>
>>>
>>Clearly not. What is the structure of your `attributes'? As Andy Liaw
>>said, the design matrix may be bigger than that if there are factors
>>involved. (And you need several copies of the design matrix.)
>>
>>I would try a 10% sample of the rows to get a measure of what will fit
>>into your memory. I have never seen a regression problem for which 600k
>>cases were needed, and would be interested to know the context. (It is
>>hard to imagine that the cases are from a single homogeneous population
>>and that a linear model fits so well that the random error is not
>>dominated by systematic error.)
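
For instance, a systematic 10% sample can be pulled without loading the
whole file, e.g. via a shell pipe (the file name is an assumption, and the
file is assumed to have no header row):

  samp <- read.table(pipe("awk 'NR % 10 == 1' weblog.dat"),
                     colClasses = "numeric")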
>>
>>
>I tried a 10% sample and it turned out the matrix became singular after
>I did that.
>The reason is that some of the attributes have only zero values most of
>the time.
>The data I am using is web log data and, after some transformation, the
>variables are all numeric.
>Can we specify some parameters in read.table so that the program will
>treat all the variables as numeric?
>(In this context, hopefully that will reduce the memory consumption.)
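
Yes: the colClasses argument to read.table will declare every column numeric
up front instead of letting R guess, and supplying nrows can also help memory
use. A minimal sketch, assuming a whitespace-delimited file "weblog.dat" with
a header row (the file name and header are placeholders):

  dat <- read.table("weblog.dat", header = TRUE,
                    colClasses = "numeric",  # recycled across all 70 columns
                    nrows = 600000, comment.char = "")
  ## If the file really is all numbers, scan() into a matrix is leaner still:
  m <- matrix(scan("weblog.dat", skip = 1), ncol = 70, byrow = TRUE)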
>
>thanks a lot,
>
>Yun-Fang
>
>
>>>Yun-Fang
>>>----- Original Message -----
>>>From: "Yun-Fang Juan" <yunfang at yahoo-inc.com>
>>>To: <r-help at stat.math.ethz.ch>
>>>Sent: Thursday, January 29, 2004 7:03 PM
>>>Subject: [R] memory problem for R
>>>
>>>
>>>
>>>
>>>>Hi,
>>>>I am trying to use lm to fit a linear model with 600k rows and 70
>>>>attributes.
>>>>But I can't even load the data into the R environment.
>>>>The error message says the vector memory is used up.
>>>>
>>>>Does anyone have experience with large datasets in R? (I bet someone does.)
>>>>
>>>>Please advise.
>>>>
>>>>
>>>>thanks,
>>>>
>>>>
>>>>Yun-Fang
>>>>
>>--
>>Brian D. Ripley, ripley at stats.ox.ac.uk
>>Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
>>University of Oxford, Tel: +44 1865 272861 (self)
>>1 South Parks Road, +44 1865 272866 (PA)
>>Oxford OX1 3TG, UK Fax: +44 1865 272595