[R] reading very large files
juli g. pausas
pausas at gmail.com
Sun Feb 4 15:27:34 CET 2007
Hi all,
The small modification was replacing
Write.Rows <- Chunk[Chunk.Sel - Cuts[i], ] # (2nd line from the end)
by
Write.Rows <- Chunk[Chunk.Sel - Cuts[i] ] # Chunk has one dimension only
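For readers following along: the fix reflects R's indexing syntax. A matrix (two dimensions) is indexed as x[i, ], while a character vector returned by readLines() has one dimension and takes x[i] with no comma. A minimal sketch, where Chunk, Chunk.Sel and Cuts are illustrative stand-ins for the objects in Marc's script:

```r
# Stand-ins for the objects in the chunked-reading script; with
# readLines(), each chunk is a character vector, not a matrix.
Chunk     <- c("rec11", "rec12", "rec13", "rec14")  # current chunk (one-dimensional)
Chunk.Sel <- c(12, 14)   # absolute record numbers to keep
Cuts      <- c(10)       # records consumed before this chunk
i <- 1

# Vector indexing: no trailing comma.
# Chunk[Chunk.Sel - Cuts[i], ] would fail on a vector with
# "incorrect number of dimensions".
Write.Rows <- Chunk[Chunk.Sel - Cuts[i]]
```

Here Chunk.Sel - Cuts[i] converts absolute record numbers (12, 14) into positions within the current chunk (2, 4), so Write.Rows picks out "rec12" and "rec14".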
Running times:
- For the Jim Holtman solution (reading once, using diff and skipping
from one record to the other)
[1] 49.80 0.27 50.96 NA NA
- For Marc Schwartz solution (reading in chunks of 100000)
[1] 1203.94 9.12 1279.04 NA NA
Both in R 2.4.1, under Windows:
> R.version
_
platform i386-pc-mingw32
arch i386
os mingw32
system i386, mingw32
status
major 2
minor 4.1
year 2006
month 12
day 18
svn rev 40228
language R
version.string R version 2.4.1 (2006-12-18)
>
Juli
On 03/02/07, Marc Schwartz <marc_schwartz at comcast.net> wrote:
> On Sat, 2007-02-03 at 19:06 +0100, juli g. pausas wrote:
> > Thanks so much for your help and comments.
> > The approach proposed by Jim Holtman was the simplest and fastest. The
> > approach by Marc Schwartz also worked (after a very small
> > modification).
> >
> > It is clear that a good knowledge of R saves a lot of time!! I've
> > been able to do in a few minutes a process that was only 1/4th done
> > after 25 h!
> >
> > Many thanks
> >
> > Juli
>
> Juli,
>
> Just out of curiosity, what change did you make?
>
> Also, what were the running times for the solutions?
>
> Regards,
>
> Marc
>
>
>
--
http://www.ceam.es/pausas