[R] Running *slow*

thomas.chesney thomas.chesney at nottingham.ac.uk
Fri Oct 7 10:31:08 CEST 2011


Thank you Michael and Patrick for your responses. Michael - your code ran in
under 5 minutes, which I find stunning, and Patrick I have sent the Inferno
doc to the copier for printing and reading this weekend.

I now have 8 million values in my lookup table and want to replace each
value in Dat with the index of that value in the lookup table. In line with
Chapter 2 in the Inferno doc, I created a list of appropriate size first,
rather than growing it, but still couldn't figure out how to do it without
looping in R, so it still runs extremely slowly, even just to process the
first 1000 values in Dat. My original code (before I tried specifying the
size of Dat2) was:

Dat2 <- c()

for (i in 1:nrow(Dat)) {
  for (j in 1:2) {
    # Growing Dat2 with c() copies the whole vector on every iteration
    Dat2 <- c(Dat2, match(Dat[i, j], ltable))
  }
}
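(For reference, this is roughly what the pre-allocated variant I mentioned looks like; the toy Dat and ltable here are stand-ins, not my real data. It avoids the repeated copying, though it still calls match() once per element:)

```r
# Toy stand-ins for Dat and ltable (assumptions, not the real data).
Dat <- matrix(c(10, 30, 20, 40), ncol = 2)
ltable <- c(10, 20, 30, 40)

# Pre-allocate the result once, then fill by index -- this avoids
# the repeated copying that growing Dat2 with c() causes.
Dat2 <- integer(2 * nrow(Dat))
k <- 1
for (i in 1:nrow(Dat)) {
  for (j in 1:2) {
    Dat2[k] <- match(Dat[i, j], ltable)
    k <- k + 1
  }
}
```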

write(t(edgelist), "EL.txt", ncolumns=2)

Can anyone suggest a way of doing this without looping in R? Or is the
bottleneck the c() function? I am looking at apply() this morning, but
Gentleman (2009) suggests apply isn't very efficient.
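(For what it's worth, here is a minimal loop-free sketch, assuming Dat is a two-column matrix or data frame and ltable is a plain vector -- the toy data below are stand-ins. match() is vectorised, so a single call can look up every element at once:)

```r
# Toy stand-ins for Dat and ltable (assumptions, not the real data).
Dat <- matrix(c(10, 30, 20, 40), ncol = 2)
ltable <- c(10, 20, 30, 40)

# One vectorised match() call looks up every element of Dat at once;
# as.matrix() covers the case where Dat is a data frame.
# Reshaping with ncol = 2 keeps the same two-column layout as Dat.
Dat2 <- matrix(match(as.matrix(Dat), ltable), ncol = 2)

# Written row by row, as in the original write() call:
# write(t(Dat2), "EL.txt", ncolumns = 2)
```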

--
View this message in context: http://r.789695.n4.nabble.com/Running-slow-tp3878093p3881365.html
Sent from the R help mailing list archive at Nabble.com.
