[Rd] speeding up perception
Robert Stojnic
rainmansr at gmail.com
Sun Jul 3 14:13:03 CEST 2011
Hi Simon,
On 03/07/11 05:30, Simon Urbanek wrote:
> This is just a quick, incomplete response, but the main misconception is really the use of data.frames. If you don't use the elaborate mechanics of data frames that involve the management of row names, then they are definitely the wrong tool to use, because most of the overhead is exactly to manage the row names, and you pay a substantial penalty for that. Just drop that one feature and you get timings similar to a matrix:
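(For concreteness, here is my reading of that suggestion as a minimal
sketch; the function and names below are mine, not code from your
message: strip the data.frame class, assign into the underlying list,
and restore the class at the end.)

example_unclass <- function(m) {
    l <- unclass(m)            # plain list of columns; no data.frame dispatch
    for (i in 1:1000)
        l[[1]][i] <- 1
    class(l) <- "data.frame"   # restore the class when done
    l
}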
I tried to find some documentation on why there needs to be extra row
names handling when one is just assigning values into a column of a
data frame, but couldn't find any. For a while I stared at the code of
`[<-.data.frame` but couldn't figure it out myself. Can you please
summarise what exactly is going on when one does m[1,1] <- 1, where m
is a data frame?
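(To see the copying directly, a small probe of my own; tracemem()
reports duplications in R builds with memory profiling enabled, which
the CRAN binaries are:)

m <- as.data.frame(matrix(0, ncol = 2, nrow = 3))
tracemem(m)   # mark m so every duplication is reported
m[1, 1] <- 1  # each "tracemem[...]" line printed is one copy of m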
I found that the performance differs significantly with the number of
columns. For instance:
# reassign first column to 1
example <- function(m) {
    for (i in 1:1000)
        m[i, 1] <- 1
}
m <- as.data.frame(matrix(0, ncol=2, nrow=1000))
system.time( example(m) )
   user  system elapsed
  0.164   0.000   0.163
m <- as.data.frame(matrix(0, ncol=1000, nrow=1000))
system.time( example(m) )
   user  system elapsed
 34.634   0.004  34.765
When m is a matrix, both runs take well under 0.1s.
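For reference, the matrix version of the same loop (same example() as
above):

m <- matrix(0, ncol = 1000, nrow = 1000)
system.time( example(m) )   # completes well under 0.1s either way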
Increasing the number of rows (but not the number of iterations) also
increases the time somewhat, but not as drastically as increasing the
number of columns does. Using m[[y]][x] in this case doesn't help
either:
example2 <- function(m) {
    for (i in 1:1000)
        m[[1]][i] <- 1
}
m <- as.data.frame(matrix(0, ncol=1000, nrow=1000))
system.time( example2(m) )
   user  system elapsed
 36.007   0.148  36.233
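(What does help, for what it's worth, is hoisting the column out of
the data frame so the per-iteration work touches only a plain vector;
a sketch of my own, with the single write-back at the end amortising
the data-frame method dispatch:)

example3 <- function(m) {
    col1 <- m[[1]]      # extract the column once
    for (i in 1:1000)
        col1[i] <- 1    # plain vector assignment, no method dispatch
    m[[1]] <- col1      # write the column back in a single call
    m
}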
r.