[Rd] more efficient small subsets from moderate vectors?
Martin Morgan
mtmorgan at fhcrc.org
Wed Nov 19 04:52:27 CET 2008
This creates a named vector of length nx, then repeatedly (m times) extracts a
single element from it by name:
lkup <- function(nx, m=10000L) {
    tbl <- seq_len(nx)                         # named vector of length nx
    names(tbl) <- as.character(tbl)
    v <- sample(names(tbl), m, replace=TRUE)   # m names to look up, one at a time
    system.time(for(k in v) tbl[k], gcFirst=TRUE)
}
There is an abrupt performance degradation just above nx = 1000:
> lkup(1000)
   user  system elapsed
  0.180   0.000   0.179
> lkup(1001)
   user  system elapsed
  2.444   0.016   2.462
This is because of the heuristic at stringSubscript.c:424, which
switches from a 'naive' nx * ns algorithm (ns is the number of
elements to be extracted; ns = 1 above) to a hash-based strategy when
nx * ns > 1000.
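In R terms the two strategies look roughly like this (only a sketch of the
idea -- the real work is in C in stringSubscript.c, and naive_lookup /
hash_lookup are names I made up):

naive_lookup <- function(tbl, keys) {
    ## scan the nx names once per key: O(nx * ns)
    nm <- names(tbl)
    idx <- vapply(keys, function(k) which(nm == k)[1], integer(1))
    tbl[idx]
}

hash_lookup <- function(tbl, keys) {
    ## C(nx): build a name -> position hash table once
    nm <- names(tbl)
    env <- new.env(hash=TRUE, size=length(tbl))
    for (i in seq_along(nm)) assign(nm[i], i, envir=env)
    ## then each of the ns lookups is ~O(1)
    idx <- vapply(keys, function(k) get(k, envir=env), integer(1))
    tbl[idx]
}

The hash table is built afresh for every subset call, which is why paying
C(nx) for a single extraction (ns = 1) hurts.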
It seems like the 'naive' algorithm takes nx * ns time, whereas the
hash-based algorithm takes C(nx) + ns time, where C(nx) is the cost of
creating the hash. Guessing that the cost of building the hash is
roughly linear in nx, the timings above suggest a new heuristic that
switches to hashing only when ns is about 15 or more.
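For what it's worth, the arithmetic behind that guess (my own
back-of-the-envelope reading of the timings, assuming naive cost ~ a * nx * ns
and hash cost ~ c * nx + b * ns per subset call):

## each tbl[k] in lkup() is a separate subset call with ns = 1, nx ~ 1000
t_naive <- 0.179 / 10000        # per call, ~ a * nx
t_hash  <- 2.462 / 10000        # per call, ~ c * nx + b
t_hash / t_naive                # ~ 13.8: one hash build costs ~14 naive scans
## hashing wins when a * nx * ns > c * nx + b * ns,
## i.e. (for large nx) when ns > ~ c / a, about 14-15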
(I don't quite follow the last sentence of the comment above
stringSubscript, so perhaps I misunderstand entirely).
Martin
> sessionInfo()
R version 2.9.0 Under development (unstable) (2008-11-15 r46953)
i686-pc-linux-gnu
locale:
LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
--
Martin Morgan
Computational Biology / Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N.
PO Box 19024 Seattle, WA 98109
Location: Arnold Building M2 B169
Phone: (206) 667-2793