[R] performance of adaptIntegrate vs. integrate
baptiste auguie
baptiste.auguie at googlemail.com
Fri Nov 11 22:59:57 CET 2011
Dear Hans,
[see inline below]
On 11 November 2011 22:44, Hans W Borchers <hwborchers at googlemail.com> wrote:
> baptiste auguie <baptiste.auguie <at> googlemail.com> writes:
>
>>
>> Dear list,
>>
>> [cross-posting from Stack Overflow where this question has remained
>> unanswered for two weeks]
>>
>> I'd like to perform a numerical integration in one dimension,
>>
>> I = int_a^b f(x) dx
>>
>> where the integrand f: x in IR -> f(x) in IR^p is vector-valued.
>> integrate() only accepts scalar integrands, so I would need to call it
>> many times (typically p = 200), which sounds suboptimal. The cubature
>> package seems well suited, as illustrated below,
>>
>> library(cubature)
>> Nmax <- 1e3
>> tolerance <- 1e-4
>> integrand <- function(x, a=0.01) c(exp(-x^2/a^2), cos(x))
>> adaptIntegrate(integrand, -1, 1, tol = tolerance, fDim = 2, maxEval = Nmax)$integral
>> [1] 0.01772454 1.68294197
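
For reference, the per-component fallback I'd like to avoid would look
something like this sketch (one scalar integrate() call per output
dimension, p of them in total):

tolerance <- 1e-4
integrand <- function(x, a=0.01) c(exp(-x^2/a^2), cos(x))
# integrate() needs a vectorised scalar function, so wrap component j
component <- function(j)
  integrate(function(x) sapply(x, function(xi) integrand(xi)[j]),
            -1, 1, rel.tol = tolerance)$value
sapply(1:2, component)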
>>
>> However, adaptIntegrate appears to perform quite poorly when compared
>> to integrate. Consider the following example (one-dimensional
>> integrand),
>>
>> library(cubature)
>> integrand <- function(x, a=0.01) exp(-x^2/a^2)*cos(x)
>> Nmax <- 1e3
>> tolerance <- 1e-4
>>
>> # using cubature's adaptIntegrate
>> time1 <- system.time(replicate(1e3, {
>>   a <<- adaptIntegrate(integrand, -1, 1, tol = tolerance, fDim = 1, maxEval = Nmax)
>> }))
>>
>> # using integrate
>> time2 <- system.time(replicate(1e3, {
>>   b <<- integrate(integrand, -1, 1, rel.tol = tolerance, subdivisions = Nmax)
>> }))
>>
>> time1
>>    user  system elapsed
>>   2.398   0.004   2.403
>> time2
>>    user  system elapsed
>>   0.204   0.004   0.208
>>
>> a$integral
>> [1] 0.0177241
>> b$value
>> [1] 0.0177241
>>
>> a$functionEvaluations
>> [1] 345
>> b$subdivisions
>> [1] 10
>>
>> Somehow, adaptIntegrate uses many more function evaluations for a
>> similar precision. Both methods apparently use Gauss-Kronrod
>> quadrature, though ?integrate mentions acceleration by "Wynn's Epsilon
>> algorithm". Could that explain the large timing difference?
>
>
> Cubature is astonishingly slow here, though it has been robust and accurate
> in most cases where I have used it. You may want to write to the maintainer
> for more information.
>
Indeed, I have been a happy user of cubature too, for
higher-dimensional problems, and I've used other codes from Steve
Johnson before, all of high quality. I might send an email to both him
and the package maintainer to inquire about this poor performance.
> Even a pure R implementation of adaptive Gauss-Kronrod as in pracma::quadgk
> is faster than cubature:
>
> library(pracma)
> time3 <- system.time(replicate(1e3,
>   { d <<- quadgk(integrand, -1, 1) }  # 0.0177240954011303
> ))
> #    user  system elapsed
> #   0.899   0.002   0.897
>
> If your functions are all of a similar character and you are willing to
> dispense with the adaptive part, then compute Gaussian quadrature nodes xi
> and weights wi once for the fixed interval and compute sum(wi * fj(xi)) for
> each function fj. This avoids recomputing nodes and weights for every call
> and every function.
I've used such a technique before, in C++ code interfaced with Rcpp.
However, I do like the adaptive part, and implementing it seems less
trivial. I don't really want to reinvent the wheel if I can help it
find a faster track.
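
For what it's worth, a bare-bones adaptive scheme for vector-valued
integrands can be sketched in pure R. This is adaptive Simpson rather
than Gauss-Kronrod, with the usual 15*tol acceptance test applied to
the worst component -- a sketch only, not production code:

# recursive adaptive Simpson for a vector-valued integrand f
adaptsimp <- function(f, a, b, tol = 1e-6, depth = 25) {
  simp <- function(fa, fm, fb, h) h/6 * (fa + 4*fm + fb)
  rec <- function(a, b, fa, fm, fb, whole, tol, depth) {
    m <- (a + b)/2
    flm <- f((a + m)/2); frm <- f((m + b)/2)
    left  <- simp(fa, flm, fm, m - a)
    right <- simp(fm, frm, fb, b - m)
    err <- (left + right) - whole
    if (depth <= 0 || max(abs(err)) < 15 * tol)
      return(left + right + err/15)  # Richardson correction
    rec(a, m, fa, flm, fm, left,  tol/2, depth - 1) +
      rec(m, b, fm, frm, fb, right, tol/2, depth - 1)
  }
  m <- (a + b)/2
  fa <- f(a); fm <- f(m); fb <- f(b)
  rec(a, b, fa, fm, fb, simp(fa, fm, fb, b - a), tol, depth)
}

adaptsimp(function(x, a=0.01) c(exp(-x^2/a^2), cos(x)), -1, 1)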
>
> Package 'gaussquad' provides such nodes and weights. Or copy the relevant
> calculations from quadgk (the usual (7,15)-rule).
I've used the nodes and weights from statmod::gauss.quad.
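
For instance, something along these lines (a sketch assuming the fixed
interval [-1, 1] of the example above; for a general [a, b] the nodes
and weights would need the usual rescaling by (b - a)/2):

library(statmod)

# compute the rule once; gauss.quad() returns nodes and weights on [-1, 1]
gq <- gauss.quad(64, kind = "legendre")

integrand <- function(x, a=0.01) c(exp(-x^2/a^2), cos(x))

# evaluate the vector-valued integrand at all nodes (a p x n matrix),
# then form sum_i wi * fj(xi) for every component j in one matrix product
vals <- sapply(gq$nodes, integrand)
drop(vals %*% gq$weights)

# caveat: at fixed order the sharply peaked first component (a = 0.01)
# needs many more nodes, or a split interval, to be resolved accurately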
Thanks for the tips.
Best regards,
baptiste
>
> Regards, Hans Werner
>
>
>> I'm open to suggestions of alternative ways of dealing with
>> vector-valued integrands.
>>
>> Thanks.
>>
>> baptiste