[R-sig-dyn-mod] Compiled code in deSolve: fail with Lorenz model

Granjon David dgranjon at ymail.com
Thu Feb 1 14:37:07 CET 2018


Dear Members,

I was curious to compare the speed of deSolve, odeintr, and RxODE (and maybe others) by integrating the famous Lorenz model 1000 times.

1) Below is the implementation using odeintr:

library(odeintr)
x <- 1:1000
## with cpp ##
Lorenz.sys = '
double test1, test2, test3;
test1 = 10.0 * (x[1] - x[0]);
test2 = 28.0 * x[0] - x[1] - x[0] * x[2];
test3 = -8.0 / 3.0 * x[2] + x[0] * x[1];
dxdt[0] = test1;
dxdt[1] = test2;
dxdt[2] = test3;
' # Lorenz.sys
code <- compile_sys("lorenz", Lorenz.sys)
time_odeintr <- unlist(
  lapply(1:length(x), FUN = function(i) {
    system.time({out <- lorenz(rep(1, 3), 100, 0.001)})[3]
  })
)
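
As a quick sanity check, the generated solver can be run once and the elapsed times summarized; this is only a sketch, and sol is just an illustrative name:

## sketch: one solution plus a summary of the 1000 elapsed times (seconds)
sol <- lorenz(rep(1, 3), 100, 0.001)  # same call as in the timing loop
head(sol)
summary(time_odeintr)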

2) Below is similar code using RxODE:

## With RxODE ##
library(RxODE)
lorenz_RxODE <- "
   d/dt(x1) = k1 * (x2 - x1);
   d/dt(x2) = k2 * x1 - x2 - x1 * x3;
   d/dt(x3) = k3 * x3 + x1 * x2;
"
mod1 <- RxODE(model = lorenz_RxODE, modName = "lorenz_RxODE", 
              wd = getwd(), do.compile = TRUE)
inits <- rep(1, 3)
theta <- c(k1 = 10, k2 = 28, k3 = -8/3)
ev1 <- eventTable()
ev1$add.sampling(seq(0, 100, by = 0.001))

time_RxODE <- unlist(
  lapply(1:length(x), FUN = function(i) {
    system.time(out <- mod1$run(theta, events = ev1, inits = inits))[3]
  })
)
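
To compare the two so far, a simple summary of the timing vectors collected above is enough (just a sketch; any summary such as the median would do):

## sketch: compare the two timing vectors collected above
median(time_odeintr)
median(time_RxODE)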

3) Finally, below is the implementation using deSolve with compiled code. The integration fails, and I have no idea how to fix it. On the other hand, deSolve without compiled code works perfectly (a minimal sketch of that plain-R version is included after the C code below).

## with Compiled deSolve in C ##
library(deSolve)
system("R CMD SHLIB lorenz_desolve.c")
dyn.load(paste("lorenz_desolve", .Platform$dynlib.ext, sep = ""))
parms <- c(k1 = 10, k2 = 28, k3 = -8/3) 
state <- c(y1 = 1, y2 = 1, y3 = 1) 
times <- c(0, 100, 0.001)
 
time_compiled_deSolve <- unlist(
  lapply(1:length(x), FUN = function(i) {
    system.time(out <- ode(y = state, times, func = "derivs", parms = parms,
                           dllname = "lorenz_desolve", initfunc = "initmod",
                           nout = 1, outnames = "Sum"))[3]
  })
)

The C code is below. It is very important to name the file "lorenz_desolve.c", since that is the name used by R CMD SHLIB and dyn.load above:

/* file lorenz_desolve.c */
#include <R.h>
static double parms[3];

#define k1 parms[0]
#define k2 parms[1]
#define k3 parms[2]

/* initializer  */
void initmod(void (* odeparms)(int *, double *))
{
  int N=3;
  odeparms(&N, parms);
}
/* Derivatives and 1 output variable */
void derivs (int *neq, double *t, double *y, double *ydot,
             double *yout, int *ip)
{
  if (ip[0] < 1) error("nout should be at least 1");
  ydot[0] = k1 * (y[1] - y[0]);
  ydot[1] = k2 * y[0] - y[1] - y[0] * y[2];
  ydot[2] = k3 * y[2] + y[0] * y[1];
  
  yout[0] = y[0] + y[1] + y[2];
}
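
For reference, the uncompiled deSolve version mentioned above (which works for me) looks roughly like this; it is only a sketch, lorenz_r and out_r are my own names, and the sampling grid matches the RxODE example:

## sketch of the plain-R (uncompiled) deSolve version
library(deSolve)
lorenz_r <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dy1 <- k1 * (y2 - y1)
    dy2 <- k2 * y1 - y2 - y1 * y3
    dy3 <- k3 * y3 + y1 * y2
    list(c(dy1, dy2, dy3))
  })
}
out_r <- ode(y = c(y1 = 1, y2 = 1, y3 = 1),
             times = seq(0, 100, by = 0.001),
             func = lorenz_r,
             parms = c(k1 = 10, k2 = 28, k3 = -8/3))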


Do you have any idea why deSolve fails here, while RxODE and odeintr manage to do the job without any special numerical settings?

Best regards

Granjon David
PhD
University of Zürich
Institute of Physiology
Winterthurerstr. 190, Y23 J 78
8057 Zürich
+41 (0) 44 635 50 56