
this makes it less stable and more likely to diverge than most implementations

They will thus execute much faster than pure Python code

to ensure high flexibility at optimal performance

In addition to the basic solver algorithms listed above, odeint also provides functionality that helps to reduce the implementation effort for ODE simulations:

integrate routines with observers and different observation strategies (a usage sketch follows this list).

iterator interface to the solvers.

configurable data types.

configurable computational backends.

automatic memory management.
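As an illustration of the first point, the following sketch integrates an assumed example system (a damped harmonic oscillator with an arbitrarily chosen damping constant, step size, and time interval) using odeint's integrate_const routine, the classical fourth-order Runge-Kutta stepper, and an observer that records the state after every step:

#include <iostream>
#include <vector>
#include <boost/numeric/odeint.hpp>

namespace odeint = boost::numeric::odeint;

typedef std::vector< double > state_type;

// Damped harmonic oscillator (assumed example system): x'' = -x - gamma*x'
void damped_oscillator( const state_type &x , state_type &dxdt , double /*t*/ )
{
    const double gamma = 0.15;   // assumed damping constant
    dxdt[0] = x[1];
    dxdt[1] = -x[0] - gamma * x[1];
}

// Observer: invoked after every step with the current state and time
void write_state( const state_type &x , double t )
{
    std::cout << t << '\t' << x[0] << '\t' << x[1] << '\n';
}

int main()
{
    state_type x( 2 );
    x[0] = 1.0; x[1] = 0.0;                      // initial condition
    odeint::runge_kutta4< state_type > stepper;  // classical fourth-order Runge-Kutta

    // integrate_const drives the stepper with a fixed step size (assumed dt = 0.01)
    // and calls the observer after each step
    odeint::integrate_const( stepper , damped_oscillator , x ,
                             0.0 , 10.0 , 0.01 , write_state );
    return 0;
}

The other integrate routines, such as integrate_adaptive and integrate_times, follow the same calling convention but differ in when the observer is invoked, which corresponds to the different observation strategies mentioned above.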

Most algorithms in odeint have configurable data types and configurable computational backends, which gives the library its high flexibility. For example, it is possible to work directly with ODEs defined over complex numbers, or to use arbitrary-precision types if the usual double arithmetic is not sufficient. It is also possible to utilize the SIMD capabilities of modern CPUs, e.g. with the help of the NT2 SIMD library within odeint. Furthermore, one can employ computations with physical units by using the Boost.Units library in combination with odeint. Another important point is odeint's interoperability with linear algebra packages: it readily supports Eigen, Boost.uBLAS, MTL4, and Blaze, and other frameworks can be adapted as well.
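As a sketch of the configurable data types, consider an assumed toy equation dpsi/dt = -i*omega*psi with an arbitrarily chosen omega: a single std::complex<double> can serve directly as the state type. Here the vector_space_algebra is selected explicitly; recent odeint versions dispatch std::complex to it automatically.

#include <complex>
#include <iostream>
#include <boost/numeric/odeint.hpp>

namespace odeint = boost::numeric::odeint;

typedef std::complex< double > state_type;

int main()
{
    const std::complex< double > I( 0.0 , 1.0 );
    const double omega = 2.0;          // assumed frequency

    state_type psi( 1.0 , 0.0 );       // initial condition psi(0) = 1

    // A single complex number is the state; the vector_space_algebra lets the
    // stepper operate on it directly.
    odeint::runge_kutta4< state_type , double , state_type , double ,
                          odeint::vector_space_algebra > stepper;

    // Right-hand side dpsi/dt = -i * omega * psi, given as a lambda
    odeint::integrate_const( stepper ,
        [=]( const state_type &p , state_type &dpdt , double /*t*/ )
        {
            dpdt = -I * omega * p;
        } ,
        psi , 0.0 , 1.0 , 0.001 );

    std::cout << "psi(1) = " << psi << '\n';   // should be close to exp(-2i)
    return 0;
}

In the same spirit, arbitrary-precision scalars or the vector types of the linear algebra packages mentioned above can be used as state types.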
Despite its modularized design and high flexibility, odeint still maintains competitive performance. Figure 2 shows a comparison of odeint with plain C and Fortran90 code, where performance is measured as the runtime required to perform 200,000 Runge-Kutta4 steps for the Lorenz system. On Intel hardware (Core i5 and Xeon E5), all implementations show virtually equivalent performance, while on the Opteron the Intel compiler produces the fastest code. Overall, this test shows that odeint provides competitive performance and that, with modern compilers, the abstraction and modularization induce no run-time cost.
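To make the setting of this benchmark concrete, the following sketch performs 200,000 fixed-size steps of the Lorenz system with odeint's runge_kutta4 stepper; the initial condition and step size are assumptions and not necessarily those used for Figure 2.

#include <array>
#include <iostream>
#include <boost/numeric/odeint.hpp>

namespace odeint = boost::numeric::odeint;

typedef std::array< double , 3 > state_type;

// Lorenz system with the standard parameters sigma = 10, R = 28, b = 8/3
void lorenz( const state_type &x , state_type &dxdt , double /*t*/ )
{
    const double sigma = 10.0 , R = 28.0 , b = 8.0 / 3.0;
    dxdt[0] = sigma * ( x[1] - x[0] );
    dxdt[1] = R * x[0] - x[1] - x[0] * x[2];
    dxdt[2] = x[0] * x[1] - b * x[2];
}

int main()
{
    state_type x = {{ 10.0 , 10.0 , 10.0 }};     // assumed initial condition
    odeint::runge_kutta4< state_type > stepper;

    const double dt = 0.01;                      // assumed step size
    double t = 0.0;
    for( int i = 0 ; i < 200000 ; ++i )          // 200,000 fixed-size steps
    {
        stepper.do_step( lorenz , x , t , dt );
        t += dt;
    }
    std::cout << x[0] << ' ' << x[1] << ' ' << x[2] << '\n';
    return 0;
}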
