[/============================================================================
 Boost.odeint

 Copyright 2011-2013 Karsten Ahnert
 Copyright 2012 Mario Mulansky
 Copyright 2012 Sylwester Arabas

 Use, modification and distribution is subject to the Boost Software License,
 Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
 http://www.boost.org/LICENSE_1_0.txt)
=============================================================================/]

[section Steppers]

[import ../examples/stepper_details.cpp]

Solving an ordinary differential equation numerically is usually done iteratively; that is, a given state of the ordinary differential equation is iterated forward: ['x(t) -> x(t+dt) -> x(t+2dt)]. The steppers in odeint perform one single step. The most general stepper type is described by the __stepper concept. The stepper concepts of odeint are described in detail in section __concepts; here we briefly present the mathematical and numerical details of the steppers. The __stepper has two versions of the `do_step` method, one with an in-place transform of the current state and one with an out-of-place transform:

`do_step( sys , inout , t , dt )`

`do_step( sys , in , t , out , dt )`

The first parameter is always the system function - a function describing the
ODE. In the first version the second parameter is the state, which is updated
in place, and the third and fourth parameters are the time and the step
size (the time step). After a call to `do_step` the state `inout` is updated
and now represents an approximate solution of the ODE at time ['t+dt]. In the
second version the second argument is the state of the ODE at time ['t], the
third argument is ['t], the fourth argument is the approximate solution at time
['t+dt], which is filled by `do_step`, and the fifth argument is the time step.
Note that these functions do not change the time `t`.
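
For example, a single Runge-Kutta step can be performed as follows (a minimal sketch; it assumes the Lorenz system function `lorenz` and a `state_type` like `boost::array< double , 3 >` from the tutorial, and that the odeint namespace is used):

``
runge_kutta4< state_type > rk4;
state_type x = {{ 10.0 , 1.0 , 1.0 }};         // state at time t = 0

rk4.do_step( lorenz , x , 0.0 , 0.01 );        // in-place: x now holds the solution at t+dt

state_type out;
rk4.do_step( lorenz , x , 0.01 , out , 0.01 ); // out-of-place: x is unchanged, out holds x(t+2*dt)
``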

[* System functions]

Up to now, we have said nothing about the system function. This function
depends on the stepper. For the explicit Runge-Kutta steppers this function
can be a simple callable object, e.g. a plain (global) C function or a
functor. The parameter syntax is `sys( x , dxdt , t )` and it is assumed that
it calculates ['dx/dt = f(x,t)].
The function structure in most cases looks like:

[system_function_structure]

Other types of system functions might represent Hamiltonian systems or systems which also compute the Jacobian needed in implicit steppers. For information on which stepper uses which system function, see the stepper table below. odeint might introduce new system types in the near future. Since the system function is strongly related to the stepper type, the introduction of a new stepper might result in a new type of system function.

[section Explicit steppers]

A first specialization are the explicit steppers. Explicit means that the new
state of the ODE can be computed explicitly from the current state, without
solving implicit equations. Such steppers have in common that they evaluate the system at time ['t], so the result of ['f(x,t)] can be passed to the stepper. In odeint, the explicit steppers have two additional methods:

`do_step( sys , inout , dxdtin , t , dt )`

`do_step( sys , in , dxdtin , t , out , dt )`

Here, the additional parameter is the value of the function ['f] at state ['x] and time ['t]. An example is the Runge-Kutta stepper of fourth order:

[explicit_stepper_detail_example]

In fact, you do not need to call these two methods. You can always use the
simpler `do_step( sys , inout , t , dt )`, but sometimes the derivative of the
state is needed externally, for example to do some additional computations or to perform some statistical analysis.

A special class of the explicit steppers are the FSAL (first-same-as-last)
steppers, where the last evaluation of the system function is also the first
evaluation of the following step. For such steppers the `do_step` methods are
slightly different:

`do_step( sys , inout , dxdtinout , t , dt )`

`do_step( sys , in , dxdtin , out , dxdtout , t , dt )`

These methods take the derivative at time ['t] and also store
the derivative at time ['t+dt]. When calling them consecutively while iterating
along the solution, one saves one function call per step by passing the result
for `dxdt` into the next call.
However, when using FSAL steppers without supplying derivatives:

`do_step( sys , inout , t , dt )`

the stepper internally satisfies the FSAL property, which means it remembers
the last `dxdt` and uses it for the next step.
An example of an FSAL stepper is the Runge-Kutta-Dopri5 stepper. The FSAL trick is sometimes also referred to as the Fehlberg trick. An example of how the FSAL steppers can be used is

[fsal_stepper_detail_example]

[caution The FSAL steppers save the derivative at time ['t+dt] internally if
they are called via `do_step( sys , in , t , out , dt )`. The first call of
`do_step` will initialize `dxdt`, and for all following calls it is assumed
that the same system and the same state are used. If you use the FSAL stepper
within the integrate functions, this is taken care of automatically. See the __using_steppers section for more details, or look into the table below to see which steppers have an internal state.]


[endsect]

[section Symplectic solvers]

As mentioned above, symplectic solvers are used for Hamiltonian systems. Symplectic solvers conserve the phase space volume exactly, and if the Hamiltonian system is energy-conserving they also conserve the energy approximately. A special class of symplectic systems are separable systems, which can be written in the form ['dq/dt = f1(p)], ['dp/dt = f2(q)], where ['(q,p)] is the state of the system. The space of ['(q,p)] is sometimes referred to as the phase space, and ['q] and ['p] are said to be the phase space variables. Symplectic systems in this special form occur widely in nature. For example, classical mechanics as written down by Newton, Lagrange and Hamilton can be formulated in this framework. The separability of the system depends on the specific choice of coordinates.

Symplectic systems can be solved by odeint by means of the `symplectic_euler` stepper and a symplectic Runge-Kutta-Nystrom method of fourth order. These steppers assume that the system is autonomous, hence the time does not occur explicitly. Furthermore, they fulfill in principle the default Stepper concept, but they expect the system to be a pair of callable objects. The first entry of this pair calculates ['f1(p)] while the second calculates ['f2(q)]. The syntax is `sys.first(p,dqdt)` and `sys.second(q,dpdt)`, where the first and second part can again be simple C functions or functors. An example is the harmonic oscillator:

[symplectic_stepper_detail_system_function]

The state of such an ODE now also consists of two parts, the part for ['q] (also called the coordinates) and the part for ['p] (the momenta). The full example for the harmonic oscillator is now:

[symplectic_stepper_detail_example]

If you want to represent the system with one class, you can easily bind two public methods:

[symplectic_stepper_detail_system_class]

[symplectic_stepper_detail_system_class_example]

Many Hamiltonian systems can be written as ['dq/dt=p], ['dp/dt=f(q)], which is computationally much easier than the full separable system. Very often it is also possible to transform the original equations of motion to bring the system into this simplified form. Such a system can be used in the symplectic solvers by simply passing ['f(q)] to the `do_step` method; again, ['f(q)] is represented by a simple C function or a functor. Here, the above example of the harmonic oscillator can be written as

[simplified_symplectic_stepper_example]

In this example the function `harm_osc_f1` is exactly the same function as in the above examples.

Note that the state of the ODE need not be constructed explicitly via `pair< vector_type , vector_type > x`. One can also use a combination of `make_pair` and `ref`. Furthermore, a convenience version of `do_step` exists which takes ['q] and ['p] without combining them into a pair:

[symplectic_stepper_detail_ref_usage]

[endsect]

[section Implicit solvers]

[caution This section is not up-to-date.]

For some kinds of systems the stability properties of the classical Runge-Kutta methods are not sufficient, especially if the system is said to be stiff. A stiff system possesses two or more time scales of very different orders. Solvers for stiff systems are usually implicit, meaning that they solve equations like ['x(t+dt) = x(t) + dt * f(x(t+dt))]. This particular scheme is the implicit Euler method. Implicit methods usually solve the system of equations by a root-finding algorithm like the Newton method and therefore need to know the Jacobian of the system, ['J[subl ij] = df[subl i] / dx[subl j]].
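
For example, for the linear ODE ['dx/dt = a x] the implicit Euler equation reads ['x(t+dt) = x(t) + dt a x(t+dt)] and can still be solved in closed form, giving ['x(t+dt) = x(t) / ( 1 - a dt )]. For a nonlinear ['f] such a closed-form solution is usually not available, which is why the root-finding algorithm is needed.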

For implicit solvers the system is again a pair, where the first component computes ['f(x,t)] and the second the Jacobian. The syntax is `sys.first( x , dxdt , t )` and `sys.second( x , J , t )`. For the implicit solvers the `state_type` is `ublas::vector` and the Jacobian is represented by `ublas::matrix`.

[important Implicit solvers only work with `ublas::vector` as state type. At
the moment, no other state types are supported.]
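
As an illustration, a stiff linear system could be set up for the `implicit_euler` stepper along the following lines (a minimal sketch; the system and its Jacobian are toy examples chosen for demonstration):

``
#include <boost/numeric/odeint/stepper/implicit_euler.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <utility>

using namespace boost::numeric;

typedef ublas::vector< double > vector_type;
typedef ublas::matrix< double > matrix_type;

// r.h.s. of the ODE: dx/dt = f(x,t)
void rhs( const vector_type &x , vector_type &dxdt , double t )
{
    dxdt[ 0 ] = -101.0 * x[ 0 ] - 100.0 * x[ 1 ];
    dxdt[ 1 ] = x[ 0 ];
}

// Jacobian J_ij = df_i / dx_j
void jacobi( const vector_type &x , matrix_type &J , double t )
{
    J( 0 , 0 ) = -101.0; J( 0 , 1 ) = -100.0;
    J( 1 , 0 ) = 1.0;    J( 1 , 1 ) = 0.0;
}

int main()
{
    odeint::implicit_euler< double > stepper;
    vector_type x( 2 );
    x[ 0 ] = 1.0; x[ 1 ] = 0.0;
    // the system is the pair ( r.h.s. , Jacobian )
    stepper.do_step( std::make_pair( &rhs , &jacobi ) , x , 0.0 , 0.01 );
}
``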

[endsect]

[section Multistep methods]

Another large class of solvers are the multi-step methods. They save a small part of the history of the solution and compute the next step with the help of this history. Since multi-step methods know a part of their history, they do not need to evaluate the system function very often; usually it is evaluated only once per step. This makes multi-step methods preferable if a call of the system function is expensive. Examples are ODEs defined on networks, where the computation of the interaction is usually very expensive (and might be of order O(N^2)).

Multi-step methods differ from the normal steppers. They save a part of their history, and this part has to be explicitly calculated and initialized. In the following example an Adams-Bashforth stepper with a history of 5 steps is instantiated and initialized:

[multistep_detail_example]

The initialization uses a fourth-order Runge-Kutta stepper, and after the call
of `initialize` the state of `inout` has changed to the current state, so
that it can be used immediately by passing it to following calls of `do_step`. You can also use your own steppers to initialize the internal state of the Adams-Bashforth stepper:

[multistep_detail_own_stepper_initialization]

Many multi-step methods are also explicit steppers, hence the parameters of the `do_step` method do not differ from those of the explicit steppers.

[caution The multi-step methods have some internal variables which depend on
the explicit solution. Hence, after any external changes to your state (e.g. its size) or
your system, the `initialize` function has to be called again to adjust the internal
state of the stepper. If you use the integrate functions this is
taken into account automatically. See the __using_steppers section for more details.]


[endsect]

[section Controlled steppers]

Many of the steppers introduced above support adaptive step-size control. Adaptive step size integration works in principle as follows:

# The error of one step is calculated. This is usually done by performing two steps with different orders. The difference between these two steps is then used as a measure for the error. Steppers which can calculate the error are __error_stepper and form their own class with a separate concept.
# This error is compared against some predefined error tolerances. If the tolerances are violated, the step is rejected and the step size is decreased. Otherwise the step is accepted and possibly the step size is increased.

Controlled steppers have their own concept in odeint - the __controlled_stepper concept. They are usually constructed from the underlying error steppers. An example is the controller for the explicit Runge-Kutta steppers. The Runge-Kutta stepper enters the controller as a template argument. Additionally, one can pass the Runge-Kutta stepper to the constructor, but this step is not necessary; the stepper is default-constructed if possible.
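
A minimal construction sketch, assuming the error stepper `runge_kutta_cash_karp54` and a suitable `state_type`:

``
typedef runge_kutta_cash_karp54< state_type > error_stepper_type;
typedef controlled_runge_kutta< error_stepper_type > controlled_stepper_type;

controlled_stepper_type controlled_stepper; // wraps a default-constructed error stepper
``

Custom error tolerances are most conveniently set via the generation functions shown at the end of this section.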

Different step size controlling mechanisms exist. They all have in common that
they somehow compare the error against predefined error tolerances and that
they might reject or accept a step. If a step is rejected, the step size is
usually decreased and the step is made again with the reduced step size. This
procedure is repeated until the step is accepted. This algorithm is
implemented in the integration functions.

A classical way to decide whether a step is rejected or accepted is to calculate

['val = || | err[subl i] | / ( __epsilon[subl abs] + __epsilon[subl rel] * ( a[subl x] | x[subl i] | + a[subl dxdt] dt | dxdt[subl i] | ) ) ||]

['__epsilon[subl abs]] and ['__epsilon[subl rel]] are the absolute and the relative error tolerances, and ['|| x ||] is a norm, typically ['||x||=(__Sigma[subl i] x[subl i][super 2])[super 1/2]] or the maximum norm. The step is rejected if ['val] is greater than 1, otherwise it is accepted. For details of the norms and error tolerances used, see the table below.

For the `controlled_runge_kutta` stepper the new step size is then calculated via

['val > 1 : dt[subl new] = dt[subl current] max( 0.9 pow( val , -1 / ( O[subl E] - 1 ) ) , 0.2 )]

['val < 0.5 : dt[subl new] = dt[subl current] min( 0.9 pow( val , -1 / O[subl S] ) , 5 )]

['else : dt[subl new] = dt[subl current]]

Here, ['O[subl S]] and ['O[subl E]] are the orders of the stepper and the error stepper. These formulas also contain some safety factors which prevent the step size from being reduced or increased too much. For details of the implementations of the controlled steppers in odeint, see the table below.
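
Transcribed into code, this update rule reads as follows (a minimal sketch; `val`, the current step size, and the two orders are assumed to be given):

``
#include <algorithm>
#include <cmath>

double new_step_size( double val , double dt , int order_stepper , int order_error )
{
    if( val > 1.0 )   // step rejected: decrease dt, at most by a factor of 5
        return dt * std::max( 0.9 * std::pow( val , -1.0 / ( order_error - 1 ) ) , 0.2 );
    if( val < 0.5 )   // error very small: increase dt, at most by a factor of 5
        return dt * std::min( 0.9 * std::pow( val , -1.0 / order_stepper ) , 5.0 );
    return dt;        // otherwise keep the current step size
}
``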

[include controlled_stepper_table.qbk]

To ease the generation of controlled steppers, generation functions exist which take the absolute and relative error tolerances and a predefined error stepper, and construct from this knowledge an appropriate controlled stepper. The generation functions are explained in detail in __generation_functions.
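
For instance, a controlled stepper for the Runge-Kutta-Cash-Karp method can be created like this (a sketch using the `result_of` protocol described in __generation_functions; the tolerance values are arbitrary):

``
typedef runge_kutta_cash_karp54< state_type > error_stepper_type;

// absolute error tolerance 1.0e-6 , relative error tolerance 1.0e-3
boost::numeric::odeint::result_of::make_controlled< error_stepper_type >::type
    controlled_stepper = make_controlled( 1.0e-6 , 1.0e-3 , error_stepper_type() );
``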

[endsect]

[section Dense output steppers]

A fourth class of steppers exists: the so-called dense-output steppers. Dense-output steppers might take larger steps and interpolate the solution between two consecutive points. The interpolated points usually have the same order as the order of the stepper. Dense-output steppers are often composite steppers which take the underlying method as a template parameter. An example is the `dense_output_runge_kutta` stepper which takes a Runge-Kutta stepper with dense-output facilities as argument. Not all Runge-Kutta steppers provide dense-output calculation; at the moment only the Dormand-Prince 5 stepper provides dense output. An example is

[dense_output_detail_example]

Dense-output steppers have their own concept. The main difference to the usual
steppers is that they manage the state and time internally. If you call
`do_step`, only the ODE is passed as argument. Furthermore, `do_step` returns
the last time interval ['t] to ['t+dt], hence you can interpolate the solution between these two time points. Another difference is that they must be initialized with `initialize`; otherwise the internal state of the stepper is default-constructed, which might produce strange errors or bugs.
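
For example, interpolating inside the last interval could look like this (a minimal sketch; it assumes a dense-output stepper `stepper` that has been initialized as described above):

``
// perform one step; do_step returns the time interval that was covered
std::pair< double , double > interval = stepper.do_step( sys );

// interpolate the solution at the midpoint of this interval
state_type x_mid;
stepper.calc_state( 0.5 * ( interval.first + interval.second ) , x_mid );
``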

The construction of the dense-output stepper looks a little bit nasty, since in the case of the `dense_output_runge_kutta` stepper a controlled stepper and an error stepper have to be nested. To simplify the generation of the dense-output stepper, generation functions exist:

[dense_output_detail_generation1]

This statement is also lengthy; it demonstrates how `make_dense_output` can be used with the `result_of` protocol. The parameters to `make_dense_output` are the absolute error tolerance, the relative error tolerance and the stepper. It is explicitly assumed that the underlying stepper is a controlled stepper and that it has an absolute and a relative error tolerance. For details about the generation functions see __generation_functions. The generation functions have been designed for easy use with the integrate functions:

[dense_output_detail_generation2]

[endsect]

[section Using steppers]

This section contains some general information about the usage of the steppers in odeint.

[* Steppers are copied by value]

The steppers in odeint are always copied by value. They are copied for the creation of the controlled steppers or the dense-output steppers as well as in the integrate functions.
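
If this copying is unwanted, for example because the stepper has an internal state which you want to keep, the stepper can be passed by reference with the help of `boost::ref` (a usage sketch; `stepper`, `sys` and `x` are assumed to be defined):

``
// the integrate routine now operates on the original stepper, not on a copy
integrate_const( boost::ref( stepper ) , sys , x , t_start , t_end , dt );
``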

[* Steppers might have an internal state]

[caution Some of the features described in this section are not yet implemented]

Some steppers need to store some information about the state of the ODE between two steps. Examples are the multi-step methods, which store a part of the solution during the evolution of the ODE, or the FSAL steppers, which store the last derivative at time ['t+dt] to be used in the next step. In both cases the steppers expect that consecutive calls of `do_step` belong to the same solution and the same ODE. In this case it is absolutely necessary that you call `do_step` with the same system function and the same state; see also the examples for the FSAL steppers above.

Steppers with an internal state support two additional methods: `reset`, which resets the state, and `initialize`, which initializes the internal state. The parameters of `initialize` depend on the specific stepper. For example, the Adams-Bashforth-Moulton stepper provides two initialize methods: `initialize( system , inout , t , dt )`, which initializes the internal state with the help of the Runge-Kutta 4 stepper, and `initialize( stepper , system , inout , t , dt )`, which initializes with the help of `stepper`. For the FSAL steppers, `initialize` is `initialize( sys , in , t )`, which simply calculates the r.h.s. of the ODE and assigns its value to the internal derivative.
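
For an FSAL stepper like `runge_kutta_dopri5` this could look as follows (a short sketch; `sys`, `x`, `t` and `dt` are assumed to be defined as before):

``
runge_kutta_dopri5< state_type > stepper;

stepper.initialize( sys , x , t );   // compute f(x,t) and store it as the internal derivative
stepper.do_step( sys , x , t , dt ); // reuses the stored derivative (FSAL property)

stepper.reset(); // discard the internal derivative, e.g. after x was changed externally
``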

All these steppers have in common that they initially fill their internal state by themselves. Hence you are not required to call `initialize`. See how this works for the Adams-Bashforth-Moulton stepper: in the example we instantiate a fourth-order Adams-Bashforth-Moulton stepper, meaning that it will store 4 internal derivatives of the solution at times `(t-dt,t-2*dt,t-3*dt,t-4*dt)`.

``
adams_bashforth_moulton< 4 , state_type > stepper;

// make one step with the classical Runge-Kutta stepper and initialize the first internal state
stepper.do_step( sys , x , t , dt );
// the internal array is now [x(t-dt)]

stepper.do_step( sys , x , t , dt ); // initialize the second internal state
// the internal array is now [x(t-dt), x(t-2*dt)]

stepper.do_step( sys , x , t , dt ); // initialize the third internal state
// the internal array is now [x(t-dt), x(t-2*dt), x(t-3*dt)]

stepper.do_step( sys , x , t , dt ); // initialize the fourth internal state
// the internal array is now [x(t-dt), x(t-2*dt), x(t-3*dt), x(t-4*dt)]

// make one step with Adams-Bashforth-Moulton; the internal array of states is now rotated
stepper.do_step( sys , x , t , dt );
``

In the stepper table at the bottom of this page one can see which steppers have an internal state and hence provide the `reset` and `initialize` methods.


[* Steppers might be resizable]

Nearly all steppers in odeint need to store some intermediate results of the
type `state_type` or `deriv_type`. To do so, odeint needs some memory management
for the internal temporaries. As this memory management is typically related
to adjusting the size of vector-like types, it is called resizing in
odeint. Most steppers in odeint therefore provide an additional template parameter
which controls the size adjustment of the internal variables - the resizer. In
detail, odeint provides three policy classes (resizers): `always_resizer`,
`initially_resizer`, and `never_resizer`. Furthermore, all steppers have a
method `adjust_size` which takes a parameter representing a state type and
manually adjusts the size of the internal variables to match the size of
the given instance. Before performing the actual resizing, odeint always checks
whether the sizes of the state and the internal variables differ, and resizes only if
they do.
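
A usage sketch, assuming a dynamically sized `state_type` such as `std::vector< double >`:

``
runge_kutta4< state_type > stepper;

state_type x( 100 );      // system of size 100
// ... integrate ...

x.resize( 200 );          // the system grows
stepper.adjust_size( x ); // resize the internal temporaries accordingly
``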

[note You only have to worry about memory allocation when using dynamically
sized vector types. If your state type is statically sized, like `boost::array`,
no memory allocation is required whatsoever.]

By default the resizing parameter is `initially_resizer`, meaning that the
first call of `do_step` performs the resizing, hence the memory allocation.
If you have changed the size of your system and your state, you have to
call `adjust_size` by hand in this case. The second resizer is the
`always_resizer`, which tries to resize the internal variables at every call of
`do_step`. Typical use cases for this kind of resizer are self-expanding
lattices as shown in the tutorial (__resizing_lattice_example) or partial differential equations with an
adaptive grid. Here, no calls of `adjust_size` are required; the steppers manage
everything themselves. The third class of resizer is the `never_resizer`, which
means that the internal variables are never adjusted automatically and always
have to be adjusted by hand.

There is a second mechanism which influences the resizing and which controls whether a state type is resizable at all - the meta-function `is_resizeable`. This meta-function returns a static Boolean value indicating whether a given type is resizable. For example, it returns `true` for `std::vector< T >` but `false` for `boost::array< T >`. By default and for unknown types `is_resizeable` returns `false`, so if you have your own type you need to specialize this meta-function. For more details on the resizing mechanism see the section __adapt_state_types.
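
Such a specialization could look as follows (a sketch; `my_vector` is a hypothetical user-defined, dynamically sized container):

``
namespace boost { namespace numeric { namespace odeint {

template<>
struct is_resizeable< my_vector >
{
    typedef boost::true_type type;
    static const bool value = type::value;
};

} } }
``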



[* Which steppers should be used in which situation]

odeint provides quite a large number of different steppers, such that the user is left with the question of which stepper fits their needs. Our personal recommendations are:

* `runge_kutta_dopri5` is maybe the best default stepper. It has step size control as well as dense-output functionality. Simply create a dense-output stepper by `make_dense_output( 1.0e-6 , 1.0e-5 , runge_kutta_dopri5< state_type >() )`.
* `runge_kutta4` is a good stepper for constant step sizes. It is widely used and very well known. If you need to create artificial time series this stepper should be the first choice.
* `runge_kutta_fehlberg78` is similar to `runge_kutta4`, with the advantage that it has higher precision. It can also be used with step size control.
* `adams_bashforth_moulton` is very well suited for ODEs where the r.h.s. is expensive (in terms of computation time). It will evaluate the system function only once during each step.

[endsect]

[section Stepper overview]

[include stepper_table.qbk]

[endsect]


[section Custom steppers]

[import ../examples/stochastic_euler.cpp]

Finally, one can also write new steppers which are fully compatible with odeint. They only have to fulfill one or several of the stepper __concepts of odeint.

We will illustrate how to write your own stepper using the example of the stochastic Euler method. This method is suited to solve stochastic differential equations (SDEs). An SDE has the form

['dx/dt = f(x) + g(x) __xi(t)]

where ['__xi] is Gaussian white noise with zero mean and a standard deviation ['__sigma(t)]. ['f(x)] is said to be the deterministic part while ['g(x) __xi] is the noisy part. In case ['g(x)] is independent of ['x], the SDE is said to have additive noise. It is not possible to solve SDEs with the classical ODE solvers, since the noisy part of the SDE has to be scaled differently than the deterministic part with respect to the time step. But many solvers for SDEs exist. A classical and easy method is the stochastic Euler solver. It works by iterating

['x(t+__Delta t) = x(t) + __Delta t f(x(t)) + __Delta t[super 1/2] g(x) __xi(t)]

where ['__xi(t)] is an independent, normally distributed random variable.

Now we will implement this method. We will call the stepper
`stochastic_euler`. It models the __stepper concept. For simplicity, we fix
the state type to be an `array< double , N >`. The class definition looks like

[stochastic_euler_class_definition]

The types are needed in order to fulfill the stepper concept. As internal state and deriv type we use simple arrays in the stochastic Euler; they are needed for the temporaries. The stepper has order one, which is returned by the `order()` function.

The system function needs to calculate the deterministic and the stochastic part of our stochastic differential equation, so it is suitable for the system function to be a pair of functions. The first element of the pair computes the deterministic part and the second the stochastic one. The second part also needs to calculate the random numbers in order to simulate the stochastic process. We can now implement the `do_step` method:

[stochastic_euler_do_step]

This is all. It is quite simple, and the stochastic Euler stepper implemented here is quite general. Of course it can be enhanced, for example by

* use of operations and algebras as well as the resizing mechanism for maximal flexibility and portability
* use of `boost::ref` for the system functions
* use of `boost::range` for the state type in the `do_step` method
* ...

Now let's look at how to use the new stepper. A nice example is the Ornstein-Uhlenbeck process. It consists of a simple Brownian motion overlaid with a relaxation process. Its SDE reads

['dx/dt = - x + __xi]

where ['__xi] is Gaussian white noise with standard deviation ['__sigma]. Implementing the Ornstein-Uhlenbeck process is quite simple. We need two functions or functors - one for the deterministic and one for the stochastic part:

[stochastic_euler_ornstein_uhlenbeck_def]

In the stochastic part we have used the Mersenne twister for the random number generation and a Gaussian white noise generator `normal_distribution` with standard deviation ['__sigma]. Now we can use the stochastic Euler stepper with the integrate functions:

[ornstein_uhlenbeck_main]

Note how we have used the `make_pair` function for the generation of the system function.

[endsect]

[section Custom Runge-Kutta steppers]

[import ../examples/heun.cpp]

odeint provides a C++ template meta-algorithm for constructing arbitrary
Runge-Kutta schemes [footnote M. Mulansky, K. Ahnert, Template-Metaprogramming
applied to numerical problems, [@http://arxiv.org/abs/1110.3233 arxiv:1110.3233]]. Some schemes are predefined in odeint, for
example the classical Runge-Kutta of fourth order, or the
Runge-Kutta-Cash-Karp 54 and the Runge-Kutta-Fehlberg 78 methods.
You can use this meta-algorithm to construct your own solvers. This has the
advantage that you can make full use of odeint's algebra and operation system.

Consider for example the method of Heun, defined by the following Butcher tableau:

[pre
c1 = 0

c2 = 1/3, a21 = 1/3

c3 = 2/3, a31 = 0 , a32 = 2/3

b1 = 1/4, b2 = 0 , b3 = 3/4
]

Implementing this method is very easy. First you have to define the constants:

[heun_define_coefficients]

While this might look cumbersome, packing all
parameters into a templatized class which is not immediately evaluated has the
advantage that you can change the `value_type` of your stepper to any type you
like - for example arbitrary-precision types. One could also instantiate
the coefficients directly:

``
const boost::array< double , 1 > heun_a1 = {{ 1.0 / 3.0 }};
const boost::array< double , 2 > heun_a2 = {{ 0.0 , 2.0 / 3.0 }};
const boost::array< double , 3 > heun_b = {{ 1.0 / 4.0 , 0.0 , 3.0 / 4.0 }};
const boost::array< double , 3 > heun_c = {{ 0.0 , 1.0 / 3.0 , 2.0 / 3.0 }};
``

But then you are tied to `double` as the value type.

Next, you need to define your stepper; note that this Heun method has 3 stages
and produces approximations of order 3:

[heun_stepper_definition]

That's it. Now we have a new stepper method and we can use it, for example with the Lorenz system:

[heun_example]

[endsect]


[endsect]