Stability and Stiff Equations

So far, our choice of h has been dictated only by accuracy. We will now see that stability must also be considered. This is especially critical in problems with multiple time scales.

As we saw in Eq. 5, Euler's method applied to $y^{\prime} = \lambda y$ gives $y_N = y_0 (1 + h\lambda)^N$, which converges to $y_0 e^{\lambda t}$ as $h \rightarrow 0$. The error decreases like h. However, if $\lambda < 0$, then not only will the error be large if h is taken too big, but the numerical solution will grow exponentially instead of decaying exponentially like the true solution. For example, if $\lambda = -300$, which should lead to a rapidly decaying solution, then taking h = 0.01 gives $y_N = y_0 (-2)^N$, a rapidly growing solution that oscillates between positive and negative values. In order to guarantee a decaying solution, h must satisfy
\begin{displaymath}
\vert 1 + h\lambda \vert < 1
\end{displaymath} (6)
that is,
\begin{displaymath}
0 < h < \frac{-2}{\lambda}
\end{displaymath} (7)
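A minimal sketch in Python (the function name and the particular step sizes are mine, chosen for illustration) makes the threshold concrete: with $\lambda = -300$, the bound (7) requires h < 2/300, so h = 0.005 decays while h = 0.01 blows up.

```python
# Forward Euler for y' = lam*y: each step multiplies y by (1 + h*lam),
# so the solution decays only when |1 + h*lam| < 1, i.e. 0 < h < -2/lam.

def forward_euler_final(lam, h, y0, t_end):
    """Step y' = lam*y from t = 0 to t_end; return the final value."""
    y = y0
    for _ in range(int(round(t_end / h))):
        y = (1.0 + h * lam) * y
    return y

lam, y0 = -300.0, 1.0
stable = forward_euler_final(lam, h=0.005, y0=y0, t_end=0.1)    # |1 + h*lam| = 0.5
unstable = forward_euler_final(lam, h=0.01, y0=y0, t_end=0.1)   # |1 + h*lam| = 2
print(stable)    # decays toward 0
print(unstable)  # grows like (-2)^N, alternating in sign
```
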

An alternative that avoids this difficulty is to use an implicit method, backward Euler:

\begin{displaymath}
y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).
\end{displaymath} (8)


In general, a non-linear equation must be solved for yn+1, but for our linear example we get the following recursion:
\begin{displaymath}
y_{n+1} = \frac{1}{1 - h\lambda} y_n
\end{displaymath} (9)
This agrees with forward Euler to first order, so it too converges to the solution with global error O(h). Furthermore, the numerical solution decays for any negative $\lambda$. So the solution may be inaccurate, but it will never blow up. In fact, if h is very large, the solution is damped even more rapidly; that is, the method pushes the decaying solution prematurely toward its steady-state value of 0.
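Because the test equation is linear, the implicit step solves in closed form, as in the recursion (9). The sketch below (function name mine) shows that a step size far beyond forward Euler's stability limit still produces a decaying solution.

```python
# Backward Euler for y' = lam*y: solving y_{n+1} = y_n + h*lam*y_{n+1}
# gives the explicit recursion y_{n+1} = y_n / (1 - h*lam), Eq. (9).
# For lam < 0 the factor 1/(1 - h*lam) is always in (0, 1): unconditional decay.

def backward_euler_final(lam, h, y0, t_end):
    """Step y' = lam*y from t = 0 to t_end; return the final value."""
    y = y0
    for _ in range(int(round(t_end / h))):
        y = y / (1.0 - h * lam)
    return y

# h = 0.01 made forward Euler blow up for lam = -300; here it just decays.
print(backward_euler_final(-300.0, h=0.01, y0=1.0, t_end=0.1))
```

Note the overdamping the text describes: with $h\lambda = -3$ each step shrinks y by a factor of 4, faster than the true solution decays over that interval.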

Thus, backward Euler is unconditionally stable for any equation with decaying exponential solutions, whereas forward Euler is only conditionally stable, requiring a restriction on h. (On the other hand, the backward Euler solution grows when $\lambda > 0$, but so does the true solution, so we can't complain.)

An alternative method, exponential Euler, is sometimes used to avoid this type of instability in linear equations of the form
\begin{displaymath}
\frac{dy}{dt} = -A(t) y + B(t).
\end{displaymath} (10)
It makes the next iterate a convex combination of the current value and the steady state:
\begin{displaymath}
y_{n+1} = y_n e^{-Ah} + \frac{B}{A} \left(1 - e^{-Ah}\right)
\end{displaymath} (11)
This is the method most commonly used in neuron models, since those equations are generally of this form. As $h \rightarrow 0$, it reduces to regular Euler. For large h the solution remains bounded, but the accuracy is still that of Euler's method, O(h): large steps may be possible, but they are not always accurate.
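The update (11) is one line of code. Here is a sketch (function name and the constant-coefficient test values are mine) on $y' = -2y + 4$, whose steady state is B/A = 2; for constant A and B the update happens to be exact, so the iterates approach 2 for any h.

```python
import math

# Exponential Euler for y' = -A(t)*y + B(t), per Eq. (11):
# each step is a convex combination of the current value y and the
# instantaneous steady state B/A, weighted by exp(-A*h).

def exponential_euler_step(y, A, B, h):
    decay = math.exp(-A * h)
    return y * decay + (B / A) * (1.0 - decay)

# Constant-coefficient example: y' = -2*y + 4, steady state B/A = 2.
y, A, B, h = 0.0, 2.0, 4.0, 0.5
for _ in range(40):
    y = exponential_euler_step(y, A, B, h)
print(y)  # approaches the steady state 2.0
```

Since exp(-A*h) is in (0, 1) whenever A > 0, the iterate always stays between y and B/A, which is why the solution remains bounded even for large h.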

Gear's method is an adaptive, higher-order implicit method; it has the high-order advantages of Runge-Kutta while avoiding the instability caused by stiffness (the presence of widely disparate time scales).
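Gear's methods are built on the backward differentiation formulas (BDF). As an illustrative sketch only (not Gear's full adaptive algorithm; the function name and startup choice are mine), the second-order BDF applied to the linear test problem $y' = \lambda y$ again solves in closed form, and it handles the stiff case that broke forward Euler:

```python
# Second-order backward differentiation formula (BDF2), the family of
# implicit methods underlying Gear's algorithm, applied to y' = lam*y:
#   (3/2) y_{n+1} - 2 y_n + (1/2) y_{n-1} = h * lam * y_{n+1}
# For this linear problem the implicit equation solves in closed form.

def bdf2_final(lam, h, y0, t_end):
    """Step y' = lam*y from t = 0 to t_end; return the final value."""
    # BDF2 needs two starting values; use one backward Euler step.
    y_prev, y = y0, y0 / (1.0 - h * lam)
    for _ in range(int(round(t_end / h)) - 1):
        y_prev, y = y, (2.0 * y - 0.5 * y_prev) / (1.5 - h * lam)
    return y

# Stiff test: lam = -300 with h = 0.01, far beyond forward Euler's limit.
print(bdf2_final(-300.0, h=0.01, y0=1.0, t_end=0.1))  # stays small, no blow-up
```

Like backward Euler, this is stable for any h when $\lambda < 0$, but with second-order accuracy; Gear's method extends the same idea to higher orders with adaptive step and order selection.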

G. Bard Ermentrout