So far, our choice of h has been dictated only by accuracy. We will now see that stability must also be considered. This is especially critical in problems with multiple time scales.
As we saw in Eq. 5, Euler applied to y' = -ay gives y_{n} = (1 - ha)^{n} y_{0}, which converges to y_{0}e^{-at} as h -> 0, with error decreasing like h. However, if a is large, then not only will the error be large if h is taken too big, but the numerical solution will grow exponentially instead of decaying exponentially like the true solution. For example, if a = 30, which should lead to a rapidly decaying solution, then taking h = 0.1 gives y_{N} = y_{0}(-2)^{N}, a rapidly growing solution oscillating between positive and negative values. In order to guarantee a decaying solution, h must satisfy
|1 - ha| < 1, | (6) |
or equivalently,
h < 2/a. | (7) |
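As a quick check of this bound, here is a minimal sketch of forward Euler applied to y' = -ay with a = 30, comparing the step h = 0.1 used above (which violates h < 2/a = 1/15) against a step that satisfies the bound:

```python
# Sketch: forward Euler for y' = -a*y with a = 30, illustrating the
# stability bound h < 2/a.  Pure Python, no dependencies.

def forward_euler(a, h, y0, n_steps):
    """Return the iterates y_n = (1 - h*a)^n * y0, computed step by step."""
    y = y0
    ys = [y]
    for _ in range(n_steps):
        y = y + h * (-a * y)   # y_{n+1} = (1 - h*a) * y_n
        ys.append(y)
    return ys

a = 30.0
# h = 0.1 violates h < 2/a: the iterates are y0 * (-2)^n.
unstable = forward_euler(a, 0.1, 1.0, 10)
# h = 0.05 satisfies the bound: |1 - h*a| = 0.5 < 1, so the iterates decay.
stable = forward_euler(a, 0.05, 1.0, 10)

print(unstable[-1])   # (-2)^10 = 1024.0: exponential growth
print(stable[-1])     # (-0.5)^10 ~ 0.000977: decays like the true solution
```

Note that the "stable" iterates here still alternate in sign (1 - ha = -0.5 is negative); they merely shrink in magnitude. Only for h < 1/a does the numerical solution decay monotonically like the true one.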
An alternative that avoids this difficulty is to use an implicit method, backward Euler:
y_{n+1} = y_{n} + hf(t_{n+1}, y_{n+1}). | (8) |
Applied to y' = -ay, this can be solved for y_{n+1}, giving y_{n+1} = y_{n}/(1 + ha), so that
y_{n} = (1 + ha)^{-n} y_{0}. | (9) |
Thus, backward Euler is unconditionally stable for any equation with decaying exponential solutions, whereas forward Euler is stable only under the above restriction on h. (On the other hand, the backward Euler solution grows when a < 0, but so does the true solution, so we can't complain.)
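The unconditional stability is easy to see in code. A minimal sketch, using the fact that for this linear equation the implicit update y_{n+1} = y_{n} + h(-a y_{n+1}) can be solved in closed form (for a general nonlinear f, each step would instead require a root-finding solve):

```python
# Sketch: backward Euler for y' = -a*y.  The implicit step solves in
# closed form to y_{n+1} = y_n / (1 + h*a), which decays for every
# h > 0 when a > 0 -- no step-size restriction.

def backward_euler(a, h, y0, n_steps):
    y = y0
    for _ in range(n_steps):
        y = y / (1.0 + h * a)   # closed-form solution of the implicit step
    return y

a, h = 30.0, 0.1                       # the same step that blew up for forward Euler
print(backward_euler(a, h, 1.0, 10))   # (1/4)^10 ~ 9.5e-7: decays
```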
An alternative method, exponential Euler, is sometimes used to avoid this type of instability in linear equations of the form
y' = -a(t)y + b(t). | (10) |
Freezing a and b at t_{n} over each step and integrating the resulting constant-coefficient equation exactly gives
y_{n+1} = y_{n}e^{-a(t_{n})h} + (b(t_{n})/a(t_{n}))(1 - e^{-a(t_{n})h}). | (11) |
Since the factor e^{-a(t_{n})h} never exceeds 1 when a(t_{n}) > 0, the scheme cannot blow up no matter how large h is.
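A minimal sketch of this update rule, with a constant-coefficient test problem (for which the method is exact):

```python
import math

# Sketch: exponential Euler for y' = -a(t)*y + b(t).  Each step freezes
# a and b at t_n and integrates the frozen linear equation exactly, so
# the homogeneous part is multiplied by exp(-a*h) <= 1 whenever a > 0.

def exponential_euler(a, b, h, y0, t0, n_steps):
    y, t = y0, t0
    for _ in range(n_steps):
        an, bn = a(t), b(t)
        decay = math.exp(-an * h)
        y = y * decay + (bn / an) * (1.0 - decay)
        t += h
    return y

# Constant-coefficient test: y' = -30*y + 30 has steady state y = 1,
# and here the frozen-coefficient step is exact.  Note h = 0.1 is far
# beyond the forward Euler stability limit 2/30, yet the scheme is fine.
y = exponential_euler(lambda t: 30.0, lambda t: 30.0, 0.1, 0.0, 0.0, 50)
print(y)   # approaches 1.0 with no stability restriction on h
```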
Gear's method provides an adaptive, higher-order implicit method: it has the high-order accuracy advantages of Runge-Kutta while avoiding the instability caused by stiffness (the presence of widely disparate time scales).
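Gear's approach survives in modern libraries as the BDF (backward differentiation formula) family; for instance, SciPy's solve_ivp offers it as the "BDF" method, an adaptive-order implicit solver intended for stiff problems. A sketch on a system whose two components decay on time scales differing by a factor of 1000:

```python
# Sketch: solving a stiff system with SciPy's BDF solver (the family of
# methods behind Gear's method).  The two components have time scales
# 1/1000 and 1, so an explicit method would need h ~ 1/1000 throughout,
# while the adaptive implicit solver can take large steps once the fast
# component has died out.

from scipy.integrate import solve_ivp

def rhs(t, y):
    return [-1000.0 * y[0], -1.0 * y[1]]   # fast and slow components

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 1.0],
                method="BDF", rtol=1e-6, atol=1e-9)
print(sol.y[:, -1])   # fast component ~ 0, slow component ~ exp(-1)
```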