In this lab we consider solution methods for ordinary differential equations (ODEs). We will be looking at two classes of methods that excel when the equations are smooth and derivatives are not too large. This lab will take two class sessions. If you print this lab, you may prefer to use the pdf version.
The lab begins with an introduction to Euler's method for ODEs. Euler's method is the simplest approach to computing a numerical solution of an initial value problem. However, it has about the lowest possible accuracy. If we wish to compute very accurate solutions, or solutions that are accurate over a long interval, then Euler's method requires a large number of small steps. Since most of our problems seem to be computed ``instantly,'' you may not realize what a problem this can become when solving a ``real'' differential equation.
Applications of ODEs are divided between ones with space as the independent variable and ones with time as the independent variable. We will use $x$ as the independent variable consistently. Sometimes it will be interpreted as a space variable (the $x$-axis) and sometimes as time.
A number of methods have been developed in the effort to get solutions that are more accurate, less expensive, or more resistant to instabilities in the problem data. Typically, these methods belong to ``families'' of increasing order of accuracy, with Euler's method (or some relative) often being the member of the lowest order.
In this lab, we will look at ``explicit'' methods, that is, methods defined by an explicit formula for $y_{k+1}$, the approximate solution at the next time step, in terms of quantities derivable from previous time step data. In a later lab, we will address ``implicit'' methods, which require the solution of an equation in order to find $y_{k+1}$. We will consider the Runge-Kutta and the Adams-Bashforth families of methods, and we will talk about some of the problems of implementing the higher order versions of these methods. We will try to compare the accuracy of different methods applied to the same problem, using the same number of steps.
Runge-Kutta methods are ``single-step'' methods, while Adams-Bashforth methods are ``multistep'' methods. Multistep methods require information from several preceding steps in order to find $y_{k+1}$ and are a little more difficult to use. Nonetheless, both single-step and multistep methods have been very successful, and there are very reliable Matlab routines (and libraries for other languages) available to solve ODEs using both types of methods.
A very simple ordinary differential equation (ODE) is the explicit scalar first-order initial value problem:
$$y' = f(x,y), \qquad y(x_0) = y_0,$$
where $y'$ denotes the derivative $dy/dx$, $f$ is a given function, and the initial condition fixes the value of the solution at the starting point $x_0$.
An analytic solution of an ODE is a formula $y = y(x)$ that we can evaluate, differentiate, or analyze in any way we want. Analytic solutions can only be determined for a small class of ODEs.

A ``numerical solution'' of an ODE is simply a table of abscissæ $x_k$ and approximate values $y_k$ that approximate the value of an analytic solution. This table is usually accompanied by some rule for interpolating solution values between the abscissæ. With rare exceptions, a numerical solution is always wrong; the important question is, how wrong is it? One way to pose this question is to determine how close the computed values $y_k$ are to the analytic solution evaluated at the same abscissæ, which we might write as $y(x_k)$.
The simplest method for producing a numerical solution of an ODE is known as Euler's method. Given a solution value $(x_k, y_k)$ and a stepsize $h$, we estimate the solution at the next abscissa by:
$$x_{k+1} = x_k + h, \qquad y_{k+1} = y_k + h f(x_k, y_k).$$
Matlab note: In the following function, the name of the function that evaluates $f(x,y)$ is arbitrary. Recall that if you do not know the actual name of a function, but it is contained in a Matlab variable (I often use the variable name ``f''), then you can evaluate the function using the Matlab function ``feval.'' Suppose you have a Matlab function m-file named my_ode.m whose signature line looks like
function yprime=my_ode(x,y)

and suppose this name is contained in the variable f in your m-file. Then you can call the function using the syntax
yprime=feval(f,x,y);
Some students have difficulty distinguishing when to enclose names in quotes and mistakenly use the syntax
feval('f',x,y) % WRONG! WRONG! WRONG!

This syntax tells Matlab to use the function whose name is f and that can be found in the file named f.m. In contrast, the correct syntax, feval(f,x,y), tells Matlab to find the function whose name is the value of the variable f. These are very different things.
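To make the distinction concrete, here is a minimal sketch (assuming the expm_ode function defined later in this lab is available):

f = 'expm_ode';                % the variable f holds the function *name*
yprime = feval ( f, 0.0, 1.0 ) % calls expm_ode(0.0,1.0) and yields -1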
Typically, Euler's method will be applied to systems of ODEs rather than a single ODE. This is because higher order ODEs can be written as systems of first order ODEs. The following Matlab function m-file implements Euler's method for a system of ODEs.
function [ x, y ] = euler ( f, xRange, yInitial, numSteps )
% [ x, y ] = euler ( f, xRange, yInitial, numSteps ) uses
% Euler's explicit method to solve a system of first-order ODEs
% y'=f(x,y).
% f = name of an m-file with signature
%     yprime = f(x,y)
%     to compute the right side of the ODE as a column vector
% xRange = [x1,x2] where the solution is sought on x1<=x<=x2
% yInitial = column vector of initial values for y at x1
% numSteps = number of equally-sized steps to take from x1 to x2
% x = column vector of values of x
% y = matrix whose k-th row is the approximate solution at x(k).

x=zeros(numSteps+1,1);
x(1) = xRange(1);
h = ( xRange(2) - xRange(1) ) / numSteps;
y(1,:) = transpose(yInitial);
for k = 1 : numSteps
  x(k+1) = x(k) + h;
  y(k+1,:) = y(k,:) + h * transpose(feval ( f, x(k), y(k,:) ));
end
Warning: In the above code, the initial value is a column vector, and the function that gets called returns a column vector; however, the values are returned in the rows of the matrix y! The function transpose is used instead of a prime because a prime without a dot means ``Hermitian'' or ``adjoint'' or ``conjugate-transpose,'' but only a true transpose is needed here. The .' operator could have been used, but is harder to read.
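The difference between the two operators only shows up for complex values, but it is a good habit to keep them distinct. A small illustration:

z = [ 1+2i, 3 ];
z'     % conjugate transpose (``prime''): [ 1-2i; 3 ]
z.'    % true transpose: [ 1+2i; 3 ]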
In the following exercise, you will use euler.m to find the solution of the initial value problem
$$y' = -y + 2x, \qquad y(0) = 1,$$
on the interval $0 \le x \le 2$. The exact solution is
$$y(x) = 3e^{-x} + 2x - 2.$$
function yprime = expm_ode ( x, y )
% yprime = expm_ode ( x, y ) is the right side function for
% the ODE y'=-y+2*x
% x is the independent variable
% y is the dependent variable
% yprime is y'

yprime = -y+2*x;
yInit = 1.0;
[ x, y ] = euler ( 'expm_ode', [ 0.0, 2.0 ], yInit, numSteps );

for each of the values of numSteps in the table below. Use at least four significant figures when you record your numbers, and you can use the first line as a check on the correctness of the code. In addition, compute the error as the difference between your approximate solution and the exact solution at x=2, y=3*exp(-2)+2, and compute the ratio of the error for each value of numSteps divided by the error for the succeeding value of numSteps. As the number of steps increases, your errors should become smaller and the ratios should tend to a limit.
Euler's explicit method

numSteps   Stepsize   Euler        Error        Ratio
     10    0.2        2.32212254   8.3883e-02   __________
     20    0.1        __________   __________   __________
     40    0.05       __________   __________   __________
     80    0.025      __________   __________   __________
    160    0.0125     __________   __________   __________
    320    0.00625    __________   __________
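A short script along these lines can fill in a row of the table (a sketch; the variable names here are illustrative):

exactValue = 3*exp(-2)+2;                            % exact solution at x=2
[ x, y ] = euler ( 'expm_ode', [ 0.0, 2.0 ], 1.0, 10 );
approx = y(end,1)                                    % Euler approximation at x=2
err = abs ( approx - exactValue )                    % error column entry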
Based on the ratios in the table, estimate the order of accuracy of the method, i.e., estimate the exponent $p$ in the error estimate $E(h) \approx Ch^p$, where $h$ is the step size. ($p$ is an integer in this case.)
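To see why the ratios reveal the order, note what happens to the error estimate when the stepsize is halved:
$$\frac{E(h)}{E(h/2)} \approx \frac{Ch^p}{C(h/2)^p} = 2^p,$$
so ratios tending to 2 indicate $p=1$, ratios tending to 4 indicate $p=2$, and so on.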
The ``Euler halfstep'' or ``RK2'' method is a variation of Euler's method. It is the second-simplest of a family of methods called ``Runge-Kutta'' methods. As part of each step of the method, an auxiliary solution, one that we don't really care about, is computed halfway, using Euler's method:
$$x_{k+1/2} = x_k + \tfrac{h}{2}, \qquad y_{k+1/2} = y_k + \tfrac{h}{2} f(x_k, y_k).$$
The full step is then taken using the derivative at the auxiliary point:
$$x_{k+1} = x_k + h, \qquad y_{k+1} = y_k + h f(x_{k+1/2}, y_{k+1/2}).$$
Although this method uses Euler's method, it ends up having a higher
order of convergence. Loosely speaking, the initial half-step provides
additional information: an estimate of the derivative in the middle of
the next step. This estimate is presumably a better estimate of the
overall derivative than the value at the left end point.
The per-step error is $O(h^3)$ and, since there are $O(1/h)$ steps to reach the end of the range, the error is $O(h^2)$ overall. Keep in mind that we do not regard the auxiliary points as being part of the solution. We throw them away, and make no claim about their accuracy. It is only the whole-step points that we want.
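As a concrete illustration, here is one RK2 step worked by hand for the expm_ode problem above (a sketch, assuming a first step of h = 0.2 from x = 0, y = 1):

h = 0.2;  xk = 0.0;  yk = 1.0;
xhalf = xk + h/2;                      % auxiliary abscissa
yhalf = yk + (h/2) * ( -yk + 2*xk );   % Euler halfstep with f(x,y) = -y+2x
ykp1  = yk + h * ( -yhalf + 2*xhalf )  % full step using the halfway slope: 0.8600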
In the following exercise we compare the results of RK2 with Euler.
function [ x, y ] = rk2 ( f, xRange, yInitial, numSteps )
% comments including the signature, meanings of variables,
% math methods, your name and the date

x=zeros(numSteps+1,1);
x(1) = xRange(1);
h = ( xRange(2) - xRange(1) ) / numSteps;
y(1,:) = transpose(yInitial);
for k = 1 : numSteps
  xa = ??? ;
  ya = ??? ;
  x(k+1) = x(k) + h;
  y(k+1,:) = y(k,:) + h * transpose(feval( ??? ));
end
RK2

numSteps   Stepsize   RK2          Error        Ratio
     10    0.2        2.41234409   6.3382e-03   __________
     20    0.1        __________   __________   __________
     40    0.05       __________   __________   __________
     80    0.025      __________   __________   __________
    160    0.0125     __________   __________   __________
    320    0.00625    __________   __________
Comparison of Euler and RK2

numSteps   Stepsize   Euler Error   RK2 Error
     10    0.2        __________    __________
     20    0.1        __________    __________
     40    0.05       __________    __________
     80    0.025      __________    __________
    160    0.0125     __________    __________
    320    0.00625    __________    __________
The idea in Euler's halfstep method is to ``sample the water'' between where we are and where we are going. This gives us a much better idea of what $f(x,y)$ is doing, and where our new value $y_{k+1}$ ought to be. Euler's method (``RK1'') and Euler's halfstep method (``RK2'') are the junior members of a family of ODE solving methods known as ``Runge-Kutta'' methods.
To develop a higher order Runge-Kutta method, we sample the
derivative function at even more ``auxiliary points''
between our last computed solution and the next one. These points are
not considered part of the solution curve; they are just a
computational aid. The formulas tend to get complicated, but let me
describe the next one, at least.
The third order Runge-Kutta method ``RK3,'' given $(x_k, y_k)$ and a stepsize $h$, computes two intermediate points:
$$x'_{k+1} = x_k + \tfrac{h}{2}, \qquad y'_{k+1} = y_k + \tfrac{h}{2} f(x_k, y_k),$$
$$x''_{k+1} = x_k + h, \qquad y''_{k+1} = y_k + h \left( 2 f(x'_{k+1}, y'_{k+1}) - f(x_k, y_k) \right),$$
and then takes the full step:
$$x_{k+1} = x_k + h, \qquad y_{k+1} = y_k + \tfrac{h}{6} \left( f(x_k, y_k) + 4 f(x'_{k+1}, y'_{k+1}) + f(x''_{k+1}, y''_{k+1}) \right).$$
The global accuracy of this method is $O(h^3)$, and so we say it has ``order'' 3. Higher order Runge-Kutta methods have been derived, with those of order 4 and 5 the most popular.
Write a function m-file named rk3.m with the signature

function [ x, y ] = rk3 ( f, xRange, yInitial, numSteps )
% comments including the signature, meanings of variables,
% math methods, your name and the date

that implements the above algorithm. You can use rk2.m as a model.
RK3

numSteps   Stepsize   RK3          Error        Ratio
     10    0.2        2.40568816   3.1769e-04   __________
     20    0.1        __________   __________   __________
     40    0.05       __________   __________   __________
     80    0.025      __________   __________   __________
    160    0.0125     __________   __________   __________
    320    0.00625    __________   __________
norm(ysystem(:,1)-yscalar)  % should be roundoff or zero
norm(ysystem(:,2)-yscalar)  % should be roundoff or zero
The motion of a pendulum can be described by a single dependent variable, $\theta$, representing the angle the pendulum makes with the vertical. The coefficient of the equation depends on the length of the pendulum, the mass of the bob, and the gravitational constant. Assuming a coefficient value of 2.5, the equation is
$$\frac{d^2\theta}{dt^2} + 2.5 \sin\theta = 0,$$
with initial conditions
$$\theta(0) = 1, \qquad \frac{d\theta}{dt}(0) = 0.$$
This second-order equation can be rewritten as a first-order system by introducing the variables $y_1 = \theta$ and $y_2 = d\theta/dt$:
$$\frac{dy_1}{dt} = y_2, \qquad \frac{dy_2}{dt} = -2.5 \sin y_1,$$
with initial conditions $y_1(0) = 1$ and $y_2(0) = 0$.
function yprime = pendulum_ode(x,y)
% comments including the signature, meanings of variables,
% math methods, your name and the date

Be sure to put comments after the signature line and wherever else they are needed. Be sure that you return a column vector.
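For reference, a minimal sketch of such a right side function (one possible version of the system above, with an illustrative name, shown only so you can check your own work) might look like:

function yprime = pendulum_system ( x, y )
% yprime = pendulum_system ( x, y ) evaluates the right side of the
% pendulum system y1' = y2, y2' = -2.5*sin(y1)
yprime = [ y(2); -2.5*sin(y(1)) ];   % note: a column vector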
x       Euler y1     Euler y2     RK3 y1       RK3 y2
 0.00   1.00         0.00         1.00         0.00
10.00   __________   __________   __________   __________
20.00   __________   __________   __________   __________
25.00   __________   __________   __________   __________
You have seen a considerable amount of theory about stability of methods for ODEs, and you will see more in the future. Explicit methods generally are conditionally stable, and require that the step size be smaller than some critical value in order to converge. It is interesting to see what happens when this stability limit is approached and exceeded. In the following exercise you will drive Euler's method and the third-order Runge-Kutta method unstable using the expm_ode function from a previous exercise. You should observe very similar behavior of the two methods; this behavior, in fact, is typical of conditionally stable methods.
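For the model problem $y' = -y + 2x$, a stability analysis shows that Euler's method requires $h < 2$. A sketch of an experiment that crosses this boundary (the interval and step count here are illustrative assumptions, not the exercise's prescribed values):

% h = 20/9 is about 2.2 > 2: expect a growing ``plus-minus'' oscillation
[ x, y ] = euler ( 'expm_ode', [ 0.0, 20.0 ], 1.0, 9 );
plot ( x, y )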
The message you should get from the previous exercise is that you can observe poor solution behavior when you are near the stability boundary. The ``poor behavior'' appears as a ``plus-minus'' oscillation that can shrink, grow, or remain of constant amplitude (unlikely, but possible). It can be tempting to accept solutions with small ``plus-minus'' oscillations that die out, but it is dangerous, especially in nonlinear problems, where the oscillations can cause the solution to move to a nearby curve with different initial conditions that has very different qualitative behavior from the desired solution.
Like Runge-Kutta methods, Adams-Bashforth methods want to estimate the behavior of the solution curve, but instead of evaluating the derivative function at new points close to the next solution value, they look at the derivative at old solution values and use interpolation ideas, along with the current solution and derivative, to estimate the new solution. This way they don't compute solutions at auxiliary points and then throw the auxiliary values away. The savings can result in increased efficiency.
Looked at in this way, Euler's method is the first order Adams-Bashforth method, using no old points at all, just the current solution and derivative. The second order method, which we'll call ``AB2,'' adds the derivative at the previous point into the interpolation mix. We might write the formula this way:
$$y_{k+1} = y_k + \frac{h}{2} \left( 3 f(x_k, y_k) - f(x_{k-1}, y_{k-1}) \right).$$
The AB2 method requires derivative values at two previous points, but at the first step we only have one. If we simply used an Euler step, we would pick up a relatively large error on the first step, which would pollute all our results. In order to get a reasonable starting value, we use the RK2 method for the first step, whose per-step error is $O(h^3)$, the same as the AB2 method.
The following is a complete version of Matlab code for the Adams-Bashforth second-order method.
function [ x, y ] = ab2 ( f, xRange, yInitial, numSteps )
% [ x, y ] = ab2 ( f, xRange, yInitial, numSteps ) uses
% Adams-Bashforth second-order method to solve a system
% of first-order ODEs y'=f(x,y).
% f = name of an m-file with signature
%     yprime = f(x,y)
%     to compute the right side of the ODE as a column vector
% xRange = [x1,x2] where the solution is sought on x1<=x<=x2
% yInitial = column vector of initial values for y at x1
% numSteps = number of equally-sized steps to take from x1 to x2
% x = column vector of values of x
% y = matrix whose k-th row is the approximate solution at x(k).

x=zeros(numSteps+1,1);
x(1) = xRange(1);
h = ( xRange(2) - xRange(1) ) / numSteps;
y(1,:) = transpose(yInitial);
yprime(1,:) = transpose(feval ( f, x(1), y(1,:) ));

k = 1;
xhalf = x(k) + 0.5 * h;
yhalf = y(k,:) + 0.5 * h * yprime(k,:);
yprime1 = transpose(feval ( f, xhalf, yhalf ));
x(k+1) = x(k) + h;
y(k+1,:) = y(k,:) + h * yprime1;
yprime(k+1,:) = transpose(feval ( f, x(k+1), y(k+1,:) ));

for k = 2 : numSteps
  x(k+1) = x(k) + h;
  y(k+1,:) = y(k,:) + ...
    h * ( 3.0 * yprime(k,:) - yprime(k-1,:) ) / 2.0;
  if k<numSteps
    yprime(k+1,:) = transpose(feval ( f, x(k+1), y(k+1,:) ));
  end
end
% The Adams-Bashforth algorithm starts here

% The Runge-Kutta algorithm starts here

Be sure to include a copy of the commented code in your summary.
AB2

numSteps   Stepsize   AB2(x=2)     Error(x=2)   Ratio
     10    0.2        2.42020989   1.4204e-02   __________
     20    0.1        __________   __________   __________
     40    0.05       __________   __________   __________
     80    0.025      __________   __________   __________
    160    0.0125     __________   __________   __________
    320    0.00625    __________   __________
Adams-Bashforth methods try to squeeze information out of old solution points. For problems where the solution is smooth, these methods can be highly accurate and efficient. Think of efficiency in terms of how many times we evaluate the derivative function. To compute numSteps new solution points, we only have to compute roughly numSteps derivative values, no matter what the order of the method (remember that the method saves derivative values at old points). By contrast, a third order Runge-Kutta method would take roughly 3*numSteps derivative values. In comparison with the Runge Kutta method, however, the old solution points are significantly further away from the new solution point, so the data is less reliable and a little ``out of date.'' So Adams-Bashforth methods are often unable to handle a solution curve which changes its behavior over a short interval or has a discontinuity in its derivative. It's important to be aware of this tradeoff between efficiency and reliability!