MATH2071: LAB 9: Implicit ODE methods

Topics: Introduction; Stiff Systems; Direction Field Plots; The Backward Euler Method; Newton's method; The Trapezoid Method; Matlab ODE solvers.
Exercises 1 through 10.


The explicit methods that we discussed last time are well suited to handling a large class of ODE's. These methods perform poorly, however, for a class of ``stiff'' problems that occur all too frequently in applications. We will examine implicit methods that are suitable for such problems. We will find that the implementation of an implicit method has a complication we didn't see with the explicit methods: solving the nonlinear equation that generally arises.

The term ``stiff'' as applied to ODE's does not have a precise definition. Loosely, it means that there is a very wide range between the most rapid and least rapid (with $ x$) changes in solution components. A reasonably good rule of thumb is that if Runge-Kutta or Adams-Bashforth or other similar methods require much smaller steps than you would expect, based on your expectations for the solution accuracy, then the system is probably stiff. The term itself arose in the context of solving for the motion of a stiff spring when it is being driven at a modest frequency of oscillation. The natural modes of the spring have relatively high frequencies, and the ODE solver must be accurate enough to resolve these high frequencies, despite the low driving frequency.

This lab will take three sessions. If you print this lab, you may prefer to use the pdf version.

Stiff Systems

I've warned you that there are problems that defeat the explicit Runge-Kutta and Adams-Bashforth methods. In fact, for such problems, the higher order methods perform even more poorly than the low order methods. These problems are called ``stiff'' ODE's. We will only look at some very simple examples.

Consider the differential system

$\displaystyle y' =\lambda(-y+\sin x), \qquad y(0) =0$ (1)

whose solution is
$\displaystyle y(x) = Ce^{-\lambda x}+\frac{\lambda^{2}\sin x-\lambda\cos x}{1+\lambda^{2}} = Ce^{-\lambda x}+\frac{\lambda^2}{1+\lambda^2}\sin x-\frac{\lambda}{1+\lambda^2}\cos x$

for $ C$ a constant. For the initial condition $ y=0$ at $ x=0$, the constant $ C$ can easily be seen to be

$\displaystyle C=\frac{\lambda }{1+\lambda^2}$

The ODE becomes stiff when $ {\lambda}$ gets large: at least $ {\lambda}=10$, but in practice the equivalent of $ {\lambda}$ might be a million or more. One key to understanding stiffness is to make the following observations: the solution is the sum of a transient term $ Ce^{-\lambda x}$, which decays extremely rapidly when $ {\lambda}$ is large, and a particular solution that is approximately $ \sin x$, which varies slowly. In other words, the interesting solution has modest derivative and should be easy to approximate, but nearby solutions have large derivatives and are hard to approximate.
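The observation can be made quantitative with a short side calculation (ours, not part of the original lab): perturb the slowly varying solution by a small amount $\epsilon$ and look at the derivative of the perturbed solution.

```latex
% The general solution is a slowly varying part plus a transient:
%   y(x) = y_p(x) + C e^{-\lambda x},
% where y_p(x) \approx \sin x for large \lambda.  A nearby solution starting
% a distance \epsilon away from y_p satisfies
y(x) = y_p(x) + \epsilon e^{-\lambda x}
\quad\Longrightarrow\quad
y'(x) = y_p'(x) - \lambda \epsilon\, e^{-\lambda x} .
% With \lambda = 10^4, even \epsilon = 10^{-3} contributes a derivative of
% size \lambda\epsilon = 10 near the starting point, while |y_p'| \le 1.
```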

We will be using this stiff differential equation in the exercises below, and in the next section you will see a graphical display of stiffness for this equation.

Direction Field Plots

One way of looking at a differential equation is to plot its ``direction field.'' At any point $ (x,y)$, we can plot a little arrow $ \bf p$ whose slope equals the slope of the solution passing through $ (x,y)$. This is effectively the direction that the differential equation is ``telling us to go,'' sort of the ``wind direction.'' How can you find $ \bf p$? If the differential equation is $ y'=f(x,y)$, then along the arrow $ dy/dx=f(x,y)$, so $ {\bf p}=(dx,dy)=h(1,f(x,y))$ for a sensibly-chosen value $ h$. Matlab has some built-in functions to generate this kind of plot.

Exercise 1: In this exercise, you will see a graphical illustration of why a differential equation is ``stiff.''
  1. Copy the following lines into a file called stiff2_ode.m:
    function f = stiff2_ode ( x, y )
    % f = stiff2_ode ( x, y ) 
    % computes the right side of the ODE
    %   dy/dx=f(x,y)=lambda*(-y+sin(x)) for lambda = 2
    % x is independent variable
    % y is dependent variable
    % output, f is the value of f(x,y).
    LAMBDA = 2;
    f = LAMBDA * ( -y + sin(x) );
  2. Also copy the following lines into a second file called stiff2_solution.m.
    function y = stiff2_solution ( x )
    % y = stiff2_solution ( x )
    % computes the solution of the ODE
    %   dy/dx=f(x,y)=lambda*(-y+sin(x)) for lambda = 2
    % and initial condition y=0 at x=0
    % x is the independent variable
    % y is the solution value
    LAMBDA = 2;
    y = (LAMBDA^2/(1+LAMBDA^2))*sin(x) + ...
        (LAMBDA  /(1+LAMBDA^2))*(exp(-LAMBDA*x)-cos(x));
  3. Create new versions of these files, two using LAMBDA=50 and named stiff50_ode.m and stiff50_solution.m, and two using LAMBDA=10000 and named stiff10000_ode.m and stiff10000_solution.m.
  4. The solution to our stiff ODE is roughly $ \sin x$, so we are interested in values of $ x$ between 0 and $ 2\pi$, and values of $ y$ between -1 and 1. The Matlab meshgrid command is designed for that (it is kind of a two-dimensional linspace). To evaluate the direction vector $ {\bf p}=(p_x,p_y)=(dx)(1,f(x,y))$, $ p_x$ will be all 1's (use the Matlab ones function), and $ p_y$ comes from our right hand side function. Finally, we use the special Matlab command quiver to display the vector plot. Use the following commands to plot the direction field for the $ [0,2\pi]\times[-1,1]$ range of (x,y) values:
      h = 0.1;  % mesh size
      [x,y] = meshgrid ( 0:h:2*pi, -1:h:1 );
      px = ones ( size ( x ) );
      py = stiff2_ode ( x, y );
      quiver ( x, y, px, py ) 
      axis equal  %this command makes equal x and y scaling
  5. Finally, to see how the direction field relates to the approximate solution, plot the function $ \sin x$ on the same frame.
      hold on
      x1 = linspace ( 0, 2*pi );
      y1 = sin ( x1 );
      plot ( x1, y1, 'r' )  % solution will come out red.
    Send me a copy of your plot (print -djpeg ex1.jpg, or File $ \rightarrow$Export and choose jpeg).
Look at your direction field. You can see the solution in red and all the arrows point toward it. Basically, no matter where you start in the rectangle, you head very rapidly toward $ \sin x$, and then follow that curve as it varies slowly. Some numerical methods overshoot the solution curve in their enthusiasm to reach it, and avoiding this overshoot characterizes methods for stiff systems.

Exercise 2: In this exercise, you will be seeing a numerical illustration of why an ODE is ``stiff.''
  1. Using LAMBDA=10000 to represent a stiff equation, how many points would be needed in the interval [0,2*pi] to make a pleasing plot of stiff10000_solution(x)?
    x=linspace(0,2*pi,10);  % try 10, 20, 40, etc.
    Try it, but I think you will agree with me that it takes about 40 evenly-spaced points to make a reasonable curve. Do not send me copies of your plots.
  2. Now, choose either of euler.m, or rk3.m from last lab and attempt to solve stiff10000_ode.m for 40 steps over the interval [0,2*pi], starting from the initial condition y=0 at x=0. Your solutions blow up (are unstable), don't they? (If you are plotting the solutions, look at the scale!) Multiply the number of steps by 10 repeatedly until you get something reasonable, i.e., try 40, 400, 4000, etc. steps. How many steps does it take to get a reasonable plot? It takes many more steps to get a reasonable solution than you would expect, based on solution accuracy, and this behavior is characteristic of stiff systems.

The Backward Euler Method

The Backward Euler method is an important variation of Euler's method. Before we say anything more about it, let's take a hard look at the algorithm:

$\displaystyle x_{k+1} = x_k+h, \qquad y_{k+1} = y_{k} + h f(x_{k+1},y_{k+1})$ (2)

You might think there is no difference between this method and Euler's method. But look carefully: this is not a ``recipe,'' the way some formulas are. It is an equation that must be solved for $ y_{k+1}$, i.e., the equation defining $ y_{k+1}$ is implicit. It turns out that implicit methods are much better suited to stiff ODE's than explicit methods.
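A quick way to see why the implicit form helps is to apply both Euler methods to the standard test equation $ y'=-\lambda y$ (a side calculation of ours, not part of the lab):

```latex
% Backward Euler applied to y' = -\lambda y:
y_{k+1} = y_k - h\lambda\, y_{k+1}
\quad\Longrightarrow\quad
y_{k+1} = \frac{y_k}{1+h\lambda} .
% The amplification factor 1/(1+h\lambda) has magnitude below 1 for every
% h > 0, so the numerical transient decays no matter how large the step,
% while forward Euler's factor (1-h\lambda) exceeds 1 in magnitude as soon
% as h > 2/\lambda, and the iterates blow up.
```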

If we plan to use Backward Euler to solve our stiff ode equation, we need to address the method of solution of the implicit equation that arises. Before addressing this issue in general, we can treat the special case:

$\displaystyle x_{k+1} = x_k+h, \qquad y_{k+1} = y_{k} + h \lambda (-y_{k+1} + \sin x_{k+1} )$

This equation can be solved for $ y_{k+1}$ easily enough! The solution is
$\displaystyle x_{k+1} = x_k+h, \qquad y_{k+1} = (y_{k} + h \lambda \sin x_{k+1} )/(1 + h \lambda)$

In the following exercise, we will write a version of the backward Euler method that implements this solution, and only this solution. Later, we will look at more general cases.
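The closed-form update above translates directly into a time-stepping loop. The following is a minimal sketch of the core of such a function (the variable names follow the euler.m conventions from the previous lab, and the surrounding function setup is assumed):

```matlab
% sketch: lambda-specific backward Euler step inside the usual stepping loop
LAMBDA = 10000;
h = ( xRange(2) - xRange(1) ) / numSteps;
x(1) = xRange(1);
y(1) = yInitial;
for k = 1 : numSteps
  x(k+1) = x(k) + h;
  % the implicit equation has been solved by hand for y(k+1):
  y(k+1) = ( y(k) + h * LAMBDA * sin( x(k+1) ) ) / ( 1 + h * LAMBDA );
end
```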

Exercise 3:
  1. Write a function m-file called back_euler_lam.m with signature line
    function [x,y]=back_euler_lam(xRange,yInitial,numSteps)
    % [x,y]=back_euler_lam(xRange,yInitial,numSteps)
    % comments
    % your name and the date.
    that implements the above algorithm (with $ {\lambda}=$10000). You may find it helpful to start out from a copy of your euler.m file, but if you do, be sure to realize that this function does not have the input parameter f that euler.m does, because the right side function is hard-coded into back_euler_lam.
  2. Be sure to include comments following the signature line.
  3. Test back_euler_lam by solving the system (1) for 40 steps over the interval $ [0,2\pi]$, starting from the initial condition $ y=0$ at x=0. Your solution should not blow up and its plot should look reasonable. Please include this plot with your summary.
  4. For the purpose of checking your work, the first few values of y are y=[ 0.0, 0.156334939127, 0.308919855800, 0.453898203657, ...].

In the next exercise, we will compare backward Euler with (forward) Euler for accuracy. Of course, backward Euler gives results when the stepsize is large and Euler does not, but we are curious about the case that there are enough steps to get answers. In addition, we would like to compare the computed and exact solutions over the whole range, and we will use the Matlab norm function to do it. Because it would require too many steps to run this test with $ \lambda=$1.e4, you will be using $ \lambda=$50.

Exercise 4:
  1. Change the value of LAMBDA to 50 in back_euler_lam.
  2. Compute the error in the solution versus the exact solution as
      abs( y(end) - stiff50_solution(x(end)) )
    Compute the ratios of the errors for each value of numSteps divided by the error for the succeeding value of numSteps. Use back_euler_lam to fill in the following table, over the interval [0,2*pi] and starting from yInit=0.
    numSteps back_euler_lam error   ratio
       40    ____________________  __________
       80    ____________________  __________
      160    ____________________  __________
      320    ____________________  __________
      640    ____________________  __________
     1280    ____________________  __________
     2560    ____________________
  3. Based on the ratios in the table, estimate the order of accuracy of the method, i.e., estimate the exponent $ p$ in the error estimate $ Ch^p$. $ p$ is an integer in this case.
  4. Repeat this experiment using euler, and fill in the following table. Note that the errors using euler end up being about the same size as those using back_euler_lam, once euler starts to behave.
               stiff50 Error comparison
    numSteps  back_euler_lam        euler      
       40    ___________________   _________________
       80    ___________________   _________________
      160    ___________________   _________________
      320    ___________________   _________________
      640    ___________________   _________________
     1280    ___________________   _________________
     2560    ___________________   _________________
    (In filling out this table, please include at least four significant figures in your numbers. If necessary, use format long to get extra precision printed.)
  5. Compare the rate of convergence using euler with that using back_euler_lam.

Newton's method

You should be convinced that implicit methods are worthwhile. How, then, can the resulting implicit equation (usually it is nonlinear) be solved? The Newton (or Newton-Raphson) method is a good choice. (We saw Newton's method last semester in Math 2070, Lab 4; you can also find it in Iserles, Section 7.2, or in the Wikipedia article on Newton's method.) Briefly, Newton's method is a way to solve equations by successive iterations. It requires a reasonably good starting point and it requires that the equation to be solved be differentiable. Suppose you want to solve the nonlinear (vector) equation

$\displaystyle {\bf F(Y)}={\bf0}$ (3)

and you know a good starting place $ {\bf Y}_{0}$. The following iteration is called Newton iteration.

$\displaystyle {\bf Y}^{(n+1)}-{\bf Y}^{(n)}=\Delta {\bf Y}^{(n)}= -\left(\frac{\partial {\bf F}}{\partial {\bf Y}} ({\bf Y}^{(n)})\right)^{-1}{\bf F}({\bf Y}^{(n)})$ (4)

and it (usually) converges quite rapidly. (In the case that $ {\bf Y}$ is a vector, the partial derivative must be interpreted as the Jacobian matrix, whose components are $ J_{ij}=\frac{\partial F_i}{\partial Y_j}$.) The superscript $ (n)$ is used here to distinguish the iteration number from the step number, $ k$.

There is an easy way to remember the formula for Newton's method. Write the finite difference formula for the derivative as

$\displaystyle \frac{\mathbf{F}(\mathbf{Y}^{(n+1)})-\mathbf{F}(\mathbf{Y}^{(n)})}{\mathbf{Y}^{(n+1)}-\mathbf{Y}^{(n)}} \approx \frac{\partial \mathbf{F}}{\partial \mathbf{Y}}(\mathbf{Y}^{(n)})$

and then take $ \mathbf{F}(\mathbf{Y}^{(n+1)})=0$, because that is what you are wishing for, and solve for $ \mathbf{Y}^{(n+1)}$.
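As a concrete illustration (an example of ours, not part of the lab), the same iteration applied to the scalar equation $ F(Y)=Y^2-2=0$ looks like this in Matlab:

```matlab
% Newton's method for F(Y) = Y^2 - 2 = 0, starting from Y = 1
Y = 1;
for n = 1 : 10
  F = Y^2 - 2;            % F(Y^(n))
  dFdY = 2 * Y;           % dF/dY evaluated at Y^(n)
  increment = dFdY \ F;   % solve the linear system (scalar here)
  Y = Y - increment;      % Y^(n+1) = Y^(n) - (dF/dY)^(-1) F
  if abs(increment) < 1.e-12 * abs(Y)
    break                 % converged
  end
end
% Y is now sqrt(2) to machine precision
```

Note the quadratic convergence: the number of correct digits roughly doubles on each pass through the loop.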

On each step, the backward Euler method requires a solution of the equation

$\displaystyle y_{k+1}=y_k+hf(x_{k+1},y_{k+1})$

so that we can take $ y_{k+1}$ to be $ \bf Y$, and we want to solve the system $ {\bf F(Y)}={\bf0}$, where

$\displaystyle {\bf F(Y)}=y_k+hf(x_{k+1},{\bf Y})-{\bf Y}.$ (5)

When we have found a satisfactorily approximate solution $ \bf Y$ to (5), then we take $ y_{k+1}={\bf Y}$ and proceed with the backward Euler method. We think of each of the values $ {\bf Y}^{(n)}$ as successive corrections to $ y_{k+1}$.

I call your attention to a difficulty that has arisen. For the Euler, Adams-Bashforth and Runge-Kutta methods, we only needed a function that computed the right side of the differential equation. In order to carry out the Newton iteration, however, we will also need a function that computes the partial derivative of the right side with respect to $ y$. Last semester we assumed that the derivative of the function was returned as a second variable, and we will use the same convention here.
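For instance, a right side function obeying this two-output convention might look like the following (a made-up equation, just to show the calling convention):

```matlab
function [f, dfdy] = sample_ode ( x, y )
% [f, dfdy] = sample_ode ( x, y )
% hypothetical example of the two-output convention:
% returns the right side f(x,y) = x - y^2 and its y-derivative
f = x - y^2;
dfdy = -2 * y;   % partial derivative of f with respect to y
```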

To summarize, on each time step, start off the iteration by predicting the solution using an explicit method. Forward Euler is appropriate in this case. Then use Newton's method to generate successive correction steps. In the following exercise, you will be implementing this method.

Exercise 5:
  1. Change your stiff10000_ode.m so that it has the signature
    function [f, dfdy]=stiff10000_ode(x,y)
    % [f, dfdy]=stiff10000_ode(x,y)
    % comments
    % your name and the date
    and computes the partial derivative of stiff10000_ode with respect to y. To do this, write the derivative of the formula for stiff10000_ode out by hand, then program it.
  2. Similarly, change the signature for stiff50_ode. There is no need to change stiff2_ode because you won't be using it again.
  3. Copy the following function to a file named back_euler.m using cut-and-paste.
    function [x,y]=back_euler(f,xRange,yInitial,numSteps)
    % [x,y]=back_euler(f,xRange,yInitial,numSteps) computes
    % the solution to an ODE by the backward Euler method
    % xRange is a two dimensional vector of beginning and
    %   final values for x
    % yInitial is a column vector for the initial value of y
    % numSteps is the number of evenly-spaced steps to divide
    %   up the interval xRange
    % x is a column vector of selected values for the 
    %   independent variable
    % y is a matrix. The k-th row of y is the transpose of
    %   the approximate solution at x(k)
    % your name and the date
    % force x to be a column vector
    x = zeros ( numSteps+1, 1 );
    x(1) = xRange(1);
    h = ( xRange(2) - xRange(1) ) / numSteps;
    y(1,:) = transpose(yInitial);
    for k = 1 : numSteps
      x(k+1) = x(k) + h;
      Y = transpose(y(k,:)) + ...
          h * feval( f, x(k), transpose(y(k,:)));
      [Y,isConverged]= ...
          newton4euler( f, x(k+1), transpose(y(k,:)), Y, h);
      if ~ isConverged
        error(['back_euler failed to converge at step ', ...
            num2str(k)])
      end
      y(k+1,:) = transpose(Y);
    end
  4. Copy the following function to a file named newton4euler.m (Read this name as ``Newton for Euler.'')
    function [Y,isConverged]=newton4euler(f,x,ytranspose,Y,h)
    % [Y,isConverged]=newton4euler(f,x,ytranspose,Y,h)
    % special function to evaluate Newton's method for back_euler
    % your name and the date
    TOL = 1.e-6;
    MAXITS = 10;
    isConverged= (1==0);  % starts out FALSE
    for n=1:MAXITS
      [fValue fPartial] = feval( f, x, Y);
      F = ytranspose + h * fValue - Y;
      dFdY = h * fPartial - eye(length(Y));
      increment = dFdY \ F;
      Y = Y - increment;
      if norm(increment,inf) < TOL*norm(Y,inf)
        isConverged= (1==1);  % turns TRUE here
        return
      end
    end
  5. Insert the following comments into either back_euler.m or newton4euler.m where they belong. Some comments belong in both files. Include copies of both files when you send your summary to me.
    % f is a function whose signature is ???
    % TOL = ??? (explain in words what TOL is used for)
    % MAXITS = ??? (explain in words what MAXITS is used for)
    % The Matlab "\" symbol means ???
    % The Matlab function "eye" is used for ???
    % The following for loop performs the spatial stepping (on x)
    % The following statement computes the initial guess for Newton
  6. The equation (4) includes the quantities $ \frac{\partial {\bf F}}{\partial {\bf Y}}({\bf Y}^{(n)})$, $ \Delta {\bf Y}$, and $ {\bf F}({\bf Y}^{(n)})$. What are the names of the variables in the code representing these mathematical quantities?
  7. Insert a comment in the code indicating which lines implement Equation (4).
  8. Insert a comment in the code indicating which lines implement Equation (5). (Note: This line is specific to the implicit Euler method, and will have to be changed if the method is changed.)
  9. In the case that length(Y)>1, is Y a row vector or a column vector?
  10. If f='stiff10000_ode', x=1.0, y=3.0, h=0.1, and the initial guess for Y=1, write out by hand the (linear) equation that newton4euler solves. What is the value of the solution of this linear equation?
  11. Verify that newton4euler is correct by showing it yields the correct solution in the case that f='stiff10000_ode', x=1.0, y=3.0, h=0.1, and the initial guess for Y=1. (If you discover that the Newton iteration fails to converge, you probably have the derivative in stiff10000_ode wrong.)
  12. In order to check that everything is programmed correctly, solve the same problem you solved for Exercise 3 using back_euler_lam, except use back_euler. Solve the ODE using stiff10000_ode, on the interval $ [0,2\pi]$, with initial value 0, for 40 steps. Be sure that LAMBDA=10000 in back_euler_lam.m. You can compare all 40 values by comparing the two solutions generated by back_euler_lam and back_euler. The two solutions should agree to ten or more significant digits. (Testing code by comparison with previous code is called ``regression'' testing.)

One simple nonlinear equation with quite complicated behavior is van der Pol's equation. This equation is written

$\displaystyle z'' + a(z^2-1) z' + z = e^{-x}$
where $ e^{-x}$ has been chosen as a forcing term that dies out as $ x$ gets large. When $ z>1$, this equation behaves somewhat as a damped oscillator, but when $ z$ is small it looks like an oscillator with positive feedback (amplification). When $ z$ is small, it grows. When $ z$ is large, it dies off. The result is a non-linear oscillator. Physical systems that behave somewhat as a van der Pol oscillator include electrical circuits with semiconductor devices such as tunnel diodes in them, and some biological systems, such as the beating of a heart, or the firing of neurons. In fact, van der Pol's original paper was ``The Heartbeat considered as a Relaxation oscillation, and an Electrical Model of the Heart,'' Balth. van der Pol and J. van der Mark, Phil. Mag. Suppl. 6 (1928), pp. 763-775. More information is available in an interesting Scholarpedia article.

As you know, van der Pol's equation can be written as a system of first-order ODEs in the following way.

$\displaystyle y'_1 = f_1(x,y_1,y_2)=y_2, \qquad y'_2 = f_2(x,y_1,y_2)=-a(y_1^2-1)y_2-y_1+e^{-x}$ (6)

where $ z=y_1$ and $ z'=y_2$.

The matrix generalization of the partial derivative in Newton's method is the Jacobian matrix:

$\displaystyle J=\left[\begin{array}{cc} \frac{\partial f_1}{\partial y_1} & \frac{\partial f_1}{\partial y_2} \\ \frac{\partial f_2}{\partial y_1} & \frac{\partial f_2}{\partial y_2} \end{array}\right]$ (7)

Exercise 6: In this exercise you will be solving van der Pol's equation with $ a=10$ using back_euler.
  1. Write an m-file to compute the right side of the van der Pol system (6) and its Jacobian (7). The m-file should be named vanderpol_ode.m and should have the signature
    function [f, J]=vanderpol_ode(x,y)
    % [f, J]=vanderpol_ode(x,y)
    % more comments
    % your name and the date
    if length(y) ~=2
      error('vanderpol_ode: y must be a vector of length 2!')
    end
    a = 10;
    f = ???
    df1dy1 = 0;
    df1dy2 = ???
    df2dy1 = ???
    df2dy2 = -a*(y(1)^2-1);
    J=[df1dy1 df1dy2
       df2dy1 df2dy2];
    be sure that f is a column vector and that the value of the parameter $ a=10$. (Note: If you get a message saying that convergence failed, you probably have an error in your calculation of the derivatives in J. Fix it.)
  2. Solve the van der Pol system starting from y=[0;0] over the interval from x=0 to x=2 using 40 intervals using euler and also using back_euler. You can see by plotting the solutions that euler does not give a good solution while back_euler does. Send me plots of both solutions with your summary.
  3. Solve the system again, this time using 640 intervals. This time both methods should remain stable and the answers should be close. Please include plots of both solutions on a single frame with your summary. (Use hold on and hold off.)

    The reason that it takes so many intervals to achieve reasonable accuracy is that the solution remains relatively flat for ``a long time'' before ``taking off.'' Small errors in the flat portion translate into fairly large errors in the location of the peak. The next peak (the solution is asymptotically periodic) occurs near x=17 and requires over 10,000 intervals to predict accurately, at least twice the 320 intervals per unit of $ x$ used here for the first peak. More sophisticated algorithms use variable-sized steps and can solve such problems much more efficiently.

The Trapezoid Method

You have looked at the (forward) Euler and backward Euler methods. These methods are simple and involve only function or Jacobian values at the beginning and end of a step. They are both first order, though. It turns out that the trapezoid method also involves only values at the beginning and end of a step and is second order accurate, a substantial improvement. This method is also called ``Crank-Nicolson,'' especially when it is used in the context of partial differential equations. As you will see, this method is appropriate only for mildly stiff systems.

The trapezoid method can be derived from the trapezoid rule for integration. It has a simple form:

$\displaystyle x_{k+1} = x_k+h, \qquad y_{k+1} = y_{k} + \frac{h}{2}\left( f(x_{k},y_{k}) + f(x_{k+1},y_{k+1})\right)$ (8)

from which you can see that this is also an ``implicit'' formula. The backward Euler and Trapezoid methods are the first two members of the ``Adams-Moulton'' family of ODE solvers.
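For the special equation $ y'=\lambda(-y+\sin x)$ used earlier, the implicit equation (8) can again be solved by hand (a side calculation of ours, analogous to the back_euler_lam formula):

```latex
% Substitute f(x,y) = \lambda(-y + \sin x) into the trapezoid formula and
% solve for y_{k+1}:
y_{k+1} = \frac{\left(1-\tfrac{h\lambda}{2}\right) y_k
        + \tfrac{h\lambda}{2}\left(\sin x_k + \sin x_{k+1}\right)}
        {1+\tfrac{h\lambda}{2}}
% Note the factor (1 - h\lambda/2)/(1 + h\lambda/2): its magnitude is below 1
% for all h > 0, but it approaches -1 as h\lambda grows, so transient errors
% decay only slowly and with oscillating sign for very stiff problems.
```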

In the exercise below, you will write a version of the trapezoid method using Newton's method to solve the per-timestep equation, just as with back_euler. As you will see in later exercises, the trapezoid method is not so appropriate when the equation gets very stiff, and Newton's method is overkill when the system is not stiff. The method can be successfully implemented using an approximate Jacobian or by computing the Jacobian only occasionally, resulting in greater computational efficiency. We are not going to pursue this alternative in this lab, however.

Exercise 7:
  1. Make a copy of your back_euler.m file, naming the copy trapezoid.m. Modify it to have the signature:
    function [ x, y ] = trapezoid ( f, xRange, yInitial, numSteps )
    % comments
  2. Add appropriate comments below the signature line.
  3. Replace the lines
      Y = transpose(y(k,:)) + ...
          h * feval( f, x(k), transpose(y(k,:)));
      [Y,isConverged]= ...
          newton4euler( f, x(k+1), transpose(y(k,:)), Y, h);
    with the new lines
      f_xk_yk= feval( f, x(k), transpose(y(k,:)));
      Y = transpose(y(k,:)) + h * f_xk_yk;
      [Y,isConverged]=newton4trapezoid( ...
          f, x(k+1), transpose(y(k,:)), Y, h, f_xk_yk);
  4. Make a copy of your newton4euler.m, naming the copy newton4trapezoid.m.
  5. Change the signature of newton4trapezoid to
    function [Y,isConverged]=newton4trapezoid(f,x,ytranspose,Y,h,f_xk_yk)

  6. Increase the iteration limit MAXITS=30;

  7. In order to implement the trapezoid method, you need to write the function $ F(Y)$ that appears in (4) and (5) so it is valid for the trapezoid rule. It is currently written for the backward Euler method (5). To do this, consider (8), repeated here

    $\displaystyle y_{k+1}=y_{k} + \frac{h}2 ( f(x_{k},y_{k}) + f(x_{k+1},y_{k+1}))$

    and replace $ y_{k+1}$ with $ Y$ and bring everything to the right. Then write

    $\displaystyle F(Y)=0=$   right side$\displaystyle .$

    Modify newton4trapezoid.m to solve this function. Do not forget to modify the Jacobian dFdY $ =\frac{\partial F}{\partial Y}$ to reflect the new function F. Remember, if you have the Jacobian wrong, the Newton iteration will fail.
  8. Test your newton4trapezoid code for the case f='stiff50_ode', x=1.0, y=3.0, h=0.1, $ f(x_{k},y_{k})=$f_xk_yk=1, and the initial guess for Y=1.
  9. Write out by hand the (linear) equation that newton4trapezoid solves in the case f='stiff50_ode', x=1.0, y=3.0, h=0.1, f_xk_yk=1. Does the solution to this equation agree with the one from newton4trapezoid?
  10. Test the accuracy of the trapezoid method by computing the numerical solution of stiff50_ode.m (starting from $ y=0$ over the interval $ x\in[0,2\pi]$), and fill in the following table. Compute ``error'' as the difference between the computed and exact values at $ x=2\pi$.
    numSteps  trapezoid error         ratio
      10     ____________________  __________
      20     ____________________  __________
      40     ____________________  __________
      80     ____________________  __________
     160     ____________________  __________
     320     ____________________
  11. Are your results consistent with the theoretical $ O(h^2)$ convergence rate of the trapezoid rule? If not, you have a bug you need to fix.

In the following exercise, you are going to see that the difference in accuracy between the trapezoid method and the backward Euler method can improve the solution of the van der Pol equation.

Exercise 8:
  1. Use trapezoid.m to solve the van der Pol equation starting from the value [0;0] on the interval [0,10] using 100, 200, 400, and 800 steps, and plot all four solutions on one plot. You should see that the solution for 400 steps is pretty much the same as the one for 800 steps, and the others are different. We will assume that the final two approximate solutions represent the correct solution. Please send me this plot with your summary. Hint: you can generate this plot easily with the following sequence of commands.
    hold off
    [x,y]=trapezoid('vanderpol_ode',[0,10],[0;0],100);
    plot(x,y(:,1))
    hold on
    [x,y]=trapezoid('vanderpol_ode',[0,10],[0;0],200);
    plot(x,y(:,1))
    [x,y]=trapezoid('vanderpol_ode',[0,10],[0;0],400);
    plot(x,y(:,1))
    [x,y]=trapezoid('vanderpol_ode',[0,10],[0;0],800);
    plot(x,y(:,1))
    hold off
  2. Use back_euler.m to solve the van der Pol equation starting from the value [0;0] on the interval [0,10] using 400, 800, 3200, and 12800 steps, and plot all four solutions on one plot. The larger number of points may take a minute. You should see a progression of increasing accuracy in the second ``pulse.'' Please send me this plot with your summary.

The trapezoid method is unconditionally stable, but this does not mean that it is good for very stiff systems. In the following exercise, you will apply the trapezoid method to a very stiff system, and you will see that numerical errors arising from the initial rapid transient persist when using the trapezoid rule but not for backward Euler.

Exercise 9:
  1. Solve the stiff10000_ode system starting from yInitial=0.1 on the interval [0,10] using 100 steps. Note that the initial condition is not zero. Use both the trapezoid method and backwards Euler, and plot both solutions on the same plot. Send me this plot with your summary file.
  2. Use the trapezoid method to solve the same case using 200, 400, 800, and 1600 steps. Plot each solution on its own plot. You should see that the effect of the initial condition is not easily eliminated. Please include the 1600 step case when you send your summary file.

Matlab ODE solvers

Matlab has a number of built-in ODE solvers. These include:

Matlab ODE solvers
ode23 non-stiff, low order
ode113 non-stiff, variable order
ode15s stiff, variable order, includes DAE
ode23s stiff, low order
ode23t trapezoid rule
ode23tb stiff, low order
ode45 non-stiff, medium order (Runge-Kutta)
All of these functions use the very best methods, are highly reliable, use adaptive step size control, and allow close control of errors and other parameters. As a bonus, they can be ``told'' to precisely locate interesting events such as zero crossings and will even allow user-written functions to be called when certain types of events occur. There is very little reason to write your own ODE solvers unless you are actively researching new methods. In the following exercise you will see how to use these solvers in a very simple case.

The ODE solvers we have written have four parameters, the function name, the solution interval, the initial value, and the number of steps. The Matlab solvers use a good adaptive stepping algorithm, so there is no need for the fourth parameter.
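Concretely, the calling sequences compare as follows (function names as defined earlier in this lab; the step count in the first call is illustrative):

```matlab
% our fixed-step solver: function name, interval, initial value, step count
[x, y] = back_euler ( 'vanderpol_ode', [0, 10], [0; 0], 640 );

% Matlab's adaptive solver: the steps are chosen automatically
[x, y] = ode45 ( @vanderpol_ode, [0, 10], [0; 0] );
```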

Exercise 10:
  1. You can watch a solution evolve if you call the solver without any output variables. Use ode45 to solve the same van der Pol problem we have been solving on the interval [0,10], starting from [0;0]. Use the following Matlab command:
      ode45(@vanderpol_ode,[0,10],[0;0])
    Please include this plot with your summary.
  2. You can see the difference between a stiff solver and a non-stiff solver on this moderately stiff problem by comparing ode45 with ode15s. (These are my personal favorites.) You can have both plots on the screen at once using the following commands:
      subplot(2,1,1)
      ode45(@vanderpol_ode,[0,10],[0;0])
      subplot(2,1,2)
      ode15s(@vanderpol_ode,[0,10],[0;0])
    You should be able to see that the step density (represented by the little circles on the curves) is somewhat less for ode15s. The difference would be much larger if you were to try a stiffer problem, such as a=1000 in vanderpol_ode.m. You do not need to send me these plots.
  3. If you wish to examine the solution in more detail, or manipulate it, you need the solution values, not a picture. You can get the solution values with commands similar to the following:
      [x,y]=ode45(@vanderpol_ode,[0,10],[0;0]);
    If you wish, you can plot the solution, compute its error, etc. For this solution, what is the value of $ y_1$ at x=10? How many steps did it take? (The length of x is one more than the number of steps.)
  4. Suppose you want a very accurate solution. The default tolerance is .001 relative accuracy in Matlab, but suppose you want a relative accuracy of 1.e-8? There is an extra variable to provide options to the solver. It works in the following manner:
      myoptions=odeset('RelTol',1.e-8)
      [x,y]=ode45(@vanderpol_ode,[0,10],[0;0],myoptions);
    (The semicolon is left off the odeset call so you can see some of the other options and their defaults. Use help odeset for more detail.) How many steps did it take this time? What is the value of $ y_1$ at x=10?

Back to MATH2071 page.

Mike Sussman 2009-02-04