MATH2070: LAB 9: Legendre Polynomials and Approximation

Introduction

With interpolation we were given a formula or data about a function, and we made a model that passed through a given set of data points. We now consider approximation, in which we are still trying to build a model, but specify some condition of "closeness" that the model must satisfy. It is still likely that the model will be equal to the function at some points, but we will not know in advance which points.

As with interpolation, we build this model out of a set of basis functions. The model is then a linear combination with coefficients that specify how much of each basis function to use when building the model.

In this lab we will consider four different selections of basis functions in the space L²([-1,1]). The first is the usual monomials 1, x, x², and so on. In this case, the coefficients are exactly the coefficients Matlab uses to specify a polynomial. The second is the set of Legendre polynomials, which will yield the same approximations but will turn out to have better numerical behavior. The third selection is the trigonometric functions, and the final selection is a set of piecewise constant functions. Advantages and disadvantages of each for numerical computations will be presented.

Once we have our basis set, we will consider how we can determine the approximating function as the "best possible" approximation for the given basis functions, and we will look at the behavior of the approximation error. Since we are working in L²([-1,1]), we will use the L² norm to measure error.

It turns out that approximation by monomials results in a matrix similar to the Hilbert matrix, whose inversion can be quite inaccurate, even for small sizes. This inaccuracy translates into poor approximations. Use of orthogonal polynomials, such as the Legendre polynomials, results in a diagonal matrix that can be inverted almost without error, but the right side can be difficult and slow to compute to high accuracy. In addition, roundoff accumulation in function reconstruction can be a serious problem. Fourier approximation substantially reduces the roundoff errors, but is slow to compute and evaluate, although fast methods (not discussed in this lab) can improve speed dramatically. Approximation by piecewise constants is not subject to these sources of error until ridiculously large numbers of pieces are employed, but can be slow to converge.

We will be attempting to approximate several functions in this lab, all on the interval [-1,1]. These functions include:

• The Runge example function, f(x) = 1/(1+25x²), available as runge.m.
• The function

    f(x) = { 0        for x <= 0
           { x(1-x)   for x > 0          (1)

For the purpose of this lab, this function will be called "partly quadratic." It was chosen because it is simple and continuous and satisfies f(-1) = f(1) = 0, but is not differentiable. A simple Matlab function m-file to compute this "partly quadratic" function can be found by copying the following code:
function y=partly_quadratic(x)
% input x (possibly a vector or matrix)
% output y, where
% for x<=0, y=0
% for x>0,  y=x(1-x)

y=(heaviside(x)-heaviside(x-1)).*x.*(1-x);

Remark: The "heaviside" function (sometimes called the "unit step function") is named after Oliver Heaviside and is defined as

    heaviside(x) = { 0     for x < 0
                   { 1/2   for x = 0
                   { 1     for x > 0          (2)

The heaviside function is part of Matlab.
• A third function is a function whose graph is shaped like the teeth of a saw, similar to one used in Lab 6:

    f(x) = { x+1   for -1 <= x < 0
           { 0     for x = 0
           { x-1   for 0 < x <= 1          (3)

A simple Matlab function m-file to compute this sawshape function can be found by downloading sawshape9.m from the web site or by copying the following code:
function y=sawshape9(x)
% y=sawshape9(x)
% input x (possibly a vector or matrix)
% output y, where
%   y=(x+1) for -1<=x<0
%   y=0     for x=0
%   y=(x-1) for 0<x<=1

y= heaviside(-x).*(x+1) + heaviside(x).*(x-1);


Remark: In Lab 6, you saw a similar function defined using the Matlab find command, while heaviside is used here. There is no essential reason for this change.
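Both m-files above build their piecewise definitions from heaviside, so it is worth checking its values once at the command line; note in particular the value 1/2 at x = 0 (this snippet is only an illustration, not one of the lab's required files):

```matlab
heaviside([-2 0 3])    % returns [0  0.5  1]
```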

This lab will take four sessions. If you print this lab, you may prefer to use the pdf version.

Matlab remarks

This kind of approximation requires evaluation of integrals. We will use the Matlab numerical integration function integral. Since we will be computing fairly small errors, we will replace the default error tolerances with smaller ones. The Matlab command is

q=integral(func,-1,1,'AbsTol',1.e-14,'RelTol',1.e-12);

where func is a function handle to a function written using vector (array) syntax. This command will result in an approximation, q, satisfying

    | q - ∫_{-1}^{1} func(x) dx | <= max( 1.e-14, 1.e-12 |q| ).

WARNING: The integral function was introduced into Matlab in 2012. If your version of Matlab is older than that, you can use the quadgk function for this lab. The calling sequence for that function is
q=quadgk(func,-1,1,'AbsTol',1.e-14,'RelTol',1.e-12, ...
'MaxIntervalCount',15000);
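As a quick check of this command, you can integrate the Runge example function, whose exact integral over [-1,1] is known in closed form (this snippet is just an illustration, not part of the lab's required files):

```matlab
% integrate the Runge function f(x)=1/(1+25x^2) over [-1,1]
q = integral(@(x) 1./(1+25*x.^2), -1, 1, 'AbsTol',1.e-14, 'RelTol',1.e-12);
exact = (2/5)*atan(5);     % an antiderivative of the integrand is atan(5x)/5
err = abs(q - exact)       % should be far below the requested tolerances
```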


ntgr8, an abbreviation function

Since these integration functions involve extra parameters and extra typing, you should create an abbreviation for them for use in this lab. Create an m-file named ntgr8.m in the following way:

function q=ntgr8(func)
% q=ntgr8(func) integrates func over [-1,1] with tight tolerances

q=integral(func,-1,1,'AbsTol',1.e-14,'RelTol',1.e-12);

with a similar function if you are using quadgk. This function will save you some typing for this lab. The name looks strange in print, but it is easy to remember because it is a pun: the letter n can be pronounced "en," the letter t can be pronounced "tee," and gr8 can be pronounced "grate." Put them together and it sounds like "integrate."
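Once ntgr8.m is in place, you can check it on an integral you already know, for example:

```matlab
q = ntgr8(@(x) x.^2)    % integral of x^2 over [-1,1] is 2/3, about 0.6667
```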

Least squares approximations in L²([-1,1])

The problem of approximation can be described in the following way.

Problem: Given a function f ∈ L²([-1,1]) and a (complete) set of functions φ_k ∈ L²([-1,1]), for k = 1, 2, ..., then for a given n, find the set of values c_k so that

    f(x) ≈ Σ_{k=1}^{n} c_k φ_k(x)          (4)

and the approximation is best in the sense of having the smallest error.

Quarteroni, Sacco, and Saleri, in Section 10.7, discuss least-squares approximation in function spaces such as L²([-1,1]). The idea is to minimize the norm of the difference between the given function and the approximation. Given a function f and a set of approximating functions φ_k (such as the monomials x^{k-1}), for each vector of numbers c = (c_1, c_2, ..., c_n) define a functional

    F(c) = || f - Σ_{k=1}^{n} c_k φ_k ||²

This continuous functional becomes large when ||c|| is large and it is bounded below by 0, so it must have a minimum, and because F is differentiable as a function of c, the minimum must occur when

    ∂F/∂c_k = 0    for k = 1, 2, ..., n.

This expression is evaluated here for the case of quadratic approximations (n = 3).

Consider the functional

    F(c_0, c_1, c_2) = ∫_{-1}^{1} ( c_0 + c_1 x + c_2 x² - f(x) )² dx

where f is the function to be approximated on the interval [-1,1]. Taking partial derivatives with respect to c_0, c_1 and c_2 yields the equations

    ∂F/∂c_j = 2 ∫_{-1}^{1} ( c_0 + c_1 x + c_2 x² - f(x) ) x^j dx,    j = 0, 1, 2,

and setting each of these to zero yields the system of equations

    ∫_{-1}^{1} ( c_0 + c_1 x + c_2 x² ) x^j dx = ∫_{-1}^{1} f(x) x^j dx,    j = 0, 1, 2,

or

    [  2    0   2/3 ] [ c_0 ]   [ ∫_{-1}^{1} f(x) dx    ]
    [  0   2/3   0  ] [ c_1 ] = [ ∫_{-1}^{1} x f(x) dx  ]          (5)
    [ 2/3   0   2/5 ] [ c_2 ]   [ ∫_{-1}^{1} x² f(x) dx ]

(Since the interval of integration is symmetric about the origin, the integral of an odd monomial is zero.)
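As a sanity check on the system (5), take f(x) = x². Then the right side integrals can be computed by hand, and solving the 3-by-3 system should reproduce f itself. A minimal sketch (note that the unknowns here are ordered (c_0, c_1, c_2), lowest power first, which is the reverse of Matlab's polyval convention):

```matlab
H = [ 2    0   2/3
      0   2/3   0
     2/3   0   2/5 ];
b = [ 2/3; 0; 2/5 ];   % integrals of x^2, x^3, x^4 over [-1,1]
c = H\b                % c = [0; 0; 1], recovering f(x) = x^2
```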

Equation (5) is related to Equations (10.1) and (10.2) in Quarteroni, Sacco, and Saleri, but their presentation focuses on orthogonal polynomials. For an arbitrary value of n, Equation (5) can be written in the following way, where the indexing corresponds with Matlab indexing (starting with 1 instead of 0) and with the Matlab convention for coefficients of polynomials (the first coefficient is for the highest power of x).

    Σ_{ℓ=1}^{n} H_{kℓ} c_ℓ = b_k,    k = 1, ..., n,  where
    H_{kℓ} = ∫_{-1}^{1} x^{n-k} x^{n-ℓ} dx  and  b_k = ∫_{-1}^{1} f(x) x^{n-k} dx.          (6)

The matrix in (6),

    H_{kℓ} = ∫_{-1}^{1} x^{2n-k-ℓ} dx,

is closely related to the Hilbert matrix. In fact, if the derivation above were done over the interval [0,1] instead of [-1,1], the matrix that arose would be the Hilbert matrix. The Hilbert matrix is notorious for having a very poor condition number and being difficult to invert without losing accuracy. The following exercise illustrates this difficulty and its implication for approximation.
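You can see the difficulty directly in Matlab, which provides the Hilbert matrix through the built-in hilb function:

```matlab
cond(hilb(3))     % a few hundred: already losing digits
cond(hilb(10))    % on the order of 1.e13: roughly 13 digits lost in solving
```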

Examining Equation (6), there are two sets of integrals that need to be evaluated in order to compute the coefficients c. On the left side, the integrands involve the products x^{n-k} x^{n-ℓ}. On the right side, the integrands involve the products f(x) x^{n-k}. Please note that these expressions are made more complicated by the fact that they are indexed "backwards." This is done to be consistent with Matlab's numbering scheme for coefficients. Later in the lab, when we switch to Legendre polynomials and are free to number the coefficients as we wish, we will switch to a simpler numbering scheme.

Once the coefficients have been found, the Matlab polyval function can be used to evaluate the resulting polynomials.
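Recall that polyval expects the coefficient of the highest power first. For example:

```matlab
c = [1 0 -1];          % represents x^2 - 1, highest power first
polyval(c, 2)          % returns 3, since 2^2 - 1 = 3
polyval(c, [0 1])      % returns [-1 0]
```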

Exercise 1: In this exercise, you will be writing a function m-file to compute the coefficient vector c of the best approximation to a function using Equation (6) above. This m-file will have the signature
function c=coef_mon(func,n)
% c=coef_mon(func,n)
% func is a function handle

% your name and the date

and be called coef_mon.m. It will solve the system (6) by constructing the matrix and right side and solving the resulting system for . In coef_mon.m, func refers to a function handle.
1. Begin coef_mon.m with the above code.
2. Compute the right side vector b, whose entries are b(k) = ∫_{-1}^{1} f(x) x^{n-k} dx, with code such as
for k=1:n
% force b to be a column vector with second index
b(k,1)=ntgr8(@(x) func(x).*x.^(n-k));
end

Warning: You should already have created the abbreviation function ntgr8.m according to instructions in Section 1.2 above on Matlab remarks contained in the introduction to this lab.
3. Complete the following code to compute the matrix elements H(k,ell) using the formula H(k,ell) = ∫_{-1}^{1} x^{n-k} x^{n-ell} dx. Your code will be similar to the above code for b(k).
for k=1:n
for ell=1:n
H(k,ell)=ntgr8( ??? )
end
end

Note 1: I try not to use the letter "l" as a variable because it looks so much like the number 1. Instead, I use ell.
Note 2: Time could be saved in the above code by taking advantage of the fact that H is a symmetric matrix.
Note 3: It is not necessary to write code to compute the quantities H(k,ell) because you can easily write out the values of the integrals of powers of x by hand. I prefer that you use the approach described above, though, to help you understand it.
4. Solve for the coefficients with
c=H\b;

Do not be surprised if Matlab warns you that H is poorly conditioned for larger values of n.
5. Verify your code is correct by computing the best 3-term approximation by monomials to a quadratic polynomial of your choice. The result should be the coefficient vector for the polynomial itself.
6. Write a script m-file named test_mon.m containing code similar to the following
func=@partly_quadratic;
c=coef_mon(func,n);

xval=linspace(-1,1,10000);
yval=polyval(c,xval);
yexact=func(xval);
plot(xval,yval,xval,yexact)

% relative Euclidean norm is approximating
% the relative integral least-squares (L2 norm)
% using an approximate trapezoid rule
relativeError=norm(yexact-yval)/norm(yexact)

and use it to evaluate the approximation for n=1 and n=5. Look at the plot and estimate by eye if the area between the exact and approximate curves is divided equally between "above" and "below." Further, the error for n=5 should be smaller than for n=1, and the plot should look much better, but still far from perfect.
7. You do not need to send me copies of the plots, but fill in the following table using the Runge example function. When you report errors, please use at least three significant digits. One easy way to get good precision is to use format short e. You should find that the error gets smaller for early values of n and then deteriorates. At what value of n does the smallest relative error occur? (You may get warnings that the matrix H is almost singular.)
     Runge
n    relative error
1    _______________
2    _______________
3    _______________
4    _______________
5    _______________
6    _______________
10    _______________
20    _______________
30    _______________
40    _______________

8. You should notice that the errors in the Runge case for n=1 and n=2 are the same, as are the errors for n=3 and n=4, as well as n=5 and n=6. Explain why this should occur.
9. Fill in the following table for the partly quadratic function.
     partly_quadratic
n    relative error
1    _______________
2    _______________
3    _______________
4    _______________
5    _______________
6    _______________
10    _______________
20    _______________
30    _______________
40    _______________

10. Fill in the following table for the sawshape9 function.
       sawshape9
n    relative error
1    _______________
2    _______________
3    _______________
4    _______________
5    _______________
6    _______________
10    _______________
20    _______________
30    _______________
40    _______________


You should notice that much smaller errors are associated with the smooth Runge example function than with the non-differentiable partly quadratic function and with the discontinuous sawshape9 function.

You should see that the errors do not seem to be decreasing much for the final values of n, and wonder why the method becomes poor as n gets large. It may not be obvious, but the matrices in (5) and (6) are related to the Hilbert matrix and are extremely difficult to invert. The reason inversion is difficult is that the monomials all start to look the same as n gets larger; that is, they become almost parallel in the L² sense. Even if the integration could be performed without error, you would observe roundoff errors in evaluating the resulting high-order polynomial.

One way to make the approximation problem easier might be to pick a better set of functions than monomials. The following section discusses a good alternative choice of polynomials. These polynomials allow much larger values of n.

Legendre Polynomials

The Legendre polynomials form an L²-orthogonal set of polynomials. You will see below why orthogonal polynomials make particularly good choices for approximation. In this section, we are going to write m-files to generate the Legendre polynomials, and we are going to confirm that they form an orthogonal set in L²([-1,1]). Throughout this section, we will be representing polynomials as vectors of coefficients, in the usual way in Matlab.

The Legendre polynomials are a basis for the set of all polynomials, just as the usual monomial powers of x are. They are appropriate for use on the interval [-1,1] because they are orthogonal when considered as members of L²([-1,1]). Orthogonal polynomials are discussed by Quarteroni, Sacco, and Saleri in Chapter 10, with Legendre polynomials discussed in Section 10.1.2. The first few Legendre polynomials are:

    P_0(x) = 1
    P_1(x) = x
    P_2(x) = (3x² - 1)/2
    P_3(x) = (5x³ - 3x)/2          (7)

The value at x of any Legendre polynomial P_k can be determined using the following recursion:

    P_0(x) = 1,    P_1(x) = x,    and,

    P_k(x) = ( (2k-1) x P_{k-1}(x) - (k-1) P_{k-2}(x) ) / k.

The following recursive Matlab function computes the values of the k-th Legendre polynomial.

function yval = recursive_legendre ( k , xval )
% yval = recursive_legendre ( k , xval )
% yval = values of the k-th Legendre polynomial
% at values xval

if k<0
error('recursive_legendre: k must be nonnegative.');
elseif k==0    % WARNING: no space between else and if!
yval = ones(size(xval));
elseif k==1    % WARNING: no space between else and if!
yval = xval;
else
yval = ((2*k-1)*xval.*recursive_legendre(k-1,xval) ...
- (k-1)*recursive_legendre(k-2,xval) )/k;
end


Unfortunately, this recursive function is too slow to be used in this lab. The alternative to recursive calculation of Legendre polynomials is one that uses loops. It is a general fact that any recursive algorithm can be implemented using a loop. The code for the loop is typically more complicated than the recursive formulation. In the following exercise, you will write an algorithm using loops for Legendre polynomials.
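To illustrate the pattern before you apply it to Legendre polynomials, here is a hypothetical example, fib_loop, that converts the two-term Fibonacci recursion F(k) = F(k-1) + F(k-2) into a loop using the same rolling-variable strategy you will use in the next exercise; only the two most recent values are kept:

```matlab
function f = fib_loop(n)
% f = fib_loop(n) returns the n-th Fibonacci number using a loop
if n==0
  f = 0;
elseif n==1
  f = 1;
else
  fkm2 = 0;               % F(k-2), starting from F(0)
  fkm1 = 1;               % F(k-1), starting from F(1)
  for k=2:n
    f = fkm1 + fkm2;      % the recursion F(k) = F(k-1) + F(k-2)
    fkm2 = fkm1;          % roll the values forward for the next k
    fkm1 = f;
  end
end
```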

Exercise 2: Write a function m-file legen.m to find the values of the Legendre polynomial P_n using a loop. The strategy will be to first compute the values of P_0 and P_1 from their formulæ, then compute the values of P_k for larger subscripts by building up from lower subscripts, stopping at k = n. You should note that if k is larger than 2, you only need to retain the values of P_{k-1} and P_{k-2} in order to compute the values of P_k.
1. Use the signature
function yval = legen ( n , xval)
% yval = legen ( n , xval)

% your name and the date

2. Use if tests to define the cases n<0, n==0 and n==1, and use the formulæ for these cases to compute the vector of values yval.
3. When n is larger than 1, compute the vector of values of P_0 and call it ykm1 (ykm1 for "y sub k minus 1"), and compute the vector of values of P_1 and call it yk.
4. Write a loop for k=2:n in which you first put the value of ykm1 into ykm2 ("y sub k minus 2") and then the value of yk into ykm1. You do this because you are changing the value of k to be one larger. Then compute the vector of values of P_k, calling it yk, using the values ykm1 and ykm2. This line will be similar to the corresponding line in recursive_legendre.
5. When the loop is complete, k has the value n. Set yval=yk;
6. Test and verify your legen function for n=3 with the following code.
xval=linspace(0,1,10);
norm( legen(3,xval)-(5*xval.^3-3*xval)/2 )

The difference should be of roundoff size.
7. Your legen function and recursive_legendre should agree. Test this agreement for n=10 with the following code.
xval=linspace(0,1,20);
norm( legen(10,xval) - recursive_legendre(10,xval) )

The difference should be of roundoff size.

Orthogonality and Integration

The Legendre polynomials form a basis for the linear space of polynomials. One good characteristic of any set of basis vectors is to be orthogonal. For functions, we use the standard L² inner product, and say that two functions f and g are orthogonal if their inner product

    (f, g) = ∫_{-1}^{1} f(x) g(x) dx

is equal to zero. In Matlab, you could use integral or quadgk via the abbreviation ntgr8 to compute this quantity in the following way:
q=ntgr8(@(x) ffunc(x).*gfunc(x) );

where ffunc computes the function f and gfunc computes g.
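For example, to confirm numerically that P_1(x) = x and P_2(x) = (3x²-1)/2 are orthogonal, and to compute the squared norm of P_2, you could write:

```matlab
q12 = ntgr8(@(x) x .* (3*x.^2-1)/2 )      % essentially zero: P_1 and P_2 are orthogonal
q22 = ntgr8(@(x) ((3*x.^2-1)/2).^2 )      % 2/5 = 0.4, the squared L2 norm of P_2
```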

Legendre polynomial approximation

Legendre polynomial approximation in L²([-1,1]) follows the same recipe as monomial approximation:

1. Compute the matrix H with entries H_{kℓ} = ∫_{-1}^{1} P_{k-1}(x) P_{ℓ-1}(x) dx. This matrix is diagonal (as opposed to the Hilbert matrix in the monomial case), with diagonal entries H_{kk} = 2/(2k-1), so integration is not necessary!
2. Compute the right side values b_k = ∫_{-1}^{1} f(x) P_{k-1}(x) dx.
3. Solve H d = b using the formula d_k = ( (2k-1)/2 ) b_k.
4. The approximation can be evaluated as

    f(x) ≈ Σ_{k=1}^{n} d_k P_{k-1}(x).          (8)

The coefficients d are not the same as the monomial coefficients computed earlier, and Equation (8) must be used rather than polyval to evaluate the resulting approximations.

Exercise 3:
1. Write a function m-file named coef_legen.m with signature
function d=coef_legen(func,n)
% d=coef_legen(func,n)

% your name and the date

to compute the coefficients of the approximation as d_k = ( (2k-1)/2 ) ∫_{-1}^{1} f(x) P_{k-1}(x) dx. You should use the ntgr8 abbreviation function in the same way as you used it in Exercise 1. Note: the factor (2k-1)/2 comes from the inverse of the diagonal matrix H.
2. Verify that coef_legen is correct by computing the best Legendre approximation to one of the Legendre polynomials P_k itself, with n > k+1. (Recall that n is the number of terms, not the degree of the polynomial.) You should get d_{k+1} = 1 with all other entries zero; the values you get for d are the coefficients in Equation (8), not the coefficients of a polynomial in powers of x.
3. Write a function m-file called eval_legen.m to be used to evaluate Legendre polynomials. It should evaluate Equation (8) and have the signature
function yval=eval_legen(d,xval)
% yval=eval_legen(d,xval)

% your name and the date

4. Verify that eval_legen is correct by choosing d as the coefficients you computed above and comparing the results of eval_legen and legen at the five values [0,1,2,3,4].
5. Write an m-file called test_legen.m, similar to the test_mon.m file you wrote above. It should use eval_legen and produce the relative error of the approximation. It is instructive if you plot the approximation as well, but you do not need to send me the plots.
6. Place the Matlab command tic; at the beginning of the script and the Matlab command toc; at the end. This pair of commands will measure the elapsed time taken and print it.
7. Fill in the following table for the Runge example function.
     Runge
n    relative error
1    _______________
2    _______________
3    _______________
4    _______________
5    _______________
6    _______________
10    _______________
20    _______________
30    _______________
40    _______________
50    _______________

8. Fill in the following table for the partly quadratic function.
              partly_quadratic
n    relative error      elapsed time
5    _______________
10    _______________
20    _______________
40    _______________
80    _______________
160    _______________    _______________
320    _______________    _______________
640    _______________    _______________

9. Fill in the following table for the sawshape9 function.
                 sawshape9
n    relative error      elapsed time
5    _______________
10    _______________
20    _______________
40    _______________
80    _______________
160    _______________    _______________
320    _______________    _______________
640    _______________    _______________

10. Based on the above data, roughly estimate the value of p where elapsed time is proportional to n^p. Is p = 1? 2? 3? 4? 5?

You should find the same values as for approximation by monomials for small n, and you can accurately compute with larger values of n using Legendre polynomials than using monomials. However, using large values of n can result in computing times that grow more rapidly than expected as n increases, because the integral function must compensate for roundoff error arising from rapid oscillations. In fact, the time grows more rapidly than n^p for the value of p you just estimated.

Fourier series

There is another set of functions that is orthonormal in the L² sense. This is the set of trigonometric functions

    1/√2,  cos(kπx),  sin(kπx),    k = 1, 2, ...,

and they can be used for approximating functions. We have seen trigonometric polynomials before in the context of interpolation, using the complex exponentials e^{ikπx}. Using complex exponentials is equivalent to using the sines and cosines, but the trigonometric functions above are orthonormal on [-1,1] while the complex exponentials are not.

The sum of the first 2n+1 terms of the Fourier series for a function f is given as

    f(x) ≈ z/√2 + Σ_{k=1}^{n} ( c_k cos(kπx) + s_k sin(kπx) ).          (9)

As usual, the coefficients can be found by multiplying both sides by 1/√2, cos(kπx), or sin(kπx) and integrating. Orthonormality leads to the expressions

    z = ∫_{-1}^{1} f(x)/√2 dx,
    c_k = ∫_{-1}^{1} f(x) cos(kπx) dx,          (10)
    s_k = ∫_{-1}^{1} f(x) sin(kπx) dx

(integrals of products of two different basis functions are zero).
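You can confirm the orthonormality numerically with ntgr8, for example:

```matlab
ntgr8(@(x) cos(pi*x).^2)              % equals 1: cos(pi*x) has unit norm
ntgr8(@(x) sin(pi*x).*cos(2*pi*x))    % equals 0: different basis functions are orthogonal
ntgr8(@(x) (1/2)*ones(size(x)))       % equals 1: the constant 1/sqrt(2) has unit norm
```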

Exercise 4:
1. Write a function m-file named coef_fourier.m that is similar to coef_legen and has signature
function [z,s,c]=coef_fourier(func,n)
% [z,s,c]=coef_fourier(func,n)

% your name and the date

to compute the first 2n+1 coefficients of the Fourier series using Equation (10).
2. Test your coefficient function by using f(x) = 1/√2, f(x) = sin(πx), and f(x) = cos(2πx), with n = 3. Of course, you should get z = 1 and all others zero in the first case, s(1) = 1 and all others zero in the second case, and c(2) = 1 with all others zero in the third case.
Warning: The integral function requires that its integrand be a function that returns a vector when its argument (x) is a vector. Be sure that you use such a function in the constant case f(x) = 1/√2! You can check your function by computing ntgr8(@(x) f(x).^2) and checking that it equals 1.
3. Write a function m-file called eval_fourier.m to evaluate Equation (9) and have the signature
function yval=eval_fourier(z,s,c,xval)
% yval=eval_fourier(z,s,c,xval)

% your name and the date

4. Test eval_fourier.m using the same three functions you used above: f(x) = 1/√2, f(x) = sin(πx), and f(x) = cos(2πx). In each case, use eval_fourier.m with the appropriate choices of coefficients z, s and c, and compare the approximate values at a selection of x values against the true values. Describe the values you chose and the results you obtained.
5. Write an m-file called test_fourier.m, similar to the test_mon.m and test_legen.m file you wrote above. It should use eval_fourier and produce the relative error of the approximation. It is instructive if you plot the approximation as well, but you do not need to send me the plots.
6. Fill in the following table for the Runge example function.
     Runge
n    relative error      elapsed time
1    _______________
2    _______________
3    _______________
4    _______________
5    _______________
6    _______________
10    _______________
50    _______________
100    _______________    ______________
200    _______________    ______________
400    _______________    ______________
800    _______________    ______________

7. Fill in the following table for the partly quadratic function.
     partly_quadratic
n    relative error      elapsed time
1    _______________
2    _______________
3    _______________
4    _______________
5    _______________
6    _______________
10    _______________
50    _______________
100    _______________    ______________
200    _______________    ______________
400    _______________    ______________
800    _______________    ______________

8. When you used trigonometric polynomial interpolation in Lab 6, you looked at the error for a sawshape function and saw the Gibbs phenomenon, which kept the error from going to zero. You have seen good performance of Fourier approximation on differentiable and continuous functions above. A discontinuous function exhibits the Gibbs phenomenon, but when convergence is measured using an integral norm it does not prevent convergence (although it slows it down). Fill in the following table for the sawshape9 function.
       sawshape9
n    relative error      elapsed time
1    _______________
2    _______________
3    _______________
4    _______________
5    _______________
6    _______________
10    _______________
50    _______________
100    _______________    ______________
200    _______________    ______________
400    _______________    ______________
800    _______________    ______________

9. Based on the above data, roughly estimate the value of p where elapsed time is proportional to n^p. Is p = 1? 2? 3?
You should be convinced that these series do not converge very rapidly, with execution times becoming much too large to achieve high accuracy. This increase in execution time is due to the adaptive quadrature used by integral requiring progressively more points. It turns out that increasing n beyond about 2000 results in failure of integral to achieve the desired accuracy. Thus, the sawshape9 and partly quadratic functions cannot be integrated to the accuracy that the Runge example function can achieve.

Piecewise constant approximation

We have learned that approximation is best done using matrices that are easy to invert accurately, like diagonal matrices. This is the reason for using sets of orthogonal basis functions. We would also like to be able to perform the right side integrals easily. A large part of the reason that the orders of approximation in the exercises above have been restricted is that the integrals are difficult to perform accurately because the integrands "wiggle" a lot, a major source of inaccuracy in the approximation. Using the integral function hides the inaccuracy, but you pay for it because the integrations require substantial time for higher values of n, as you may have noticed.

In this section, we will look at approximation by piecewise constant functions. Approximation by piecewise linears or higher are also useful, but all the important steps are covered with piecewise constants. Furthermore, piecewise constants are easy to extend to higher dimensions.

Suppose that a number Npc is given and that the interval [-1,1] is divided into Npc equal subintervals with endpoints x_j = -1 + 2(j-1)/Npc, for j = 1, ..., Npc+1. For j = 1, ..., Npc, a function h_j can be defined as

    h_j(x) = { 1   for x_j <= x < x_{j+1}
             { 0   otherwise.

These functions clearly satisfy

    ∫_{-1}^{1} h_j(x) h_k(x) dx = { 2/Npc   for j = k
                                  { 0       for j ≠ k.          (11)

This orthogonality immediately implies linear independence. In addition, any function in L²([-1,1]) can be approximated as a sum of them (this is a deep theorem). As it turns out, these theoretical facts are not compromised by numerical difficulties, and for reasonable values of Npc these functions can be used for numerical approximation.

If a vector of coefficients a_j can be found to represent the piecewise constant approximation to a function f, then the approximation can be evaluated as

    f(x) ≈ Σ_{j=1}^{Npc} a_j h_j(x) = a_j,          (12)

where j is the index satisfying x_j <= x < x_{j+1}.
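To make Equation (12) concrete: because of the orthogonality relation (11), whose diagonal entries are 2/Npc, the coefficient a_j turns out to be (Npc/2) times the integral of f over the j-th subinterval, which is just the average value of f there. A hypothetical check with f(x) = x on the subinterval [0, 0.2] (one of the subintervals when Npc = 10):

```matlab
Npc = 10;
aj = (Npc/2) * integral(@(x) x, 0, 0.2)   % returns 0.1000, the midpoint of [0, 0.2]
```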

In the following exercise, we will follow the same basic recipe as before to compute the coefficients a_j and the piecewise constant approximation to f.

Exercise 5: In this exercise you will be working with these piecewise constant (pc) functions. You may assume that Npc is even, so that x = 0 is one of the points x_j; then x < 0 on the first Npc/2 subintervals and x > 0 on the last Npc/2 subintervals.
1. Write a function m-file named coef_pc.m with signature
function a=coef_pc(func,Npc)
% a=coef_pc(func,Npc)

% your name and the date

to compute the coefficients of the approximation as

    a_j = (Npc/2) ∫_{x_j}^{x_{j+1}} f(x) dx.

Use integral (or, if you are using an older version of Matlab, quadgk), not ntgr8, to compute these integrals, because the interval of integration is not [-1,1]. To write this function, you will need to use linspace to generate the points x_j. Be careful not to confuse the number of points with the number of intervals!
2. Test your coef_pc on the function f(x) = 1 for all values of x. In Matlab, this can be done with y=ones(size(x)). Use Npc=10. Of course, all the coefficients a_j should equal 1.
3. Test coef_pc with Npc=10 on the function f(x) = x. You should get the midpoints of the subintervals,

    a = [-0.9, -0.7, -0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 0.7, 0.9].
4. We have already used a function called bracket.m that determines the index j for which x_j <= x < x_{j+1}. You may use the one you downloaded earlier or download bracket.m again from the web site. Use bracket to write a function m-file called eval_pc.m that evaluates the piecewise constant approximation using Equation (12) and has the signature
function yval=eval_pc(a,xval)
% yval=eval_pc(a,xval)

% your name and the date

As in coef_pc, you will need to use linspace to generate the points x_j, and bracket to find the values of j corresponding to the values of xval.
5. Generate the coefficients a using the function f(x) = x and Npc=10 that you used above. Test eval_pc using
xval=[-0.95,-0.65,-0.45,-0.25,-0.05,0.15,0.35,0.55,0.75,0.95]

6. Write an m-file called test_pc.m similar to test_mon.m and test_legen.m above. You should use vector (componentwise) statements whenever possible or the calculations might take a long time. test_pc.m should:
1. Confirm that Npc is an even number (and call error if not),
2. Evaluate the coefficients (a) of the approximation using coef_pc.m,
3. Use eval_pc.m to evaluate the approximation and then compare the approximation against the exact solution. Because we will be using large values of Npc, choose at least 20000 test points.
4. Use tic and toc to measure the time taken by computing the coefficients, computing the approximate and exact solutions, and computing the error.
It will be valuable to plot the approximation because it will help you debug your work and it will illustrate the process.

Look carefully and critically at the plot for the Runge function with Npc=8. You should be able to justify to yourself visually that no other piecewise constant function would produce a better approximation. Send me the plot for the Runge function with Npc=8.

7. Fill in the following table for the Runge example function:
           Runge example
Npc    relative error      elapsed time
4    _______________
8    _______________
16    _______________
64    _______________
256    _______________
1024    _______________    ______________
4096    _______________    ______________
16384    _______________    ______________

8. Fill in the following table for the partly quadratic function:
     partly quadratic
Npc    relative error      elapsed time
4    _______________
8    _______________
16    _______________
64    _______________
256    _______________
1024    _______________    ______________
4096    _______________    ______________
16384    _______________    ______________

9. Fill in the following table for the sawshape9 function:
           sawshape9
Npc    relative error      elapsed time
4    _______________
8    _______________
16    _______________
64    _______________
256    _______________
1024    _______________    ______________
4096    _______________    ______________
16384    _______________    ______________

10. Based on the above data, roughly estimate the integer p where relative error is proportional to Npc^(-p).
11. Based on the above data, roughly estimate the value of q where elapsed time is proportional to Npc^q.
This approximation may take a while to compute, but it does not deteriorate as Npc gets large! In fact, you should observe linear convergence. Further, the time required does not grow very quickly. For any of these three functions, higher accuracy can be achieved than with the previous methods.

Remark: As you might imagine, approximation using piecewise linear functions will converge more rapidly than using piecewise constants. There are alternative approaches to using piecewise linears: piecewise linear functions on each interval with jumps at interval endpoints, as the piecewise constant functions have; and piecewise linear functions that are continuous throughout the whole interval. The first retains orthogonality and the diagonal form of the coefficient matrix H. The second sacrifices the diagonal form for a banded form that is almost as easy to solve, as you may see in the next exercise. Continuity, however, can be worth the sacrifice, depending on the application. Even higher-order piecewise polynomial approximation is possible, if the application can benefit.

Remark: In Exercises 3, 4, and 5 you were asked to estimate the growth of the running time with n. You found in Exercises 3 and 4 that the running time increased more rapidly than linearly ("superlinearly") as n increases, but in Exercise 5 it increased linearly. The distinction between superlinear and linear is important when choosing an algorithm to use. When running time increases too fast, algorithms can become too time-consuming to be useful. Since it is clear that approximation algorithms must scale at least linearly in n, Exercise 5 shows that piecewise constant approximation can be an attractive choice. In the extra credit problem below, you will find a similar result for piecewise linear approximation.

Extra credit: Piecewise linear approximation (8 points)

Piecewise linear approximations improve the rate of convergence over piecewise constant approximations, at the cost of increased work. In addition, piecewise linear approximations are commonly used in finite element approximations to differential equations. In this extra credit exercise, you will see how the same approach you saw above can be extended to piecewise linear approximations.

For this presentation, you again break the interval [-1,1] into equal subintervals. Denote the number of intervals by Npl, although it plays the same role as Npc above. Thus, there are Npl+1 points x_j defined over the interval according to the Matlab function x=linspace(-1,1,Npl+1). For each j = 1, ..., Npl+1, define a set of "hat" functions φ_j as

    φ_j(x) = { (x - x_{j-1}) / (x_j - x_{j-1})   for x_{j-1} <= x < x_j
             { (x_{j+1} - x) / (x_{j+1} - x_j)   for x_j <= x < x_{j+1}
             { 0                                 otherwise

(with the first or last piece omitted for j = 1 and j = Npl+1), so that φ_j is a continuous piecewise linear function that is 1 at x_j and zero at all the other points x_k for k ≠ j. It is possible to show that these functions are linearly independent when considered as members of the Hilbert space L²([-1,1]). Further, an equation analogous to (11) is

    ∫_{-1}^{1} φ_j(x) φ_k(x) dx = 0    for |j - k| > 1.          (13)

There are Npl+1 functions φ_j!

There is an equation analogous to (4), (8), (9), and (12):

    f(x) ≈ Σ_{j=1}^{Npl+1} a_j φ_j(x).          (14)

Exercise 6:
1. To see why the functions φ_j are called "hat" functions, plot one of them for a small value of Npl. If you do this using Matlab, be sure that the set of points you use for plotting includes the points x_j, or your plot will not look right. You might find that forcing space around the plot makes it look more like a hat. You could use axis([-1.1, 1.1, -0.1, 1.1]).
2. Write a function coef_plin to compute the coefficients a_j in (14). The system of equations you will need to generate and solve is analogous to (6) and can be constructed by replacing f(x) with the sum in (14), multiplying by φ_k, and integrating. Do not forget to construct a matrix analogous to H. Devise a simple test meant for debugging and test coef_plin.
Hint: The functions φ_j have limited support, especially for large Npl. Adjust the limits of integration to reflect the support in order to save time and, it turns out, improve accuracy. You will need to use the integral or quadgk commands directly instead of through ntgr8, since ntgr8 assumes an integration interval of [-1,1].
3. Write a function eval_plin. Devise a test (I suggest one using a simple function such as f(x) = 1 or f(x) = x) and test eval_plin.
4. Write a function test_plin, including timing, and apply it to the three functions we have been using: runge, partly_quadratic, and sawshape9. Use values of Npl=[4,8,16,64,256,1024]. In each case, estimate the rate of convergence and the rate of increase of elapsed time as Npl varies.
Note: You should observe improved convergence rate for the two continuous functions and a poorer rate for the discontinuous sawshape9.

Back to MATH2070 page.

Mike Sussman 2016-10-30