MATH2071: LAB #4: Factorizations

Introduction
Orthogonal Matrices
A Set of Questions
The Gram Schmidt Method
Gram-Schmidt Factorization
Householder Matrices
Householder Factorization
The QR Method for Linear Systems
The singular value decomposition
Exercises 1 through 10


Introduction

We have seen one factorization, the PLU factorization, that can be used to solve a linear system provided that the system is square, nonsingular, and not too badly conditioned. However, if we want to handle problems with a bad condition number, or that are singular, or even rectangular, we are going to need a different approach. In this lab, we will look at two versions of the QR factorization:

$\displaystyle A = QR$

where $ Q$ is an ``orthogonal'' matrix, and $ R$ is upper triangular, and also the ``singular value decomposition'' (SVD) that factors a matrix into the form

$\displaystyle A=USV^T.$

We will see that the QR factorization can be used to solve the same problems that the PLU factorization handles, but can be extended to do several other tasks for which the PLU factorization is useless. In situations where we are concerned about controlling roundoff error, the QR factorization can be more accurate than the PLU factorization, and is even a little easier to use when solving linear systems, since the inverse of $ Q$ is simply $ Q^{T}$.

We will also see that the SVD can be used to compress data, to identify the range and nullspace of a linear operator, and even to solve non-square systems.

You may find it convenient to print the pdf version of this lab rather than the web page itself. This lab will take three sessions.


Orthogonal Matrices

Definition: An ``orthogonal matrix'' is a real matrix whose inverse is equal to its transpose.

By convention, an orthogonal matrix is usually denoted by the symbol $ Q$. The definition of an orthogonal matrix immediately implies that

$\displaystyle QQ^{T}=Q^{T}Q=I.$

From the definition of an orthogonal matrix, and from the fact that the $ L^2$ vector norm of $ x$ can be defined by:

$\displaystyle \Vert{\bf x}\Vert_{2} = \sqrt{ {\bf x}^{T} {\bf x} }$

and the fact that

$\displaystyle (A{\bf x})^{T}={\bf x}^{T}A^{T}$

you should be able to deduce that, for orthogonal matrices,

$\displaystyle \Vert Q{\bf x}\Vert_{2} = \Vert{\bf x}\Vert_{2}$

If I multiply a two-dimensional vector $ {\bf x}$ by $ Q$ and its $ L^2$ norm doesn't change, then $ Q{\bf x}$ must lie on the circle whose radius is $ \Vert{\bf x}\Vert$. In other words, the only thing that has happened is that $ Q{\bf x}$ has been rotated around the origin by some angle, or reflected through the origin or about a diameter. That means that a two-dimensional orthogonal matrix represents a rotation or reflection. In N dimensions, orthogonal matrices are also often called rotations.
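
As a quick numerical illustration (a minimal sketch; the angle and test vector are arbitrary choices), you can check both properties in Matlab:

    theta = 0.7;                          % an arbitrary rotation angle
    Q = [ cos(theta) -sin(theta)
          sin(theta)  cos(theta) ];       % a two-dimensional rotation matrix
    x = [ 3; -4 ];                        % an arbitrary test vector
    norm( Q'*Q - eye(2) )                 % should be roundoff-sized
    norm( Q*x ) - norm( x )               % should be roundoff-sized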

When matrices are complex, the term ``unitary'' is an analog to ``orthogonal.'' A matrix is unitary if $ UU^H=I$, where the $ H$ superscript refers to the ``Hermitian'' or ``conjugate-transpose'' of the matrix. In Matlab, the prime operator (') implements the Hermitian and the dot-prime operator (.') implements the transpose. A real matrix that is unitary is orthogonal.
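
For example (a small check of the two operators on an arbitrary complex matrix):

    A = [ 1+2i  3
          4     5-1i ];
    A'      % conjugate-transpose (Hermitian)
    A.'     % plain transpose, without conjugation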


A Set of Questions

Suppose we have $ N$ vectors $ {\bf x}_i$, each of dimension $ M$. Here are some common and related questions:

    • Are the vectors linearly independent?
    • What is the dimension of the space they span?
    • What is the rank of the matrix whose columns are these vectors?
    • Can we construct an orthonormal basis for the space they span?

Since we actually plan to do these calculations numerically, we also need to be concerned about how we deal with numerical error. For instance, in exact arithmetic, it is enough to say a matrix is not singular. In numerical calculations, we would like to be able to say that a particular matrix, while not singular, is ``nearly singular''.


The Gram Schmidt Method

The ``Gram Schmidt method'' can be thought of as a process that analyzes a set of vectors $ {\cal X}$, producing an ``equivalent'' (and possibly smaller) set of vectors $ {\cal Q}$ that span the same space, have unit $ L^2$ norm, and are pairwise orthogonal. Note that if we can produce such a set of vectors, then we can easily answer many of the questions in our set of problems. In particular, the size of the set $ \cal Q$ tells us whether the set $ \cal X$ was linearly independent, and the dimension of the space spanned and the rank of a matrix constructed from the vectors. And of course, the vectors in $ \cal Q$ are the orthonormal basis we were seeking. So you should believe that being able to compute the vectors $ \cal Q$ is a valuable ability. In fact, the Gram-Schmidt method can help us with all the problems on our list.

We will be using the Gram-Schmidt process to factor a matrix, but the process itself pops up repeatedly whenever sets of vectors must be made orthogonal. You may see it, for example, in Krylov space methods for iteratively solving systems of equations and for eigenvalue problems.

In words, the Gram-Schmidt process goes like this:

  1. Start with no vectors in $ \cal Q$;
  2. Consider $ {\bf x}$, the next (possibly the first) vector in $ \cal X$;
  3. For every vector $ {\bf q}_i$ already in $ \cal Q$, compute the projection of $ {\bf x}$ onto $ {\bf q}_i$ (i.e., $ ({\bf q}_i\cdot{\bf x})\,{\bf q}_i$) and subtract it from $ {\bf x}$ to get a vector orthogonal to $ {\bf q}_i$;
  4. Compute the norm of (what's left of) $ {\bf x}$. If the norm is zero (or too small), discard $ {\bf x}$, since it contributes nothing new; otherwise, divide $ {\bf x}$ by its norm and move it from $ \cal X$ to $ \cal Q$.
  5. If there are more vectors in $ \cal X$, return to step 2.
  6. When you get here, $ \cal X$ is empty and you have an orthonormal set of vectors $ \cal Q$ spanning the same space as the original set $ \cal X$.
Here is a sketch of the Gram Schmidt process as an algorithm. Assume that $ n_x$ is the number of $ {\bf x}$ vectors:
    $ n_q$ = 0 % $ n_q$ will become the number of $ \bf q$ vectors
    for k = 1 to $ n_x$
        for $ \ell$ = 1 to $ n_q$         % if $ n_q=0$ the loop is skipped
             $ r_{\ell k}$ = $ {\bf q}_{\ell}\cdot{\bf x}_{k}$
             $ {\bf x}_{k}$ = $ {\bf x}_{k}$ - $ r_{\ell k}{\bf q}_{\ell }$
        end
        $ r_{kk}$ = $ \sqrt{ {\bf x}_{k}\cdot{\bf x}_{k} }$
        if $ r_{kk}$ > 0
            $ n_q$ = $ n_q$ + 1
             $ {\bf q}_{n_q}$ = $ {\bf x}_{k}/r_{kk}$
        end
    end

You should be able to match this algorithm to the previous verbal description. Can you see how the $ L^2$ norm of a vector is being computed?
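
For example, here is a small worked case with two vectors. Take $ {\bf x}_1=(1,1,0)^T$ and $ {\bf x}_2=(1,0,1)^T$. Then

$\displaystyle r_{11}=\sqrt{2},\quad {\bf q}_1=\frac{1}{\sqrt{2}}(1,1,0)^T,\quad r_{12}={\bf q}_1\cdot{\bf x}_2=\frac{1}{\sqrt{2}},$

$\displaystyle {\bf x}_2-r_{12}{\bf q}_1=\left(\tfrac12,-\tfrac12,1\right)^T,\quad r_{22}=\sqrt{\tfrac32},\quad {\bf q}_2=\frac{1}{\sqrt{6}}(1,-1,2)^T,$

and you can check that $ {\bf q}_1\cdot{\bf q}_2=0$ and that both vectors have unit $ L^2$ norm.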

Exercise 1:
  1. Implement the Gram-Schmidt process in an m-file called gram_schmidt.m. Your function should have the signature:
    function Q = gram_schmidt ( X )
    % Q = gram_schmidt ( X )
    % more comments
    
    % your name and the date
    

    Assume the $ \bf x$ vectors are stored as columns of the matrix X. Be sure you understand this data structure because it is a potential source of confusion. In the algorithm description above, the $ \bf x$ vectors are members of a set $ \cal X$, but here these vectors are stored as the columns of a matrix X. Similarly, the vectors in the set $ \cal Q$ are stored as the columns of a matrix Q. The first column of X corresponds to the vector $ {\bf x}_1$, so that the Matlab expression X(k,1) refers to the $ k^{\mbox{th}}$ component of the vector $ {\bf x}_1$, and the Matlab expression X(:,1) refers to the Matlab column vector corresponding to $ {\bf x}_1$. Similarly for the other columns. Include your code with your summary report.

  2. It is always best to test your code on simple examples for which you know the answers. Test your code using the following input:
      X = [ 1  1  1
            0  1  1
            0  0  1 ]
    
    You should be able to see that the correct result is the identity matrix. If you do not get the identity matrix, find your bug before continuing.
  3. Another simple test you can do in your head is the following:
      X = [ 1  1  1
            1  1  0
            1  0  0 ]
    
    The columns of Q should have $ L^2$ norm 1, and be pairwise orthogonal, and you should be able to confirm these facts ``by inspection.''
  4. Test your Gram-Schmidt code to compute a matrix Q using the following input:
      X = [  2  -1   0
            -1   2  -1  
             0  -1   2  
             0   0  -1 ]
    
    If your code is working correctly, you should compute approximately:
          [  0.8944   0.3586   0.1952
            -0.4472   0.7171   0.3904  
             0.0000  -0.5976   0.5855
             0.0000   0.0000  -0.6831 ]
    
    You should verify that the columns of Q have $ L^2$ norm 1, and are pairwise orthogonal. If you like programming problems, think of a way to do this check in a single line of Matlab code.
  5. Show that the matrix $ Q$ you just computed is not an orthogonal matrix, even though its columns form an orthonormal set. Are you surprised?


Gram-Schmidt Factorization

We need to take a closer look at the Gram-Schmidt process. Recall how the process of Gauss elimination could actually be regarded as a process of factorization. This insight enabled us to solve many other problems. In the same way, the Gram-Schmidt process is actually carrying out a different factorization that will give us the key to other problems. Just to keep our heads on straight, let me point out that we're about to stop thinking of $ X$ as a bunch of vectors, and instead regard it as a matrix. Since it is traditional for matrices to be called $ A$, that's what we'll call our set of vectors from now on.

Now, in the Gram-Schmidt algorithm, the numbers that we called $ r_{\ell k}$ and $ r_{kk}$, that we computed, used, and discarded, actually record important information. They can be regarded as the nonzero elements of an upper triangular matrix $ R$. The Gram-Schmidt process actually produces a factorization of the matrix $ A$ of the form:

$\displaystyle A = QR$

Here, the matrix $ Q$ has the same $ M\times N$ ``shape'' as $ A$, so it's only square if $ A$ is. The matrix $ R$ will be square ($ N\times N$), and upper triangular. Now that we're trying to produce a factorization of a matrix, we need to modify the Gram-Schmidt algorithm of the previous section. Every time we consider a vector $ {\bf x}_{k}$ at the beginning of a loop, we will now always produce a vector $ q_{k}$ at the end of the loop, instead of dropping some out. Hence, if $ r_{kk}$, the norm of vector $ {\bf x}_{k}$, is zero, we will simply set $ q_{k}$ to the zero vector.
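
In column form, this factorization says that the $ k^{\mbox{th}}$ column of $ A$ is a combination of the first $ k$ columns of $ Q$:

$\displaystyle {\bf a}_k = \sum_{\ell=1}^{k} r_{\ell k}\,{\bf q}_\ell,$

which is exactly the algorithm's update $ {\bf x}_k={\bf x}_k-r_{\ell k}{\bf q}_\ell$ read in reverse, and explains why $ R$ is upper triangular.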

Exercise 2:
  1. Make a copy of the m-file gram_schmidt.m and call it gs_factor.m. Modify it to compute the $ Q$ and $ R$ factors of a (possibly) rectangular matrix $ A$. It should have the signature
    function [ Q, R ] = gs_factor ( A )
    % [ Q, R ] = gs_factor ( A )
    % more comments
    
    % your name and the date
    
    Include a copy of your code with your summary.

    Recall: [Q,R]=gs_factor is the syntax that Matlab uses to return two matrices from the function. When calling the function, you use the same syntax:

    [Q,R]=gs_factor(...
    
    When you use this syntax, it is OK to leave out the comma between the Q and the R but leaving out that comma is a syntax error in the signature line of a function m-file.
  2. Compute the QR factorization of the following matrix.
      A = [ 1  1  1
            1  1  0
            1  0  0 ]
    
    You computed Q in an earlier exercise, and you should get the same matrix Q again. Check that $ A=QR$: if so, your code is correct.
  3. Compute the QR factorization of a $ 100\times100$ matrix of random numbers (A=rand(100,100)). (Hint: use norms to check the equalities.)
    • Is it true that $ Q^{T}Q=I$? (Hint: Does norm(Q'*Q-eye(100),'fro') equal 0?)
    • Is it true that $ QQ^{T}=I$?
    • Is $ Q$ orthogonal?
    • Is the matrix $ R$ upper triangular? (Hint: the Matlab functions tril or triu can help, so you don't have to look at all those numbers.)
    • Is it true that $ A=QR$?
  4. Compute the QR factorization of the following matrix.
      A = [ 1 2 3
            4 5 6
            7 8 9
            1 0 0 ]
    
    • Is it true that $ Q^{T}Q=I$?
    • Is it true that $ QQ^{T}=I$?
    • Is $ Q$ orthogonal?
    • Is the matrix $ R$ upper triangular?
    • Is it true that $ A=QR$?

Exercise 3: One step in the Gram-Schmidt process avoids dividing by zero by checking that $ r_{kk}>0$. This is, in fact, very poor practice! The reason is that $ r_{kk}$ might be of roundoff size. Find the matrices Q and R for the matrix
  A = [ 1 2 3+10*eps
        4 5   6
        7 8   9 ]
Are the columns of your Q matrix orthonormal? (Recall that eps is the smallest number you can add to 1.0 and change its value.)

The ``solution'' to this problem is more art than science, and is beyond the scope of this lab. One feasible way to fix the problem would be to replace zero with, say, eps*max(max(R)).


Householder Matrices

It turns out that it is possible to take a given vector $ \mathbf{d}$ and find a matrix $ H$ so that the product $ H\mathbf{d}$ is proportional to the vector $ \mathbf{e}_1=(1,0,\ldots,0)^T$. That is, it is possible to find a matrix $ H$ that ``zeros out'' all but the first entry of $ \mathbf{d}$. The matrix is given by

\begin{align*}
\mathbf{v} &= \frac{\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1}{\Vert\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1\Vert}\\
H &= I-2\mathbf{v}\mathbf{v}^T \qquad\qquad (1)
\end{align*}

Note that $ \mathbf{v}^T\mathbf{v}$ is a scalar but $ \mathbf{v}\mathbf{v}^T$ is a matrix. To see why Equation (1) is true, denote $ \mathbf{d}=(d_1,d_2,\ldots)^T$ and do the following calculation.
\begin{align*}
H\mathbf{d} &= \mathbf{d}-2\,\frac{(\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1)(\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1)^T}{(\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1)^T(\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1)}\,\mathbf{d}\\
&= \mathbf{d}-\frac{2}{\Vert\mathbf{d}\Vert^2-2d_1\Vert\mathbf{d}\Vert+\Vert\mathbf{d}\Vert^2}\,(\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1)\,(\Vert\mathbf{d}\Vert^2-\Vert\mathbf{d}\Vert d_1)\\
&= \mathbf{d}-\frac{2(\Vert\mathbf{d}\Vert^2-\Vert\mathbf{d}\Vert d_1)}{2\Vert\mathbf{d}\Vert^2-2d_1\Vert\mathbf{d}\Vert}\,(\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1)\\
&= \mathbf{d}-(\mathbf{d}-\Vert\mathbf{d}\Vert\mathbf{e}_1)\\
&= \Vert\mathbf{d}\Vert\mathbf{e}_1
\end{align*}

You can find further discussion in several places on the web. For example, a Planet Math (planetmath.org) encyclopedia article.
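
As a quick numerical check of Equation (1) (a minimal sketch; the vector d below is an arbitrary choice):

    d = [ 3; 4; 0; 12 ];                           % an arbitrary test vector, norm(d) = 13
    e1 = zeros(size(d));
    e1(1) = 1;
    v = ( d - norm(d)*e1 ) / norm( d - norm(d)*e1 );
    H = eye(length(d)) - 2*(v*v');
    H*d                                            % should be approximately [13; 0; 0; 0]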

There is a way to use this idea to take any column of a matrix and make those entries below the diagonal entry be zero. Repeating this sequentially for each column will result in an upper triangular matrix. A way to do this is the following algorithm.

Start with a column vector of length $ n$

$\displaystyle \mathbf{b}=
\left[\begin{array}{c}b_1\\ b_2\\ \vdots\\ b_{k-1}\\ \hline b_k\\ b_{k+1}\\ \vdots\\ b_n\end{array}\right],$

and denote part of $ \mathbf{b}$ below the line as $ \mathbf{d}$, a column vector of length $ n-k+1$.

$\displaystyle \mathbf{d}=
\left[\begin{array}{c}b_k\\ b_{k+1}\\ \vdots\\ b_n\end{array}\right].$

We are thinking of the column vector $ \mathbf{b}$ as the $ k^{\text{th}}$ column of some matrix, so $ \mathbf{d}$ is the part of the column consisting of the diagonal entry and all values below the diagonal. The objective is to construct a matrix $ \mathbf{H}$ with the property that

$\displaystyle \mathbf{H}\mathbf{b}=
\left[\begin{array}{c}b_1\\ b_2\\ \vdots\\ b_{k-1}\\ \text{nonzero}\\ 0\\ \vdots\\ 0\end{array}\right],$

where the $ k^{\text{th}}$ component of the product, $ (H b)_k$, is non-zero.

The Householder matrix $ \mathbf{H}$ will be constructed from a vector

$\displaystyle \mathbf{w}=
\left[\begin{array}{c}\mathbf{0}\\ \mathbf{v}\end{array}\right]$

where the zero block $ \mathbf{0}$ has $ k-1$ entries and the vector $ \mathbf{v}$ is the same size as $ \mathbf{d}$ (that is, it has $ n-k+1$ entries).

The algorithm for constructing $ \mathbf{v}$ and hence $ \mathbf{w}$ can be summarized as follows.

  1. Set $ \alpha=\pm\Vert\mathbf{d}\Vert$, with $ \operatorname{signum}(\alpha)=-\operatorname{signum}(d_1)$. (The sign is arbitrary and chosen to minimize roundoff errors.)
  2. Set $ v_1=\sqrt{\frac12\left[1-\frac{d_1}{\alpha}\right]}$
  3. Set $ p=-\alpha v_1$
  4. Set $ v_j=\frac{d_j}{2p}$ for $ j=2,3,\ldots$
  5. Set $ \mathbf{w}=\left[\begin{array}{c}\mathbf{0}\\ \mathbf{v}\end{array}\right]$.
  6. Set $ H=I-2\mathbf{w}\mathbf{w}^T$.
In the following exercise you will write a Matlab function to implement this algorithm.

Exercise 4:
  1. Start a function m-file named householder.m with signature and beginning lines
    function H = householder(b, k)
    % H = householder(b, k)
    % more comments
    
    % your name and the date
    
    n = length(b);
    d(:,1) = b(k:n);
    if d(1)>=0
      alpha = -norm(d);
    else
      alpha =  norm(d);
    end
    
  2. Add comments at the beginning, being sure to explain the use of the variable k.
  3. In the case that alpha is exactly zero, the algorithm will fail because of a division by zero. For the case that alpha is zero, return
    H = eye(n);
    
    otherwise, complete the above algorithm.
  4. Test your function on the vector b=[10;9;8;7;6;5;4;3;2;1]; with k=1 and again with k=4. Check that H should be orthogonal and H*b should have zeros in positions below k. (Hint: Orthogonality of H is equivalent to the vector $ \mathbf{w}$ being a unit vector. If H is not orthogonal, check $ \mathbf{w}$.)

This householder function can be used for the QR factorization of a matrix by proceeding through a series of partial factorizations $ A=Q_{k}R_{k}$, where $ Q_0$ is the identity matrix, and $ R_0$ is the matrix $ A$. When we begin the $ k^{\mbox{th}}$ step of factorization, our factor $ R_{k-1}$ is only upper triangular in columns 1 to $ k-1$. Our goal on the $ k$-th step is to find a better factor $ R_{k}$ that is upper triangular through column $ k$. If we can do this process $ n-1$ times, we're done. Suppose, then, that we've partially factored the matrix $ A$, up to column $ k-1$. In other words, we have a factorization

$\displaystyle A = Q_{k-1} R_{k-1}$

for which the matrix $ Q_{k-1}$ is orthogonal, but for which the matrix $ R_{k-1}$ is only upper triangular for the first $ k-1$ columns. To proceed from our partial factorization, we're going to consider taking a Householder matrix $ H$, and ``inserting'' it into the factorization as follows:
\begin{align*}
A &= Q_{k-1} R_{k-1}\\
&= Q_{k-1} H^{T} H R_{k-1}.
\end{align*}

(Inserting $ H^{T}H$ changes nothing because the Householder matrix $ H$ is both symmetric and orthogonal, so $ H^{T}H=I$.)

Then, if we define
$\displaystyle Q_{k} = Q_{k-1} H^{T}, \qquad R_{k} = H R_{k-1},$

it will again be the case that:

$\displaystyle A = Q_{k} R_{k}$

and we're guaranteed that $ Q_{k}$ is still orthogonal. Our construction of $ H$ guarantees that $ R_{k}$ is actually upper triangular all the way through column $ k$.

Exercise 5: In this exercise you will see how the Householder matrix can be used in the steps of an algorithm that passes through a matrix, one column at a time, and turns the bottom of each column into zeros.
  1. Define the matrix A to be the magic square of order 5, A=magic(5).
  2. Compute H1, the Householder matrix that knocks out the subdiagonal entries in column 1 of A, and then compute A1=H1*A. Are there any non-zero values below the diagonal in the first column?
  3. Now let's compute H2, the Householder matrix that knocks out the subdiagonal entries in column 2 of A1, and compute A2=H2*A1. This matrix should have subdiagonal zeros in column 2. You should be convinced that you can zero out any subdiagonal column you want. Since zeroing out one subdiagonal column doesn't affect the previous columns, you can proceed sequentially to zero out all subdiagonal columns.


Householder Factorization

For a rectangular $ M$ by $ N$ matrix $ A$, the Householder QR factorization has the form

$\displaystyle A = QR$

where the matrix $ Q$ is $ M\times M$ (hence square and truly orthogonal) and the matrix $ R$ is $ M\times N$, and upper triangular (or upper trapezoidal if you want to be more accurate.)

If the matrix $ A$ is not square, then this definition is different from the Gram-Schmidt factorization we discussed before. The obvious difference is the shape of the factors. Here, it's the $ Q$ matrix that is square. The other difference, which you'll have to take on faith, is that the Householder factorization is generally more accurate (smaller arithmetic errors), and easier to define compactly.



Householder QR Factorization Algorithm:
    Q = I;
    R = A;
    for k = 1:min(m,n)
        Construct the Householder matrix H for column k of the matrix R;
        Q = Q * H';
        R = H * R;
    end

Exercise 6:
  1. Write an m-file h_factor.m. It should have the form
    function [ Q, R ] = h_factor ( A )
    % [ Q, R ] = h_factor ( A )
    % more comments
    
    % your name and the date
    
    Use the routine householder.m in order to compute the H matrix that you need at each step.
  2. A simple test is the matrix
    A = [ 0 1 1
          1 0 1
          0 0 1 ]
    
    You should see that simply interchanging the first and second rows of A turns it into an upper triangular matrix, so you can see that Q is a permutation matrix (possibly with signs) and you can guess what the solution is. Be sure that Q*R is A.
  3. Test your code by computing the QR factorization of the Hilbert matrix of order 4. Compare your results with those of the Matlab qr function. Warning: Q remains an orthogonal matrix if one of its columns is multiplied by (-1). This is equivalent to multiplying by a matrix that is like the identity matrix except with one (-1) on the diagonal instead of all (+1). This matrix is clearly orthogonal and can be incorporated into the QR factorization.


The QR Method for Linear Systems

If we have computed the Householder QR factorization of a matrix without encountering any singularities, then it is easy to solve linear systems. We use the property of the $ Q$ factor that $ Q^{T}Q=I$:

\begin{align*}
A {\bf x} &= {\bf b}\\
Q R {\bf x} &= {\bf b}\\
Q^{T} Q R {\bf x} &= Q^{T} {\bf b}\\
R {\bf x} &= Q^{T} {\bf b} \qquad\qquad (2)
\end{align*}

so all we have to do is form the new right hand side $ Q^{T}{\bf b}$ and then solve the upper triangular system. And that's easy because it's the same as the u_solve step of our old PLU solution algorithm from the previous lab.
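
Before writing h_solve, you can see the idea at work using Matlab's built-in qr and backslash operator (a sketch for checking purposes only; the exercise asks you to use your own h_factor and u_solve):

    A = magic(5);
    xtrue = ones(5,1);
    b = A * xtrue;
    [Q, R] = qr(A);          % built-in Householder QR factorization
    y = Q' * b;              % form the new right hand side
    x = R \ y;               % solve the upper triangular system
    norm( x - xtrue )        % should be roundoff-sized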

Exercise 7: Make a copy of your file u_solve.m from the previous lab (upper-triangular solver), or get a copy of my version. Write a file called h_solve.m. It should have the signature
function x = h_solve ( Q, R, b )
% x = h_solve ( Q, R, b )
% more comments

% your name and the date
The matrix R is upper triangular, so you can use u_solve to solve (2) above. Assume that the QR factors come from the h_factor routine. Set up your code to compute $ Q^{T}{\bf b}$ and then solve the upper triangular system $ R x = Q^{T} {\bf b}$ using u_solve.

When you think your solver is working, test it out on a system as follows:

    n = 7
    A = magic ( n )
    x = [ 1 : n ]'
    b = A * x
    [ Q, R ] = h_factor ( A )
    x2 = h_solve ( Q, R, b )
    norm(x - x2) % should be close to zero.

Now you know another way to solve a square linear system. It turns out that you can also use the QR algorithm to solve non-square systems, but that task is better left to the singular value decomposition.


The singular value decomposition

Another very important matrix product decomposition is the singular value decomposition of a matrix into the product of three matrices. If $ A$ is an $ m\times n$ matrix, then there are three matrices $ U$, $ S$ and $ V$, with

$\displaystyle A=USV^T,$ (3)

where $ U$ is $ m\times m$ and unitary, $ V$ is $ n\times n$ and unitary, and $ S$ is $ m\times n$. Recall that unitary matrices are like orthogonal matrices except that they may have complex entries; a unitary matrix with all real entries is an orthogonal matrix. The matrix $ S$ is of the form

$\displaystyle S_{k\ell}=\begin{cases}
\sigma_k & \text{for } k=\ell\\
0 & \text{for } k\ne \ell
\end{cases}$

and $ \sigma_1\ge\sigma_2\ge\sigma_3\ge\dots\ge\sigma_r>0$, with $ \sigma_{r+1}=\sigma_{r+2}=\dots=0$. The number $ r$ is the rank of the matrix $ A$ and is necessarily no larger than $ \min(m,n)$. The ``singular values,'' $ \sigma_k$, are real and non-negative; the nonzero ones are the square roots of the nonzero eigenvalues of the Hermitian matrix $ A^HA$.

The singular value decomposition provides much useful information about the matrix $ A$. This information includes:

    • the rank of $ A$ (the number of nonzero singular values);
    • the dimension of the nullspace of $ A$;
    • an orthonormal basis for the range of $ A$ (columns of $ U$);
    • an orthonormal basis for the nullspace of $ A$ (columns of $ V$); and
    • a way to ``solve'' systems that are singular, nearly singular, or rectangular.

You will see these facts illustrated below.

Numerical methods for finding the singular value decomposition will be addressed in the next lab. One ``obvious'' algorithm involves finding the eigenvalues of $ A^HA$, but this is not really practical because of roundoff difficulties. Matlab includes a function called svd with signature [U S V]=svd(A) to compute the singular value decomposition and we will be using it in the following exercises. This function uses the Lapack subroutine dgesvd, so if you were to need it in a Fortran or C program, it would be available by linking against the Lapack library.

Supposing that $ r$ is the smallest integer so that $ \sigma_r=0$, then the SVD implies that the rank of the matrix is $ r-1$. Similarly, if the matrix $ A$ is $ n\times n$, then the dimension of the nullspace of $ A$ is $ n-r+1$. The first $ r-1$ columns of $ U$ are an orthonormal basis of the range of $ A$, and the last $ n-r+1$ columns of $ V$ are an orthonormal basis for the nullspace of $ A$. Of course, in the presence of roundoff some judgement must be made as to what constitutes a zero singular value, but the SVD is the best (if expensive) way to discover the rank and nullity of a matrix. This is especially true when there is a large gap between the ``smallest of the large singular values'' and the ``largest of the small singular values.''
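
For example (a minimal sketch; the tolerance below is one common choice, not the only one), the numerical rank can be estimated by counting the singular values above a tolerance:

    A = [ 1 2 3
          4 5 6
          7 8 9 ];                        % this matrix has rank 2
    s = svd(A);                           % vector of singular values
    tol = max(size(A)) * eps * max(s);    % one reasonable tolerance
    r = sum( s > tol )                    % numerical rank; should be 2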

The SVD can also be used to solve a matrix system. If the matrix is non-singular, then all its singular values are strictly positive, and the SVD can be used to solve the system as follows.

\begin{align*}
b &= Ax\\
b &= USV^Hx\\
U^Hb &= SV^Hx\\
S^+U^Hb &= V^Hx \qquad\qquad (4)\\
VS^+U^Hb &= x
\end{align*}

where $ S^+$ is the diagonal matrix whose nonzero entries are $ 1/\sigma_k$. It turns out that Equation (4) is not a good way to solve nonsingular systems, but it is an excellent way to ``solve'' systems that are singular or almost singular! To this end, the matrix $ S^+$ is the matrix with the same shape as $ S^T$ whose elements are given as

$\displaystyle S^+_{k\ell}=\begin{cases}
1/\sigma_k & \text{for } k=\ell \text{ and } \sigma_k>0\\
0 & \text{otherwise}
\end{cases}$

Further, if $ A$ is close to singular, a similar definition but with diagonal entries $ 1/\sigma_k$ only for $ \sigma_k>\epsilon$, for some small $ \epsilon$, can work very nicely. In these latter cases, the ``solution'' is the least-squares best fit solution, and the matrix $ A^+=VS^+U^H$ is called the Moore-Penrose pseudo-inverse of $ A$.
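
Here is a small sketch of Equation (4) for a square, nonsingular matrix (the matrix and right side are arbitrary choices). For such a matrix $ VS^+U^H$ is just the ordinary inverse, and Matlab's pinv function computes the same pseudo-inverse:

    A = [ 2 -1  0
         -1  2 -1
          0 -1  2 ];                   % a nonsingular test matrix
    b = [ 1; 2; 3 ];
    [U, S, V] = svd(A);
    Splus = diag( 1 ./ diag(S) );      % every singular value is positive here
    x = V * Splus * U' * b;
    norm( A*x - b )                    % should be roundoff-sized
    norm( x - pinv(A)*b )              % should be roundoff-sized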

Exercise 8: In this exercise you will use the Matlab svd function to solve for the best-fit linear function of several variables through a set of points. This is an example of ``solving'' a rectangular system.

Imagine that you have been given many ``samples'' of related data involving several variables and you wish to find a linear relationship among the variables that approximates the given data in the best-fit sense. This can be written as a rectangular system

\begin{displaymath}
\begin{array}{cccccccccc}
a_1 d_{11}&+&a_2d_{12}&+&a_3d_{13}&+&\cdots&+&a_nd_{1n}&=0\\
a_1 d_{21}&+&a_2d_{22}&+&a_3d_{23}&+&\cdots&+&a_nd_{2n}&=0\\
\vdots & &\vdots & &\vdots & &\cdots& &\vdots&
\end{array}\end{displaymath}

where $ d_{ij}$ represents the data and $ a_i$ the desired coefficients. The best-fit solution can be found using the SVD.

As we have done several times before, we will first generate a data set with known solution and then use svd to recover the known solution. Place Matlab code for the following steps into a script m-file called exer8.m

  1. Generate a data set consisting of twenty ``samples'' of each of four variables using the following Matlab code.
    N=20;
    d1=rand(N,1);
    d2=rand(N,1);
    d3=rand(N,1);
    d4=4*d1-3*d2+2*d3-1;
    
    It should be obvious that these vectors satisfy the equation

    $\displaystyle 4d_1-3d_2+2d_3-d_4=1,$

    but, if you were solving a real problem, you would not be aware of this equation, and it would be your objective to discover it, i.e., to find the coefficients in the equation.
  2. Introduce small ``errors'' into the data. The rand function produces numbers between 0 and 1.
    EPSILON=1.e-5;
    d1=d1.*(1+EPSILON*rand(N,1));
    d2=d2.*(1+EPSILON*rand(N,1));
    d3=d3.*(1+EPSILON*rand(N,1));
    d4=d4.*(1+EPSILON*rand(N,1));
    
  3. Regard the four vectors d1,d2,d3,d4 as data given to you, and construct the matrix consisting of the four column vectors, A=[d1,d2,d3,d4].
  4. Use the Matlab svd function
    [U S V]=svd(A);
    
    Please include the four non-zero values of S in your summary, but not the matrices U and V.
  5. Just to confirm that you have everything right, compute norm(A-U*S*V','fro'). This number should be roundoff-sized.
  6. Construct the matrix $ S^+$ (call it Splus) so that

    $\displaystyle S^+_{k\ell}=\begin{cases}
1/S(k,k) & \text{for } k=\ell \text{ and } S(k,k)>0\\
0 & \text{for } k\ne \ell \text{ or } S(k,k)=0
\end{cases}$

  7. We are seeking the coefficient vector $ x$ such that

    $\displaystyle x_1d_1+x_2d_2+x_3d_3+x_4d_4=1.$

    Find this vector by setting b=ones(N,1) and then using Equation (4) above. Include your solution with your summary. It should be close to the known solution x=[4;-3;2;-1].

Sometimes the data you are given turns out to be deficient because the variables supposed to be independent are actually related. This dependency will cause the coefficient matrix to be singular or nearly singular. Dealing with dependencies of this sort is one of the greatest difficulties in using least-squares fitting methods. In the following exercise you will construct a deficient set of data and see that the singular value decomposition can find the solution nonetheless.

Exercise 9:
  1. Copy your m-file exer8.m to exer9.m. Replace the line
    d3=rand(N,1);
    
    with the line
    d3=d1+d2;
    
    This makes the data deficient and introduces nonuniqueness into the solution for the coefficients.
  2. After using svd to find U, S, and V, print S(1,1), S(2,2), S(3,3), S(4,4). You should see that S(4,4) is substantially smaller than the others. Treat it as if it were zero and find the coefficient vector x as before. The solution you found is probably not x=[4;-3;2;-1].
  3. Look at V(:,4). This is the vector associated with the singular value that you set to zero. This vector should be associated with the extra relationship d1+d2-d3=0 that was introduced in the first step. What multiple of V(:,4) can be added to your solution to yield [4;-3;2;-1]?
Note that the singular value decomposition allows you to discover deficiencies in the data without knowing they are there (or even if there are any) in the first place. This idea can be the basis of a fail-safe method for computing least-squares coefficients.

In the following exercise you will see how the singular value decomposition can be used to ``compress'' a graphical figure by representing the figure as a matrix and then using the singular value decomposition to find the closest matrix of lower rank to the original. This approach can form the basis of efficient compression methods.
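
For an $ m\times n$ matrix, keeping only the first $ k$ singular values means storing the $ m\times k$ matrix $ U(:,1:k)$, the $ k$ diagonal entries of $ S$, and the $ n\times k$ matrix $ V(:,1:k)$, a total of $ mk+k+nk$ numbers instead of $ mn$.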

Exercise 10:
  1. This photo from NASA is one of the pictures the Mars rovers sent back. Download it.
  2. Matlab provides various image processing utilities. In this case, read the image in and convert it to a matrix of real (``double'') numbers using the following commands.
    mars=double(imread('mars.jpg'));
    
    The variable mars will be a $ 500\times500$ matrix of (floating-point) values between 0 and 255.
  3. Matlab has the notion of a ``colormap'' that determines how the values in the matrix will be colored. Use the command
    colormap(gray(256));
    
    to set a grayscale colormap. Now you can show the picture with the command
    image(mars)
    
    Please do not send me this plot.
  4. Use the command [U S V]=svd(mars); to perform the singular value decomposition. Do not include these matrices with your summary!
  5. Plot the singular values: plot(diag(S)). You should see that the final 300 singular values are essentially zero, and the singular values larger than 50 are still noticeable. Please send me this plot with your summary.
  6. Construct the three new matrices
    mars200=U(:,1:200)*S(1:200,1:200)*V(:,1:200)';
    mars100=U(:,1:100)*S(1:100,1:100)*V(:,1:100)';
    mars25=U(:,1:25)*S(1:25,1:25)*V(:,1:25)';
    
    These matrices are of lower rank than the mars matrix, and can be stored in a more efficient manner. (The mars matrix is $ 500\times500$ and requires 250,000 numbers to store it. In contrast, mars100 requires subsets of $ U$ and $ V$ that are each $ 500\times100$, plus the first 100 diagonal entries of $ S$, for a total of 100,100 numbers. This is better than a two to one savings in space.) Do not include these matrices with your summary!
  7. Plot the original matrix and the three new matrices as images using the following commands:
    figure(1);
    colormap(gray(256));
    image(mars);
    title('Original mars');
    figure(2);
    colormap(gray(256));
    image(mars25);
    title('mars with 25 singular values');
    figure(3);
    colormap(gray(256));
    image(mars100);
    title('mars with 100 singular values');
    figure(4);
    colormap(gray(256));
    image(mars200);
    title('mars with 200 singular values');
    
    You should see essentially no difference between the original and the one with 200 singular values, while you should see degradation of the image in the case of 25 singular values. Please only include the 25 singular value case with your summary.




Mike Sussman 2009-01-10