# CSL361 Programming assignment 2:

1. Develop and test C/C++ functions (based on LU decomposition using partial pivoting, and forward and backward substitution; column oriented gaxpy based, just to be different from what we did in class) to solve the system $AX = B$, where $A$, $X$ and $B$ are matrices of dimension $n \times n$, $n \times r$ and $n \times r$, respectively. Use the above function to compute $A^{-1}$. Also write a function to estimate the condition number of a matrix. In each case verify the results with standard matlab functions. Do not use any standard library.
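The core routines can be sketched as follows. Note that this sketch uses the familiar outer-product, row-oriented formulation; reorganizing the loops into the column-oriented, gaxpy-based form is exactly the variation the assignment asks for.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Sketch: LU decomposition with partial pivoting. A is overwritten by L
// (unit lower triangular, stored below the diagonal) and U (on and above it).
// piv[k] records which row was swapped into position k at step k.
// Returns false if a zero pivot is encountered (A numerically singular).
bool lu_decompose(std::vector<std::vector<double>>& A, std::vector<int>& piv) {
    int n = (int)A.size();
    piv.assign(n, 0);
    for (int k = 0; k < n; ++k) {
        int p = k;                                  // find the largest pivot
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
        piv[k] = p;
        std::swap(A[k], A[p]);
        if (A[k][k] == 0.0) return false;
        for (int i = k + 1; i < n; ++i) {
            A[i][k] /= A[k][k];                     // multiplier, stored as L
            for (int j = k + 1; j < n; ++j)
                A[i][j] -= A[i][k] * A[k][j];       // update trailing block
        }
    }
    return true;
}

// Solve A x = b using the factorization: apply the recorded row swaps to b,
// then forward substitution (L y = Pb) and backward substitution (U x = y).
// b is overwritten by the solution x.
void lu_solve(const std::vector<std::vector<double>>& A,
              const std::vector<int>& piv, std::vector<double>& b) {
    int n = (int)A.size();
    for (int k = 0; k < n; ++k) std::swap(b[k], b[piv[k]]);
    for (int i = 0; i < n; ++i)                     // forward, unit diagonal
        for (int j = 0; j < i; ++j) b[i] -= A[i][j] * b[j];
    for (int i = n - 1; i >= 0; --i) {              // backward
        for (int j = i + 1; j < n; ++j) b[i] -= A[i][j] * b[j];
        b[i] /= A[i][i];
    }
}
```

The multiple right hand side version follows by pushing each column of $B$ through `lu_solve`; taking $B = I$ yields $A^{-1}$.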
2. Test the procedure developed above on the symmetric diagonal matrix of order 10:

The inverse of this matrix is known to be

3. Extend the LU decomposition procedure developed above to deal with rectangular matrices ($m \times n$, $m \neq n$). Compute the space of solutions as a set of basis vectors. Indicate if there is no solution.

[Note: Consider first the lower triangular case when $m > n$, i.e.,

$$\begin{bmatrix} L_1 \\ L_2 \end{bmatrix} x = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix},$$

where $L_1 \in \mathbb{R}^{n \times n}$, $L_2 \in \mathbb{R}^{(m-n) \times n}$, $b_1 \in \mathbb{R}^{n}$ and $b_2 \in \mathbb{R}^{m-n}$. Assume that $L_1$ is lower triangular and nonsingular. If we apply forward substitution to $L_1 x = b_1$, then $x$ solves the system provided $L_2 x = b_2$. Otherwise, there is no solution and we have to apply least squares optimization.

Now, consider the case when $m < n$. In this case we can apply forward substitution to the square system $L(1{:}m, 1{:}m)\, x(1{:}m) = b$ and assign an arbitrary value to $x(m+1{:}n)$. You may output the basis.]
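The tall ($m > n$) case described in the note can be sketched directly: forward substitution on the square top block, then a consistency check on the remaining rows. Names and the tolerance are mine, not prescribed by the assignment.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch: L is m x n lower triangular with a nonsingular n x n top block L1.
// Solve L1 x = b(1:n) by forward substitution, then check whether the
// remaining m - n equations are satisfied. Returns true iff a solution
// exists (to tolerance tol); false means least squares is needed instead.
bool forward_subst_tall(const std::vector<std::vector<double>>& L,
                        const std::vector<double>& b,
                        std::vector<double>& x, double tol = 1e-10) {
    int m = (int)L.size(), n = (int)L[0].size();
    x.assign(n, 0.0);
    for (int i = 0; i < n; ++i) {           // L1 x = b1
        double s = b[i];
        for (int j = 0; j < i; ++j) s -= L[i][j] * x[j];
        x[i] = s / L[i][i];
    }
    for (int i = n; i < m; ++i) {           // consistency: does L2 x = b2?
        double s = b[i];
        for (int j = 0; j < n; ++j) s -= L[i][j] * x[j];
        if (std::fabs(s) > tol) return false;
    }
    return true;
}
```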

4. Test your procedure on randomly generated square and rectangular matrices (choose matrices of moderate size). Compute the null space, range space, rank and nullity. Verify with standard matlab functions. For square matrices compute the inverse (if full rank) and verify whether $AA^{-1} = I$. Explain any discrepancies that you may observe.
5. Investigate the difficulties in inverting the following matrix:

Estimate the condition number of the above matrix.
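For small test matrices the condition number estimator can be sanity-checked against the exact 1-norm value $\kappa_1(A) = \|A\|_1\,\|A^{-1}\|_1$. The sketch below assumes $A^{-1}$ is already available (fine for checking; a practical estimator would avoid forming the inverse):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// ||A||_1 is the maximum absolute column sum.
double norm1(const std::vector<std::vector<double>>& A) {
    double best = 0.0;
    for (size_t j = 0; j < A[0].size(); ++j) {
        double s = 0.0;
        for (size_t i = 0; i < A.size(); ++i) s += std::fabs(A[i][j]);
        if (s > best) best = s;
    }
    return best;
}

// Exact 1-norm condition number, given an explicit inverse.
double cond1(const std::vector<std::vector<double>>& A,
             const std::vector<std::vector<double>>& Ainv) {
    return norm1(A) * norm1(Ainv);
}
```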
6. Use the LU decomposition procedure developed above to solve the following matrix equations. In each case explain the difficulties and errors. Estimate the condition number in each case and verify with matlab:
1. The Hilbert matrix of order $n$ is defined by $h_{ij} = 1/(i+j-1)$ for $1 \le i, j \le n$. Define $b_i = \sum_{j=1}^{n} h_{ij}$. Then the solution to the system of equations $Hx = b$ is $x = (1, 1, \ldots, 1)^T$. Verify this. Select some values of $n$ and solve using the function developed above. Do one small case by hand to see what difficulties may arise.
2. For a moderate value of (say ) solve the system where and . Guess the correct solution by looking at the output and verify.
3. For a fixed value of from 2 to 4, let

and

the vector is the ideal solution (verify).
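The Hilbert test of part 1 can be set up as below; a sketch assuming the standard definition $h_{ij} = 1/(i+j-1)$, with $b$ taken as the row sums of $H$ so that the exact solution of $Hx = b$ is the all-ones vector.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build the order-n Hilbert matrix h_ij = 1/(i+j-1) (1-based indices).
std::vector<std::vector<double>> hilbert(int n) {
    std::vector<std::vector<double>> H(n, std::vector<double>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            H[i][j] = 1.0 / (i + j + 1);   // 0-based form of 1/(i+j-1)
    return H;
}

// b_i = sum_j h_ij, so H x = b has exact solution x = (1,...,1)^T.
std::vector<double> row_sums(const std::vector<std::vector<double>>& H) {
    std::vector<double> b(H.size(), 0.0);
    for (size_t i = 0; i < H.size(); ++i)
        for (double h : H[i]) b[i] += h;
    return b;
}
```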
7. Fit the function

to the data

| x | 0.24 | 0.65 | 0.95 | 1.24 | 1.73 | 2.01 | 2.23 | 2.52 | 2.77 | 2.99 |
|---|------|------|------|------|------|------|------|------|------|------|
| y | 0.23 | -0.26 | -1.1 | -0.45 | 0.27 | 0.1 | -0.29 | 0.24 | 0.56 | 1 |

by least squares (use your own LU decomposition routines) and plot the function. Evaluate the residual error and the associated probability. Corrupt the data with zero mean Gaussian noise with different values of $\sigma$ (ranging from 0.01 to 0.1) and repeat. Also show the re-fits after rejecting the outliers.
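One way to reduce the fit to the problem-1 routines is via the normal equations $(A^{\mathsf T}A)\,c = A^{\mathsf T}y$, where column $j$ of $A$ holds the $j$-th basis function evaluated at the data points. The sketch below only assembles the square system (function names and the choice of basis columns are mine, not prescribed by the assignment); the result is then handed to the LU solver.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Assemble the normal equations (A^T A) c = A^T y for least squares.
// A is m x n (m data points, n basis functions), y has length m.
void normal_equations(const std::vector<std::vector<double>>& A,
                      const std::vector<double>& y,
                      std::vector<std::vector<double>>& AtA,
                      std::vector<double>& Aty) {
    size_t m = A.size(), n = A[0].size();
    AtA.assign(n, std::vector<double>(n, 0.0));
    Aty.assign(n, 0.0);
    for (size_t j = 0; j < n; ++j) {
        for (size_t k = 0; k < n; ++k)       // (A^T A)_{jk} = sum_i a_ij a_ik
            for (size_t i = 0; i < m; ++i)
                AtA[j][k] += A[i][j] * A[i][k];
        for (size_t i = 0; i < m; ++i)       // (A^T y)_j = sum_i a_ij y_i
            Aty[j] += A[i][j] * y[i];
    }
}
```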

8. Gaussian elimination with partial pivoting can be organized so that it is rich in matrix multiplications. Such a scheme may be useful when dealing with really large matrices. Following is a block outer product procedure (block dot product and block gaxpy based procedures are also possible).

Assume that $A \in \mathbb{R}^{n \times n}$ and for clarity that $n = rN$ for some block size $r$. Partition $A$ as

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad A_{11} \in \mathbb{R}^{r \times r}.$$

1. Use scalar Gaussian elimination with partial pivoting (rectangular version) to compute a permutation $P_1$, unit lower triangular $L_{11}$ (with $L_{21}$ the block below it) and upper triangular $U_{11}$ so that

   $$P_1 \begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix} = \begin{bmatrix} L_{11} \\ L_{21} \end{bmatrix} U_{11}.$$

2. Apply $P_1$ across the rest of $A$:

   $$\begin{bmatrix} A_{12} \\ A_{22} \end{bmatrix} \leftarrow P_1 \begin{bmatrix} A_{12} \\ A_{22} \end{bmatrix}.$$

3. Solve the lower triangular multiple right hand side problem $L_{11} U_{12} = A_{12}$ for $U_{12}$.

4. Perform the level-3 update $\tilde{A} = A_{22} - L_{21} U_{12}$.

With these computations we obtain the factorization

$$P_1 A = \begin{bmatrix} L_{11} & 0 \\ L_{21} & I \end{bmatrix} \begin{bmatrix} U_{11} & U_{12} \\ 0 & \tilde{A} \end{bmatrix}.$$

The process is then repeated on the first $r$ columns of $\tilde{A}$.

In general, during step $k$ of the block algorithm we apply scalar Gaussian elimination to a matrix of size $(n - (k-1)r)$-by-$r$. An $r$-by-$(n - kr)$ multiple right hand side system is solved and a level-3 update of size $(n - kr)$-by-$(n - kr)$ is performed. The level-3 fraction for the overall process is approximately given by $1 - 3/(2N)$. Thus, for large $N$, the procedure is rich in matrix multiplication.

Implement the block Gaussian elimination outlined above and test it on some of the (large size) examples of earlier problems.
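As a building block, the level-3 update of step 4 is a matrix multiply and subtract. A minimal sketch (naive triple loop; a real implementation would block the loops for cache):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Level-3 update A22 <- A22 - L21 * U12.
// Assumed shapes: A22 is p x q, L21 is p x r, U12 is r x q.
void level3_update(std::vector<std::vector<double>>& A22,
                   const std::vector<std::vector<double>>& L21,
                   const std::vector<std::vector<double>>& U12) {
    size_t p = A22.size(), q = A22[0].size(), r = U12.size();
    for (size_t i = 0; i < p; ++i)
        for (size_t k = 0; k < r; ++k)
            for (size_t j = 0; j < q; ++j)
                A22[i][j] -= L21[i][k] * U12[k][j];
}
```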

9. Consider the multiple right hand side forward substitution problem (similarly backward substitution) of computing the solution $X$ to $LX = B$, where $L \in \mathbb{R}^{n \times n}$ is lower triangular and $B \in \mathbb{R}^{n \times q}$. Here is one way to solve this using block operations rich in matrix multiplication. Partition the system as (assuming that the diagonal blocks are square)

$$\begin{bmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}.$$

We can solve $L_{11} X_1 = B_1$ for $X_1$ and then remove it from the remaining block equations:

$$L_{22} X_2 = B_2 - L_{21} X_1,$$

and recurse. Note that the procedure is block saxpy based (of course, you can also do it based on block dot product). Try it out along with the previous problem.
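A sketch of this idea with a block size $r$ (an assumption of mine; it need not divide $n$): solve each diagonal block by scalar forward substitution, then do a block saxpy update of the remaining right hand sides, which is the matrix-multiply-rich part.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Block forward substitution for L X = B, where L is n x n lower triangular
// and B is n x q. B is overwritten by the solution X.
void block_forward_subst(const std::vector<std::vector<double>>& L,
                         std::vector<std::vector<double>>& B, int r) {
    int n = (int)L.size(), q = (int)B[0].size();
    for (int k = 0; k < n; k += r) {
        int kb = std::min(k + r, n);
        // Solve the diagonal block system by scalar forward substitution.
        for (int i = k; i < kb; ++i)
            for (int c = 0; c < q; ++c) {
                for (int j = k; j < i; ++j) B[i][c] -= L[i][j] * B[j][c];
                B[i][c] /= L[i][i];
            }
        // Block saxpy update: B(kb:n,:) -= L(kb:n,k:kb) * X_k.
        for (int i = kb; i < n; ++i)
            for (int j = k; j < kb; ++j)
                for (int c = 0; c < q; ++c)
                    B[i][c] -= L[i][j] * B[j][c];
    }
}
```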

Subhashis Banerjee 2009-10-09