Email me or phone me if you have problems.

The first step is to get Homework 7 fully debugged, tested, and enhanced as follows. There are three possible ways the algorithm might terminate:

(1) the optimality conditions are satisfied within the given tolerance;
(2) the norm of x, y, z or w exceeds a user-defined bound M (say 1.0e+6), indicating that the LP is unbounded;
(3) a maximum number of iterations (say 50) is exceeded.

If more than 50 iterations are required, it probably means one of the following: there is a bug in the program; the tolerance is too small (it should be substantially bigger than the machine precision, about 1e-16); the line search parameter r is too small (it should be at least 0.9); or the centering parameter eta (called delta in the text) is too big (it should be at most 0.5). The algorithm should be coded as a function whose parameters are: the termination tolerance; the bound M; the maximum number of iterations; the steplength parameter r; and the centering parameter eta. If your code is not working, test it on very small problems and display all the variables to see what is going on. You must get it debugged before proceeding.
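As a sanity check on the control flow, the three stopping rules can be isolated in one small helper. This is a sketch in Python rather than MATLAB, and the function name and argument layout are my own, not part of the assignment:

```python
def check_termination(res_norm, var_norm, it, tol=1e-8, M=1.0e6, maxit=50):
    """Return a status string if any of the three stopping rules fires, else None.

    res_norm = norm of the optimality residuals
    var_norm = largest of ||x||, ||y||, ||z||, ||w||
    it       = current iteration count
    """
    if res_norm <= tol:
        return 'optimal'    # (1) optimality conditions met within tolerance
    if var_norm > M:
        return 'unbounded'  # (2) iterates blowing up: LP likely unbounded
    if it >= maxit:
        return 'maxit'      # (3) iteration limit: suspect a bug or bad tol/r/eta
    return None             # keep iterating
```

Calling this once per pass through the main loop keeps the three exits in one place, so each of the five parameters (tol, M, maxit, r, eta) has an obvious single point of use.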

Once your code is working and fully tested, you are ready to go on to the more efficient implementations. There are four of these:

J = sparse(2*(n+m), 2*(n+m));     % initialize J as an all-zero SPARSE matrix
J(1:m, 1:n) = A;                  % top-left block (A must be sparse too)
J(1:m, m+n+1:m+n+m) = speye(m);   % sparse identity block
J(m+n+1:m+n+n, 1:n) = spdiags(z(:), 0, n, n);  % sparse Z block; avoid sparse(diag(z)), which forms a dense n-by-n matrix first

etc. Observe how many flops this saves.
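For comparison, here is the same assembly pattern sketched in Python with SciPy (the assignment itself stays in MATLAB). The toy sizes and data are made up for illustration; note the 0-based indexing, and that the small dense blocks on the right-hand sides are harmless only because the demo is tiny:

```python
import numpy as np
import scipy.sparse as sp

# Toy dimensions and data, just to illustrate the assembly pattern.
m, n = 3, 4
A = sp.csr_matrix(np.array([[1., 0., 2., 0.],
                            [0., 3., 0., 0.],
                            [4., 0., 0., 5.]]))
z = np.array([1., 2., 3., 4.])

N = 2 * (n + m)
J = sp.lil_matrix((N, N))         # all-zero sparse matrix; LIL is cheap to fill
J[0:m, 0:n] = A.toarray()         # top-left block
J[0:m, m+n:m+n+m] = np.eye(m)     # identity block
J[m+n:m+n+n, 0:n] = np.diag(z)    # Z block
J = J.tocsr()                     # convert for fast factorization/solves
J.eliminate_zeros()               # make sure no explicit zeros are counted
print(J.nnz, 'stored nonzeros out of', N * N, 'entries')  # 12 vs 196
```

The point carries over directly: the sparse format stores and operates on only the handful of nonzeros, which is where the flop savings come from as m and n grow.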

Test these on a small test problem before you go on to larger ones. If they don't generate almost exactly the same iterates as the original version using J, they are buggy, and you need to debug them before going further.

Run all four versions on the same test problems that you used in Homework 6, plus one additional test problem: add a dense column (say ones(m,1)) to the largest sparse example. Prepare a table as you did before, showing the number of ITERATIONS (which should be the same for all versions), the FLOP count, the ACCURACY of the final solution, and the approximate CONDITIONING of the final matrix that is factored (J for the first version, the KKT matrix for the second, and the matrices passed to "chol" for the third and fourth versions). The condition number is estimated by CONDEST, but do NOT do this inside the main loop and do NOT include it in the FLOP count, as computing the condition number is more work than solving the linear system; just do it once at the end. Also compare the results with those for the simplex method in Homework 6. Prepare several tables experimenting with different values of the steplength parameter r (e.g. 0.9, 0.999 and one other) and eta (e.g. 0.25, 0.1 and at least one other). Summarize the results in a written discussion.
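To make the "condition estimate once, at the end" rule concrete, the bookkeeping can be structured as below. This Python sketch uses a dummy fixed-point update in place of your actual solver and numpy.linalg.cond as a stand-in for MATLAB's condest; none of the names come from the assignment:

```python
import numpy as np

def solve_and_report(A, b, maxit=50):
    """Toy stand-in for one of the four solvers.  The point is WHERE the
    condition estimate happens: once, after the loop, never inside it,
    and never counted in the flop tally."""
    x = np.zeros_like(b)
    for k in range(maxit):           # main iteration loop: NO cond estimate here
        x = x + 0.5 * (b - A @ x)    # dummy update standing in for a Newton step
    kappa = np.linalg.cond(A)        # one condition estimate at the very end
    return x, kappa

A = np.array([[2., 0.],
              [0., 1.]])
b = np.array([2., 1.])
x, kappa = solve_and_report(A, b)
```

In the MATLAB versions the analogous call is condest(J) (or condest of the KKT matrix, or of the matrix passed to chol) placed after the main loop, so it affects neither the iterate sequence nor the reported flop count.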

Important Note: I will expect you to be able to discuss the results at the oral final exam, so it is important to get both Homework 6 and this Homework working properly. Come to see me if you have trouble. Now for some good news: this is the last programming assignment.