Homework 8, due May 4.

Email me or phone me if you have problems.

  • Implement several EFFICIENT versions of the primal-dual interior point algorithm given in Figure 17.1 on p. 274 of the text. The differences from Homework 7 are that we will reduce the size of the linear system to be solved at each step, and that we will take advantage of sparsity.

    The first step is to get Homework 7 fully debugged and tested, and enhanced as follows. There are 3 possible ways the algorithm might terminate:

    (1) the optimality conditions are satisfied within the given tolerance;
    (2) the norm of x, y, z or w exceeds a given user-defined parameter M (say 1.0e+6), indicating that the LP is unbounded;
    (3) a maximum number of iterations (say 50) is exceeded.

    If more than 50 iterations are required, it probably means there is a bug in the program, or that the tolerance is too small (it should be substantially bigger than the machine precision, about 1e-16), or that the steplength parameter r is too small (it should be at least 0.9), or that the centering parameter eta (called delta in the text) is too big (it should be at most 0.5). The algorithm should be coded as a function whose parameters are: the termination tolerance; the bound M; the maximum number of iterations; the steplength parameter r; and the centering parameter eta. A possible interface is sketched below. If your code is not working, test it on very small problems and display all the variables to see what is going on. You must get it debugged before proceeding.
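
    For instance, a skeleton along these lines (the function name pdip17, the starting point, and the residual sign conventions are illustrative assumptions, not required code; match the details to Figure 17.1):

    function [x,y,z,w,iter,status] = pdip17(A,b,c,tol,M,maxit,r,eta)
    % PDIP17  Hypothetical interface for the Figure 17.1 algorithm.
    %   tol: termination tolerance; M: unboundedness bound (e.g. 1.0e+6);
    %   maxit: iteration limit (e.g. 50); r: steplength parameter (>= 0.9);
    %   eta: centering parameter (delta in the text, <= 0.5).
    [m,n] = size(A);
    x = ones(n,1); z = ones(n,1); y = ones(m,1); w = ones(m,1); % assumed start
    status = 'max iterations exceeded';
    for iter = 1:maxit
        rho   = b - A*x - w;  % primal residual (sign convention assumed)
        sigma = c - A'*y + z; % dual residual (sign convention assumed)
        gap   = z'*x + y'*w;  % complementarity gap
        if max([norm(rho); norm(sigma); gap]) <= tol
            status = 'optimal'; return
        end
        if max([norm(x) norm(y) norm(z) norm(w)]) > M
            status = 'unbounded (norm exceeded M)'; return
        end
        % ... compute the step (dx,dy,dz,dw) using r and eta, and update,
        % exactly as in Figure 17.1 ...
    end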

    Once your code is working and fully tested you are ready to go on to the more efficient implementations. There are four of these:

  • The first version continues to factor the big matrix J, but exploits its sparsity. Change the construction of J to

    J = sparse(2*(n+m), 2*(n+m)); % initialize J as a zero SPARSE matrix
    J(1:m,1:n) = A; % top left block (A must be sparse too)
    J(1:m,m+n+1:m+n+m) = speye(m); % sparse identity block
    J(m+n+1:m+n+n,1:n) = spdiags(z,0,n,n); % sparse Z block (spdiags avoids forming a dense diag(z) first)

    etc.; one hypothetical completion is sketched below. Observe how many flops this saves compared with the dense construction in Homework 7.
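
    For reference, here is one such completion. The column ordering [dx; dy; dw; dz], the row ordering (primal feasibility, dual feasibility, XZ complementarity, YW complementarity), and the sign of the dz identity block are assumptions inferred from the three lines above; verify them against Figure 17.1 before relying on this.

    J(m+1:m+n,n+1:n+m) = A'; % A' block (dual feasibility rows)
    J(m+1:m+n,n+2*m+1:2*(n+m)) = -speye(n); % -I block multiplying dz (sign assumed)
    J(m+n+1:m+n+n,n+2*m+1:2*(n+m)) = spdiags(x,0,n,n); % X block
    J(m+2*n+1:2*(m+n),n+1:n+m) = spdiags(w,0,m,m); % W block
    J(m+2*n+1:2*(m+n),n+m+1:n+2*m) = spdiags(y,0,m,m); % Y block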

  • The second version gets dx and dy from solving the "reduced KKT system" at the end of Section 18.1 of the text (1/3 of the way down p.286). The matrix on the left-hand side is symmetric but not positive definite. Store it as a sparse matrix, so as to exploit the sparsity in both A and the diagonal matrices. Solve the system using \ (which will invoke sparse Gauss elimination). Then dw and dz are trivially obtained from dx and dy. Do NOT store the diagonal matrices X, Y, etc. as matrices; store them as the vectors x, y, etc., and use .* and .\ to operate with them: thus XZe is the vector x.*z, X^{-1}e is x.\ones(n,1), and X^{-1}Z is sparse(diag(x.\z)) (or spdiags(x.\z,0,n,n), which avoids the dense intermediate).
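
    For instance, a sketch of this version (the right-hand sides r1, r2 and the block signs are assumptions obtained by eliminating dz and dw from the full Newton system; rho, sigma and mu denote the residuals and centering value from your Figure 17.1 code; check everything against Section 18.1):

    r1 = rho - mu./y + w;   % assumed top right-hand side after elimination
    r2 = sigma + mu./x - z; % assumed bottom right-hand side
    K  = [-spdiags(w./y,0,m,m)  A
           A'                   spdiags(z./x,0,n,n)]; % symmetric indefinite, sparse
    d  = K\[r1; r2];        % sparse Gauss elimination via backslash
    dy = d(1:m); dx = d(m+1:m+n);
    dz = mu./x - z - (z./x).*dx; % recover dz and dw elementwise
    dw = mu./y - w - (w./y).*dy;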

  • The third version gets dy from the primal normal equations (18.9) (also p.286). The matrix on the left-hand side is symmetric and positive definite (move the minus sign to the right-hand side), and the system should be solved using the Cholesky factorization (type "help chol"), which requires roughly half the flops of the LU factorization (Gauss elimination). Let's call the left-hand side matrix B. The Cholesky algorithm computes an upper triangular matrix R such that B = R'R, so solving Bp = q is equivalent to solving R'Rp = q: type R = chol(B), followed by p = R\(R'\q). After getting dy in this fashion, you get dx from (18.8). When computing the matrix B, make sure it is stored as a sparse matrix.
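
    A hypothetical sketch, reusing r1 and r2 from the previous sketch (the assembly of B and of q is an assumption obtained by eliminating dx; check it against (18.8) and (18.9)):

    B  = spdiags(w./y,0,m,m) + A*spdiags(x./z,0,n,n)*A'; % SPD once the minus sign moves to the RHS
    q  = A*((x./z).*r2) - r1;  % assumed right-hand side
    R  = chol(B);              % B = R'*R, R upper triangular
    dy = R\(R'\q);             % two sparse triangular solves
    dx = (x./z).*(r2 - A'*dy); % back-substitute for dx, cf. (18.8)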

  • The fourth version gets dx from the dual normal equations (18.10) (p.287). Again the matrix on the left-hand side is symmetric and positive definite, and again it should be factored using the Cholesky factorization and stored as a sparse matrix. You then get dy from the equation above (18.10).
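
    Similarly, a hypothetical sketch (again the assembly is an assumption, to be checked against (18.10) and the equation above it; r1 and r2 are as in the version-2 sketch):

    C  = spdiags(z./x,0,n,n) + A'*spdiags(y./w,0,m,m)*A; % SPD, n by n, sparse
    s  = r2 + A'*((y./w).*r1); % assumed right-hand side
    R  = chol(C);
    dx = R\(R'\s);
    dy = (y./w).*(A*dx - r1);  % recover dy from the equation above (18.10)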

    Test these on a small test problem before you go on to larger ones. If they don't generate almost exactly the same iterates as the original version using J, they are buggy, and you need to debug them before going further.

    Run all 4 versions on the same test problems that you used in Homework 6, plus an additional test problem: add a dense column (say ones(m,1)) to the largest sparse example; hypothetical snippets for this and for the conditioning estimate are given below. Prepare a table as you did before, showing the number of ITERATIONS (which should be the same for all versions), the FLOP count, the ACCURACY of the final solution, and the approximate CONDITIONING of the final matrix which is factored (J for the first version, the reduced KKT matrix for the second, and the matrices passed to "chol" for the third and fourth versions). Estimate the condition number with "condest", but do NOT do this inside the main loop and do NOT include it in the FLOP count, since estimating the condition number is more work than solving the linear system; just do it once at the end.

    Also compare the results with those for the simplex method in Homework 6. Prepare several tables experimenting with different values of the steplength parameter r (e.g. 0.9, 0.999 and one other) and eta (e.g. 0.25, 0.1 and at least one other). Summarize the results in a written discussion.
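
    For the extra test problem and the conditioning column, hypothetical snippets (extending the cost vector with a zero entry is an assumption; choose whatever coefficient keeps the modified LP sensible):

    A2 = [A, sparse(ones(m,1))]; % largest sparse example plus one dense column
    c2 = [c; 0];                 % assumed: extend the cost vector to match
    kappa = condest(B);          % call once AFTER the main loop on the last
                                 % factored matrix; exclude it from the FLOP count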

    Important Note: I will expect you to be able to discuss the results at the oral final exam, so it is important to get both Homework 6 and this Homework working properly. Come to see me if you have trouble. Now for some good news: this is the last programming assignment.