This homework is long, but it should be manageable, as long as you are following what is happening in class and becoming reasonably familiar with Matlab. If you find you are stuck and wasting too much time on it, PLEASE SEND ME EMAIL. We can arrange a time to talk when I return.

The original Matlab program simplx.m uses the "\" operation to solve the necessary systems of linear equations from scratch at every iteration, using Gauss elimination (which is equivalent to LU factorization). This does not take advantage of the fact that the basis matrix B is changing only by one column at each iteration. Modify simplx.m to implement three EFFICIENT versions of the primal simplex method, as follows. In all cases assume that b > 0 so a primal feasible point is available (original variables x=0, slack variables w=b) and that the initial basis matrix B is therefore the identity matrix.
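As a reminder of why rebuilding from scratch is wasteful: a simplex pivot changes B in only one column, which in Matlab is a simple swap. (A sketch only; the variable names below are placeholders, not the actual names used in simplx.m.)

```matlab
% At a pivot, column `leave` of the basis matrix B is exchanged with
% column `enter` of the nonbasic matrix N.  (Illustrative names only.)
tmp = B(:, leave);           % save the leaving column
B(:, leave) = N(:, enter);   % entering column joins the basis
N(:, enter) = tmp;           % leaving column joins the nonbasic set
```

Your efficient versions should exploit this one-column change instead of refactorizing B from scratch every iteration.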

There is another important aspect to the THIRD method: it must work efficiently when the input matrix A is stored using Matlab's sparse data structures. Type "help sparse" for more information on this. Most Matlab operations, including both the LU factorization and the solve operations using "\", work with both dense and sparse data. As far as I can see, the only addition you have to make to this program to accommodate sparse data is that, if A is sparse, you must use speye(m) (sparse identity) instead of eye(m) when initializing the basis matrix B. You can check whether A is sparse by "if issparse(A)..." (type "help issparse").
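A minimal sketch of the initialization described above, assuming m is the number of rows of A:

```matlab
% Match the storage class of the initial basis matrix to that of A.
if issparse(A)
  B = speye(m);   % sparse identity: keeps B sparse throughout
else
  B = eye(m);     % dense identity for dense input
end
```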

When you are writing your programs, remember to use PLENTY OF COMMENTS, and follow other good programming habits, such as indenting loops sensibly, using meaningful variable names, and generally writing readable code. The grader will be reading the source code. Start by debugging your programs on the small examples you have used before, and continue until you are sure the programs are working correctly.

If you are using a version of Matlab with the optimization toolbox installed (which is available at Courant but probably not outside), you can check your answers by calling an interface for the Matlab LP solver: lpmatlab.m. Otherwise you will have to check them against simplx.m, remembering that my original version produces a permuted solution without keeping track of the permutations (you were supposed to have fixed this). Once you are sure your programs are working correctly, suppress all output by adding semicolons and turn your script files into function subprograms (type "help function" for information on how to do this). These functions should have the calling sequence [x,y] = LPname(A,b,c,maxit), and they should all return the optimal original vectors x and y (often called x_tilde and y_tilde in class) for the primal-dual pair

max c'x subject to Ax <= b, x >= 0

min b'y subject to A'y >= c, y >= 0

with the zero variables set accordingly and all vectors in correctly permuted order. You don't need to return the slack variables w and z since these are easily computed by w = b - Ax, z = A'y - c if they are needed. Your functions should generate user-friendly error messages if any of the following errors occur: the data dimensions don't match, the iteration count exceeds maxit, the initial b has a negative entry or the problem is found to be unbounded. See "help error".
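The required checks might be organized as follows. This is only a sketch of the error handling, with the simplex iteration itself elided:

```matlab
function [x,y] = LPname(A,b,c,maxit)
% LPNAME  Primal simplex for: max c'x subject to Ax <= b, x >= 0.
% Skeleton showing the required error checks only.
[m,n] = size(A);
if length(b) ~= m | length(c) ~= n
  error('Data dimensions do not match.')
end
if any(b < 0)
  error('Initial b has a negative entry.')
end
for iter = 1:maxit
  % ... one simplex iteration here (adapted from simplx.m) ...
  % ... return when the optimality test passes ...
  % ... call error('Problem is unbounded.') if the ratio test
  %     finds no candidate to leave the basis ...
end
error('Iteration count exceeded maxit.')
```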

Once everything is working correctly, test your programs on the following examples:

rand('state',0); A = full(sprand(m,n,0.2)) + 10*eye(m,n); b = rand(m,1); c = rand(n,1);

This generates a random matrix A with about 20% nonzero entries, adds 10 to the diagonal entries, and passes it to your codes as a full matrix, i.e. NOT taking account of the sparsity. Do "help sprand" and "help full" for more information. The first statement initializes the random number generator so we are all using the same example (assuming you are using Matlab 5).

rand('state',0); A = sprand(m,n,0.2) + 10*speye(m,n); b = rand(m,1); c = rand(n,1);

This generates the SAME example, except that now A is passed to your codes as a sparse matrix. This should make the third method far MORE efficient, since the LU factorization will take advantage of the sparsity of the matrix, as will the "solves" using the \ operator. Don't forget that your code must use speye, not eye, if the data is sparse. You can also try passing the sparse data to the codes for the first and second methods. What happens?

A = hilb(8); b = A*ones(8,1); c = A'*ones(8,1);

This example has been chosen to be very difficult numerically. The Hilbert matrix A (type "help hilb") is notoriously ill-conditioned, meaning that the systems Ax=b and Ap=q can have very different solutions x and p even when b is nearly equal to q. Another way of saying this is that the norm of the inverse of A is huge, or that A is nearly singular. The LP has been set up so that the solution is x = y = A\b = A'\c = ones(8,1). Be sure to use "format long" so you can see all the digits of your computed solution. How do your codes do on this difficult problem?

Compare the results of your codes with each other and with either lpmatlab.m or the fixed-up simplx.m on these problems in a nice table, showing number of ITERATIONS (should be same for all methods), FLOP count (do "help flops"; remember "flops(0)" resets the flop count), and ACCURACY (comparing with lpmatlab.m or the fixed-up simplx.m for all problems but the last, and comparing with the exact solution of all ones in the last case).
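One possible pattern for collecting a row of the table is sketched below. LPone is a placeholder name for one of your functions, and the calling sequence assumed for lpmatlab.m is a guess; adapt both to your own setup.

```matlab
% Collect one row of the comparison table (flops exists in Matlab 5).
flops(0);                        % reset the flop counter
[x1,y1] = LPone(A,b,c,maxit);    % LPone: placeholder for one of your codes
f1 = flops;                      % flops used by that method
[xref,yref] = lpmatlab(A,b,c);   % reference answer (calling sequence assumed)
acc = norm(x1 - xref);           % accuracy relative to the reference
fprintf('flops = %10d   accuracy = %e\n', f1, acc);
```

For the Hilbert LP, compare against the exact solution instead: acc = norm(x1 - ones(8,1)).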

EXPERIMENT with various choices for K, which should affect BOTH the efficiency AND the accuracy (in the case of the Hilbert LP) of the third method. Try values of K in the range 5 to m for the random problems and 1 to 8 for the Hilbert LP. You should also experiment with some other dimensions for the examples, including at least one other in your table.
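The K experiment might look like the sketch below. This ASSUMES your third method accepts K as an extra input argument, e.g. [x,y] = LPthree(A,b,c,maxit,K); both LPthree and the extended calling sequence are placeholders, so adapt this to however your own code sets K.

```matlab
% Sweep over K and record the cost of each run (placeholder names).
maxit = 100;
for K = [5 10 20 m]
  flops(0);                          % reset the flop counter
  [x,y] = LPthree(A,b,c,maxit,K);    % hypothetical extended calling sequence
  fprintf('K = %3d   flops = %10d\n', K, flops);
end
```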

Finally, PROVE that the solution to the Hilbert LP is indeed x = y = ones(8,1). Hint: show that the optimality conditions hold for this x and y, by verifying feasibility and complementarity.