Please check here before you ask a question about the homework:

Homework 7

  1. Q3 about SAT
    1. Part(a): What are the operators in a Boolean formula? We assume AND, OR and NOT only. In fact, you may admit other operators as well, such as XOR.
    2. Part(g): Question: What do you mean by "in analogy to a)"? Do you mean using the exact method we described in a), i.e., can we just give a counterexample for the method in a), or do you want something more? Answer: The reference to a) is only suggestive. I do not mean that you can actually do what was done in a). In particular, I am not saying that CNF will work for TAUT.
    3. Part(h): What is the definition of co-NP?
      If L is a language, then co-L is the set of strings not in L (you need to know the alphabet of L for this definition to be meaningful, but this is usually not a problem). If K is a class of languages, then co-K is the set of all co-L where L is in K. Now apply this definition to K = NP. You can also talk about co-P, but this is not interesting because co-P is the same as P (a deterministic polynomial-time machine can be complemented by simply swapping its accept and reject answers).
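      In symbols (writing Sigma for the alphabet of L), the definitions above read:

         co-L  = { w in Sigma^* : w not in L },
         co-K  = { co-L : L in K },
         co-NP = { co-L : L in NP },   and   co-P = P.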
  2. Q4 about Integer Linear Programming
    1. Strong Hints for showing ILP is in NP: hints.pdf.
    2. Question: I'm a bit confused by the hint. To show that ILP is in NP, doesn't it suffice to say that given a proposed solution, we can verify it by performing a matrix-vector multiplication and comparing the components of the result to the components of b? It's pretty easy to show that this can be done in polynomial time.
      Answer: Good question. There is a condition that is often left implicit because it is obvious for most problems: namely, if there is a solution at all, then there is one that is not too big. Any notion of "polynomial verification" must address this issue. For instance, if every solution is exponentially large, then no proposed solution can even be read, let alone checked, in polynomial time.
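      To make the verification step concrete, here is a small Python sketch of exactly the check described in the question, assuming the instance asks for an integer vector x with Ax <= b (the mention of comparing against b suggests this form); the function name and the encoding of A, b, x as lists are mine, not part of the homework. This check runs in time polynomial in the bit-length of A, b and the candidate x; the point of the hint is that you must also argue that some solution x, if one exists, has bit-length polynomial in that of A and b.

         # Sketch only: verify a proposed certificate x for an ILP instance (A, b),
         # where the instance asks for an integer x with A x <= b componentwise.
         def verify_ilp(A, b, x):
             for row, bound in zip(A, b):                      # one constraint per row of A
                 if sum(a * xi for a, xi in zip(row, x)) > bound:
                     return False                              # this constraint is violated
             return True                                       # all constraints hold

         # Example: 2*x1 + 3*x2 <= 12, -x1 <= 0, -x2 <= 0, with candidate x = (3, 2).
         print(verify_ilp([[2, 3], [-1, 0], [0, -1]], [12, 0, 0], [3, 2]))   # True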

Homework 6

  1. Q5: there are 2 missing lines in the subroutine Put2:
    It should be:
   Put2(i,j):
     k <- 0
     While (j > 0)
        k++                       // advance to the next cell of A
        While (A[k] > 0), k++     // skip cells that are already occupied (A[k] > 0)
        j--
     A[k] <- i                    // k now indexes the j-th empty cell; store i there
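    If it helps to sanity-check the corrected routine, here is a direct transcription into Python (a sketch only; the 1-indexing of A and the convention that a free cell holds 0 are read off the pseudocode, not stated in the homework):

       def put2(A, i, j):
           """Place i into the j-th free (zero) cell of the 1-indexed array A."""
           k = 0
           while j > 0:
               k += 1                    # k++
               while A[k] > 0:           # While (A[k] > 0), k++
                   k += 1
               j -= 1                    # j--
           A[k] = i                      # A[k] <- i

       A = [None, 7, 0, 5, 0, 0]         # A[1..5]; cells 2, 4, 5 are free
       put2(A, 9, 2)                     # store 9 in the 2nd free cell, i.e. A[4]
       print(A)                          # [None, 7, 0, 5, 9, 0]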
  2. Q4(b):
    The term "optimal strategy" is perhaps misleading. It simply means: what is the best you can do,
    knowing nothing about the game master's strategy?
  3. Q9:
    1. You should write the pseudo-code for computing the expected number of monochromatic K4's.
    2. I found a bug in my Lecture 8 (p. 45): I said that the probability of repeating the loop at least once is (f-1)/f. That is wrong; the probability is simply 1/f.
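    As a sanity check for whatever pseudo-code you write for item 1: under a uniform random 2-coloring of the edges of K_n (assuming two colors here; adjust if the question uses more), a fixed set of 4 vertices spans 6 edges and is monochromatic with probability 2*(1/2)^6 = 1/32, so by linearity of expectation

       E[number of monochromatic K4's] = C(n,4) / 32.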

Homework 5

  1. Q1(a): Huffman tree: it should be "2n - O(log n)" instead of "2n - O(loglog n)"
  2. Q5: quadtree and interval tree:
    1. The operation Lookup(x) takes a number x in [0,1] if T is an interval tree, and a point x in [0,1]x[0,1] if T is a quadtree. WLOG, we may assume the root represents [0,1] or [0,1]x[0,1], respectively.
    2. Operation Adjacent(u) assumes u is a leaf. Sorry, this was an omission.
    3. For Q5(b), it is clear that the cost of Lookup(x) is Theta(d) if the depth is d, and amortization cannot help you (why not?). We are therefore more interested in amortizing the other operations. To be explicit: can you amortize the remaining operations to O(1) per operation?
    4. Let me provide a bit more information about the "smooth Split" operation. Let "u->v" (read: u forces v) mean that if u is split, then v must also be split to maintain smoothness. Thus we can get a "forcing chain" of the form u -> v1 -> v2 -> ... -> vk for some k >= 1. If vk -> 0 (i.e., vk does NOT force any leaf), then we call this a maximal chain. It turns out that if T is smooth, then for any u there are at most two maximal chains starting from u. We can then implement smooth Split very elegantly, using two while-loops (a toy sketch of this chain-following appears below, after item 5).
    5. HINT for part (b), continued from the above. Clearly, you cannot have amortized O(1) cost for Adjacent(u) (where u is a leaf) either. But you can have amortized cost 1 + O(number of adjacent nodes), if you maintain a pointer to one neighbor in each of the four compass directions (N, S, E, W). What if you have many adjacent neighbors in a particular direction? Well, maintain a pointer to the internal node that has the same depth as yourself. Then you can achieve this 1 + O(number of adjacent nodes) cost, not only amortized but in the worst case. But the problem is splitting. Try to give examples (for each n) such that a sequence of n splits does NOT have O(n) TOTAL cost. So it is impossible to have O(1) amortized cost.
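    Here is the toy sketch promised in item 4, a 1-dimensional version of the forcing-chain idea for the interval-tree case (the quadtree case has four directions instead of two). Everything in it -- the Leaf class, keeping the leaves in a doubly linked list, and the assumption that "smooth" means adjacent leaves differ in depth by at most 1 -- is an illustrative guess, not the data structure the homework requires, and the internal tree nodes are not maintained at all:

       class Leaf:
           """Leaf of a 1-D interval tree; all leaves sit in a doubly linked list."""
           def __init__(self, lo, hi, depth):
               self.lo, self.hi, self.depth = lo, hi, depth
               self.prev = self.next = None            # left / right neighbor leaf

       def split(u):
           """Replace leaf u by two children of depth u.depth + 1; return (left, right)."""
           mid = (u.lo + u.hi) / 2.0
           left = Leaf(u.lo, mid, u.depth + 1)
           right = Leaf(mid, u.hi, u.depth + 1)
           left.prev, left.next = u.prev, right
           right.prev, right.next = left, u.next
           if u.prev: u.prev.next = left
           if u.next: u.next.prev = right
           return left, right

       def smooth_split(u):
           """Split u, then follow the (at most two) forcing chains, one per direction,
           so that adjacent leaves never differ in depth by more than 1."""
           left, right = split(u)
           need, v = u.depth, left.prev                # chain to the left
           while v is not None and v.depth < need:     # v is forced: it is one level too shallow
               v = split(v)[0].prev                    # split v, continue from its left neighbor
               need -= 1                               # the required depth drops by 1 per step
           need, w = u.depth, right.next               # chain to the right (symmetric)
           while w is not None and w.depth < need:
               w = split(w)[1].next
               need -= 1

       # Tiny demo: leaves of depths 2, 2, 1; splitting the middle one forces the last one.
       root = Leaf(0.0, 1.0, 0)
       a, b = split(root)                              # depths 1, 1
       a1, a2 = split(a)                               # depths 2, 2 (b still has depth 1)
       smooth_split(a2)                                # a2 -> depth 3, which forces b to split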
  3. Q7: Hand simulations
    1. Part (4) on Optimal BST. The lecture notes have 4 places where W^*(i,j;k) is written as "W(i,j;k)", which might be confusing. We are mainly interested in computing the matrix W^*, but clearly you need to keep track of the matrix K as well in the computation.
  4. Q8: dynamic Huffman coding: the semicolons in the string are only for visualization, so the string contains only four letters: a, b, c, d.
    1. I personally do a hand simulation to understand the emerging pattern (I think you will begin to see it after a^3 b^3 c^3 d^3).
    2. Some students prefer to program -- let me know if you have an implementation of the external splay trees.
    3. Here is a hint from Francisca on doing the hand simulation: begin by enumerating all 14 external BSTs that you need. Then draw arrows labeled by the letters "a", "b", "c", "d" corresponding to the transitions among the trees. Someone else claimed that you only need 8 of these 14 trees (I have not verified this).