Extend the data structure by storing at each node a bit saying whether this node is the left or right child of its parent.
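A minimal sketch of such an augmented node, in Python (the representation and the names, e.g. `is_left_child`, are assumptions, not from the lecture):

```python
class Node:
    """Binary-tree node carrying one extra bit: whether this node is the
    left or the right child of its parent (meaningless at the root)."""
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.left = self.right = None
        self.is_left_child = False   # the extra bit

    def set_left(self, child):
        self.left = child
        child.parent = self
        child.is_left_child = True

    def set_right(self, child):
        self.right = child
        child.parent = self
        child.is_left_child = False
```

Setting the bit inside set_left/set_right keeps it consistent whenever links change, so no separate bookkeeping pass is needed.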
Problem set #3, problem 2a: How can we eliminate the bit just introduced? That is, give a Θ(1) algorithm leftOrRight(T,v) that determines whether node v of binary tree T is a left or right child (it signals an error if v is the root).
With the new bit (or with your solution to problem 2a) we can find the closestKeyBefore (and friends).
Look at the figure to the right. What should we do if we arrived at 10 and are looking for the closest key before 5?
Ans: Get the internal node that occurs just before 10 in a preorder traversal.
What if we want the closest key before 15?
Ans: It is node 10.
What if we arrived at 30 and wanted closest key before 25?
Ans: It is the node 20.
What if we wanted the closest key before 35?
Ans: It is the node 30.
What if we wanted the closest key after something?
Ans: It is one of 10, 20, 30, or the next internal node after 30 in a preorder traversal.
To state this in general, assume that we are searching for a key k that is not present in a binary search tree T. The search will end at a leaf l; the last key comparison we made was between k and key(v), with v the parent of l.
Theorem: With k, T, l, and v as above: (1) if k<key(v), the closest key before k is key(w), where w is the nearest ancestor of v whose right subtree contains v; and (2) if k>key(v), the closest key after k is key(w), where w is the nearest ancestor of v whose left subtree contains v. (If no such ancestor exists, there is no key before, respectively after, k.)
Proof: We just prove the first; the second is similar.
We know k<key(v), and w is the node that should contain the next smaller key. Note that w must be exactly as shown to the right: w's right child x begins a path to v that goes left at every step, since otherwise a nearer ancestor than w would have v in its right subtree (the dashed line can have zero length, i.e., x could be v). Since the search for k reached v through w's right subtree, we went right at w, so key(w)<k. Moreover, any key strictly between key(w) and k would, by the binary-search-tree property applied along the search path, have to lie in w's right subtree; but every key there exceeds k, since the search went left at every internal node from x down to v. Hence key(w) is the closest key before k, as desired.
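The characterization just proved also suggests a direct top-down method: while searching for k, remember the key at the last node where the search went right. A sketch in Python, using the common null-children representation of a BST (not the lecture's explicit leaf nodes, and not the preorder machinery that problem 2c asks for):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def closest_key_before(root, k):
    """Largest key <= k in the tree, or None if every key exceeds k."""
    best = None
    v = root
    while v is not None:
        if k < v.key:
            v = v.left
        elif k > v.key:
            best = v.key       # the search went right here; remember this key
            v = v.right
        else:
            return v.key       # k itself is present
    return best
```

The single downward pass does a constant amount of work per level, so this also runs in O(height).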
Corollary: Finding the closest Key/Item Before/After simply requires a search followed by nextInternalInPreorder or prevInternalInPreorder.
Problem set #3, problem 2b: Write the algorithm nextInternalInPreorder(tree T, internal node v) that finds the next internal node after the internal node v in a preorder traversal (signal an error if v is the last internal node in a preorder traversal). You should use the extra bit added above to determine if a node is a left or right child.
Problem set #3, problem 2c (end of problem 2): Write an algorithm for the method closestKeyBefore(k).
To insert an item with key k, first execute w←TreeSearch(k,T.root()). Recall that if w is internal, k is already in w, and if w is a leaf, k "belongs" in w. So we proceed as follows.
Draw examples on the board showing both cases (leaf and internal returned).
Once again we perform a constant amount of work per level of the tree implying that the complexity is O(height).
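Here is a sketch of the insertion just described, in Python, keeping the lecture's explicit external (leaf) nodes; the class layout, and the choice to overwrite the item when k is already present, are assumptions:

```python
class Node:
    """BST node with explicit external (leaf) nodes: key None marks a leaf."""
    def __init__(self):
        self.key = self.item = None
        self.left = self.right = self.parent = None

    def is_leaf(self):
        return self.key is None

def tree_search(k, v):
    """TreeSearch: return the internal node containing k,
    or the leaf where k belongs."""
    while not v.is_leaf():
        if k < v.key:
            v = v.left
        elif k > v.key:
            v = v.right
        else:
            return v
    return v

def insert_item(root, k, item):
    """insertItem: search, then expand the leaf where k belongs
    (the leaf becomes internal and gains two leaf children)."""
    w = tree_search(k, root)
    if not w.is_leaf():
        w.item = item          # k already present; overwriting is an assumed policy
        return w
    w.key, w.item = k, item
    w.left, w.right = Node(), Node()
    w.left.parent = w.right.parent = w
    return w
```

The search dominates: expanding the leaf is Θ(1), so the whole insertion is O(height).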
This is the trickiest part, especially in one case, as we describe below. The key concern is that we cannot simply remove an item from an internal node and leave a hole, as this would make future searches fail. The beginning of the removal technique is familiar: w←TreeSearch(k,T.root()). If w is a leaf, k is not present, which we signal.
If w is internal, we have found k, but now the fun begins. Returning the element with key k is easy: it is the element stored in w. We must actually remove w, but we cannot leave a hole. There are three cases.
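One standard way the cases play out (a guess at the three cases the lecture has in mind, in the style of Goodrich and Tamassia's removeAboveExternal): (1) w's left child is a leaf, (2) w's right child is a leaf, or (3) both children are internal, in which case we copy into w the key and item of the first internal node after w in an inorder traversal, whose left child is guaranteed to be a leaf, and then remove that node as in case (1). A Python sketch:

```python
class Node:
    """BST node with explicit external (leaf) nodes: key None marks a leaf."""
    def __init__(self, key=None, item=None):
        self.key, self.item = key, item
        self.left = self.right = self.parent = None

    def is_leaf(self):
        return self.key is None

def internal(key, left=None, right=None):
    """Helper: build an internal node, defaulting missing children to leaves."""
    v = Node(key, str(key))
    v.left = left if left is not None else Node()
    v.right = right if right is not None else Node()
    v.left.parent = v.right.parent = v
    return v

def tree_search(k, v):
    while not v.is_leaf():
        if k < v.key:
            v = v.left
        elif k > v.key:
            v = v.right
        else:
            return v
    return v

def remove_above_external(root, leaf):
    """Remove leaf and its internal parent v, promoting v's other child.
    Returns the (possibly new) root."""
    v = leaf.parent
    sibling = v.right if leaf is v.left else v.left
    u = v.parent
    sibling.parent = u
    if u is None:
        return sibling                      # v was the root
    if v is u.left:
        u.left = sibling
    else:
        u.right = sibling
    return root

def remove_element(root, k):
    """Return (new root, removed item); raise an error if k is absent."""
    w = tree_search(k, root)
    if w.is_leaf():
        raise KeyError(k)                   # k not present: signal
    item = w.item
    if w.left.is_leaf():                    # case 1
        root = remove_above_external(root, w.left)
    elif w.right.is_leaf():                 # case 2
        root = remove_above_external(root, w.right)
    else:                                   # case 3: both children internal
        y = w.right                         # first internal node after w in inorder
        while not y.left.is_leaf():
            y = y.left
        w.key, w.item = y.key, y.item       # fill the hole at w with y's item
        root = remove_above_external(root, y.left)
    return root, item
```

Case 3 is the tricky one: the node y that fills the hole is found by one further downward walk, so the total work remains O(height).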
findElement, insertItem, removeElement: O(h)
We have seen that findElement, insertItem, and removeElement have complexity O(height). It is also true, though we will not show it, that one can implement findAllElements and removeAllElements in time O(height+numberOfElements). You might think removeAllElements should take constant time, since the resulting tree is just a root and we can build that in constant time. But removeAllElements must also return an iterator that, when invoked, generates each of the elements removed.
In a sense that we will not make precise, binary search trees have logarithmic performance since "most" trees have logarithmic height.
Nonetheless we know that there are trees with height Θ(n). You produced several of them for problem set #2. For these trees binary search takes linear time, i.e., is slow. Our goal now is to fancy up the implementation so that the trees are never very high. We can do this since the trees are not handed to us; instead they are built up using our insertItem method.
Allan Gottlieb
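To see that height Θ(n) really occurs, recall that inserting keys in sorted order produces a tree that is just a path. A quick Python check (plain null-children insertion, not the lecture's representation with explicit leaves):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def insert(root, k):
    """Ordinary BST insertion, ignoring duplicate keys."""
    if root is None:
        return Node(k)
    if k < root.key:
        root.left = insert(root.left, k)
    elif k > root.key:
        root.right = insert(root.right, k)
    return root

def height(v):
    """Height in edges; an empty tree has height -1."""
    if v is None:
        return -1
    return 1 + max(height(v.left), height(v.right))

root = None
for k in range(1, 101):          # keys arrive in sorted order
    root = insert(root, k)
# the tree is a right-going path of 100 nodes, so its height is 99
```

Every insertion walked the entire existing path, so building this tree already cost Θ(n²); the balanced trees we turn to next avoid both problems.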