To simplify the proof, we assume that there is an extra dummy vertex start, which is a predecessor of every other vertex, and from which we invoke tophelp. This invocation of tophelp visits the other vertices in the same order as they were visited when invoked from the original topsort. (All vertices are unvisited before the first invocation.)
We prove, by induction on the size of the DFS subtree under any vertex w, that the above property P holds for the vertices in that subtree.
Base case, size = 1: There are no vertices under w, so P holds.
Induction step, size = n > 1: We assume that P holds whenever the size is strictly less than n.
Then let w1, w2, ..., wk be the children of w in the DFS subtree *in that order*, and let T1, T2, ..., Tk be the corresponding subtrees rooted at those vertices. Then |Ti| < n, and any cross edge can only go from a subtree with higher index to one with lower index.
To prove property P for the whole DFS subtree under w, let u and v be any two vertices such that v is a successor of u. Then either u and v are in the same Ti, or else, u is in Ti and v is in Tj, with j < i. In the first case, we use the induction hypothesis on Ti to argue that v gets printed before u. In the second case, we note that when tophelp is invoked on wi, the invocation on wj has already returned, and so v has already been printed. Since u gets printed after tophelp is invoked on wi, therefore, u gets printed after v.
This proves the induction step.
Thus property P holds for the DFS subtrees. Applying this to the dummy start node, for which the whole graph is the DFS subtree, we see that property P holds, and so topsort is correct.
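The argument above can be made concrete with a short sketch of the DFS-based topological sort. This is a hypothetical self-contained version (not the book's code): it uses a plain adjacency list in place of the book's Graph class, and the dummy start vertex is replaced by a loop over all unvisited vertices, which has the same effect.

```java
import java.util.*;

public class TopSort {
    // Postorder DFS: a vertex is appended to the output only after all
    // of its successors have been, which is exactly property P.
    static void tophelp(List<List<Integer>> adj, boolean[] visited,
                        List<Integer> out, int v) {
        visited[v] = true;
        for (int w : adj.get(v))
            if (!visited[w])
                tophelp(adj, visited, out, w);
        out.add(v);                  // append v after all its successors
    }

    static List<Integer> topsort(List<List<Integer>> adj) {
        int n = adj.size();
        boolean[] visited = new boolean[n];
        List<Integer> out = new ArrayList<>();
        // Looping over all vertices plays the role of the dummy start
        // vertex that precedes every other vertex.
        for (int v = 0; v < n; v++)
            if (!visited[v])
                tophelp(adj, visited, out, v);
        return out;                  // successors appear before predecessors
    }
}
```

As in the proof, for every edge (u, v) the successor v appears in the output before u.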
We show the cost matrix in the same format as on page 212.
Cost matrix:

Expansions \ Vertex:    4    2    3    6    5    1
--------------------------------------------------
Expand 4:               0    5  Inf   10   11  Inf
Expand 2:               0    5    8   10   11   15
Expand 3:               0    5    8   10   11   15
Expand 6:               0    5    8   10   11   12
Expand 5:               0    5    8   10   11   12
Expand 1:               0    5    8   10   11   12

Matrix of "parent" pointers: each entry is the previous vertex on the
shortest path from the source (4) to the given vertex.

Expansions \ Vertex:    4    2    3    6    5    1
--------------------------------------------------
Expand 4:               ?    4  Inf    4    4  Inf
Expand 2:               ?    4    2    4    4    2
Expand 3:               ?    4    2    4    4    2
Expand 6:               ?    4    2    4    4    6
Expand 5:               ?    4    2    4    4    6
Expand 1:               ?    4    2    4    4    6
// Dijkstra's algorithm with priority queue
static void Dijkstra(Graph G, int s, int[] D, int[] parent) {
    int v;                                 // The current vertex
    DijkElem[] E = new DijkElem[G.e()];    // Heap array
    DijkElem heap_elem;
    E[0] = new DijkElem(s, 0);             // Initial vertex
    MinHeap H = new MinHeap(E, 1, G.e());  // Create heap
    for (int i = 0; i < G.n(); i++) {      // Initialize
        D[i] = Integer.MAX_VALUE;
        parent[i] = -1;                    // -1 denotes undefined or unknown
    }
    D[s] = 0;
    for (int i = 0; i < G.n(); i++) {      // Process the vertices
        do {                               // Discard stale heap entries
            heap_elem = (DijkElem) H.removemin();
        } while (G.getMark(heap_elem.vertex()) == VISITED);
        v = heap_elem.vertex();
        G.setMark(v, VISITED);
        if (D[v] == Integer.MAX_VALUE) return;  // Remaining vertices unreachable
        for (Edge w = G.first(v); G.isEdge(w); w = G.next(w))
            if (D[G.v2(w)] > (D[v] + G.weight(w))) {  // Update D and parent
                D[G.v2(w)] = D[v] + G.weight(w);
                parent[G.v2(w)] = v;
                H.insert(new DijkElem(G.v2(w), D[G.v2(w)]));
            }
    }
}
We do not need to store the whole path: whenever we need the path from s to v, we simply follow the parent pointers back from v until we reach s, and reverse the result.
The only modification that needs to be made is that when we update any cost, we also update the parent pointer of that vertex.
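A minimal sketch of this path recovery, under the assumption that the parent array has been filled in as above, with parent[s] == -1 for the source and for vertices whose parent was never set:

```java
import java.util.*;

public class PathRecovery {
    // Recover the shortest path from s to v by walking parent pointers
    // back from v; returns the path s, ..., v in order, or an empty
    // list if v is unreachable from s.
    static List<Integer> pathTo(int[] parent, int s, int v) {
        LinkedList<Integer> path = new LinkedList<>();
        for (int cur = v; cur != -1; cur = parent[cur])
            path.addFirst(cur);      // prepend, so the list ends up in order
        if (path.getFirst() != s)    // the walk stopped before reaching s
            return Collections.emptyList();
        return path;
    }
}
```

For the example above (source 4), following the pointers from vertex 1 gives 1 → 6 → 4, which reverses to the path 4, 6, 1.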
The approach we use is the following. We run depth-first search repeatedly, as in the topological sort algorithm. Each time we start a DFS from a vertex as root, we mark that vertex as a possible root of the whole DFS. If in the course of any DFS we visit a vertex that is marked as a possible root, we remove that mark and continue. If at the end of the process exactly one vertex remains marked as a possible root, that vertex is the root; otherwise the DAG has no root.
In the following code, we assume that there are 3 possible marks:
The mark UNVISITED denotes that the node has not been visited yet.
The mark VISITED denotes that the node has been visited.
The mark VISITED_ROOT denotes that the node has been visited and marked as a possible root.
static int find_root(Graph G) {
    int numberOfRoots;
    int possibleRoot;
    for (int i = 0; i < G.n(); i++)        // Initialize the marks
        G.setMark(i, UNVISITED);
    for (int i = 0; i < G.n(); i++)        // Look at all the vertices
        if (G.getMark(i) == UNVISITED) {   //   to invoke DFS
            dfs(G, i);
            G.setMark(i, VISITED_ROOT);
        }
    // Clean up after all the DFS-s
    numberOfRoots = 0;
    possibleRoot = -1;
    for (int i = 0; i < G.n(); i++)
        if (G.getMark(i) == VISITED_ROOT) {
            numberOfRoots++;
            possibleRoot = i;
        }
    if (numberOfRoots == 1)
        return possibleRoot;
    else
        return -1;
}

static void dfs(Graph G, int v) {          // DFS routine
    G.setMark(v, VISITED);
    for (Edge w = G.first(v); G.isEdge(w); w = G.next(w)) {
        if (G.getMark(G.v2(w)) == UNVISITED)
            dfs(G, G.v2(w));
        // If we visit a previous root, mark it as merely visited
        else if (G.getMark(G.v2(w)) == VISITED_ROOT)
            G.setMark(G.v2(w), VISITED);
    }
}
We just add a check for whether we are about to swap an element with itself, as judged by the array indices.
Here is the modification:
static void selection(Elem[] array) {
    for (int i = 0; i < array.length - 1; i++) {    // Select i'th record
        int lowindex = i;                           // Remember its index
        for (int j = array.length - 1; j > i; j--)  // Find the least value
            if (array[j].key() < array[lowindex].key())
                lowindex = j;
        if (lowindex != i)                          // Skip the self-swap
            DSutil.swap(array, i, lowindex);
    }
}
This improves the number of swaps in the best case to zero. The best-case running time is still O(n^2), since all comparisons are still performed, and the worst and average cases are not affected.
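To see the effect, here is a hypothetical variant on plain int keys (instead of the book's Elem type) that counts the swaps actually performed; on an already sorted array it performs none.

```java
public class SelectionCount {
    // Selection sort that skips self-swaps; returns the number of
    // swaps actually performed.
    static int selection(int[] a) {
        int swaps = 0;
        for (int i = 0; i < a.length - 1; i++) {
            int lowindex = i;                       // Index of least value
            for (int j = a.length - 1; j > i; j--)
                if (a[j] < a[lowindex])
                    lowindex = j;
            if (lowindex != i) {                    // Skip the self-swap
                int t = a[i]; a[i] = a[lowindex]; a[lowindex] = t;
                swaps++;
            }
        }
        return swaps;
    }
}
```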
The modification consists in pushing the larger subproblem first, and then the smaller subproblem. Thus, in the code on page 243, we compare (l-i) with (j-l), and if (l-i) is smaller, we push the right partition first. This ensures that the smaller subproblem is recursed on first. Let SD(n) denote the maximum stack depth on a problem of size n. By definition, we have SD(n-1) <= SD(n).
Let us extend this function to positive real numbers: for any real number x > 0, let SD(x) = SD(floor(x)). Thus we can write the following recurrence for SD(n):

SD(1) = 0
SD(2) = 1
SD(n) = max{ SD(n-1), 1 + SD(n/2) }

We can now easily prove by induction that SD(n) <= lg(n). The result is true for n <= 2. For n > 2, we have

SD(n) = max{ SD(n-1), 1 + SD(n/2) }
     <= max{ lg(n-1), 1 + lg(n/2) }
      = lg(n).
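The bound can also be checked numerically. A small sketch that evaluates the recurrence directly, using integer division for n/2 (which matches the convention SD(x) = SD(floor(x))):

```java
public class StackDepth {
    // Evaluate SD(n) = max{ SD(n-1), 1 + SD(n/2) } bottom-up,
    // with base cases SD(1) = 0 and SD(2) = 1.
    static int sd(int n) {
        int[] memo = new int[n + 1];
        memo[1] = 0;
        if (n >= 2) memo[2] = 1;
        for (int i = 3; i <= n; i++)
            memo[i] = Math.max(memo[i - 1], 1 + memo[i / 2]);
        return memo[n];
    }
}
```

For powers of two the bound is tight: sd(2^k) evaluates to exactly k.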
Consider the three cases B1 = N1, B1 < N1, and B1 > N1. Continue from there.
import java.math.*;
import java.util.*;

public class timing {
    private static int maxNumBits = 200;
    private static int minNumBits = 100;
    private static int incNumBits = 10;
    private static int maxTrials = 10000;

    private static void Usage() {
        System.err.println("Usage: java timing --min <bits> --max <bits> " +
                           "--inc <bits> --trials <count>");
        return;
    }

    public static void main(String[] args) {
        long start_time, end_time, totalTime;
        int numBits, iteration, i;
        double avTime, dimension, aTime1, n1;

        start_time = System.currentTimeMillis();
        Random rnd_src = new Random(start_time); /* Use the starting time as
                                                 ** seed for the random
                                                 ** number generation */
        for (i = 0; i < args.length; i++) {
            if (args[i].equals("--help") || args[i].equals("-help")
                    || args[i].equals("-h")) {
                Usage();
                return;
            }
            if (args[i].equals("--min")) {
                if ((i + 1) >= args.length) { Usage(); return; }
                try { minNumBits = Integer.parseInt(args[i + 1]); i++; }
                catch (Exception e) { Usage(); return; }
                continue;
            }
            if (args[i].equals("--max")) {
                if ((i + 1) >= args.length) { Usage(); return; }
                try { maxNumBits = Integer.parseInt(args[i + 1]); i++; }
                catch (Exception e) { Usage(); return; }
                continue;
            }
            if (args[i].equals("--inc")) {
                if ((i + 1) >= args.length) { Usage(); return; }
                try { incNumBits = Integer.parseInt(args[i + 1]); i++; }
                catch (Exception e) { Usage(); return; }
                continue;
            }
            if (args[i].equals("--trials")) {
                if ((i + 1) >= args.length) { Usage(); return; }
                try { maxTrials = Integer.parseInt(args[i + 1]); i++; }
                catch (Exception e) { Usage(); return; }
                continue;
            }
            Usage();
            return;
        }

        System.out.println("Maximum number of bits is " + maxNumBits);
        System.out.println("Minimum number of bits is " + minNumBits);
        System.out.println("Increment in number of bits is " + incNumBits);
        System.out.println("Number of Trials " + maxTrials);

        aTime1 = 0.0;
        n1 = 0.0;
        for (numBits = minNumBits; numBits <= maxNumBits;
             numBits += incNumBits) {
            start_time = System.currentTimeMillis();
            Random rnd = new Random(start_time);
            totalTime = 0;
            for (iteration = 0; iteration < maxTrials; iteration++) {
                BigInteger biggie1 = new BigInteger(numBits, rnd);
                BigInteger biggie2 = new BigInteger(numBits, rnd);
                start_time = System.currentTimeMillis();
                BigInteger result = biggie1.multiply(biggie2);
                // BigInteger result = mult.multiply(biggie1, biggie2);
                end_time = System.currentTimeMillis();
                totalTime += (end_time - start_time);
            }
            avTime = ((double) totalTime) / maxTrials;
            if (numBits == minNumBits) {
                aTime1 = Math.log(avTime);
                n1 = Math.log(numBits);
                dimension = 0;
            } else {
                dimension = (Math.log(avTime) - aTime1)
                          / (Math.log((double) numBits) - n1);
            }
            System.out.println("numBits = " + numBits
                + ", average time = " + avTime
                + ", dimension = " + dimension);
        }
    }
}