Remark: I changed my mind about homework. There are too many submissions to have each one carefully graded. We now have both homeworks and problem sets, as explained here.
If the asymptotic time complexity is bad, say n^5, or horrendous, say 2^n, then for large n the algorithm will definitely be slow. Indeed, for exponential algorithms even modest n's (say n=50) are hopeless.
Algorithms that are o(n) (i.e., faster than linear, a.k.a. sub-linear), e.g., logarithmic algorithms, are very fast and quite rare. Note that such algorithms do not even inspect most of the input data once. Binary search has this property: when you look up a name in the phone book, you do not even glance at a majority of the names present.
Linear algorithms (i.e., Θ(n)) are also fast. Indeed, if the time complexity is O(n·log(n)), we are normally quite happy.
Low-degree polynomial algorithms (e.g., Θ(n^2), Θ(n^3), Θ(n^4)) are interesting. They are certainly not fast, but speeding up a computer system by a factor of 1000 (feasible today with parallelism) means that a Θ(n^3) algorithm can solve a problem 10 times larger. Many science/engineering problems are in this range.
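To make this concrete, here is a back-of-the-envelope sketch in Python (mine, not from the book), assuming a hypothetical machine doing 10^9 operations per second with all constant factors equal to 1:

    OPS_PER_SEC = 1e9      # hypothetical machine speed; constants ignored
    for n in (50, 100, 1000):
        for name, ops in (("n^3", n ** 3), ("n^5", n ** 5), ("2^n", 2.0 ** n)):
            print(f"n={n:5d}  {name:4s}  {ops / OPS_PER_SEC:10.2e} sec")

Already at n=50 the 2^n row is about 13 days; at n=100 it is about 4·10^13 years.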
It really is true that if algorithm A is o(algorithm B) then for large problems A will take much less time than B.
Definition: If (the number of operations in) algorithm A is o(algorithm B), we call A asymptotically faster than B.
Example: The following sequence of functions is ordered by growth rate, i.e., each function is little-oh of the subsequent function:
log(log(n)), log(n), (log(n))^2, n^(1/3), n^(1/2), n, n·log(n), n^2/log(n), n^2, n^3, 2^n.
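A quick numeric sanity check in Python (one data point proves nothing about little-oh, but it illustrates the ordering; the code and names are mine). All logs are base 2; 2^n is handled separately since it overflows floating point for large n:

    import math

    fns = [
        ("log(log(n))", lambda n: math.log2(math.log2(n))),
        ("log(n)",      lambda n: math.log2(n)),
        ("(log(n))^2",  lambda n: math.log2(n) ** 2),
        ("n^(1/3)",     lambda n: n ** (1.0 / 3.0)),
        ("n^(1/2)",     lambda n: n ** 0.5),
        ("n",           lambda n: float(n)),
        ("n*log(n)",    lambda n: n * math.log2(n)),
        ("n^2/log(n)",  lambda n: float(n) ** 2 / math.log2(n)),
        ("n^2",         lambda n: float(n) ** 2),
        ("n^3",         lambda n: float(n) ** 3),
    ]
    n = 10 ** 9
    vals = [f(n) for _, f in fns]
    assert vals == sorted(vals)      # increasing at n = 10^9
    # 2^n dwarfs them all: at this n it has about 300 million digits.
    print(math.floor(n * math.log10(2)) + 1)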
Modest multiplicative constants (as well as immodest additive constants) don't cause too much trouble. But there are algorithms (e.g., the AKS sorting network, which sorts in logarithmic parallel time) in which the multiplicative constants are astronomical and hence, despite their wonderful asymptotic complexity, these algorithms are not used in practice.
See table 1.10 on page 20.
Homework: R-1.7
This is hard to type using HTML. The book is fine and I will write the formulas on the board.
Definition: The sigma notation: ∑_{i=a}^{b} f(i) means f(a) + f(a+1) + ⋯ + f(b).
Theorem: Assume 0 < a ≠ 1. Then ∑_{i=0}^{n} a^i = (1-a^{n+1})/(1-a).
Proof: Cute trick. Let S be the sum. Multiplying by a shifts each term: aS = ∑_{i=1}^{n+1} a^i. Subtracting, S - aS = 1 - a^{n+1}; now divide by 1-a.
Theorem: ∑_{i=1}^{n} i = n(n+1)/2.
Proof: Pair the 1 with the n, the 2 with the (n-1), etc. Each pair sums to n+1. For n even there are clearly n/2 pairs, giving n(n+1)/2. For n odd there are (n-1)/2 pairs plus the unpaired middle element (n+1)/2, which again totals n(n+1)/2.
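Both closed forms are easy to check by brute force; here is a tiny Python snippet (mine) with a=3 and n=10:

    # Integer division is exact here since (1 - a^(n+1)) is divisible by (1-a).
    a, n = 3, 10
    assert sum(a ** i for i in range(n + 1)) == (1 - a ** (n + 1)) // (1 - a)
    assert sum(range(1, n + 1)) == n * (n + 1) // 2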
Recall that log_b(a) = c means that b^c = a. b is called the base and c is called the exponent.
What is meant by log(n) when we don't specify the base? For asymptotic analysis it doesn't matter: log_b(n) = log_2(n)/log_2(b), so changing the base changes the value by only a constant factor.
I assume you know what a^b is. (Actually this is not so obvious. Whatever 2 raised to the square root of 3 means, it is not writing 2 down the square root of 3 times and multiplying.) So you also know that a^{x+y} = a^x · a^y.
Theorem: Let a, b, and c be positive real numbers. (As noted above, I will write the identities on the board.) To ease writing, I will often use base 2. This is not needed; any base would do.
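The identities themselves are left for the board; here is just a quick Python check (my sketch) of the change-of-base fact used above:

    import math

    # log_b(n) = log_2(n) / log_2(b): any base differs by a constant factor.
    n, b = 1_000_000, 10
    assert math.isclose(math.log(n, b), math.log2(n) / math.log2(b))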
Homework: C-1.12
⌊x⌋ is the greatest integer not greater than x. ⌈x⌉ is the least integer not less than x.
⌊5⌋ = ⌈5⌉ = 5
⌊5.2⌋ = 5 and ⌈5.2⌉ = 6
⌊-5.2⌋ = -6 and ⌈-5.2⌉ = -5
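In Python, math.floor and math.ceil agree with these definitions; the snippet below (mine) also shows that int() truncates toward zero and so is not floor for negatives:

    import math

    assert math.floor(5.2) == 5 and math.ceil(5.2) == 6
    assert math.floor(-5.2) == -6 and math.ceil(-5.2) == -5
    assert int(-5.2) == -5      # int() truncates toward zero, unlike floor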
To prove the claim that there is an n greater than 1000, we merely have to note that 1001 is greater than 1000.
To refute the claim that all n are greater than 1000, we merely have to note that 999 is not greater than 1000.
"P implies Q" is the same as "not Q implies not P". So to show that no prime is a square we note that "prime implies not square" is the same is "not (not square) implies not prime", i.e. "square implies not prime", which is obvious.
Assume what you want to prove is false and derive a contradiction.
Theorem: There are an infinite number of primes.
Proof: Assume not. Let the primes be p_1 up to p_k and consider the number A = p_1·p_2·…·p_k + 1. A has remainder 1 when divided by any p_i so cannot have any p_i as a factor. Factor A into primes. None can be a p_i (A may or may not be prime). But we assumed that all the primes were the p_i. Contradiction. Hence our assumption that we could list all the primes was false.
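A small Python illustration (mine) of the step "A may or may not be prime": with the first six primes, A is composite, yet no p_i divides it:

    from math import prod

    primes = [2, 3, 5, 7, 11, 13]
    A = prod(primes) + 1                       # A = 30030 + 1
    assert all(A % p == 1 for p in primes)     # remainder 1 mod each p_i
    assert A == 30031 == 59 * 509              # composite, but with new prime factors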
The goal is to show the truth of some statement S(n) for all integers n≥1. It is enough to show two things: that S(1) is true (the base case), and that if S(k) is true for all k<n then S(n) is true (the inductive step).
Theorem: A complete binary tree of height h has 2^h - 1 nodes.
Proof:
We write NN(h) to mean the number of nodes in a complete binary tree of height h.
A complete binary tree of height 1 is just a root, so NN(1) = 1 and 2^1 - 1 = 1.
Now we assume NN(k) = 2^k - 1 for all k < h and consider a complete binary tree of height h. It is just a complete binary tree of height h-1 with new leaf nodes added. How many new leaves? Ans. 2^(h-1) (this could be proved by induction as a lemma, but is fairly clear without induction).
Hence NN(h) = NN(h-1) + 2^(h-1) = (2^(h-1) - 1) + 2^(h-1) = 2·2^(h-1) - 1 = 2^h - 1.
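A quick Python check (mine), counting level by level: level d contributes 2^(d-1) nodes, so NN(h) is the sum over d = 1..h:

    def NN(h):
        return sum(2 ** (d - 1) for d in range(1, h + 1))

    assert all(NN(h) == 2 ** h - 1 for h in range(1, 21))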
Homework: R-1.9
Very similar to induction. Assume we have a loop with controlling variable i, for example a "for i←0 to n-1" loop. We then associate with the loop a statement S(j) depending on j such that: S(0) is true just before the loop begins, and if S(j-1) is true just before iteration j begins then S(j) is true just after iteration j ends. It follows that S(n) is true when the loop terminates.
I favor having array and loop indexes starting at zero. However, here it causes us some grief. We must remember that iteration j occurs when i=j-1.
Example: Recall the countPositives algorithm.
Algorithm countPositives
    Input: Non-negative integer n and an integer array A of size n.
    Output: The number of positive elements in A.

    pos ← 0
    for i ← 0 to n-1 do
        if A[i] > 0 then
            pos ← pos + 1
    return pos
Let S(j) be "pos equals the number of positive values in the first j elements of A".
Just before the loop starts, S(0) is true vacuously (the first 0 elements contain 0 positive values). Indeed that is the purpose of the first statement in the algorithm (pos ← 0).
Assume S(j-1) is true before iteration j, then iteration j (i.e., i=j-1) checks A[j-1] which is the jth element and updates pos accordingly. Hence S(j) is true after iteration j finishes.
Hence we conclude that S(n) is true when iteration n concludes, i.e. when the loop terminates. Thus pos is the correct value to return.
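Here is the algorithm in Python (my translation), with S(j) asserted after each iteration; remember iteration j has i = j-1:

    def count_positives(A):
        pos = 0
        for i in range(len(A)):
            if A[i] > 0:
                pos = pos + 1
            # S(i+1): pos equals the number of positives in A[0..i]
            assert pos == sum(1 for x in A[:i + 1] if x > 0)
        return pos

    assert count_positives([3, -1, 0, 7, 2]) == 3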