Basic Algorithms

================ Start Lecture #3 ================

1.2.2 Relatives of the Big-Oh

Big-Omega and Big-Theta

Recall that f(n) is O(g(n)) if, for large n, f is not much bigger than g. That is, g is a sort of upper bound on f. How about a definition for the case when g is (in the same sense) a lower bound for f?

Definition: Let f(n) and g(n) be real-valued functions of an integer variable. Then f(n) is Ω(g(n)) if g(n) is O(f(n)).

Remarks:

  1. We pronounce f(n) is Ω(g(n)) as "f(n) is big-Omega of g(n)".
  2. What the last definition says is that f(n) is not much smaller than g(n) precisely when g(n) is not much bigger than f(n), which sounds reasonable to me.
  3. What if f(n) and g(n) are about equal, i.e., neither is much bigger than the other?

Definition: We write f(n) is Θ(g(n)) if both f(n) is O(g(n)) and f(n) is Ω(g(n)).

Remark: We pronounce f(n) is Θ(g(n)) as "f(n) is big-Theta of g(n)".
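Equivalently (a standard reformulation, stated here for reference): f(n) is Θ(g(n)) exactly when there are constants c', c'' > 0 and an n_0 such that c'·g(n) ≤ f(n) ≤ c''·g(n) for all n ≥ n_0, i.e., f is sandwiched between two constant multiples of g.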

Examples to do on the board.

  1. 2x^2+3x is Θ(x^2).
  2. 2x^3+3x is not Θ(x^2).
  3. 2x^3+3x is Ω(x^2).
  4. innerProductRecursive is Θ(n).
  5. binarySearch is Θ(log(n)). Unofficial for now.
  6. If f(n) is Θ(g(n)), then f(n) is Ω(g(n)).
  7. If f(n) is Θ(g(n)), then f(n) is O(g(n)).
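Here is example 1 worked out with explicit constants, using the sandwiched form noted above: for all x ≥ 1 we have 3x ≤ 3x^2, so 2x^2 ≤ 2x^2+3x ≤ 5x^2. Thus the constants 2 and 5 (with n_0 = 1) witness that 2x^2+3x is Θ(x^2).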

Homework: R-1.6

Little-Oh and Little-Omega

Recall that big-Oh captures the idea that for large n, f(n) is not much bigger than g(n). Now we want to capture the idea that, for large n, f(n) is tiny compared to g(n).

If you remember limits from calculus, what we want is that f(n)/g(n)→0 as n→∞. However, the definition we give does not use the word limit (it essentially has the definition of a limit built in).

Definition: Let f(n) and g(n) be real-valued functions of an integer variable. We say f(n) is o(g(n)) if for any c>0, there is an n_0 such that f(n) ≤ c·g(n) for all n > n_0. This is pronounced as "f(n) is little-oh of g(n)".

Definition: Let f(n) and g(n) be real-valued functions of an integer variable. We say f(n) is ω(g(n)) if g(n) is o(f(n)). This is pronounced as "f(n) is little-omega of g(n)".

Examples: log(n) is o(n) and n^2 is ω(n·log(n)).
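To make the limit intuition concrete, here is a small numeric illustration in Python (just a sanity check, not part of the formal definition): the ratio log(n)/n shrinks toward 0 as n grows.

    import math

    # log(n) is o(n): the ratio log(n)/n should head toward 0 as n grows.
    for n in [10, 10**3, 10**6, 10**9, 10**12]:
        print(f"n = {n:>13}   log(n)/n = {math.log(n) / n:.2e}")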

Homework: R-1.4. R-1.22

What is "fast" or "efficient"?

If the asymptotic time complexity is bad, say Ω(n^8), or horrendous, say Ω(2^n), then for large n, the algorithm will definitely be slow. Indeed for exponential algorithms even modest n's (say n=50) are hopeless.

Algorithms that are o(n) (i.e., faster than linear, a.k.a. sub-linear), e.g. logarithmic algorithms, are very fast and quite rare. Note that such algorithms do not even inspect most of the input data once. Binary search has this property. When you look up a name in the phone book you do not even glance at a majority of the names present.
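Here is a minimal binary search sketch in Python (illustrative only; not necessarily the binarySearch of the earlier lecture). Each iteration halves the remaining range, so only about log(n) of the elements are ever examined:

    def binary_search(a, target):
        """Return an index of target in the sorted list a, or -1 if absent."""
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2      # midpoint of the current range
            if a[mid] == target:
                return mid
            elif a[mid] < target:
                lo = mid + 1          # target, if present, lies in the upper half
            else:
                hi = mid - 1          # target, if present, lies in the lower half
        return -1

    print(binary_search([2, 3, 5, 7, 11, 13], 7))   # prints 3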

Linear algorithms (i.e., Θ(n)) are also fast. Indeed, if the time complexity is O(n·log(n)), we are normally quite happy.

Low-degree polynomial algorithms (e.g., Θ(n^2), Θ(n^3), Θ(n^4)) are interesting. They are certainly not fast, but speeding up a computer system by a factor of 1000 (feasible today with parallelism) means that a Θ(n^3) algorithm can solve a problem 10 times larger in the same time, since 10^3 = 1000. Many science/engineering problems are in this range.

1.2.3 The Importance of Asymptotics

It really is true that if algorithm A is o(algorithm B) then for large problems A will take much less time than B.

Definition: If (the number of operations in) algorithm A is o(algorithm B), we call A asymptotically faster than B.

Example: The following sequence of functions is ordered by growth rate, i.e., each function is little-oh of the subsequent function.
log(log(n)), log(n), (log(n))^2, n^(1/3), n^(1/2), n, n·log(n), n^2/log(n), n^2, n^3, 2^n.
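To see the ordering numerically, here is an illustrative Python sketch evaluating each function at a single large n. (Note that the ordering is only asymptotic: e.g., (log(n))^2 exceeds n^(1/3) until n is roughly 2^30.)

    import math

    n = 2**40
    values = [
        ("log(log(n))", math.log2(math.log2(n))),
        ("log(n)",      math.log2(n)),
        ("(log(n))^2",  math.log2(n) ** 2),
        ("n^(1/3)",     n ** (1 / 3)),
        ("n^(1/2)",     math.sqrt(n)),
        ("n",           float(n)),
        ("n*log(n)",    n * math.log2(n)),
        ("n^2/log(n)",  n**2 / math.log2(n)),
        ("n^2",         float(n**2)),
        ("n^3",         float(n**3)),
    ]
    for name, v in values:          # printed values increase down the list
        print(f"{name:12} {v:12.4e}")
    # 2^n is omitted: at n = 2^40 it has over 300 billion decimal digits.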

What about those constants that we have swept under the rug?

Modest multiplicative constants (as well as immodest additive constants) don't cause too much trouble. But there are algorithms (e.g., the AKS logarithmic sorting algorithm) whose multiplicative constants are astronomical; hence, despite their wonderful asymptotic complexity, they are not used in practice.

A Great Table

See table 1.10 on page 20.

Homework: R-1.7

1.3 A Quick Mathematical Review

This is hard to type in HTML. The book is fine and I will write the formulas on the board.

1.3.1 Summations

Definition: The sigma notation: ∑f(i) with i going from a to b.

Theorem: Assume 0 < a ≠ 1. Then ∑ a^i for i from 0 to n = (1-a^(n+1))/(1-a).

Proof: Cute trick. Let S = ∑ a^i for i from 0 to n. Multiply by a and subtract: S - aS = 1 - a^(n+1), so S = (1-a^(n+1))/(1-a).

Theorem: ∑ i for i from 1 to n = n(n+1)/2.

Proof: Pair the 1 with the n, the 2 with the n-1, etc. Each pair sums to n+1. For n even there are exactly n/2 pairs, giving n(n+1)/2. For n odd there are (n-1)/2 pairs plus the unpaired middle term (n+1)/2, which gives the same total.
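Both closed forms are easy to sanity-check by brute force; a small illustrative Python check (the values a=3, n=10 are arbitrary):

    # Check the geometric-series and arithmetic-series formulas by brute force.
    a, n = 3.0, 10
    assert abs(sum(a**i for i in range(n + 1)) - (1 - a**(n + 1)) / (1 - a)) < 1e-6
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    print("both formulas agree with direct summation")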

Homework: R-1.14. (It would have been more logical to assign this last time right after I did R-1.13. In the future I will do so).

1.3.2 Logarithms and Exponents

Recall that log_b(a) = c means that b^c = a. b is called the base and c is called the exponent.

What is meant by log(n) when we don't specify the base? (In these notes the default base is 2; for asymptotic statements the base is irrelevant, since by rule 4 below logs to different bases differ only by a constant factor.)

I assume you know what a^b is. (Actually this is not so obvious: whatever 2 raised to the square root of 3 means, it is not writing 2 down the square root of 3 times and multiplying.) So you also know that a^(x+y) = a^x·a^y.

Theorem: Let a, b, and c be positive real numbers. To ease writing, I will use base 2 often. This is not needed. Any base would do.

  1. log(a·c) = log(a) + log(c)
  2. log(a/c) = log(a) - log(c)
  3. log(a^c) = c·log(a)
  4. log_c(a) = log(a)/log(c): consider a = c^(log_c(a)) and take the log of both sides.
  5. c^(log(a)) = a^(log(c)): take the log of both sides.
  6. (b^a)^c = b^(ac)
  7. b^a·b^c = b^(a+c)
  8. b^a/b^c = b^(a-c)

Examples
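For instance, rules 4 and 5 can be spot-checked numerically; an illustrative Python check (the values a=100, c=3 are arbitrary):

    import math

    a, c = 100.0, 3.0

    # Rule 4: log_c(a) = log(a)/log(c)   (logs on the right in base 2)
    assert abs(math.log(a, c) - math.log2(a) / math.log2(c)) < 1e-9

    # Rule 5: c^log(a) = a^log(c)   (both logs base 2)
    assert abs(c ** math.log2(a) - a ** math.log2(c)) < 1e-6
    print("rules 4 and 5 check out numerically")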

Homework: C-1.12

Floor and Ceiling

⌊x⌋ is the greatest integer not greater than x. ⌈x⌉ is the least integer not less than x.

⌊5⌋ = ⌈5⌉ = 5

⌊5.2⌋ = 5 and ⌈5.2⌉ = 6

⌊-5.2⌋ = -6 and ⌈-5.2⌉ = -5
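These match Python's math.floor and math.ceil, which is a handy way to check the negative cases:

    import math

    print(math.floor(5), math.ceil(5))        # 5 5
    print(math.floor(5.2), math.ceil(5.2))    # 5 6
    print(math.floor(-5.2), math.ceil(-5.2))  # -6 -5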

1.3.3 Simple Justification Techniques

By example

To prove the claim that there is a positive n satisfying n^n > n+n, we merely have to note that 3^3 > 3+3.

By counterexample

To refute the claim that all positive n satisfy n^n > n+n, we merely have to note that 1^1 < 1+1.

By contrapositive

"P implies Q" is the same as "not Q implies not P". So to show that in the world of positive integers "a2≥b2 implies that a≥b" we can show instead that "NOT(a≥b) implies NOT(a2≥b2)", i.e., that "a<b implies a2<b2", which is clear.

By contradiction

Assume what you want to prove is false and derive a contradiction.

Theorem: There are an infinite number of primes.

Proof: Assume not. Let the primes be p_1 up to p_k and consider the number A = p_1·p_2⋯p_k + 1. A has remainder 1 when divided by any p_i, so it cannot have any p_i as a factor. Factor A into primes. None can be a p_i (A may or may not itself be prime). But we assumed that all the primes were the p_i. Contradiction. Hence our assumption that we could list all the primes was false.
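The construction is easy to play with; an illustrative Python snippet using the first few primes:

    # A = p1*p2*...*pk + 1 leaves remainder 1 when divided by each pi.
    primes = [2, 3, 5, 7, 11]
    A = 1
    for p in primes:
        A *= p
    A += 1
    print(A, [A % p for p in primes])   # 2311 [1, 1, 1, 1, 1]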

By (complete) induction

The goal is to show the truth of some statement for all integers n≥1. It is enough to show two things.

  1. The statement is true for n=1.
  2. IF the statement is true for all k<n, then it is true for n.

Theorem: A complete binary tree of height h has 2^h - 1 nodes.

Proof: We write NN(h) to mean the number of nodes in a complete binary tree of height h. A complete binary tree of height 1 is just a root, so NN(1) = 1 = 2^1 - 1. Now assume NN(k) = 2^k - 1 for all k < h and consider a complete binary tree of height h. It is just two complete binary trees of height h-1 with a new root to connect them.
So NN(h) = 2·NN(h-1) + 1 = 2(2^(h-1) - 1) + 1 = 2^h - 1, as desired.
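The recurrence in that last line is easy to check against the closed form; a small illustrative Python sketch (the name nn is mine):

    def nn(h):
        """Nodes in a complete binary tree of height h, via NN(h) = 2*NN(h-1) + 1."""
        return 1 if h == 1 else 2 * nn(h - 1) + 1

    # The recurrence agrees with the closed form 2^h - 1.
    for h in range(1, 21):
        assert nn(h) == 2**h - 1
    print("NN(h) = 2^h - 1 holds for h = 1..20")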

Homework: R-1.9