Basic Algorithms

Part I: Fundamental Tools

Chapter 1: Algorithm Analysis

We are interested in designing good algorithms (step-by-step procedures for performing some task in a finite amount of time) and good data structures (systematic ways of organizing and accessing data).

Unlike v22.102, however, we wish to determine rigorously just how good our algorithms and data structures really are and whether significantly better algorithms are possible.

1.1: Methodologies for Analyzing Algorithms

We will be primarily concerned with the speed (time complexity) of algorithms.

We will instead emphasize an analytic framework that is independent of the input and the hardware, and does not require an implementation. The disadvantage is that we can only estimate the time required.

Homework: Unless otherwise stated, homework problems are from the last section in the current book chapter. R-1.1 and R-1.2.

1.1.1: Pseudo-Code

Designed for human understanding. Suppress unimportant details and describe some parts in natural language (English in this course).

1.1.2: The Random Access Machine (RAM) Model

The key difference from reality is the assumption of a very simple memory model: Accessing any memory element takes a constant amount of time. This ignores caching and paging for example. It also ignores the word-size of a computer (any size number can be stored in one word and accessed in one operation time).

The time required is simply a count of the primitive operations executed. Primitive operations include

  1. Assign a value to a variable (independent of the size of the value; but the variable must be a scalar).
  2. Method invocation, i.e., calling a function or subroutine.
  3. Performing a (simple) arithmetic operation (divide is OK, logarithm is not).
  4. Indexing into an array (for now just one dimensional; scalar access is free).
  5. Following an object reference.
  6. Returning from a method.

1.1.3: Counting Primitive Operations

Let's start with a simple algorithm (the book does a different simple algorithm, maximum).

Algorithm innerProduct
    Input: Non-negative integer n and two integer arrays A and B of size n.
    Output: The inner product of the two arrays

prod ← 0
for i ← 0 to n-1 do
    prod ← prod + A[i]*B[i]
return prod

The total is thus 1+1+5n+2n+(n+1)+1 = 8n+4: one operation to initialize prod, one to initialize i, 5n for the loop body (two array indexings, a multiplication, an addition, and an assignment per iteration), 2n to increment and re-assign i, n+1 loop tests, and one return.
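
For concreteness, here is one possible Java transcription of the pseudo-code (a minimal sketch; the class and method names are my own, not from the book). The comments indicate where each term of the 8n+4 count comes from.

    public class InnerProduct {
        // Returns the inner product of A and B, both of size n (n may be 0).
        public static int innerProduct(int n, int[] A, int[] B) {
            int prod = 0;                     // 1 assignment
            for (int i = 0; i < n; i++) {     // 1 initialization, n+1 tests, 2n operations to advance i
                prod = prod + A[i] * B[i];    // 5 operations per iteration: 2 indexings, *, +, assignment
            }
            return prod;                      // 1 return
        }

        public static void main(String[] args) {
            int[] A = {1, 2, 3};
            int[] B = {4, 5, 6};
            System.out.println(innerProduct(3, A, B));   // prints 32
        }
    }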

Let's improve it (a very little bit)

Algorithm innerProductBetter
    Input: Non-negative integer n and two integer arrays A and B of size n.
    Output: The inner product of the two arrays

prod ← A[0]*B[0]
for i ← 1 to n-1 do
    prod ← prod + A[i]*B[i]
return prod

The cost is 4+1+5(n-1)+2(n-1)+n+1 = 8n-1 (the initial product costs 4 operations, and the loop now has only n-1 iterations and n tests).

THIS ALGORITHM IS WRONG!!

If n=0, we access A[0] and B[0], which do not exist. The original version returns zero as the inner product of empty arrays, which is arguably correct. The best fix is perhaps to change Non-negative to Positive. Let's call this algorithm innerProductBetterFixed.

What about if statements?

Algorithm countPositives
    Input: Non-negative integer n and an integer array A of size n.
    Output: The number of positive elements in A

pos ← 0
for i ← 0 to n-1 do
    if A[i] > 0 then
        pos ← pos + 1
return pos

Let U be the number of updates done, i.e., the number of times the body of the if is executed. The count of primitive operations now depends on the input data as well as on n, but since 0 ≤ U ≤ n and each update costs a constant number of operations, the total is still linear in n.
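
Here is a minimal Java version of countPositives (again, the names are mine); the comment marks the statement that is executed U times.

    public class CountPositives {
        // Returns the number of strictly positive elements among A[0..n-1].
        public static int countPositives(int n, int[] A) {
            int pos = 0;
            for (int i = 0; i < n; i++) {
                if (A[i] > 0) {
                    pos = pos + 1;            // executed U times; U depends on the data, but U <= n
                }
            }
            return pos;
        }

        public static void main(String[] args) {
            System.out.println(countPositives(5, new int[]{3, -1, 0, 7, 2}));   // prints 3
        }
    }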

1.1.4: Analyzing Recursive Algorithms

Consider a recursive version of innerProduct. If the arrays are of size 1, the answer is clearly A[0]B[0]. If n>1, we recursively get the inner product of the first n-1 terms and then add in the last term.

Algorithm innerProductRecursive
    Input: Positive integer n and two integer arrays A and B of size n.
    Output: The inner product of the two arrays

if n=1 then
    return A[0]B[0]
return innerProductRecursive(n-1,A,B) + A[n-1]B[n-1]

How many steps does the algorithm require? Let T(n) be the number of steps required.
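
A Java sketch of the recursive version (the class name is mine). Note that, apart from the recursive call, each invocation does only a constant amount of work, which is the observation we will use when analyzing T(n).

    public class InnerProductRecursive {
        // Requires n >= 1, as stated in the input condition.
        public static int innerProductRecursive(int n, int[] A, int[] B) {
            if (n == 1) {
                return A[0] * B[0];
            }
            // Inner product of the first n-1 terms, plus the last term.
            return innerProductRecursive(n - 1, A, B) + A[n - 1] * B[n - 1];
        }

        public static void main(String[] args) {
            int[] A = {1, 2, 3};
            int[] B = {4, 5, 6};
            System.out.println(innerProductRecursive(3, A, B));   // prints 32
        }
    }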

================ Start Lecture #2 ================

Homework: I should have given some last time. It is listed in the notes (search for homework). Also some will be listed this time. BUT, due to the Jewish holiday, none is officially assigned. You can get started if you wish since all will eventually be assigned, but none will be collected next class.

1.2: Asymptotic Notation

Now we are going to be less precise and worry only about approximate answers for large inputs.

1.2.1: The Big-Oh Notation

Definition: Let f(n) and g(n) be real-valued functions of a single non-negative integer argument. We write f(n) is O(g(n)) if there is a positive real number c and a positive integer n_0 such that f(n) ≤ c·g(n) for all n ≥ n_0.

What does this mean?

For large inputs (n ≥ n_0), f is not much bigger than g (f(n) ≤ c·g(n)).
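
As a worked instance of the definition (an example of my own, not from the book): to show that 3n-6 is O(n), it is enough to exhibit one pair of constants, e.g.,

    \[ 3n - 6 \le 3n \quad \text{for all } n \ge 1, \]

so c = 3 and n_0 = 1 work, and hence 3n-6 is O(n). The constants need not be best possible; c = 100 and n_0 = 7 would do just as well.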

Examples to do on the board

  1. 3n-6 is O(n). Some less common ways of saying the same thing follow.
  2. 3x-6 is O(x).
  3. If f(y)=3y-6 and id(y)=y, then f(y) is O(id(y)).
  4. 3n-6 is O(2n)
  5. 9n^4+12n^2+1234 is O(n^4).
  6. innerProduct is O(n)
  7. innerProductBetter is O(n)
  8. innerProductBetterFixed is O(n)
  9. countPositives is O(n)
  10. n+log(n) is O(n).
  11. log(n)+5log(log(n)) is O(log(n)).
  12. 1234554321 is O(1).
  13. 3/n is O(1). True but not the best.
  14. 3/n is O(1/n). Much better.
  15. innerProduct is O(100n+log(n)+34.5). True, but awful.

A few theorems give us rules that make calculating big-Oh easier.

Theorem (arithmetic): Let d(n), e(n), f(n), and g(n) be nonnegative real-valued functions of a nonnegative integer argument and assume d(n) is O(f(n)) and e(n) is O(g(n)). Then

  1. a·d(n) is O(f(n)) for any nonnegative constant a
  2. d(n)+e(n) is O(f(n)+g(n))
  3. d(n)e(n) is O(f(n)g(n))

Theorem (transitivity): Let d(n), f(n), and g(n) be nonnegative real-valued functions of a nonnegative integer argument and assume d(n) is O(f(n)) and f(n) is O(g(n)). Then d(n) is O(g(n)).

Theorem (special functions): (Only n varies)

  1. If f(n) is a polynomial of degree d, then f(n) is O(n^d).
  2. n^x is O(a^n) for any x>0 and a>1.
  3. log(n^x) is O(log(n)) for any x>0.
  4. (log(n))^x is O(n^y) for any x>0 and y>0.

Homework: R-1.19 R-1.20

Definitions: (Common names)

  1. If a function is O(log(n)) we call it logarithmic.
  2. If a function is O(n) we call it linear.
  3. If a function is O(n^2) we call it quadratic.
  4. If a function is O(n^k) with k≥1, we call it polynomial.
  5. If a function is O(a^n) with a>1, we call it exponential.
Remark: The last definitions would be better with a relative of big-Oh, namely big-Theta, since, for example, 3log(n) is O(n^2), but we do not call 3log(n) quadratic.

Homework: R-1.10 and R-1.12.

R-1.13: The outer (i) loop is done 2n times. For each outer iteration the inner loop is done i times. Each inner iteration is a constant number of steps so each inner loop is O(i), which is the time for the ith iteration of the outer loop. So the entire outer loop is ∑ O(i) for i from 0 to 2n, which is O(n^2).

1.2.2: Relatives of the Big-Oh

Big-Omega and Big-Theta

Recall that f(n) is O(g(n)) if, for large n, f is not much bigger than g. That is, g is some sort of upper bound on f. How about a definition for the case when g is (in the same sense) a lower bound for f?

Definition: Let f(n) and g(n) be real valued functions of an integer value. Then f(n) is Ω(g(n)) if g(n) is O(f(n)).

Remarks:

  1. We pronounce f(n) is Ω(g(n)) as "f(n) is big-Omega of g(n)".
  2. What the last definition says is that we say f(n) is not much smaller than g(n) if g(n) is not much bigger than f(n), which sounds reasonable to me.
  3. What if f(n) and g(n) are about equal, i.e., neither is much bigger than the other?

Definition: We write f(n) is Θ(g(n)) if both f(n) is O(g(n)) and f(n) is Ω(g(n)).

Remark: We pronounce f(n) is Θ(g(n)) as "f(n) is big-Theta of g(n)".

Examples to do on the board.

  1. 2x^2+3x is Θ(x^2).
  2. 2x^3+3x is not Θ(x^2).
  3. 2x^3+3x is Ω(x^2).
  4. innerProductRecursive is Θ(n).
  5. binarySearch is Θ(log(n)). Unofficial for now.
  6. If f(n) is Θ(g(n)), then f(n) is Ω(g(n)).
  7. If f(n) is Θ(g(n)), then f(n) is O(g(n)).

Homework: R-1.6

Little-Oh and Little-Omega

Recall that big-Oh captures the idea that for large n, f(n) is not much bigger than g(n). Now we want to capture the idea that, for large n, f(n) is tiny compared to g(n).

If you remember limits from calculus, what we want is that f(n)/g(n)→0 as n→∞. However, the definition we give does not use limits (it essentially has the definition of a limit built in).

Definition: Let f(n) and g(n) be real valued functions of an integer variable. We say f(n) is o(g(n)) if for any c>0, there is an n_0 such that f(n) ≤ c·g(n) for all n ≥ n_0. This is pronounced as "f(n) is little-oh of g(n)".

Definition: Let f(n) and g(n) be real valued functions of an integer variable. We say f(n) is ω(g(n)) if g(n) is o(f(n)). This is pronounced as "f(n) is little-omega of g(n)".

Examples: log(n) is o(n) and n^2 is ω(n·log(n)).

Homework: R-1.4. R-1.22

================ Start Lecture #3 ================

Remark: I changed my mind about homework. Too many to have each one really graded. We now have homeworks and problem sets as explained here.

What is "fast" or "efficient"?

If the asymptotic time complexity is bad, say n^5, or horrendous, say 2^n, then for large n, the algorithm will definitely be slow. Indeed for exponential algorithms even modest n's (say n=50) are hopeless.

Algorithms that are o(n) (i.e., faster than linear, a.k.a. sub-linear), e.g. logarithmic algorithms, are very fast and quite rare. Note that such algorithms do not even inspect most of the input data once. Binary search has this property. When you look up a name in the phone book, you do not even glance at a majority of the names present.

Linear algorithms (i.e., Θ(n)) are also fast. Indeed, if the time complexity is O(n·log(n)), we are normally quite happy.

Low-degree polynomial algorithms (e.g., Θ(n^2), Θ(n^3), Θ(n^4)) are interesting. They are certainly not fast, but speeding up a computer system by a factor of 1000 (feasible today with parallelism) means that a Θ(n^3) algorithm can solve a problem 10 times larger (since 10^3 = 1000). Many science/engineering problems are in this range.

1.2.3: The Importance of Asymptotics

It really is true that if algorithm A is o(algorithm B) then for large problems A will take much less time than B.

Definition: If (the number of operations in) algorithm A is o(algorithm B), we call A asymptotically faster than B.

Example: The following sequence of functions is ordered by growth rate, i.e., each function is little-oh of the subsequent function.
log(log(n)), log(n), (log(n))^2, n^(1/3), n^(1/2), n, n·log(n), n^2/log(n), n^2, n^3, 2^n.
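
To see the separation numerically, here is a small throwaway Java program (my own, not from the book) that evaluates a few of the functions above at increasing n.

    // Tabulates some of the functions above to illustrate the growth-rate ordering.
    public class GrowthRates {
        public static void main(String[] args) {
            System.out.printf("%8s %10s %14s %16s %20s%n", "n", "log n", "n log n", "n^2", "n^3");
            for (int n = 16; n <= 4096; n *= 16) {
                double log = Math.log(n) / Math.log(2);    // log base 2
                System.out.printf("%8d %10.1f %14.1f %16d %20d%n",
                        n, log, n * log, (long) n * n, (long) n * n * n);
            }
        }
    }

Even at n = 4096 the gap between n·log(n) and n^3 is already a factor of more than a million; 2^n is omitted because it overflows a 64-bit long around n = 63.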

What about those constants that we have swept under the rug?

Modest multiplicative constants (as well as immodest additive constants) don't cause too much trouble. But there are algorithms (e.g., the AKS logarithmic sorting algorithm) in which the multiplicative constants are astronomical; hence, despite their wonderful asymptotic complexity, such algorithms are not used in practice.

A Great Table

See table 1.10 on page 20.

Homework: R-1.7

1.3: A Quick Mathematical Review

This is hard to type in using html. The book is fine and I will write the formulas on the board.

1.3.1: Summations

Definition: The sigma notation: ∑f(i) with i going from a to b.

Theorem: Assume 0<a≠1. Then ∑ a^i for i from 0 to n = (1-a^(n+1))/(1-a).

Proof: Cute trick. Let S be the sum. Multiplying by a gives aS = a + a^2 + … + a^(n+1); subtracting, S - aS = 1 - a^(n+1), so S = (1-a^(n+1))/(1-a).

Theorem: ∑i from 1 to n = n(n+1)/2.

Proof: Pair the 1 with the n, the 2 with the (n-1), etc. This gives a bunch of (n+1)s. For n even there are clearly n/2 of them. For n odd there are (n-1)/2 pairs plus the unpaired middle term (n+1)/2, which again totals n(n+1)/2 (look at it).

1.3.2: Logarithms and Exponents

Recall that log_b(a) = c means that b^c = a. b is called the base and c is called the exponent.

What is meant by log(n) when we don't specify the base? (For asymptotic purposes it doesn't matter: logarithms to different bases differ only by a constant factor, by identity 4 below.)

I assume you know what a^b is. (Actually this is not so obvious. Whatever 2 raised to the square root of 3 means, it is not writing 2 down the square root of 3 times and multiplying.) So you also know that a^(x+y) = a^x·a^y.

Theorem: Let a, b, and c be positive real numbers. To ease writing, I will use base 2 often. This is not needed. Any base would do.

  1. log(a·c) = log(a) + log(c)
  2. log(a/c) = log(a) - log(c)
  3. log(a^c) = c·log(a)
  4. log_c(a) = log(a)/log(c): consider a = c^(log_c(a)) and take the log of both sides.
  5. c^(log(a)) = a^(log(c)): take the log of both sides (a short derivation follows the list).
  6. (b^a)^c = b^(ac)
  7. b^a·b^c = b^(a+c)
  8. b^a/b^c = b^(a-c)
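
As a quick sanity check of identity 5 (a derivation of my own, using identity 3): take the log of both sides,

    \[ \log\left(c^{\log(a)}\right) = \log(a)\,\log(c) = \log\left(a^{\log(c)}\right), \]

and since two positive numbers with equal logs are equal, c^(log(a)) = a^(log(c)).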

Examples

Homework: C-1.12

Floor and Ceiling

⌊x⌋ is the greatest integer not greater than x. ⌈x⌉ is the least integer not less than x.

⌊5⌋ = ⌈5⌉ = 5

⌊5.2⌋ = 5 and ⌈5.2⌉ = 6

⌊-5.2⌋ = -6 and ⌈-5.2⌉ = -5

1.3.3: Simple Justification Techniques

By example

To prove the claim that there is an n greater than 1000, we merely have to note that 1001 is greater than 1000.

By counterexample

To refute the claim that all n are greater than 1000, we merely have to note that 999 is not greater than 1000.

By contrapositive

"P implies Q" is the same as "not Q implies not P". So to show that no prime is a square we note that "prime implies not square" is the same is "not (not square) implies not prime", i.e. "square implies not prime", which is obvious.

By contradiction

Assume what you want to prove is false and derive a contradiction.

Theorem: There are an infinite number of primes.

Proof: Assume not. Let the primes be p_1 up to p_k and consider the number A = p_1·p_2·…·p_k + 1. A has remainder 1 when divided by any p_i, so it cannot have any p_i as a factor. Factor A into primes. None can be p_i (A may or may not be prime). But we assumed that all the primes were among the p_i. Contradiction. Hence our assumption that we could list all the primes was false.

By (complete) induction

The goal is to show the truth of some statement for all integers n≥1. It is enough to show two things.

  1. The statement is true for n=1
  2. IF the statement is true for all k<n, then it is true for n.

Theorem: A complete binary tree of height h has 2^h - 1 nodes.

Proof: We write NN(h) to mean the number of nodes in a complete binary tree of height h. A complete binary tree of height 1 is just a root, so NN(1)=1 and 2^1 - 1 = 1. Now we assume NN(k) = 2^k - 1 for all k<h and consider a complete binary tree of height h. It is just a complete binary tree of height h-1 with new leaf nodes added. How many new leaves?
Ans. 2^(h-1) (this could be proved by induction as a lemma, but is fairly clear without induction).

Hence NN(h) = NN(h-1) + 2^(h-1) = (2^(h-1) - 1) + 2^(h-1) = 2·2^(h-1) - 1 = 2^h - 1.

Homework: R-1.9

Loop Invariants

Very similar to induction. Assume we have a loop with controlling variable i. For example a "for i←0 to n-1" loop. We then associate with the loop a statement S(j) depending on j such that

  1. S(0) is true (just) before the loop begins
  2. IF S(j-1) holds before iteration j begins, then S(j) will hold when iteration j ends.
By induction we see that S(n) will be true when the nth iteration ends, i.e., when the loop ends.

I favor having array and loop indexes starting at zero. However, here it causes us some grief. We must remember that iteration j occurs when i=j-1.

Example: Recall the countPositives algorithm

Algorithm countPositives
    Input: Non-negative integer n and an integer array A of size n.
    Output: The number of positive elements in A

pos ← 0
for i ← 0 to n-1 do
    if A[i] > 0 then
        pos ← pos + 1
return pos

Let S(j) be "pos equals the number of positive values in the first j elements of A".

Just before the loop starts S(0) is true vacuously. Indeed that is the purpose of the first statement in the algorithm.

Assume S(j-1) is true before iteration j, then iteration j (i.e., i=j-1) checks A[j-1] which is the jth element and updates pos accordingly. Hence S(j) is true after iteration j finishes.

Hence we conclude that S(n) is true when iteration n concludes, i.e. when the loop terminates. Thus pos is the correct value to return.
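
One way to make the invariant concrete (a sketch of mine, not from the book) is to restate S(j) as a Java assertion that is checked at the end of every iteration; running the program with assertions enabled (java -ea) then tests the invariant on actual data.

    // countPositives with the loop invariant S(j) checked by an assertion.
    public class CountPositivesInvariant {
        // The number of positive values among A[0..j-1]; used only to state S(j).
        private static int positivesInPrefix(int[] A, int j) {
            int count = 0;
            for (int k = 0; k < j; k++) {
                if (A[k] > 0) count++;
            }
            return count;
        }

        public static int countPositives(int n, int[] A) {
            int pos = 0;                          // S(0): zero positives in an empty prefix
            for (int i = 0; i < n; i++) {         // iteration j runs with i = j-1
                if (A[i] > 0) {
                    pos = pos + 1;
                }
                assert pos == positivesInPrefix(A, i + 1) : "S(" + (i + 1) + ") violated";
            }
            return pos;                           // S(n) holds, so the returned value is correct
        }

        public static void main(String[] args) {
            System.out.println(countPositives(5, new int[]{3, -1, 0, 7, 2}));   // prints 3
        }
    }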

================ Start Lecture #4 ================

1.3.4: Basic Probability

Skipped for now.

1.4: Case Studies in Algorithm Analysis

1.4.1: A Quadratic-Time Prefix Averages Algorithm

We trivially improved innerProduct (same asymptotic complexity before and after). Now we will see a real improvement. For simplicity I do a slightly simpler algorithm, prefix sums.

Algorithm partialSumsSlow
    Input: Positive integer n and a real array A of size n
    Output: A real array B of size n with B[i]=A[0]+…+A[i]

for i ← 0 to n-1 do
    s ← 0
    for j ← 0 to i do
        s ← s + A[j]
    B[i] ← s
return B

The update of s is performed 1+2+…+n times. Hence the running time is Ω(1+2+…+n) = Ω(n^2). In fact it is easy to see that the time is Θ(n^2).

1.4.2: A Linear-Time Prefix Averages Algorithm

Algorithm partialSumsFast
    Input: Positive integer n and a real array A of size n
    Output: A real array B of size n with B[i]=A[0]+…+A[i]

s ← 0
for i ← 0 to n-1 do
    s ← s + A[i]
    B[i] ← s
return B

We just have a single loop and each statement inside is O(1), so the algorithm is O(n) (in fact Θ(n)).
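
A Java sketch of both versions (the names are mine). The only difference is that the slow one recomputes each prefix sum from scratch, while the fast one carries the running sum forward.

    public class PartialSums {
        // Theta(n^2): for each i, re-add A[0..i] from scratch.
        public static double[] partialSumsSlow(int n, double[] A) {
            double[] B = new double[n];
            for (int i = 0; i < n; i++) {
                double s = 0;
                for (int j = 0; j <= i; j++) {
                    s = s + A[j];
                }
                B[i] = s;
            }
            return B;
        }

        // Theta(n): keep the running sum in s.
        public static double[] partialSumsFast(int n, double[] A) {
            double[] B = new double[n];
            double s = 0;
            for (int i = 0; i < n; i++) {
                s = s + A[i];
                B[i] = s;
            }
            return B;
        }

        public static void main(String[] args) {
            double[] A = {1.0, 2.0, 3.0, 4.0};
            System.out.println(java.util.Arrays.toString(partialSumsFast(4, A)));   // [1.0, 3.0, 6.0, 10.0]
        }
    }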

Homework: Write partialSumsFastNoTemps, which is also Θ(n) time but avoids the use of s (it still uses i so my name is not great).

1.5: Amortization

Often we have a data structure supporting a number of operations that will be applied many times. For some data structures, the worst-case running time of the operations may not give a good estimate of how long a sequence of operations will take.

If we divide the running time of the sequence by the number of operations performed, we get the average time for each operation in the sequence, which is called the amortized running time.

Why amortized?
Because the cost of the occasional expensive application is amortized over the numerous cheap applications (I think).

Example: (From the book.) The clearable table. This is essentially an array. The table is initially empty (i.e., has size zero). We want to support three operations.

  1. Add(e): Add a new entry to the table at the end (extending its size).
  2. Get(i): Return the ith entry in the table.
  3. Clear(): Remove all the entries by setting each entry to zero (for security) and setting the size to zero.

The obvious implementation is to use a large array A and an integer s indicating the current size of A. More precisely A is (always) of size N (large) and s indicates the extent of A that is currently in use.

We are ignoring a number of error cases.
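
A minimal Java sketch of this implementation (my own code; as just noted, it ignores the error cases, e.g., adding when s = N or getting an index ≥ s).

    // Clearable table backed by a fixed-size array A; s is the portion currently in use.
    public class ClearableTable {
        private final int[] A;
        private int s = 0;               // current size

        public ClearableTable(int N) {
            A = new int[N];              // N is the (large) fixed capacity
        }

        public void add(int e) {         // O(1)
            A[s] = e;
            s = s + 1;
        }

        public int get(int i) {          // O(1)
            return A[i];
        }

        public void clear() {            // Theta(s): zero each used entry, then reset the size
            for (int i = 0; i < s; i++) {
                A[i] = 0;
            }
            s = 0;
        }
    }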

We start with a size zero table and assume we perform n (legal) operations. Question: What is the worst-case running time for all n operations?

One possibility is that the sequence consists of n-1 add(e) operations followed by one Clear(). The Clear() takes Θ(n), which is the worst-case time for any operation (assuming n operations in total). Since there are n operations and the worst-case is Θ(n) for one of them, we might think that the worst-case sequence would take Θ(n^2).

But this is wrong.

It is easy to see that each Add(e) and Get(i) operation takes Θ(1), so all of them together take O(n).

The total time for all the Clear() operations is O(n) since in total O(n) entries were cleared (since at most n entries were added).

Hence, the amortized time for each operation in the clearable ADT (abstract data type) is O(1), in fact Θ(1).

1.5.1: Amortization Techniques

The Accounting Method

Overcharge for cheap operations and undercharge expensive ones so that the excess charged for the cheap ones (the profit) covers the undercharge (the loss). In accounting this is called an amortization schedule.

Assume that get(i) and add(e) each really cost one "cyber-dollar", i.e., there is a constant K so that each of them takes fewer than K primitive operations, and we let one cyber-dollar stand for K primitive operations. Similarly, assume that clear() costs P cyber-dollars when the table has P elements in it.

We charge 2 cyber-dollars for every operation. So we have a profit of 1 on each add(e), and we see that the profit is enough to cover the next clear(), since if we clear P entries, there were P earlier add(e)s.

All operations cost 2 cyber-dollars so n operations cost 2n. Since we have just seen that the real cost is no more than the cyber-dollars spent, the total cost is Θ(n) and the amortized cost is Θ(1).

Potential Functions

Very similar to the accounting method. Instead of banking money, you increase the potential energy. I don't believe we will use this method, so we are skipping it.

1.5.2: Analyzing an Extendable Array Implementation

Want to let the size of an array grow dynamically (i.e., during execution). The implementation is quite simple. Copy the old array into a new one twice the size. Specifically, on an array overflow instead of signaling an error perform the following steps (assume the array is A and the size is N)

  1. Allocate a new array B of size 2N
  2. For i←0 to N-1 do B[i]←A[i]
  3. Make A refer to B (this is A=B in C and java).
  4. Deallocate the old A (automatic in java; error prone in C)

The cost of this growing operation is Θ(N).

Theorem: Given an extendable array A that is initially empty and of size N, the amortized time to perform n add(e) operations is Θ(1).

Proof: Assume one cyber-dollar is enough for an add w/o the grow, and that N cyber-dollars are enough to grow from N to 2N (the N copies). Charge 3 cyber-dollars for each add, so there is a profit of 2 for each add w/o growing. Between the grow that produces an array of size 2N and the next grow (which costs 2N), there are N adds, banking 2N cyber-dollars, exactly enough to pay for that next grow; the very first grow is even cheaper relative to what has been banked. Hence n adds are charged 3n cyber-dollars, which covers the real cost, so the amortized time per add is Θ(1).
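
A Java sketch of the doubling strategy (my own code, starting from capacity 1 for simplicity; the description above starts from some size N). The grow step is exactly the four steps listed earlier.

    // Extendable array: on overflow, double the capacity and copy the old contents.
    public class ExtendableArray {
        private int[] A = new int[1];    // current backing array
        private int s = 0;               // number of elements in use

        public void add(int e) {
            if (s == A.length) {                      // overflow: grow before adding
                int[] B = new int[2 * A.length];      // 1. allocate B of twice the size
                for (int i = 0; i < A.length; i++) {  // 2. copy the old elements (Theta(N) work)
                    B[i] = A[i];
                }
                A = B;                                // 3. make A refer to B
            }                                         // 4. deallocation is automatic in Java
            A[s] = e;
            s = s + 1;
        }

        public int get(int i) {
            return A[i];
        }
    }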

1.6: Experimentation

1.6.1: Experimental Setup

Book is quite clear. I have little to add.

Choosing the question

You might want to know

Deciding what to measure

================ Start Lecture #5 ================

1.6.2: Data Analysis and Visualization

Ratio test

Assume you believe the running time t(n) of an algorithm is Θ(n^d) for some specific d and you want to both verify your assumption and find the multiplicative constant.

Make a plot of (n, t(n)/n^d). If you are right, the points should tend toward a horizontal line, and the height of this line is the multiplicative constant.
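
A rough Java sketch of the ratio test (the routine being timed and the input sizes are placeholders of mine; real measurements should repeat each timing and allow for JIT warm-up). With d = 2 and a quadratic routine, the printed ratios should level off at roughly the multiplicative constant.

    // Ratio test: if t(n) is Theta(n^d), then t(n)/n^d should approach a constant.
    public class RatioTest {
        // Placeholder quadratic-time routine; substitute the algorithm you want to measure.
        static long work(int n) {
            long s = 0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    s += i + j;
            return s;
        }

        public static void main(String[] args) {
            int d = 2;                                   // conjectured exponent
            for (int n = 1000; n <= 16000; n *= 2) {
                long start = System.nanoTime();
                work(n);
                double t = System.nanoTime() - start;    // t(n), in nanoseconds
                System.out.printf("n = %6d   t(n)/n^%d = %.4f%n", n, d, t / Math.pow(n, d));
            }
        }
    }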

Homework: R-1.29

What if you believe it is polynomial but don't have a guess for d?
Ans: Use the power test, described next.

The power test

Plot (n, t(n)) on log-log paper. If t(n) is Θ(n^d), say t(n) approaches b·n^d, then log(t(n)) approaches log(b) + d·log(n).

So when you plot (log(n), log(t(n))) (i.e., when you use log-log paper), you will see the points approach (for large n) a straight line whose slope is the exponent d and whose y-intercept is log(b), the log of the multiplicative constant.
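
A small Java sketch of the power-test computation: given measured pairs (n, t(n)) — the numbers below are made up for illustration and are roughly quadratic — fit the line log(t) = log(b) + d·log(n) by least squares; the slope estimates d and the intercept estimates log(b).

    // Power test: least-squares fit of log(t) against log(n).
    public class PowerTest {
        public static void main(String[] args) {
            double[] n = {1000, 2000, 4000, 8000};   // input sizes (made-up illustration data)
            double[] t = {0.9, 3.8, 15.5, 61.0};     // measured times; roughly quadruple when n doubles
            int m = n.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < m; i++) {
                double x = Math.log(n[i]);           // log(n)
                double y = Math.log(t[i]);           // log(t(n))
                sx += x; sy += y; sxx += x * x; sxy += x * y;
            }
            double d = (m * sxy - sx * sy) / (m * sxx - sx * sx);   // slope: estimate of the exponent d
            double logb = (sy - d * sx) / m;                        // intercept: estimate of log(b)
            System.out.printf("estimated d = %.2f, b = %.2e%n", d, Math.exp(logb));
        }
    }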

Homework: R-1.30