One could easily complain about the specific primitive operations we chose and about the amount we charge for each one. For example, perhaps we should charge one unit for accessing a scalar variable. Perhaps we should charge more for division than for addition. Some computers can multiply two numbers and add the product to a third in one operation. What about the cost of loading the program?
Now we are going to be less precise and worry only about approximate answers for large inputs. Thus the rather arbitrary decisions we made about how many units to charge for each primitive operation will not matter, since our sloppiness will cover for them. Please note that the sloppiness itself will be defined very precisely.
Big-Oh Notation
Definition: Let f(n) and g(n) be real-valued functions of a single non-negative integer argument. We write f(n) is O(g(n)) if there is a positive real number c and a positive integer n0 such that f(n)≤cg(n) for all n≥n0.
What does this mean?
For large inputs (n≥n0), f is not much bigger than g (specifically, f(n)≤cg(n)).
Examples to do on the board
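For instance, one example of the kind meant here (an illustration I am supplying, not one from the text): f(n) = 3n^2 + 5n is O(n^2). Take c = 4 and n0 = 5; then for all n ≥ 5 we have 5n ≤ n^2, so 3n^2 + 5n ≤ 3n^2 + n^2 = 4n^2 = c·n^2.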
A few theorems give us rules that make calculating big-Oh easier.
Theorem (arithmetic): Let d(n), e(n), f(n), and g(n) be nonnegative real-valued functions of a nonnegative integer argument and assume d(n) is O(f(n)) and e(n) is O(g(n)). Then
1. d(n) + e(n) is O(f(n) + g(n)), and
2. d(n)·e(n) is O(f(n)·g(n)).
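As a quick illustration (an example I am adding, not one from the text): 5n is O(n) and 3n^2 is O(n^2), so the theorem gives 5n + 3n^2 is O(n + n^2), which is O(n^2), and (5n)·(3n^2) = 15n^3 is O(n·n^2) = O(n^3).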
Theorem (transitivity): Let d(n), f(n), and g(n) be nonnegative real-valued functions of a nonnegative integer argument and assume d(n) is O(f(n)) and f(n) is O(g(n)). Then d(n) is O(g(n)).
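Again an illustration of my own: 7n is O(n) and n is O(n log n), so by transitivity 7n is O(n log n).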
Theorem (special functions): (Only n varies; a, b, and d denote positive constants.)
1. If f(n) is a polynomial of degree d, then f(n) is O(n^d).
2. n^a is O(n^b) whenever a ≤ b.
3. (log n)^a is O(n^b) for all positive a and b.
4. n^a is O(b^n) whenever b > 1.
Example: (log n)^1000 is O(n^0.001). This says that raising log n to the 1000th power is not (significantly) bigger than taking the thousandth root of n. Indeed, raising log n to the 1000th power is actually significantly smaller than taking the thousandth root, since n^0.001 is not O((log n)^1000).
So log is a VERY small (i.e., slow-growing) function.
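A quick numerical sketch in Python (my own illustration, not part of the text): comparing (log n)^1000 with n^0.001 directly is hopeless on a computer, since the root overtakes the polylog only for astronomically large n, so the sketch uses the milder pair (log n)^2 versus n^0.5, where the crossover shows up at modest n.

    import math

    # Illustrative comparison: even (log n)^2 is eventually overtaken
    # by the slow-looking square root n^0.5.
    for n in [10, 10**2, 10**3, 10**4, 10**6, 10**9]:
        polylog = math.log(n) ** 2  # (log n)^2, natural log
        root = n ** 0.5             # n^0.5, i.e., the square root of n
        print(f"n = {n:>13,}   (log n)^2 = {polylog:10.1f}   n^0.5 = {root:10.1f}")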
Homework: R-1.19 and R-1.20.
Example: Let's do problem R-1.10. Consider the following simple loop that computes the sum of the first n positive integers, and calculate its running time using big-Oh notation.
Algorithm Loop1(n)
    s ← 0
    for i ← 1 to n do
        s ← s + i

With big-Oh we don't have to worry about multiplicative or additive constants, so we see right away that the running time is just the number of iterations of the loop. The answer is O(n).
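Here is a runnable Python rendering of Loop1 (my translation of the pseudocode, with a step counter added to make the iteration count explicit):

    def loop1(n):
        # Python translation of the Loop1 pseudocode; 'steps' counts loop
        # iterations, the quantity the big-Oh analysis charges for.
        s = 0
        steps = 0
        for i in range(1, n + 1):
            s = s + i
            steps += 1
        return s, steps

    for n in (10, 100, 1000):
        print(n, loop1(n))  # steps equals n exactly, hence O(n)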
Homework: R-1.11 and R-1.12
Definitions: (Common names)
1. If a function is O(log n), we call it logarithmic.
2. If a function is O(n), we call it linear.
3. If a function is O(n^2), we call it quadratic.
4. If a function is O(n^k) for some constant k ≥ 1, we call it polynomial.
5. If a function is O(a^n) for some constant a > 1, we call it exponential.
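For example (again my illustration): 3n^2 + 5n is quadratic (hence also polynomial), while 2^n + n^10 is exponential.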
Homework: R-1.10 and R-1.12.
R-1.13: The outer (i) loop is done 2n times. For the ith outer iteration the inner loop is done i times, and each inner iteration is a constant number of steps, so the ith iteration of the outer loop takes O(i) time. So the entire outer loop is the sum of O(i) for i from 0 to 2n, and since 0 + 1 + ... + 2n = 2n(2n+1)/2 = 2n^2 + n, the total is O(n^2).
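A small Python sketch of a loop with this shape (a hypothetical reconstruction, since the notes don't reproduce the R-1.13 code) confirms the quadratic count:

    def nested(n):
        # Hypothetical loop with the structure described above: the outer
        # loop runs 2n times, the inner loop runs i times on outer pass i.
        steps = 0
        for i in range(2 * n):   # i = 0, 1, ..., 2n - 1
            for j in range(i):   # i iterations of constant work
                steps += 1
        return steps

    for n in (10, 100, 1000):
        print(n, nested(n), 2 * n * n)  # steps = 2n^2 - n, which is O(n^2)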