Consider a recursive version of innerProduct. If the arrays are of size 1, the answer is clearly A[0]B[0]. If n>1, we recursively get the inner product of the first n-1 terms and then add in the last term.
Algorithm innerProductRecursive(n, A, B)
    Input: Positive integer n and two integer arrays A and B of size n.
    Output: The inner product of the two arrays.

    if n = 1 then
        return A[0]B[0]
    return innerProductRecursive(n-1, A, B) + A[n-1]B[n-1]
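The pseudocode translates almost line for line into a real language. Here is a minimal sketch in Python (the name inner_product_recursive and the use of 0-indexed Python lists are our rendering, not part of the notes):

    def inner_product_recursive(n, A, B):
        # Base case: arrays of size 1.
        if n == 1:
            return A[0] * B[0]
        # Inner product of the first n-1 terms, plus the last term.
        return inner_product_recursive(n - 1, A, B) + A[n - 1] * B[n - 1]

For example, inner_product_recursive(3, [1, 2, 3], [4, 5, 6]) returns 1*4 + 2*5 + 3*6 = 32.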
How many steps does the algorithm require? Let T(n) be the number of steps required.
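Under any of the unit-cost charging schemes above, each invocation does a constant amount of work (the test, the multiplication, the addition) plus one recursive call on a problem of size n-1. So the recurrence is

    T(1) = c
    T(n) = T(n-1) + d    for n > 1

for some constants c and d, which unrolls to T(n) = c + (n-1)d. That is, T(n) is linear in n, whatever the constants turn out to be.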
One could easily complain about the specific primitive operations we chose and about the amount we charge for each one. For example, perhaps we should charge one unit for accessing a scalar variable. Perhaps we should charge more for division than for addition. Some computers can multiply two numbers and add the product to a third in one operation. What about the cost of loading the program?
Now we are going to be less precise and worry only about approximate answers for large inputs. Thus the rather arbitrary decisions made about how many units to charge for each primitive operation will not matter, since our sloppiness (i.e., the approximate forms of our answers) will be valid for any (reasonable) choice of primitive operations and the charges assigned to each. Please note that the sloppiness will be very precise.
Big-Oh Notation
Definition: Let f(n) and g(n) be real-valued functions of a single non-negative integer argument. We write f(n) is O(g(n)) if there is a positive real number c and a positive integer n0 such that f(n)≤cg(n) for all n≥n0.
What does this mean?
For large inputs (n≥n0), f is not much bigger than g (specifically, f(n)≤cg(n)).
Examples to do on the board
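One such example: 3n+2 is O(n). Take c = 4 and n0 = 2; then 3n+2 ≤ 4n for all n ≥ 2. The following Python fragment spot-checks the inequality over a finite range (the witnesses c = 4 and n0 = 2 are our choices; a finite check illustrates the definition but is not a substitute for the one-line proof):

    def f(n):
        return 3 * n + 2

    c, n0 = 4, 2
    # The definition demands f(n) <= c*g(n) for ALL n >= n0, with g(n) = n here;
    # we merely verify it for n0 <= n < 10**6.
    assert all(f(n) <= c * n for n in range(n0, 10**6))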
A few theorems give us rules that make calculating big-Oh easier.
Theorem (arithmetic): Let d(n), e(n), f(n), and g(n) be nonnegative real-valued functions of a nonnegative integer argument and assume d(n) is O(f(n)) and e(n) is O(g(n)). Then
1. d(n)+e(n) is O(f(n)+g(n)), and
2. d(n)e(n) is O(f(n)g(n)).
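For example, since 3n+2 is O(n) and 5n^2 is O(n^2), the theorem gives that (3n+2) + 5n^2 is O(n + n^2), which is O(n^2), and that (3n+2)·5n^2 is O(n^3).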
Theorem (transitivity): Let d(n), f(n), and g(n) be nonnegative real-valued functions of a nonnegative integer argument and assume d(n) is O(f(n)) and f(n) is O(g(n)). Then d(n) is O(g(n)).
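For example, 3n+2 is O(n) and n is O(n^2), so by transitivity 3n+2 is O(n^2): a true statement, just a deliberately weak one.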
Theorem (special functions): (Only n varies.)
1. If f(n) is a polynomial of degree d, then f(n) is O(n^d).
2. n^x is O(a^n) for any x>0 and a>1.
3. log(n^x) is O(log n) for any x>0.
4. (log n)^x is O(n^y) for any x>0 and y>0.
Example: (log n)^1000 is O(n^0.001). This says that raising log n to the 1000th power is not (significantly) bigger than taking the thousandth root of n. Indeed, raising log n to the 1000th power is actually significantly smaller than taking the thousandth root, since n^0.001 is not O((log n)^1000).
So log is a VERY small (i.e., slow growing) function.
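For a sense of scale, log2 of one billion is about 30, and log2 of 10^80 (roughly the number of atoms in the observable universe) is under 270.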
Homework: R-1.19 R-1.20
Allan Gottlieb