So one thing you'll notice about this version of the master method is that it
only gives upper bounds.
So we only say that the solution to the recurrence is big-O of some function.
And that's because, if you go back to our recurrence, we used big-O rather than theta on the right-hand side.
And this is in the spirit of the course, where, as algorithm designers, our natural focus is on upper bounds, on guarantees for the worst-case running time of an algorithm. Most of the time, we're not going to focus on proving stronger bounds in terms of theta notation.
Now, a good exercise for you, to check whether you really understand the proof of the master method once we've gone through it, will be to show that if you strengthen the hypothesis and assume the recurrence has the form T of n equals a times T of n over b plus theta of n to the d, then in fact all three of these big-Os in the statement of the master method become thetas, and the solution becomes asymptotically exact.
So one final comment.
You'll notice that I'm being asymmetrically sloppy with the two
logarithms that appear in these formulas.
So let me just explain why.
In particular, you'll notice that in case one, where the logarithm appears as a factor, I'm not specifying the base.
Why can I get away with that?
Well, it's because logarithms with respect to any two different bases differ only by a constant factor.
So the logarithm base e, that is, the natural logarithm, and the logarithm base
2, for example, differ by only a constant factor independent of the argument n.
So you can switch this logarithm to whatever constant base you like; it only changes the leading constant factor, which of course is being suppressed in the big-O notation anyway.
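In symbols, for any two constant bases b and c, the change-of-base formula gives

\[
\log_b n = \frac{\log_c n}{\log_c b} = \frac{1}{\log_c b} \cdot \log_c n,
\]

and the factor 1 over log base c of b is a constant independent of n.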
On the other hand, in case three, we have a logarithm in the exponent, and once it's in the exponent, we definitely care about that constant.
A constant in the exponent is the difference between, say, linear time and quadratic time.
So we need to keep careful track of the base of the logarithm in the exponent in case three, and that base is precisely b, the factor by which the input size shrinks with each recursive call.
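For a concrete illustration, take a = 4 and b = 2: case three then gives

\[
n^{\log_2 4} = n^2,
\]

whereas carelessly switching the base to 4 would give n to the log base 4 of 4, which is just n to the 1, the difference between quadratic and linear time.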
So that's the precise statement of the master method, and the rest of this lecture will work toward understanding it.
So first, in the next video, we'll look at a number of examples, including resolving
the running time of Gauss's recursive algorithm for integer multiplication.
Following those several examples, we'll prove the master method.
And I know that right now these three cases probably look super mysterious, but if I do my job, by the end of the analysis these three cases will seem like the most natural thing in the world, as will this somewhat exotic-looking formula for exactly what the running time is.