In this video, we'll put the master method to use by instantiating it for

six different examples.

But first, let's recall what the master method says.

So the master method takes as input recurrences of a particular format,

in particular recurrences that are parameterized by three

different constants, a, b, and d.

The constant a refers to the number of recursive calls, or

the number of subproblems that get solved.

The constant b is the factor by which the subproblem size is smaller than the original

problem size.

And d is the exponent in

the running time of the work done outside of the recursive calls.

So the recurrence has the form,

T(n), the running time on an input of size n, is no more than a,

the number of subproblems, times the time required to solve each subproblem,

which is T(n/b), because the input size of each subproblem is n/b,

plus O(n to the d),

the work done outside of the recursive calls.

There's also a base case which I haven't written down.

So once the problem size drops below a particular constant

then there should be no more recursion and

you can just solve the problem directly, that is, in constant time.
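To make the recurrence's shape concrete, here is a hypothetical sketch (the function name and base-case threshold are my own, not from the lecture) that counts the total work described by T(n) = a·T(n/b) + n^d, using integer division for the subproblem size:

```python
# Hypothetical sketch of the recurrence's shape: a recursive calls
# on subproblems of size n/b, plus n^d work outside the recursion.
def total_work(n, a, b, d):
    if n <= 1:
        return 1  # base case: constant work once the problem is tiny
    return a * total_work(n // b, a, b, d) + n ** d
```

For merge sort's parameters (a = 2, b = 2, d = 1), total_work grows proportionally to n log n, matching case one of the master method.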

Now given a recurrence in this permitted format, the running time is given by one

of three formulas depending on the relationship between a,

the number of recursive calls, and b raised to the d power.

Case one of the master method is when these two quantities are the same,

a = b to the d.

Then the running time is O(n to the d log n), and no more than that.

In case 2, the number of recursive calls, a, is strictly smaller than b to the d.

Then we get a better running time upper bound of O(n to the d).

And when a is bigger than b to the d, we get this somewhat funky-looking

running time of O(n raised to the log base b of a power).
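The three cases can be summarized in a short sketch (the function name and output format are my own, not from the lecture), which compares a against b to the d and returns the corresponding bound:

```python
from math import log

# Sketch of the master method's three cases for a recurrence
# T(n) <= a*T(n/b) + O(n^d).
def master_bound(a, b, d):
    if a == b ** d:
        return f"O(n^{d} log n)"        # case 1: a = b^d
    elif a < b ** d:
        return f"O(n^{d})"              # case 2: a < b^d
    else:
        return f"O(n^{log(a, b):.2f})"  # case 3: a > b^d, bound is n^(log_b a)
```

For example, merge sort (a = 2, b = 2, d = 1) lands in case one, giving O(n log n).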

We will understand where that formula comes from a little later.

So that's the master method.

It's a little hard to interpret the first time you see it.

So let's look at some concrete examples.