Okay, so we did merge sort: we showed how merge sort works, we analyzed its running time, and we arrived at this recurrence. The recurrence tells us that the running time of merge sort on a list of size n is twice the running time of merge sort on a list of size n/2, plus some overhead for merging the two sorted halves of the list. In the case of merge sort, this overhead is linear: given two lists of size n/2, merging them takes time linear in the size of the input. And, as we said, for the base case, a list of size 1, there is a constant amount of work to do.

Now, this is not an explicit function of the kind that lets us understand the running time. I cannot easily plot it, and I cannot easily see how fast or slow it grows. So the question is: how do we go from this recurrence to an explicit function? How do we get from here to the statement that the running time of merge sort is really O(n log n)?

Dealing with recurrences in general: this particular recurrence comes from merge sort, but recurrences are not only about merge sort or this kind of algorithm. Recurrences are a very broad mathematical tool for describing functions. Sometimes we describe a function explicitly: we say f(n) = 2n. But sometimes we describe a function in this kind of form: we say f(n) = f(n-1) + f(n-2). We describe the function as a recurrence rather than in explicit form, but, as I said, we would like to understand what the function is so that we understand how fast or slow merge sort is. Fortunately for us, the divide-and-conquer algorithms that we are going to see all have this form of recurrence.
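The lecture describes merge sort verbally; as a minimal sketch (my illustration, not code from the lecture), the recurrence T(n) = 2T(n/2) + O(n) shows up directly in a Python implementation: two recursive calls on halves of the list, followed by a linear-time merge.

```python
def merge_sort(lst):
    # Base case: a list of size 0 or 1 is already sorted -- constant work.
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    # Two recursive calls on halves of size ~n/2: the 2*T(n/2) part.
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    # Linear-time merge of the two sorted halves: the O(n) overhead.
    return merge(left, right)

def merge(left, right):
    merged = []
    i = j = 0
    # Repeatedly take the smaller front element; each element is
    # touched once, so this loop is linear in the total size.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The two recursive calls account for the 2T(n/2) term, and the merge loop touches each element exactly once, which is the linear overhead in the recurrence.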
We take a problem, divide it into two sub-problems, each of equal size n/2, and we solve the two sub-problems, or in general a certain number of sub-problems, plus some time for the merging process. The nice thing about this type of recurrence is that there is already a mathematical tool that allows us to go from it to an explicit function in a very simple manner, and that mathematical tool is called the Master Theorem. This brings us to the Master Theorem, which is written here.

The Master Theorem applies to recurrences of this form. What is this form, and how do we relate it to algorithms? Again, think about an algorithm that solves a problem of size n. I divide the problem into b sub-problems, each of size n/b, of equal size, and I solve a of them. So a cannot be greater than b, because I am only dividing the problem into b sub-problems: I can solve one, two, three, four, or all b of them. In addition, I have some time, f(n), for the merge, for merging the solutions of the sub-problems, and I also have constant time for dealing with the simplest cases, the base case.

For this general form of a recurrence, I have the Master Theorem. Notice that I am requiring a ≥ 1, b ≥ 2, and c > 0. These are not conditions to memorize; they should be values that make sense. Remember, a is the number of sub-problems that I solve; in the case of merge sort, I solve both of the sub-problems. Here I am saying that, in general, I solve a certain number of sub-problems. What kind of an algorithm, and what kind of a problem, would it be if I solved zero sub-problems?
I divide the problem into two sub-problems and I solve none of them? That is a useless algorithm. This is why a has to be at least 1. You don't need to memorize that a ≥ 1; it should make sense and feel natural when you think about it.

The second condition is on b. We need to divide the problem into sub-problems. If b is not at least 2, the only other option is b = 1: I take the big problem, divide it into one big problem, and keep dividing it into one big problem. What kind of divide and conquer is that? Of course we have to divide into at least two sub-problems. And two is not a magic number: you can divide the problem into four sub-problems. In merge sort, you could have taken the original list and divided it into four sub-lists of equal size; you could have divided it into 100 sub-lists and repeated the same process. But b is the number of sub-problems that we divide the original problem into, so it has to be at least 2.

Notice also that I am not allowing much room for playing with the sizes of the sub-problems. I am saying that if you divide your problem into b sub-problems, all of them should be of almost equal size, n/b. In practice, the sizes don't have to be exactly equal. I illustrated merge sort on a list of size 8, and I chose 8 on purpose because it is a nice number: it divides into 4 and 4, each 4 divides into 2 and 2, and each 2 divides into 1 and 1. But if I have a list of size 13, I have to divide it into 6 and 7, the 7 into 4 and 3, and the 3 into 2 and 1. So the problem is not always going to divide into two lists of exactly equal size, but they will be of almost equal size. This is why we divide into b sub-problems of equal, or almost equal, length.
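To see the "almost equal size" point concretely, here is a small hypothetical Python helper (mine, not from the lecture) that shows how a list of a given size splits at each level. It splits into halves of size floor(n/2) and ceil(n/2), so 13 splits into 6 and 7, just as in the example above.

```python
def split_sizes(n):
    """Render how a problem of size n keeps splitting into near-halves."""
    if n <= 1:
        return str(n)
    left = n // 2          # floor(n/2)
    right = n - left       # ceil(n/2)
    return f"{n} -> ({split_sizes(left)}, {split_sizes(right)})"

# For n = 13: splits into 6 and 7, then 7 into 3 and 4, and so on.
print(split_sizes(13))
```

The sizes at each level differ by at most one, which is what "equal or almost equal length" means in practice.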
We solve a of them, we have f(n) for the merge, and we have constant time for dealing with the simplest case of the problem. The Master Theorem is stated for this type of recurrence, where the time to do the merge is polynomial in n, and that can include the constant case: if d = 0 this is O(1), if d = 1 it is linear, if d = 2 it is quadratic, and so on.

The Master Theorem says: if you have a recurrence of this form, and f(n) is a specific type of function, namely polynomial in n, then T(n) is one of these three explicit functions: T(n) is either O(n^d), or O(n^d log n), or O(n^(log_b a)), depending on three conditions. You have the values of a, b, and d, so you can check whether a is smaller than, equal to, or greater than b^d, and depending on which condition holds, you choose the corresponding function. And this is the Master Theorem. Of course, it is not an axiom of life that this holds; it is not a magic result, there is a proof for it. But the nice thing about having it with a proof is that once it is proved correct, we can use it as a tool that simplifies our life: we take a recurrence like this and, in a very straightforward manner, get the actual function by just comparing two numbers, a and b^d.

Now let's look at the Master Theorem and at the recurrence that we got from merge sort. First of all, we have to make sure that the recurrence we got is amenable to the Master Theorem: does it have this form? The answer is yes. We have a = 2, b = 2, f(n) is O(n), and c is basically 1. Is f(n) of the form O(n^d)? The answer is yes: in the case of merge sort, f(n) is O(n) = O(n^1), so d = 1. So, in the case of merge sort, what is the value of a?
The value of a is 2, because we solve both sub-problems. What is the value of b? How many sub-problems did we divide our problem into? It is 2. What is the value of d? How long does the algorithm take to do the merge? It is O(n), and the value of d that gives us O(n^d) = O(n) is d = 1. So now I need to look at a and compare it to b^d, and ask whether a is smaller, equal, or greater. 2 is not smaller than 2^1 = 2, and it is not greater; it is equal. So the condition a = b^d holds for merge sort, and therefore T(n) for merge sort is O(n^d log n) = O(n log n). This is why merge sort takes n log n time.

So in this example of merge sort we have seen divide and conquer: how we can take a problem, divide it into sub-problems, conquer each one of them, and solve the big problem. We saw that such an algorithm is naturally implemented in a recursive manner: when you divide the problem into sub-problems, you recursively call the original function on the two sub-problems. We saw that for recursive algorithms, when we analyze the running time, what naturally arises is a recurrence. And the fourth thing we saw is that for the type of recurrence we are seeing, instead of dealing with the full-blown power and complexity of recurrences and making our life hard, we have a special form of recurrence that is amenable to a very nice result called the Master Theorem: without much thought, I can just compare two values, a and b^d, and get the actual function. So I go from merge sort to this recurrence, and from this recurrence, through the Master Theorem, to this result: merge sort takes O(n log n). Remember that our naive algorithm, which takes elements and copies them to a different list, is O(n^2).
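The case analysis just described can be sketched as a small Python helper (an illustration of mine, not part of the lecture) that classifies a recurrence T(n) = a·T(n/b) + O(n^d) by comparing a against b^d:

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + O(n^d) via the Master Theorem cases."""
    # Sanity checks matching the lecture's conditions: a >= 1, b >= 2.
    if a < 1 or b < 2 or d < 0:
        raise ValueError("expect a >= 1, b >= 2, d >= 0")
    if a < b ** d:
        return f"O(n^{d})"
    if a == b ** d:
        return f"O(n^{d} log n)"
    # a > b^d: exponent is log base b of a.
    return f"O(n^{math.log(a, b):.2f})"
```

For merge sort, `master_theorem(2, 2, 1)` lands in the middle case, since 2 = 2^1, giving O(n^1 log n), that is, O(n log n).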
O(n log n) is better than O(n^2). So merge sort is O(n log n), and the question is: can we still do better than this? Notice what merge sort does: it compares elements. When it is doing the merging, it is comparing elements. So it falls into a category of sorting algorithms called comparison-based algorithms: algorithms that, when sorting, have to compare pairs of elements. One of the powerful mathematical results in the area of algorithms, and in the area of sorting, is that this is the best we can do if we employ a comparison-based sorting approach. If you look at any algorithm for sorting a list of size n, without putting any assumptions or constraints on the elements in the list, and that algorithm works by comparing elements, then it cannot do better than n log n. If you see a sorting algorithm in the literature whose running time is linear, O(n), then it cannot be a comparison-based algorithm. Or it can be wrong, of course. But if we assume it is correct, it cannot be a comparison-based algorithm.

So for comparison-based algorithms, algorithms that sort by comparing, like the first one we showed, the O(n^2) algorithm where we always went through the list comparing elements until we got the minimum, and like merge sort at O(n log n), we cannot do better than O(n log n) as long as we stick to comparison-based sorting. So merge sort takes O(n log n), and the Master Theorem allowed us to go from the recurrence to this explicit function, which we understand and whose growth we can see. And it grows much better than O(n^2).
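As a quick numerical illustration (mine, not the lecture's) of how much better O(n log n) grows than O(n^2), compare the two quantities for a few input sizes:

```python
import math

# Compare the growth of n*log2(n) against n^2 for a few sizes.
for n in [10, 1_000, 1_000_000]:
    nlogn = n * math.log2(n)
    print(f"n = {n:>9,}: n log n ~ {nlogn:,.0f}   n^2 = {n * n:,}")
```

At n = 1,000,000, n log n is roughly 20 million while n^2 is a trillion, a gap of a factor of about 50,000.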