We will now discuss an alternative way of solving the longest increasing subsequence problem. It is a common feature of many dynamic programming algorithms that the most creative part in designing such an algorithm is coming up with the right notion of a subproblem. Once we have the right notion of a subproblem and the recurrence relation for subproblems, it is just a technicality to convert this recurrence relation into a recursive algorithm. In the previous part, we were able to find the right way of defining subproblems by analyzing the structure of an optimal solution. In this part, we are going to arrive at the same subproblems differently: by first implementing a brute force solution and then optimizing it, okay?

So what is the most naive way of solving the longest increasing subsequence problem? Well, let's just enumerate all possible increasing subsequences and select the longest one, right? To implement this idea, let's do the following. We start with an empty sequence, and at each step we try to extend the current sequence by every possible element, one element at a time, making a recursive call for each extension. We also keep track of the length of the current subsequence, and if it is longer than the longest one found so far, we update some global variable, for example. The resulting algorithm is going to be slow. For example, if our initial array is increasing, then we will be forced to go through all possible subsequences of this array, and there are exponentially many of them. But not to worry, we are going to optimize this algorithm later.

Here is a Python implementation. This is a recursive procedure that tries to extend the current subsequence in every possible way. Initially, the sequence that we pass to our recursive procedure is empty. Inside the recursive call, we do the following. We initialize two values, last_index and last_element. last_index is the index of the last element of our current sequence: if the sequence is empty, it is equal to -1; otherwise, it is just the last element of the sequence (recall that the sequence stores indices into the array). last_element is the value at that index: if the sequence is empty, it is equal to -inf; otherwise, it is the corresponding element of our initial array A. After initializing these two parameters, we try to extend our current sequence by one element. Namely, we go through all indices to the right of our current last index, and if the corresponding element is greater than our last element, we append it to our sequence and make a recursive call.

Okay, now let's start optimizing our code. First of all, a closer look at our code reveals the following interesting property. What we do in our recursive procedure is try to extend the current sequence optimally, and for this we are not interested in what is going on at the beginning of the sequence. All we essentially use is its last element and its length. This means that instead of passing the whole sequence to our recursive call, it is enough to pass the index of its last element and its length. So let's do this and optimize the code. Here on the slide, you actually see the optimized code.
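Since the slides themselves are not reproduced in this transcript, here is a sketch of what the brute force procedure just described might look like; the exact code on the slides may differ, and the global variable best is an illustrative name:

```python
import math

best = 0  # length of the longest increasing subsequence found so far

def extend(A, seq):
    # seq is the current increasing subsequence, stored as a list of
    # indices into A; try to extend it by every possible next element.
    global best
    best = max(best, len(seq))

    last_index = seq[-1] if seq else -1
    last_element = A[last_index] if seq else -math.inf

    # Go through all indices to the right of the current last index.
    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            extend(A, seq + [i])

A = [7, 2, 1, 3, 8, 4, 9]
extend(A, [])
print(best)  # 4: for example, the subsequence 2, 3, 4, 9
```

And here is a sketch of the optimized version, which passes only the last index and the current length instead of the whole sequence, and returns the best achievable length rather than updating a global variable:

```python
import math

def extend(A, last_index, seq_len):
    # Best length achievable by extending a sequence of length seq_len
    # whose last element is at position last_index in A.
    last_element = A[last_index] if last_index != -1 else -math.inf
    best = seq_len
    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            best = max(best, extend(A, i, seq_len + 1))
    return best

print(extend([7, 2, 1, 3, 8, 4, 9], -1, 0))  # 4
```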
So in this version, we start by computing the last element, again just by using the last index. Then we again try to extend the current sequence optimally, by making a recursive call for the sequence obtained by appending some new element. Namely, we iterate through all indices i greater than the last index, and if A[i] > A[last_index], we try to append the element at position i to our current sequence. In this case, we increase the length by 1, and we use i as the new last index. So this is the optimized code; so far so good. Instead of passing the whole sequence, we now pass only two parameters: the last index of our current sequence and its length.

Again, a closer look at the resulting code reveals that, in fact, we do not need the length of the current sequence in order to extend it optimally. In other words, no matter what the length of the sequence is, we are going to extend it, based on its last element, in exactly the same way. Slightly more formally, if we subtract some x from seq_len in the recursive call and add x to the result, we get exactly the same answer. This suggests that we can simply replace seq_len by 0, and this gives us the algorithm that you see here on the slide. So basically, what we did here is just replace the sequence length with zero.

Again, this is a recursive algorithm, and in general it is going to be slow, just because it recomputes many things again and again. But we already know how to make it more efficient: we just need to add memoization. For this, we introduce a table t and store all the results that are already computed in this table. Each time we make a recursive call, we first check whether the result is already in the table. If we add memoization to this recursive procedure, then what we get is almost the same algorithm as in the previous part (a sketch of the resulting code is given after the summary below).

Okay, now let's summarize. The most important message is that the most creative part in designing dynamic programming solutions is coming up with the right notion of a subproblem. And there are basically two common ways of finding the right subproblem. The first one, as we illustrated with the example of the longest increasing subsequence, is analyzing the structure of an optimal solution, okay? The second one, which we've just illustrated in this part, is first implementing the most naive brute force solution and then optimizing it.
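For completeness, here is a sketch of the final memoized algorithm described in this part. The table t is indexed by last_index + 1 so that the initial call with last_index = -1 fits into the table; this shift, like the function names, is an implementation choice rather than the exact code from the slides:

```python
import math

def lis_length(A):
    n = len(A)
    t = [None] * (n + 1)  # t[last_index + 1] caches the result for last_index

    def extend(last_index):
        # Length of the longest increasing subsequence that can be
        # appended after position last_index (elements > A[last_index]).
        if t[last_index + 1] is not None:  # result already in the table?
            return t[last_index + 1]
        last_element = A[last_index] if last_index != -1 else -math.inf
        best = 0
        for i in range(last_index + 1, n):
            if A[i] > last_element:
                best = max(best, 1 + extend(i))
        t[last_index + 1] = best
        return best

    return extend(-1)

print(lis_length([7, 2, 1, 3, 8, 4, 9]))  # 4
```

Since each of the n + 1 subproblems is solved only once and takes O(n) time to combine, the memoized version runs in O(n²) time, matching the algorithm from the previous part.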