Now, we are going to design a dynamic programming solution for the so-called longest increasing subsequence problem. In this problem we're given an array, or just a sequence, of n integers, and our goal is to find a subsequence inside this sequence which is increasing and as long as possible. Note that it need not be contiguous. One example of a longest increasing subsequence in the sequence shown on this slide is 1, 3, 4, 6, 9. It is not difficult to check that it is indeed increasing, and we will soon learn a way to convince ourselves that it is the longest possible. Another longest increasing subsequence in this case is 2, 3, 4, 5, 8. This shows, in particular, that an optimal solution for this problem need not be unique.

To start designing an algorithm for this problem, let's try to analyze an optimal solution. Consider a longest increasing subsequence and its last two elements, z and x. What can be said about z and x? First of all, we know that z must be smaller than x. What is much more important for us is the following property: the prefix of the longest increasing subsequence shown here, ending at the element z, must be a longest increasing subsequence ending at this particular element. Why is that? Well, imagine that this is not the longest increasing subsequence ending at z. This means that there is some other subsequence ending at z which is longer than this prefix. But then we could take this longer subsequence ending at z and append x to it. This would give us a longer increasing subsequence in the initial array than the one we started with. What we've just shown is that the longest increasing subsequence problem possesses the so-called optimal substructure property: in any longest increasing subsequence, any of its prefixes must also be an optimal increasing subsequence ending at the corresponding element. We have shown this by the so-called cut-and-paste trick: if this prefix is not optimal, then there is a longer one, and in this case we can cut the original prefix and paste the longer one. This would give us a longer increasing subsequence, contradicting the fact that we started with an optimal solution.

This leads us naturally to the following subproblems. For all i from 0 to n minus 1, let's compute LIS(i), the length of an optimal increasing subsequence ending at element i. As we've discussed, such a subsequence, if it is not of length 1, has some previous element j. In this case, the length of an optimal increasing subsequence ending at element i is equal to the length of its prefix ending at element j plus 1, corresponding to the fact that we append element i. This gives us the following recurrence relation: LIS(i) is equal to 1 plus the maximum of LIS(j) over all j less than i such that A[j] is smaller than A[i]. So we only go through those subproblems LIS(j) whose solutions can be extended by element i. If there is no such j, that is, if there is no j less than i with A[j] smaller than A[i], then the solution to our subproblem is clearly equal to 1, meaning that the only increasing subsequence ending at element i is the subsequence consisting of just this particular element.
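To make the recurrence concrete, here is a minimal Python sketch that follows it literally; the function names are mine, and without memoization this version takes exponential time:

```python
def lis_ending_at(A, i):
    # Length of a longest increasing subsequence of A ending exactly at index i.
    best = 1  # A[i] alone is always an increasing subsequence of length 1
    for j in range(i):
        if A[j] < A[i]:
            best = max(best, lis_ending_at(A, j) + 1)
    return best

def lis_length_naive(A):
    # The optimal subsequence may end at any position, so take the maximum over all i.
    return max(lis_ending_at(A, i) for i in range(len(A)))
```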
There is also a base case: when i is equal to 0, the solution to our subproblem is equal to 1, and in this case our subsequence consists of just the zeroth element.

We're now going to convert our recurrence relation into a recursive algorithm. For this, we are going to use a table T such that T[i] stores the value of the subproblem LIS(i). The exact data structure behind the table T is not that important at this point; it could be an array, a dictionary, a hash table, or whatever. As usual, before computing LIS(i), we check whether its value is already stored in the table. If it is stored in the table, we return it immediately. We only start computing it if it is not in the table yet, and for computing it, we follow our recurrence relation. We first initialize the value to 1, to reflect the fact that we always have at least one increasing subsequence ending at element i, namely the subsequence consisting of just element i. Then we go through all j smaller than i such that A[j] is smaller than A[i], and we check whether appending i to an optimal increasing subsequence ending at j gives us a better solution than we currently have. If it does, we update our current solution. Finally, we store the result in T[i], and as the result of the whole algorithm, we return the maximum over all i of LIS(i). This is just because we don't know where the optimal increasing subsequence ends. The running time of this algorithm is quadratic. This is because there are at most n serious recursive calls; by a serious recursive call, I mean a call which is not just a table look-up. There are at most n of them, because each time there is a serious call, we store its value in our table. And the running time needed to compute each such value is proportional to n. Overall, this gives us n squared operations.

One can then notice that what we need to store in the table are the solutions to our subproblems, which are indexed by the integers from 0 to n minus 1, and these are consecutive integers. This suggests that we can use an array to implement our table T, and in fact we can fill in this array in a straightforward fashion, iteratively instead of recursively. For this, we first initialize the array, and then, in a single loop going from left to right, we fill it in with values using our previous recursive definition. What is important to note here is that each time we compute T[i], all the previous values T[j], for j less than i, are already computed. The running time of this algorithm is also quadratic, just because it consists of two nested for loops.
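A minimal sketch of this iterative version in Python (the function name and the sample input are mine, chosen just for illustration):

```python
def lis_length(A):
    # T[i] = length of a longest increasing subsequence of A ending at index i.
    n = len(A)
    T = [1] * n  # every element on its own is an increasing subsequence of length 1
    for i in range(n):
        for j in range(i):
            if A[j] < A[i] and T[j] + 1 > T[i]:
                T[i] = T[j] + 1  # extend the best subsequence ending at j with A[i]
    return max(T) if n > 0 else 0  # the answer may end at any index

print(lis_length([3, 1, 8, 2, 5]))  # 3, e.g. the subsequence 1, 2, 5
```

The two nested loops make the quadratic running time plain: for each i we scan all smaller indices j, so the total work is proportional to n squared.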