The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).

From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming

247 ratings

Course 3 of 4 in the Algorithms Specialization

From the lesson

Week 3

Huffman codes; introduction to dynamic programming.

- Tim Roughgarden, Professor

Computer Science

Hey, so guess what? We just designed our first dynamic programming algorithm. That linear-time algorithm for computing the max-weight independent set in a path graph is indeed an instantiation of the general dynamic programming paradigm. Now, I've deferred articulating the general principles of that paradigm until now because I think they are best understood through concrete examples. Now that we have one to relate them to, let me tell you about these guiding principles. We will, in the coming lectures, see many more examples.

So the key that unlocks the potential of the dynamic programming paradigm for solving a problem is to identify a suitable collection of subproblems. And these subproblems have to satisfy a number of properties. In our algorithm for computing max-weight independent sets in path graphs, we had n+1 subproblems, one for each prefix of the graph. So formally, the i-th subproblem in our algorithm was to compute the max-weight independent set of G_i, the path graph consisting only of the first i vertices. So the first property that you want your collection of subproblems to possess is that it shouldn't be too big: it shouldn't have too many different subproblems. The reason is that, in the best-case scenario, you're going to spend constant time solving each of those subproblems, so the number of subproblems is a lower bound on the running time of your algorithm. Now, in the maximum independent set example, we did great. We had merely a linear number of subproblems, and we did indeed get away with a mere constant amount of work for each of those subproblems, giving us our linear running time bound overall.

The second property you want, and this one's really the kicker, is that there should be a notion of smaller subproblems and larger subproblems. In the context of independent sets of path graphs, this was really easy to understand: the subproblems were prefixes of the original graph, and the more vertices you had, the bigger the subproblem. So in general, in dynamic programming, you systematically solve all of the subproblems, beginning with the smallest ones and moving on to larger and larger subproblems. And for this to work, it had better be the case that, at a given subproblem, given the solutions to all of the smaller subproblems, it's easy to infer what the solution to the current subproblem is. That is, solutions to previous subproblems are sufficient to quickly and correctly compute the solution to the current subproblem.

The way this relationship between larger and smaller subproblems is usually expressed is via a recurrence, which states what the optimal solution to a given subproblem is as a function of the optimal solutions to smaller subproblems. And this is exactly how things played out in our independent set algorithm. We did indeed have a recurrence. It just said that the optimal value, the max-weight independent set value for G_i, was the better of two candidates, and we justified this using our thought experiment. Either you just inherit the max-weight independent set value from the preceding subproblem, the (i-1)-th subproblem, or you take the optimal solution from two subproblems back, from G_{i-2}, and you extend it by the current vertex, v_i. That is, you add vertex v_i's weight to the weight of the optimal solution from two subproblems back. So this is a pattern we're going to see over and over again. We'll define subproblems for various computational problems, and we'll use a recurrence to express how the optimal solution of a given subproblem depends only on the solutions to smaller subproblems. So just like in our independent set example, once you have such a recurrence, it naturally leads to a table-filling algorithm, where each entry in your table corresponds to the optimal solution to one subproblem, and you use your recurrence to fill it in, moving from the smaller subproblems to the larger ones.
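
For concreteness, here is a minimal Python sketch of this table-filling algorithm for max-weight independent sets in a path graph; the weights are a made-up example, and the function name is my own:

```python
def max_weight_independent_set_value(w):
    """Table-filling DP for the max-weight independent set of a path graph.

    A[i] holds the optimal value for the prefix G_i (the first i vertices),
    following the recurrence A[i] = max(A[i-1], A[i-2] + w[i-1]):
    either inherit the solution for G_{i-1}, or extend the solution
    for G_{i-2} by vertex v_i.
    """
    n = len(w)
    A = [0] * (n + 1)          # A[0] = 0: the empty graph has value 0
    if n >= 1:
        A[1] = w[0]            # base case: one vertex, take it
    for i in range(2, n + 1):  # fill the table from smaller to larger
        A[i] = max(A[i - 1], A[i - 2] + w[i - 1])
    return A[n]                # the original problem is the biggest subproblem

# Hypothetical weights for a four-vertex path graph (illustrative only)
weights = [1, 4, 5, 4]
print(max_weight_independent_set_value(weights))  # prints 8 (vertices 2 and 4)
```

Each table entry takes constant time to fill in, so with n+1 subproblems the whole algorithm runs in linear time, matching the analysis above.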

So the third property, you probably won't have to worry about much. Usually this just takes care of itself. But needless to say, after you've done the work of solving all of your subproblems, you had better be able to answer the original question. This property is usually automatically satisfied because in most cases, not all but most, the original problem is simply the biggest of all of your subproblems. Notice this is exactly how things worked for independent sets: our biggest subproblem, G_n, was just the original graph. So once we filled up the whole table, boom, waiting for us in the final entry was the desired solution to the original problem.

So I realize this is a little abstract at the moment; we only have one concrete example to relate to these abstract concepts. I encourage you to revisit this again after we see more examples, and we will see many more examples. Something that all of the forthcoming examples should make clear is the power and flexibility of the dynamic programming paradigm. This is really just a technique that you have got to know. Now, when you're trying to devise your own dynamic programming algorithms, the key, the heart of the matter, is to figure out what the right subproblems are. If you nail the subproblems, usually everything else falls into place in a fairly formulaic way. Now, if you've got a black belt in dynamic programming, you might be able to just stare at a problem and intuitively know what the right collection of subproblems is, and then, boom, you're off to the races. But of course, for white belts in dynamic programming, there's still a lot of training to be done. So in the forthcoming examples, rather than just plucking the subproblems from the sky, we're going to go through the same kind of process that we did for independent sets, and try to figure out how you would ever come up with these subproblems in the first place, by reasoning about the structure of optimal solutions. That's a process you should be able to mimic in your own attempts at applying this paradigm to problems that come up in your own projects.

So perhaps you were hoping that once you saw the ingredients of dynamic programming, it would become clear why on earth it's called dynamic programming, and probably it's not. This is an anachronistic use of the word "programming." It doesn't mean coding in the way I'm sure almost all of you think of it; it's the same anachronism as in phrases like "mathematical programming" or "linear programming." It refers more to a planning process. But for the full story, let's go ahead and turn to Richard Bellman himself.
He's more or less the inventor of dynamic programming; you will see his Bellman-Ford algorithm a little bit later in the course. He answers this question in his autobiography, where he talks about when he invented it in the 1950s, and he says those were not good years for mathematical research. He was working at a place called Rand. He says: we had a very interesting gentleman in Washington named Wilson, who was the Secretary of Defense. And he actually had a pathological fear and hatred of the word research. I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how we felt, then, about the term mathematical. The Rand Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the Rand Corporation. What title, what name, could I choose? In the first place, I was interested in planning and decision making, but planning is not a good word for various reasons. I decided therefore to use the word "programming." Dynamic has a very interesting property as an adjective, in that it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a congressman could object to, so I used it as an umbrella for my activities.
