[MUSIC] Hello, I am so happy that you're interested in this course, that you have been trying to do the first assignment, and that you have some questions about it that I am now going to answer.

In the assignment we said: consider the linear programming relaxation of the vertex cover problem, and let us assume that we have a solution such that every coordinate is either zero, one half, or one. And you had some questions. You asked: is this a special property, or is it always true? I said yes, it's always true. Then you said: but then, is there an algorithm for computing such a solution, or is it hard to compute? And I said yes, there is an algorithm. And now, given that you seem to be interested in this question, I thought it would be worth giving you a little proof. That is what this video is about.

The theorem is: there exists an optimal solution of the LP relaxation for vertex cover such that every coordinate x_u is either 0, 0.5, or 1, and there is a polynomial-time algorithm to construct it.

Let's do this algorithm. The algorithm first solves the LP. That gives it an optimal solution x*_u that satisfies all the constraints and minimizes the weight of the resulting fractional vertex cover. These variables are all between zero and one, but possibly they are not all equal to zero, one half, or one. So what do we do? We gradually modify the solution to have more and more coordinates equal to zero, one half, or one.

First, all the variables that already have value 0, 0.5, or 1, we freeze. We will not change them anymore; they already have the value we want. Second, consider the remaining variables. There are two possibilities: large or small. Large means strictly between 0.5 and 1; small means strictly between 0 and 0.5. So every variable is either frozen, large, or small. Now, there is an interesting property due to the constraints of the LP relaxation.
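To make this bookkeeping concrete, here is a small Python sketch (my own illustration, not code from the lecture; the helper name `classify` and the tolerance `EPS` are assumptions) that labels each coordinate of a fractional solution as frozen, small, or large:

```python
# Classify each LP variable as "frozen" (already 0, 0.5, or 1),
# "small" (strictly between 0 and 0.5), or "large" (strictly
# between 0.5 and 1). EPS guards against floating-point noise.
EPS = 1e-9

def classify(x):
    labels = {}
    for u, value in x.items():
        if min(abs(value - t) for t in (0.0, 0.5, 1.0)) < EPS:
            labels[u] = "frozen"
        elif value < 0.5:
            labels[u] = "small"
        else:
            labels[u] = "large"
    return labels
```

For example, `classify({"a": 0.4, "b": 0.7, "c": 1.0})` labels a as small, b as large, and c as frozen.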
If u is small, and we have an edge from u to v, then there are only two possibilities for v. Either v is large, as here: u has value x*_u = 0.4, v has value 0.7, and the sum is greater than or equal to 1. Or v is frozen at 1, as in 0.4 plus 1. What we cannot have is two small vertices adjacent to one another: if x*_u is strictly less than 0.5 and x*_v is strictly less than 0.5, then their sum would be strictly less than 1, and that would not be feasible. So if a vertex is small, then its neighbors cannot be small.

Now what do we do? We try two things. The first thing we try is a small local modification. We take a very small positive epsilon. We take every small value x*_u and increment it by epsilon, and every large value and decrement it by epsilon. Call the result y. What does that give us? If an edge joins a small vertex to a large one, the small value increased a little bit, going, say, from 0.4 to 0.41 if epsilon is 0.01, and the large value decreased by the same amount, so the sum does not change. It was greater than or equal to 1 before, so it is still greater than or equal to 1 afterwards; even if the sum was exactly 1 before, it is still exactly 1 afterwards. So the constraint is still satisfied. If the two vertices were both large, then both decrease by epsilon: 0.7 becomes 0.69, 0.6 becomes 0.59. The sum decreases by two epsilon, but it was strictly more than 1, because two numbers that are both strictly more than 0.5 have a sum strictly more than 1. So for epsilon small enough, the sum is still greater than or equal to 1. What this means is that for epsilon small enough, this vector y is still feasible; it is still a fractional vertex cover.

That's the first thing we tried. The second thing we try is the opposite. The small vertices will decrease by epsilon.
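The perturbation can be sketched in the same spirit (a hypothetical helper, not the lecturer's code; `direction=+1` builds y, and `direction=-1` builds the opposite move):

```python
def perturb(x, labels, eps, direction=+1):
    # direction=+1: small values go up by eps, large values go down (this is y).
    # direction=-1: the opposite move. Frozen values are left untouched.
    y = {}
    for u, value in x.items():
        if labels[u] == "small":
            y[u] = value + direction * eps
        elif labels[u] == "large":
            y[u] = value - direction * eps
        else:
            y[u] = value
    return y

def is_feasible(y, edges):
    # Every edge (u, v) must keep y[u] + y[v] >= 1 (small slack for rounding).
    return all(y[u] + y[v] >= 1 - 1e-9 for u, v in edges)
```

For x* = {a: 0.4, b: 0.7} with the single edge (a, b) and eps = 0.01, the perturbed solution is {a: 0.41, b: 0.69}, and the edge sum is unchanged.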
The large vertices will increase by epsilon. Call the result z. Again, if we have a small vertex adjacent to a large one, the sum doesn't change: minus epsilon, plus epsilon, that's 0. If we have a small vertex adjacent to a vertex with value 1, the small value decreases, but the sum was 1 plus something, so it was strictly more than 1 before and is still at least 1 afterwards, for epsilon small enough. And if both vertices were large, then both values increase, so they satisfy the constraint even more than before. So for epsilon small enough, this vector z is still feasible.

So we have two new solutions. In y, all the small values increase by epsilon and the large values decrease by epsilon. In z, we do the opposite: the small values decrease, the large values increase. They are both feasible. Now, y is feasible and x* is optimal, and this LP is a minimization, so the value of y is greater than or equal to the value of x*. Same thing for z: because z is feasible, its value is greater than or equal to the value of x*. But y and z are very close to one another. If you look at the average of y and z, you get back x*: plus epsilon and minus epsilon average out to no change at all. In other words, the value of y plus the value of z, divided by 2, is exactly the value of x*. And both values are greater than or equal to the value of x*, so they have to be equal: if two values are each at least the value of x* and their average equals the value of x*, then both must equal it. So y and z are feasible and their value equals the value of x*, which means they are also optimal solutions.

So what do we do? We take y. We take z. We think of epsilon as being small enough. What does that mean, small enough? What happens if we start from a very small, infinitesimal epsilon and keep increasing it, until something happens? What can happen?
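Before looking at how large epsilon can get, the averaging argument can be written out compactly. Writing w(.) for the linear objective value (my notation, not the lecturer's): y and z are opposite perturbations of x*, so y + z = 2x*, and linearity of w gives

```latex
% Linearity of the objective and y + z = 2x^* give the average:
\[
  \frac{w(y) + w(z)}{2} \;=\; w\!\Big(\frac{y+z}{2}\Big) \;=\; w(x^*).
\]
% Since the LP is a minimization and y, z are feasible:
\[
  w(y) \ge w(x^*), \qquad w(z) \ge w(x^*).
\]
% Two numbers that are each at least w(x^*) and average to w(x^*)
% must both equal it, so w(y) = w(z) = w(x^*): y and z are optimal.
```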
What may happen is that a small variable, which was strictly between 0 and 0.5 and which we keep increasing, eventually reaches 0.5. Or a large variable, which we keep decreasing from somewhere between 0.5 and 1, eventually reaches 0.5. Or a small variable that we keep decreasing eventually reaches 0. Or a large variable that we keep increasing eventually reaches 1. So as epsilon increases, one of these four events is going to happen. Whichever one happens first, that's when we stop. We stop, we look at the variable that just reached 0, 0.5, or 1, and we freeze it. Now we have one more variable whose value is 0, 0.5, or 1, and we have an optimal solution, either y or z depending on which of those events happened, that has all these frozen variables. And we can repeat the construction. In each iteration, one more variable gets frozen.

When does this end? It ends when every variable is frozen. At that point, every variable has value 0, 0.5, or 1, and throughout we worked only in the space of optimal solutions. So what we have done here is prove that there is an optimal solution of the vertex cover LP relaxation whose coordinates are all in 0, one half, and 1, and we gave a polynomial-time algorithm to find it.
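Putting the whole loop together, here is a sketch of the rounding procedure in Python (my own reconstruction under the lecture's assumptions: the input is an optimal, feasible fractional vertex cover, and the function name `half_integral` is made up). Note that the edge constraints never need to be re-checked: a small-large edge keeps the same sum, large values never drop below 0.5, and a small value only decreases when its neighbor is frozen at 1.

```python
def half_integral(x):
    # x: dict mapping each vertex to its LP value in [0, 1], assumed to be
    # an optimal, feasible solution of the vertex cover LP relaxation.
    # Returns an optimal solution with every value in {0, 0.5, 1}.
    EPS = 1e-9
    x = dict(x)  # work on a copy
    while True:
        small = [u for u, v in x.items() if EPS < v < 0.5 - EPS]
        large = [u for u, v in x.items() if 0.5 + EPS < v < 1.0 - EPS]
        if not small and not large:
            return x  # every variable is frozen at 0, 0.5, or 1
        # First event in the y direction (small += eps, large -= eps):
        # a small value reaches 0.5, or a large value reaches 0.5.
        eps_y = min([0.5 - x[u] for u in small] + [x[u] - 0.5 for u in large])
        # First event in the z direction (small -= eps, large += eps):
        # a small value reaches 0, or a large value reaches 1.
        eps_z = min([x[u] for u in small] + [1.0 - x[u] for u in large])
        # Stop at whichever event happens first; the averaging argument
        # guarantees the chosen perturbed solution is still optimal.
        if eps_y <= eps_z:
            for u in small:
                x[u] += eps_y
            for u in large:
                x[u] -= eps_y
        else:
            for u in small:
                x[u] -= eps_z
            for u in large:
                x[u] += eps_z
```

On a 4-cycle with unit weights and the optimal fractional cover {a: 0.4, b: 0.6, c: 0.4, d: 0.6}, one iteration moves every variable to 0.5, the half-integral optimum of value 2. Each iteration freezes at least one more variable, so the loop runs at most n times, and the whole procedure is polynomial.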