0:00

Okay, so discrete optimization, part six, the last part. What I am going to talk about today is where this dual is coming from and why it is useful. We saw that it was beautiful, but we had no idea where it magically came from, and that's what I want to talk about in this lecture. Then I want to tell you a little bit about what it means, how you can understand these things, what the meaning of these dual variables is, and finally what it's useful for.

So, first: understanding where it came from.

This is an example that I took from Chvátal's book.

If you have to read two books on linear programming, this has to be one of them. So, what you see there is a linear program: I'm maximizing my profit, I'm maximizing this objective function subject to these constraints. And the question I'm asking is: can I bound this maximum? Is there an optimistic evaluation of that maximum?

What I'm showing you here is one such bound. I'm telling you that the maximum value of this objective function is at most 110. Why? Look at these two constraints: this one is 55, this one is 110. Double. Why? Because I took that constraint and doubled every one of its coefficients. So instead of 5, I get 10; instead of 1, I get 2. Why did I do that? Because as soon as I do that, this 10 here is greater than the corresponding coefficient in the objective function. This value is also greater; all the values here are greater than the coefficients of the objective function. Therefore, for any feasible values of the variables, this left-hand side has to be at least the value of the objective function, by definition. So in this particular case I know that 110 has to be an upper bound: I will never be able to get higher than 110. Now, this is a terrible upper bound, of course, but I'm going to show you better ones.

So, look at these two things here. I take these two constraints and I add them together. If I add them together, I get the constraint that you see here: 4x1 plus 3x2 and so on, less than or equal to 58. Now, once again, you can do the same reasoning. Look at all the coefficients there: once again, they are always greater than or equal to the coefficients inside the objective function. This 4 is the 4 of the objective function; the next one is 3 instead of 1. So I know that if I sum the objective terms, they will be smaller than the sum of these terms. I continue the same way: the 5 in the objective function becomes a 6, and the 3 remains a 3. So I know that this left-hand side is always going to be at least the value of the objective function, whatever values I find for the variables. But this time the right-hand side is 58, a much smaller value. So I know that 58 is already a bound on this objective value. Wow.
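As a quick check, both bounds can be verified mechanically. A small sketch in Python; the full constraint data below is an assumption on my part (it matches Chvátal's example, while the lecture only shows the objective row, the doubled row, and the summed row explicitly):

```python
import numpy as np

# Assumed data (Chvátal's example):
# maximize 4*x1 + 1*x2 + 5*x3 + 3*x4  subject to  A @ x <= b,  x >= 0
c = np.array([4, 1, 5, 3])
A = np.array([[ 1, -1, -1,  3],   # <= 1
              [ 5,  1,  3,  8],   # <= 55
              [-1,  2,  3, -5]])  # <= 3
b = np.array([1, 55, 3])

# Bound 1: double the second constraint, i.e. multipliers y = (0, 2, 0).
y1 = np.array([0, 2, 0])
assert np.all(y1 @ A >= c)            # every combined coefficient dominates c
print("bound from doubling:", y1 @ b)  # 110

# Bound 2: add the second and third constraints, i.e. y = (0, 1, 1).
y2 = np.array([0, 1, 1])
assert np.all(y2 @ A >= c)            # 4>=4, 3>=1, 6>=5, 3>=3
print("bound from summing:", y2 @ b)   # 58
```

Any nonnegative multipliers whose combined coefficients dominate the objective give such a bound; these are just two particular choices.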

Okay. So what I'm showing you here is that by combining these constraints, and by making sure that the combination satisfies a basic condition, namely that the combined coefficients have to be greater than or equal to the coefficients of the objective function, I can bound the value of this objective function. And that's very nice. So, in a sense: yeah, yeah, yeah, but what is the relationship to the dual?

Look. What I'm going to do is take an arbitrary positive combination, it's called a conic combination, of these constraints. To do that, I'm going to use y1, y2, and y3 as the coefficients of the way I want to combine these inequalities. [laughs] Think about it: y1, y2, y3, does that remind you of something?

So I take these y1, y2, y3, and I combine the various constraints by multiplying each one. And I know that when I do this combination, the combined left-hand side has to be smaller than or equal to this particular value here, because this is the combination of the right-hand sides of the constraints: if I multiply the left-hand sides by the y's, the result has to be smaller than or equal to the same combination of the right-hand sides. So I know that if I multiply these constraints by y1, y2, y3, I have to be smaller than this expression. And once again: left-hand side, hey, objective function. I knew that these combined values would have to be greater than the objective function. But what I want to find is the tightest upper bound. So I minimize: I want to find the y's that minimize the value of this expression, the expression that is bounding the objective from above. Now, I told you before, we have to satisfy some constraints: when I look at these coefficients and multiply them by the y's, I have to make sure that they are greater than the corresponding coefficients in the primal, right? So, in a sense, what are these? Well, these are the dual constraints. And this is the dual objective function. So the whole thing here is just a trick to bound the value of the primal.
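Written out, "find the best multipliers" is itself a linear program: minimize the combined right-hand side b@y subject to the combined coefficients dominating the objective, A.T @ y >= c with y >= 0. A sketch with scipy, using the same assumed example data as before (scipy's linprog only takes <= rows, so the >= constraints are negated):

```python
import numpy as np
from scipy.optimize import linprog

# Assumed data (Chvátal's example), as in the earlier sketch.
c = np.array([4, 1, 5, 3])
A = np.array([[1, -1, -1, 3], [5, 1, 3, 8], [-1, 2, 3, -5]])
b = np.array([1, 55, 3])

# Dual: minimize b @ y  subject to  A.T @ y >= c,  y >= 0.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=(0, None), method="highs")
print(dual.fun, dual.x)  # 29.0 with y = (11, 0, 6): tighter than 110 or 58
```

The multipliers (0, 2, 0) and (0, 1, 1) from before are just feasible points of this program; its optimum is the tightest bound of this kind.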

So, the dual is basically bounding these values. And that's what I told you before, right? The other day, when the primal was minimizing, the dual was maximizing, and the dual was always a lower bound on the value of the primal. Here I'm maximizing, so of course the dual is minimizing, and I'm telling you: hey, the dual here always has to be an upper bound on the value of the primal. This is a systematic derivation of why that is true. Of course, once you realize, ooh, this is an interesting linear program, you can start studying the properties of this thing. And that is essentially what I showed you last time: the value of the objective function of the primal at optimality is equal to the objective value of the dual at optimality. So this value here that you are minimizing is going to be exactly the optimal value of the primal. That's the derivation; that's where this dual is coming from, essentially.

Now I'm going to talk about complementary slackness, and this starts to convey what these dual variables mean. In a sense, this is a very interesting topic in mathematical programming in general; here it is applied to linear programming only. What these equations are saying is that if you have x star and y star, optimal solutions to the primal and the dual, then these conditions have to be satisfied. Let me go over that. Essentially, look at a constraint of the primal; it's an inequality, right? The condition says that either this inequality is tight at the optimal solution, or the corresponding dual variable has to be zero. So this is linking the primal optimal solution and the dual optimal solution: either that constraint in the primal is tight, or the dual variable is zero. And the second condition expresses the same thing for the dual: either a constraint of the dual is tight, or the corresponding primal variable is zero. Okay, very interesting. A very interesting relationship between these things, okay?
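Numerically, the conditions say that the elementwise products of dual variables with primal slacks, and of primal variables with dual slacks, all vanish at optimality. A check on the assumed example data and its optimal solutions (both assumed, obtained by solving that example):

```python
import numpy as np

# Assumed data (Chvátal's example).
c = np.array([4, 1, 5, 3])
A = np.array([[1, -1, -1, 3], [5, 1, 3, 8], [-1, 2, 3, -5]])
b = np.array([1, 55, 3])

x = np.array([0, 14, 0, 5])   # assumed primal optimum, value 29
y = np.array([11, 0, 6])      # assumed dual optimum, value 29

primal_slack = b - A @ x      # [0, 1, 0]: constraint 2 is not tight
dual_slack = A.T @ y - c      # [1, 0, 2, 0]: dual rows 1 and 3 are not tight

# Either a primal constraint is tight or its dual variable is zero:
assert np.all(y * primal_slack == 0)
# Either a dual constraint is tight or its primal variable is zero:
assert np.all(x * dual_slack == 0)
```

Notice the pattern: constraint 2 has slack, so y2 = 0; variables x1 and x3 are zero, exactly where the dual constraints have slack.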

So, you can guess which values are zero on one side depending upon which constraints are tight on the other side. Now, let me give you an economic interpretation of this. Once again, I'm looking at this linear program, and we are maximizing, so think of maximizing profit. Now you have constraints; for instance, they could be capacity constraints on your production. So this bi over there is telling you: this is as much as you can produce. You want to maximize your profit, this is what you can produce, but you are limited in what you can produce.

Now, look at this ti there. What I'm asking is: ooh, assume that I can increase my production capacity a tiny bit. What's going to happen? This is essentially what these dual variables are telling you. If you increase your capacity a little bit, the value of the objective function is going to change in a tiny fashion. It is still going to be at least as good as the primal objective value z star that you see there, but then you're going to increase it by ti times yi star, for every one of these dual variables. Which basically means that if you increase the capacities in these various capacity constraints, I can increase the value of the objective function by multiplying each increase by the value of the corresponding dual variable. So the dual variable is, in a sense, telling you how much relaxing that particular capacity constraint brings to the objective function. Now, this is only valid when ti is sufficiently small, right? Because at some point, if you make ti sufficiently big, you may change basis, and so on and so forth.
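For a small enough ti, this sensitivity claim can be checked by re-solving: increasing bi by ti moves the optimum by ti times yi star. A sketch with scipy, again on the assumed example data, with y star = (11, 0, 6) the assumed dual optimum:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed data (Chvátal's example).
c = np.array([4, 1, 5, 3])
A = np.array([[1, -1, -1, 3], [5, 1, 3, 8], [-1, 2, 3, -5]], dtype=float)
b = np.array([1, 55, 3], dtype=float)
y_star = np.array([11, 0, 6])  # assumed dual optimum

def zmax(rhs):
    """Optimal value of: maximize c @ x s.t. A @ x <= rhs, x >= 0."""
    return -linprog(-c, A_ub=A, b_ub=rhs, bounds=(0, None), method="highs").fun

t = 0.01  # small enough that the optimal basis does not change
for i in range(3):
    bumped = b.copy()
    bumped[i] += t
    # each difference matches t * y_i*: 0.11, 0.0, 0.06 (up to rounding)
    print(i, zmax(bumped) - zmax(b), t * y_star[i])
```

Constraint 2 has slack at the optimum, so extra capacity there is worthless: its shadow price y2 is zero, which is complementary slackness again.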

If you change ti a lot, you may change fundamentally the nature of the solution. But when you stay very close, this is what the dual variable is telling you, okay? So, one last thing that I have to tell you: duality in the tableau. Remember, when we presented the tableau, you always start with a basis, okay?

So, when you start with a basis, you know that at the end of the day, at the optimal solution, this is the value of the objective function. Now, look at the first basis: it's either the artificial variables, or a basis that you found after the first phase, something like that. You have this basis, and assume these are, let's say, artificial variables. The cost of these things is zero in the objective function. So what do you know? You know that when you look at this formula, the cj has to be zero. You also know that this matrix here, Aj, is the identity matrix, a unit thing. So essentially, what remains of this formula is the value of these guys: cB AB inverse, which you recognize as the simplex multipliers, right? So what you will find in those locations of the tableau are the simplex multipliers, which are the dual variables.
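This can be replayed with plain linear algebra. A sketch on the assumed example data: add slack columns, take the optimal basis (assumed here to be x2, x4, and the second slack, from the optimum x = (0, 14, 0, 5)), and the simplex multipliers cB times AB inverse come out as exactly the dual optimum:

```python
import numpy as np

# Standard form with slacks s1, s2, s3; columns x1..x4, s1..s3 (assumed data).
A = np.array([[1, -1, -1, 3], [5, 1, 3, 8], [-1, 2, 3, -5]], dtype=float)
A_full = np.hstack([A, np.eye(3)])
c_full = np.array([4, 1, 5, 3, 0, 0, 0], dtype=float)

# Assumed optimal basis: x2 (index 1), x4 (index 3), s2 (index 5).
basis = [1, 3, 5]
A_B = A_full[:, basis]
c_B = c_full[basis]

# Simplex multipliers: the y solving y @ A_B = c_B, i.e. y = c_B @ inv(A_B).
y = np.linalg.solve(A_B.T, c_B)
print(y)  # [11. 0. 6.]: exactly the dual optimum
```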

So essentially, if you solve the primal, you can always look inside your tableau, and you will find the values of the dual variables: the solution to the dual. So not only do you know that the dual has a particular cost, you can actually retrieve the values of the dual variables at optimality. Now, why is this thing useful?

Okay, so we have this beautiful theory. We know where it comes from; we know what it means from an economic standpoint, for instance. But what does it correspond to in practice? So, let's look at this example. We have a primal which is feasible, okay?

Â 11:41

The dual is not feasible until you get to optimality. You start with your primal, you start minimizing. You have to think of it this way: the primal is always feasible as you move from point to point, and during all these steps the dual is infeasible. At some point you get to the optimal solution, and the dual becomes optimal and, at the same time, feasible. When you do the dual simplex, it's the other way around: the dual is always feasible and the primal is not, until you reach optimality.

So, in a sense, what I'm telling you is that you can solve the primal, and at optimality you have a solution to both the primal and the dual. Or you can solve the dual, and at optimality you also have a solution to both the primal and the dual. So, which do you do?

Well, there is one case where it's actually very useful to use the dual. Assume that you optimize the primal: ta-ta-ta-boom, you have the optimal solution. And then somebody comes and says, ooh, I forgot to tell you, I have this one extra constraint. If the constraint is already satisfied, it's not interesting. But assume the constraint is not satisfied; what's going to happen? Well, the dual is still going to be feasible. Why? Because if you look at what this constraint becomes in the dual, you flip it, and it becomes a new dual variable. If you add that variable to the dual, the dual is of course still feasible: you can give a value of zero to the new variables and you keep the same solution, right? The primal is not feasible, its new constraint is violated. So what you can do is say: ooh, but my dual is feasible, so I can optimize the dual. [SOUND] I get to optimality, and at that point I know that the primal is going to be feasible again.

So, in a sense, if I'm adding a constraint after running a primal algorithm, and that constraint is violated, I don't have a basic feasible solution to start from in the primal. But I have one in the dual. I can optimize the dual, get back to optimality, and there I know that the primal is going to be feasible and optimal. So if I have an optimal solution to the primal, [SOUND] I add a constraint, and that constraint is violated, I can get back to optimality and feasibility by using the dual algorithm.
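The "add a constraint, restart from the dual" idea fits in a few lines. A sketch on the assumed example data; the added constraint x2 <= 10 is made up for illustration:

```python
import numpy as np

# Assumed data (Chvátal's example) and its assumed optimal solutions.
c = np.array([4, 1, 5, 3])
A = np.array([[1, -1, -1, 3], [5, 1, 3, 8], [-1, 2, 3, -5]])
x_star = np.array([0, 14, 0, 5])   # primal optimum
y_star = np.array([11, 0, 6])      # dual optimum

# A forgotten constraint that the current optimum violates (hypothetical):
a_new, b_new = np.array([0, 1, 0, 0]), 10          # x2 <= 10
print(a_new @ x_star <= b_new)  # False: the primal optimum is now infeasible

# The dual gains one variable (one per primal constraint); extending the
# old dual optimum with a zero keeps A_new.T @ y >= c, so it stays feasible.
A_new = np.vstack([A, a_new])
y_ext = np.append(y_star, 0.0)
assert np.all(A_new.T @ y_ext >= c)  # dual-feasible: restart dual simplex here
```

So the old dual solution is a valid warm start for the dual simplex on the augmented problem, which is exactly the situation exploited above.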

Very nice, okay? You will see why this is very nice in the next couple of lectures, when we do mixed integer programming.

Now, the other thing we can do is run the primal and the dual on exactly the same tableau. So remember, we have this primal over there. And what is the dual? The dual takes the objective function of the primal and puts it as a right-hand side. Where is my right-hand side? Here, right? And we do the same thing in the other direction: the right-hand side of the primal becomes the objective function of the dual. Then you have this big matrix, and the only thing you do is [SOUND] transpose it. So that's what you have here: these two things are the same. So when you have the primal, assume I don't have this explicit representation of the dual; I see only the primal. But if I want to see the dual in the primal, I do this, and now I see the dual. So, in a sense, I don't need this other tableau, because it's already there. It's just a different way of looking at it. So, now look at this.

If I run the primal algorithm, the primal simplex, what do I do? I select an entering variable and then a leaving variable. That's the primal algorithm. If I look at the same step on the other side, what does it mean? In the dual, it would mean selecting a leaving variable and then an entering variable. It's just another way of doing it, and it's fine, because it performs the pivoting operation in exactly the same fashion. And when I do the dual simplex, if I select an entering variable and then a leaving variable in the dual, what does it correspond to? Well, the leaving variable on this side is the entering variable on the primal side, and vice versa. These things are the same: if I pivot here, I have the same pivot there; if I pivot there, whoops, I have the same pivot here.

So when I add a new constraint and it's violated, and I'm looking at the dual, what do I do in the dual? I'm selecting a variable to enter the basis, and another variable to leave the basis. Which basically means that here, on the primal side, I'm also selecting a variable, but this time a primal variable, to enter the basis, and another variable to leave the basis. So when my constraint is violated in the primal and I look at the dual, the dual is basically fine: I'm doing a pivot operation here, and the same thing happens there. But of course, I never have to use both tableaus. I can always use one of the tableaus and do all of these things; I just have to, you know, turn my head, and then it's fine. So, in a sense, that tells you that you could do some steps in the primal, some steps in the dual, and all kinds of beautiful things, using the same tableau.

So, this is the relationship between the primal and the dual. These things you can exploit, and we will exploit them; you'll understand more of this when we do mixed integer programming. But the fact that we have these two views is really very useful. So next time, we're going to move to mixed integer programming, which is going to be the third big approach to combinatorial optimization, and it will obviously rely on linear programming. Thank you very much.
