Okay, so this is Discrete Optimization, Linear Programming, Part V.

So, what we are going to do today is something really, really beautiful: we're going to see duality. Okay?

At a high level, you can think of it as looking at the same thing from two different angles, and seeing very, very different things. Most of you have probably seen the picture where you can see both a beautiful young lady and an old woman, right? So, this is duality: looking at the same thing, but from a different angle.

And to motivate this lecture, I want to tell you the story of Dantzig and von Neumann.

So, when Dantzig invented linear programming, he went to see John von Neumann, this genius mathematician and computer scientist with an encyclopedic knowledge of everything, right? He went to see him and started explaining linear programming to von Neumann. And von Neumann, typically a patient man, was kind of impatient that day, and said, you know, get to the point, get to the point. So Dantzig said, oh, he wants me to get to the point, and started going fast. And then von Neumann stopped him and started deriving, right there, all the results, the things I'm going to show you today.

And Dantzig was like, what's happening? This guy is just deriving all this on the fly. But von Neumann was also a very kind person, and he said at the end, I just don't want you to think that I'm deriving this on the fly. This is very, very similar to something I'm doing in game theory, and therefore I'm postulating that these two things are the same. And they are essentially the same, okay?

And so, that's duality; that's what I'm going to show you today. I'm going to show you these results on duality theory, which are basically about looking at linear programming in two different ways.

Okay? So, what I'm going to show you today is basically two linear programs. The primal, which is essentially what we have been seeing all along: minimizing an objective function cx subject to the inequalities Ax greater than or equal to b, with all the variables being non-negative.

And then I'm going to talk about another linear program, which we're going to call the dual. The dual is going to maximize an objective function yb, where the y's are the variables; the y's are going to be called the dual variables, and the x's I call the primal variables. Primal, dual, okay? And then we also have a set of linear inequalities: yA smaller than or equal to c. Instead of being greater than or equal, it's going to be smaller than or equal. Same matrix, different variables: primal variables, dual variables. And the y's are also going to be non-negative. Okay?
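Written out side by side, the two programs just described are:

```latex
\textbf{Primal:}\quad \min_{x}\; cx \quad \text{s.t.}\quad Ax \ge b,\;\; x \ge 0
\qquad\qquad
\textbf{Dual:}\quad \max_{y}\; yb \quad \text{s.t.}\quad yA \le c,\;\; y \ge 0
```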

So, let me show you an example here, so that you can see this on a real example. This is the primal, this is the dual, right? The primal variables here are x1, x2, x3, and you see their coefficients: 3, 2, 4. Then you see the two constraints, and you see the right-hand side: in this particular case, 2 and 5.

This is the dual. Instead of minimizing, we are maximizing. We have two variables, the dual variables, and then we have three inequalities here. I'm not showing the non-negativity constraints here; they are implicit. Okay?

So, primal, dual, you see these things. Now, these two guys have a very strong relationship.

So, I'm going to show you how we obtain the dual. The first thing we do is introduce these dual variables, y1 and y2. In a sense, with every constraint, we are now associating a dual variable: this is y1 and y2. And of course, the dual is only expressed in terms of y1 and y2. The key point here is to remember that these two variables are associated with the constraints: there are as many dual variables as there are constraints in the primal.

Okay, now, how do we obtain the objective? Right, we transform the min into a max, but the coefficients of the variables, where do they come from? Well, they come from the right-hand sides of the inequalities in the primal. You saw 2 and 5 there, and you see now that the objective function is maximizing 2y1 plus 5y2. So, in a sense, you take the right-hand side, you flip it, and you get the objective function of the dual.

That's the first step, okay? Now, how do we get the right-hand sides of the dual? Well, you take the objective function of the primal. The objective function of the primal is what? 3, 2, 4, in terms of the coefficients. And now you see 3, 2, and 4 as the right-hand sides of the dual. So, in a sense, the right-hand side of the primal becomes the objective function of the dual, and the objective coefficients of the primal become the right-hand sides of the dual.

Okay? And now the trick is almost played, right? The only thing that we need to do is to look at the matrix of the constraints that you see over there. We look at the coefficients of the first column, the coefficients for x1. They are 2 and 2, and what you get is essentially the first constraint: 2y1 plus 2y2, smaller than or equal to the value that we had already computed, which is 3.

Okay, how do we obtain the second constraint? Well, you take the column associated with x2 over there. The coefficients for x2 are what? 1 and minus 1. And therefore these are the coefficients of the dual variables in the second constraint: 1 for y1 and minus 1 for y2, and you get the second constraint.

The third constraint is easy, right? There is only one coefficient, and it is the coefficient for the dual variable y2. So, what you see is that the last constraint is simply y2 smaller than or equal to 4.

So, in a sense, it's beautiful, right? The objective function becomes the right-hand side; the right-hand side of the primal becomes the objective function of the dual. And then this matrix: the columns of the primal become the rows of the dual. And that's how you express the dual.
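The recipe just described can be sketched in a few lines of Python (a sketch of my own, not course code; the data below is the lecture's example reconstructed from the coefficients read out above):

```python
def build_dual(c, A, b):
    """Dualize  min c.x  s.t.  A x >= b, x >= 0
    into       max y.b  s.t.  y A <= c, y >= 0.

    Returns (dual objective coefficients, transposed matrix, dual right-hand sides).
    """
    # Right-hand sides of the primal become the dual objective coefficients.
    dual_obj = list(b)
    # Columns of A become rows of the dual constraint matrix.
    A_T = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    # Objective coefficients of the primal become the dual right-hand sides.
    dual_rhs = list(c)
    return dual_obj, A_T, dual_rhs

# The example from the lecture: min 3x1 + 2x2 + 4x3
c = [3, 2, 4]
A = [[2, 1, 0],      # 2x1 +  x2       >= 2
     [2, -1, 1]]     # 2x1 -  x2 + x3  >= 5
b = [2, 5]

dual_obj, A_T, dual_rhs = build_dual(c, A, b)
# dual: max 2y1 + 5y2  s.t.  2y1 + 2y2 <= 3,  y1 - y2 <= 2,  y2 <= 4,  y >= 0
```

The j-th dual constraint reads off the j-th column of the primal, exactly as in the walkthrough above.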

So, once again, this is just the notation, but you see the primal variables and you see the dual variables. Obviously, if you have two constraints here, you have two dual variables; if you have three variables over there, you have three constraints. So, you get this nice matching. It's like you take it, flip it on its head, and you get the dual. Okay, so that's what you get.

This is even more explicit when you use matrix notation. Remember, last lecture I told you we can view all these things in terms of matrices. This is what you see here as well. So, this is the primal.

You see the primal variables there, x1, x2, x3, and you see the dual variables there, y1, y2. These are the coefficients of the objective function for the primal. Where do you find these guys in the dual? Well, they will be the right-hand sides of the constraints in the dual, okay?

Now, you see basically the matrix here: the inequality is basically the big matrix A. You see the primal variables here, and what you see here is the right-hand side. These right-hand sides are obviously going to the objective function of the dual, and this matrix here is essentially going to be there as well. The only difference between the columns and the rows comes from the fact that here we are multiplying this matrix by the primal variables, and there we are taking the dual variables and multiplying the matrix; we just switch the side, okay?

So, this is, once again, primal and dual in matrix form: the objective function becomes the right-hand side, the right-hand side becomes the objective coefficients, and then you have the matrix that you basically switch around. So, that's duality.

Now, why is this interesting, in any sense? This is the beautiful property, right? You see the primal, you see the dual. And the basic result is that if the primal has a feasible solution and an optimal solution, then the dual also has a feasible solution, and it has an optimal solution with the same cost. So, in a sense, these two problems have optimal solutions with the same cost.

Now, you don't really know yet why this is useful, but I will show you next time. This is the basic result: the optimal value of one is going to be the optimal value of the other. And I'm just going to prove it to you.

So, take an x and a y. The first thing that I'm going to show you is that the cost of the primal is always higher than the cost of the dual. Think about it: in the primal we are minimizing, so we start very, very high and then we go down, down, down. The dual is maximizing: we start at the bottom and go up, up, up. And the first thing that I'm basically going to show is that the primal is going down, the dual is going up, but the dual is always lower than the primal. So, they do this: they kind of converge toward each other, okay? And this is the proof, which is very simple; it's a very simple algebraic manipulation.

So, let x and pi be solutions; there is a reason why I am using pi, you'll see later on. x and pi are feasible solutions to the primal and the dual, respectively. Okay?

Now, I look at the cost, cx. This is the cost of the primal. And now I'm going to use the fact that these guys have to satisfy some conditions, right? Essentially, what do I know? Look at this thing here. I know that c has to be greater than or equal to y times A for any y which is a feasible solution to the dual. And here I have pi, which is a feasible solution to the dual, so I know that c has to be greater than or equal to what? pi times A. And that's what you see there: cx has to be greater than or equal to pi A x. I'm just using the fact that this has to be true for a feasible solution to the dual.

And the last thing I need to do is use the fact that x is a feasible solution to the primal: any solution x which is feasible for the primal has to satisfy Ax greater than or equal to b. Now, I have this beautiful Ax over there; I can replace it by b, and what do I get? I get pi b.

Now, pi b is the objective function of what? Of the dual, right? So, that's the objective value of the feasible solution pi. And I have cx over there, which is the value of the solution to the primal. So I have this relationship: the objective value of the primal is always greater than or equal to the objective value of the dual. So, they do this, okay?

So, that's the first result: the primal objective function is always greater than or equal to the dual objective function. Okay?
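The chain of inequalities cx >= pi A x >= pi b can be checked numerically on the running example. The two feasible points below are my own picks (any feasible pair works):

```python
# Data of the running example: min 3x1 + 2x2 + 4x3
#   s.t. 2x1 + x2 >= 2,  2x1 - x2 + x3 >= 5,  x >= 0.
c = [3.0, 2.0, 4.0]
A = [[2.0, 1.0, 0.0],
     [2.0, -1.0, 1.0]]
b = [2.0, 5.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Feasible (but not optimal) points, my own picks:
x  = [3.0, 0.0, 0.0]   # Ax = (6, 6) >= (2, 5), x >= 0
pi = [0.5, 1.0]        # pi A = (3, -0.5, 1) <= (3, 2, 4), pi >= 0

Ax  = [dot(row, x) for row in A]                      # A x, used in the middle term
piA = [dot(pi, [row[j] for row in A]) for j in range(3)]

# The two facts used in the proof:
assert all(piA[j] <= c[j] for j in range(3))   # dual feasibility: pi A <= c
assert all(Ax[i] >= b[i] for i in range(2))    # primal feasibility: A x >= b

# Weak duality chain: c x >= pi A x >= pi b   (here 9.0 >= 9.0 >= 6.0)
assert dot(c, x) >= dot(pi, Ax) >= dot(pi, b)
```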

So, obviously, since the primal has a feasible solution, the dual cannot be unbounded. Once again, we're going down with the primal; the dual cannot just keep going up past it, so it has to be bounded. That's the second fact.

And then the third fact; well, there will be two more facts. The third fact is going to be that, at optimality of the primal, I have a feasible solution to the dual. Okay?

And this is this pi, of course: the simplex multipliers. Okay, why? Because I know that, at optimality in the primal, all the reduced costs in the basic feasible solution have to be non-negative. These costs are re-expressed in this particular fashion; I've shown you that last time. So, I know that c minus pi A has to be greater than or equal to 0, which basically means that c has to be greater than or equal to pi A, which is essentially the condition that we have seen here: the condition for being a feasible solution to the dual. So, what do I know? I know that the simplex multipliers pi are a feasible solution to the dual. If I solve the primal, I already have a feasible solution to the dual, okay?

So now, with these two facts, I can actually show you that the primal and the dual at optimality have the same cost. I take an optimal solution to the primal, x star. From everything we have seen in the previous lectures, I know that the value of that particular optimal solution is given in terms of a basis, with these values assigned to the basic variables and all the non-basic variables being 0. So, I know that x star, on the basic variables, has to be AB inverse times b: the matrix of the basis, which I have to invert, times b, the right-hand side, when I solve the system.

Now, I told you that the dual has a feasible solution which is obtained from the simplex multipliers; that is what this is expressing. And the simplex multipliers are simply expressed as cB times AB inverse.

And therefore, I can now make this interesting derivation. The objective value of the dual is the value that you see there: y star times b. Now, y star, I just gave you the formula: it's cB AB inverse. So, I have cB AB inverse times b. But AB inverse times b is what? The values of the basic variables in the primal. So, I get cB x star.

So, what I get here is that this feasible solution to the dual has exactly the same cost as the optimal solution to the primal. I have one feasible solution to the dual with the same cost as the primal, and I know that no solution of the dual can be better. So, what you see here is that the primal is going down, the dual is going up, and they meet at this optimal point.
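For the running example, this derivation can be reproduced by hand. In the sketch below, the standard form (2x1 + x2 - s1 = 2, 2x1 - x2 + x3 - s2 = 5) and the optimal basis {x1, s1} are my own reconstruction; neither is spelled out in the lecture:

```python
# Columns of x1 and s1 in the two standard-form equations, and their costs.
AB = [[2.0, -1.0],
      [2.0,  0.0]]
cB = [3.0, 0.0]        # objective coefficients of x1 and s1
b  = [2.0, 5.0]

# Invert the 2x2 basis matrix directly.
det = AB[0][0] * AB[1][1] - AB[0][1] * AB[1][0]
AB_inv = [[ AB[1][1] / det, -AB[0][1] / det],
          [-AB[1][0] / det,  AB[0][0] / det]]

# Basic variable values: xB = AB^-1 b  ->  x1 = 2.5, s1 = 3 (both >= 0, so feasible).
xB = [sum(AB_inv[i][j] * b[j] for j in range(2)) for i in range(2)]

# Simplex multipliers: pi = cB AB^-1  ->  (0, 1.5), a feasible dual solution.
pi = [sum(cB[i] * AB_inv[i][j] for i in range(2)) for j in range(2)]

primal_value = sum(cB[i] * xB[i] for i in range(2))   # cB xB = 7.5
dual_value   = sum(pi[j] * b[j] for j in range(2))    # pi b  = 7.5
```

The two values coincide at 7.5, which is exactly the "they meet" picture.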

Okay, so that's basically the result. You have the primal, you have the dual; these two programs are closely related, you can derive one from the other, and they have the same objective value at optimality.

So far, I've shown you the primal-dual relationship in the restricted case.

This is the general recipe for obtaining the dual from the primal, in full generality. In full generality, I'm doing two things: I'm allowing equations, not only inequalities, and I'm allowing some variables to not be constrained to be non-negative; they can take any arbitrary real value. Once you do that, you can derive the dual in essentially the same way. There will be some constraints which are equalities, some variables which are constrained to be non-negative, and some which are not.

How do we get them? Well, look at this thing. Every equality will have a dual variable associated with it, and that dual variable is not going to be constrained: it doesn't have to be non-negative. If you have an inequality, it will also have a dual variable, and that dual variable has to be non-negative; that's what I've shown you before.

And the same thing happens for the primal variables. If a primal variable is non-negative, the constraint we derive, like I've shown you before, has to be an inequality. And if the variable is unconstrained, we will get an equality. So, very simple: equalities give unconstrained dual variables, inequalities give non-negativity constraints. That's the general form of the dual.

Now, there are some very interesting properties of this dual and primal. The first is basically telling you that the friend of my friend is my friend: the dual of the dual is the primal. So, if you take the dual, and then reapply the dual to that dual, you get back to the primal. That's a good property to have; otherwise it would be crazy, right?

Now, the other thing that you have to understand is what the relationship is when these things are feasible, infeasible, unbounded, and so on. We already know that if the primal has an optimal solution, then the dual has an optimal solution with the same value.

Now, what if the primal is unbounded? This is the picture I've been showing you: when they are both feasible and bounded, you have the primal going down, you have the dual going up, and they meet. It's like in Pac-Man when they go to the next level: they meet, okay?

Now, if the primal is unbounded, what is happening? This guy is going down, down, down, forever. And we know that the cost of the primal always has to be greater than the cost of the dual. Therefore, the dual cannot go up at all: you cannot have any fixed cost for a dual solution, because the primal would go right through it. And therefore, the only possibility is that the dual has to be infeasible. So, we know that if the primal is unbounded, the dual has to be infeasible, okay?

Now, we have the last column that we need to consider: what if the primal is infeasible? If the primal is infeasible, can the dual be infeasible? And can the dual be unbounded?

By analogy with the previous cases, you can already reach one of the conclusions. But here is a very simple example which shows the first possibility. This is the primal: I minimize x1; it cannot be simpler than that. And there are two constraints. These two constraints are very interesting because, if you look at the left-hand sides, they are the negation of each other, and they both have to be greater than or equal to 1. So, this is clearly infeasible, right?

And this is the dual. What do you see in the dual? The primal variables x1 and x2 are not constrained, so you get equations inside the dual. And the left-hand sides of these two equations are the same, but the constants here are different. So, once again, it's infeasible.

Okay? And then, there is the other case.

It can also be the case that the primal is infeasible, but the dual is unbounded.

And the only thing that I did here, I kept essentially the same constraints on

the top, so it's still infeasible. But I added more constraints saying that

the variables have to be non-negative. When the variables are non-negative,

remember the impact of that is changing equations into inequalities.

And that's what you see there. So now, essentially you have these two

guys here, which have the same left hand side, okay.

But now they are smaller or equal to 1 and 0, so you don't force them to be

feasible anymore. Okay.

So, these constraint are easy to satisfy, right.

So, you make y a little bigger than, than, than y2, a little bit bigger than

y1, okay. And these constraints are always going to

satisfied. And therefore, what I can do now is take

this objective function and drive it up, as much as I can.

Right? I'm, I'm basically getting a value for

y1, getting an even greater value for y2, and these values can go up, up, up, up,

up, up, up. And so, it's unbounded.

Okay. So, once the primal is feasible, okay.

It's infeasible. The dual can be infeasible or the dual

can be unbounded. So, you know, the relationship which is

primal and dual now. Okay.

Now, there is another thing which is very nice. One of the things I told you about the theory of NP-completeness is that these problems are hard; we have to push the exponential, and that's what we have been doing all along. But the nice thing about these problems is that if you give me a solution, I can check whether it's feasible or not very easily, typically in low polynomial time. But proving that something is optimal is kind of tough; I don't have that.

Now, in linear programming, can you actually provide me with a certificate of optimality? And the answer is yes; this is nice. In linear programming, if you give me a solution, I can check quickly whether it's feasible or not. But here, you can also convince me that you have actually found an optimal solution. How would you do that? Think about it. How would you actually convince me that you have an optimal solution? Well, this is what you would do, right?

You would say: oh, but you know, we have a primal and we have a dual. And von Neumann and Dantzig, and some people at Princeton, Kuhn, Tucker, and Gale, if I remember correctly, actually proved this beautiful relationship between the primal and the dual. So, you believe these guys. So now, you can give me an x star which satisfies the constraints of the primal, you can give me a y star which satisfies the constraints of the dual, and you can show me that the costs of these two things are the same. If this is the case, then I know that x star is an optimal solution to the primal, and of course that y star is an optimal solution to the dual.

So, you can actually give me something that I can check very quickly to tell whether this is actually an optimal solution. And this is one of the beauties of linear programming; that's part of the gap between NP-completeness and polynomial time. Okay?
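A minimal sketch of such a certificate check on the running example; the optima x* = (2.5, 0, 0) and y* = (0, 1.5), and the tolerance, are my own computations and choices, not values given in the lecture:

```python
def is_optimal_certificate(c, A, b, x_star, y_star, tol=1e-9):
    """Duality-based optimality certificate for
    min c.x s.t. A x >= b, x >= 0   and its dual   max y.b s.t. y A <= c, y >= 0."""
    m, n = len(A), len(A[0])
    # x* must satisfy all primal constraints and be non-negative.
    primal_feasible = (
        all(sum(A[i][j] * x_star[j] for j in range(n)) >= b[i] - tol for i in range(m))
        and all(xj >= -tol for xj in x_star)
    )
    # y* must satisfy all dual constraints and be non-negative.
    dual_feasible = (
        all(sum(y_star[i] * A[i][j] for i in range(m)) <= c[j] + tol for j in range(n))
        and all(yi >= -tol for yi in y_star)
    )
    # The two objective values must coincide.
    same_cost = abs(sum(c[j] * x_star[j] for j in range(n))
                    - sum(y_star[i] * b[i] for i in range(m))) <= tol
    return primal_feasible and dual_feasible and same_cost

# x* = (2.5, 0, 0) and y* = (0, 1.5) both have cost 7.5, so the check passes.
ok = is_optimal_certificate([3, 2, 4],
                            [[2, 1, 0], [2, -1, 1]],
                            [2, 5],
                            [2.5, 0, 0],
                            [0, 1.5])
```

Each of the three checks runs in time linear in the size of the data, which is exactly the "check very quickly" point made above.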

So, we have seen the basic introduction to the dual here, which is beautiful, right? You have the primal and the dual, and they have the same objective value at optimality, assuming that there is an optimal solution for one of them, because then there are optimal solutions for both of them. They have this beautiful relationship.

And what I'm going to do next time is show you what this actually means in practice, how we actually get to these things, and why it makes any sense, okay? Thank you very much.