25 slides. In 25 slides you will have seen everything about the Simplex algorithm: enough to actually code it and know exactly how it works. Okay? So this is what we're going to do today: we're going to talk about the last steps of the Simplex algorithm, including some of the most complicated material.

Okay? So what do we know so far? We know that the optimal solution is located on a vertex, and that a vertex is a basic feasible solution. What is a basic feasible solution? It's a solution in which you isolate as many variables as there are constraints. These are the basic variables, and you express them in terms of the non-basic variables. And if the b's, the constants that you get on the right-hand side, are all nonnegative, you have a basic feasible solution.

Okay? So we also saw a very, very naive algorithm: enumerate every basic feasible solution, look at the one which has the best cost, and that would be your solution. The issue with this is that it will take a long, long time, because there are many combinations of basic variables that you can select.
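To get a feel for why this naive enumeration takes so long, here is a tiny sketch (the problem sizes are hypothetical, just for illustration): choosing which m of the n variables are basic already gives a combinatorial number of candidate bases, before you even solve each resulting system.

```python
from math import comb

# Each candidate basis picks m basic variables out of n; the naive
# algorithm would have to inspect up to comb(n, m) of them.
for n, m in [(10, 5), (50, 25), (100, 50)]:
    print(f"n={n}, m={m}: {comb(n, m)} candidate bases")
```

Even at n=50 the count is already in the hundred-trillion range, which is exactly why enumerating every basic feasible solution is hopeless.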

Okay, so the simplex algorithm is a better way of actually doing this. The key point, the third fact, is that instead of enumerating all these basic feasible solutions, we are basically going to move from basic feasible solution to basic feasible solution, trying to improve the value of the objective function all the time.

So, in a sense, the simplex algorithm is a local search algorithm, okay? And the basic local move that is applied is moving from one of these beautiful, fantastic spots to another one: moving from a basic feasible solution to an adjacent basic feasible solution. I will tell you exactly how we move from one to another. That's the key point of this lecture.

And the thing which is really nice about the simplex algorithm is that, because of convexity, it's guaranteed to find the global optimum. So it's a local search algorithm, but it's guaranteed to find the global optimum. Okay, so the key idea is basically going to be telling you how we can move from BFS to BFS.

Okay, so this is essentially what we're going to be doing. You have this system of equations that you see at the top, right? Then we find a basic feasible solution, and that's what you see here: we isolated x3, x4, x5 and re-expressed them in terms of the other variables, the non-basic variables, in this particular case x1 and x2. And now we have a basic feasible solution: we give x3, x4 and x5 the values 1 through 3, and all the non-basic variables are assigned 0.

Okay? So now, what is the local move that the simplex algorithm is going to do? It takes one of the non-basic variables and moves it inside the basis, and of course you have to kick one out: you take one of the basic variables and make it non-basic. So this is the basic idea; it's a very simple local move. You swap two variables, one basic and one non-basic, and the status of these two variables changes: the basic becomes non-basic, the non-basic becomes basic, okay? Now, the non-basic variable you select has to have a negative coefficient. We want to take it from the right-hand side and move it to the left-hand side, and to do that it has to have a negative sign. So you take the variable with the negative coefficient and move it to the other side, and of course the other variable is going to move to the non-basic side and also change sign.

Okay? So, in a sense, we find the entering variable: that's a variable which is non-basic and has a negative coefficient. Then we are going to find one constraint, and with it one basic variable that we are going to kick out of the basis, that is, make non-basic. And then what we do is re-express all the equations in terms of the new basic variables: we basically eliminate the new basic variable from the other equations. So that's essentially the local move that we are doing: swapping a basic variable and a non-basic variable. And the way you do it, you choose an entering variable, you choose a leaving variable, and then you do Gaussian elimination to eliminate the new basic variable that you selected.
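As a sketch of this pivot operation in code, using a "dictionary" representation that I am choosing here (it is not given in the lecture): row i stores x_basic[i] = b[i] + sum_k A[i][k] * x_nonbasic[k], and the objective is z = c0 + sum_k c[k] * x_nonbasic[k]. The entering column j must have A[r][j] < 0 in the chosen row r, exactly the negative-coefficient condition just described.

```python
def pivot(b, A, c0, c, basic, nonbasic, r, j):
    """Bring nonbasic[j] into the basis in place of basic[r].
    Rows encode x_basic[i] = b[i] + sum_k A[i][k] * x_nonbasic[k];
    the objective is z = c0 + sum_k c[k] * x_nonbasic[k].
    Requires A[r][j] < 0. Returns the new constant term of z."""
    arj = A[r][j]
    # Solve row r for the entering variable; slot j now holds the
    # coefficient of the leaving variable (which changed sides).
    row = [-A[r][k] / arj for k in range(len(c))]
    row[j] = 1.0 / arj
    br = -b[r] / arj
    # Gaussian elimination: remove the entering variable everywhere else.
    for i in range(len(b)):
        if i != r:
            aij = A[i][j]
            b[i] += aij * br
            A[i] = [aij * row[k] if k == j else A[i][k] + aij * row[k]
                    for k in range(len(c))]
    cj = c[j]                      # ...and remove it from the objective.
    c[:] = [cj * row[k] if k == j else c[k] + cj * row[k]
            for k in range(len(c))]
    b[r], A[r] = br, row
    basic[r], nonbasic[j] = nonbasic[j], basic[r]
    return c0 + cj * br

# Numbers consistent with the lecture's first row, x3 = 1 - 3*x1 - 2*x2,
# with x4 = 2 - 2*x1 and x5 = 3 - x1 (my reconstruction, not the slides).
b = [1.0, 2.0, 3.0]
A = [[-3.0, -2.0], [-2.0, 0.0], [-1.0, 0.0]]
basic, nonbasic = ["x3", "x4", "x5"], ["x1", "x2"]
z0 = pivot(b, A, 6.0, [-3.0, -3.0], basic, nonbasic, 0, 1)
print(basic[0], b[0], A[0])   # x2 enters: x2 = 1/2 - (3/2)x1 - (1/2)x3
```

The printed row matches the hand computation coming up next in the lecture: dividing by the coefficient minus 2 gives x2 = 1/2 - (3/2)x1 - (1/2)x3.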

Okay, so let me show you an example. Once again, this is a system of equations. You see the basic variables there, and the non-basic variables over there. Now, I'm looking at a non-basic variable, in this particular case x2. It has a negative coefficient, minus 2, and therefore I decide that I want this variable to go into the basis. I select the first constraint; that's the only one it appears in, so it's easy. And essentially what I'm going to do is swap x2 with the variable which is in the basis in that first constraint, and that's x3. Okay, so I'm basically swapping these two guys: x2 is going to move to one side, and x3 is going to move to the other side. Now, the coefficient is minus two, so I have to divide the equation by two so that the coefficient of x2 becomes one over there. And so I get this constraint, right? Where you see that the right-hand side is one half, the coefficient of x1 is minus three halves, and the coefficient of x3 is minus one half. Okay, and that's the constraint that you see there. Now, in this particular case it was very convenient, because x2 didn't appear in any of the other constraints, and therefore I don't even have to eliminate it from them. In general, you would have to take the right-hand side of this equation and substitute it into the other equations. Okay? But this is basically telling you what we do. Okay?

So we select the variable that we want to enter the basis, x2 in this particular case. We select a constraint, and by selecting the constraint, you select the variable that you want to kick out of the basis, x3 in this particular case, and you swap these two variables. You make sure that you get a basic feasible solution by making sure the coefficient of x2 is 1, and then by removing x2 from all the other equations. Okay? So that's the basic move that the simplex algorithm does. Okay?

Now, I selected x2 to enter the basis, but I could have selected x1, right? Because x1 also appears with a negative coefficient there, and therefore I could have selected x1 as the entering variable for the local move. Now, if I select x1 here, I have basically three negative coefficients, and therefore I can choose which constraint I want, that is, which existing basic variable I want to kick out of the basis and make non-basic. Right? So can I take any of these basic variables and make them non-basic? Well, look at this, okay? Suppose I take the third one, so I'm basically saying: I want to kick x5 out of the basis. Okay, so what is going to happen? Well, I perform Gaussian elimination on all three constraints, and then what do I get? I get that x3 is now assigned the value minus 8, and x4 the value minus 4, and that's terrible, right? Because this is not a basic feasible solution. I told you that a basic feasible solution assigns only nonnegative values, because all the variables have to take nonnegative values. Okay, so what happened?

What happened is that I selected the entering variable; that's all fine, I can do that. But then I selected one particular constraint, and one particular basic variable to remove from the basis, to make non-basic, okay? And when I performed the Gaussian elimination, suddenly the values of some of the existing basic variables became negative. And that's terrible, okay. So what does that mean? Does it mean that at this point I'm stuck on this beautiful, fantastic spot and I can't move? No, it just means that I have to move a little bit more carefully, okay? And so the key point is that you have to choose the leaving variable using the formula that I'm showing you here.

And I'm going to give you the intuition here. Okay? So, when you look at this column, you see various coefficients for this non-basic variable. This coefficient is large; this coefficient here is much smaller. Now, I also look at the b values there, okay? And so, if I take the first constraint to bring x1 into the basis, and therefore remove x3 from that particular constraint, what is the value of x1 going to be? It's going to be the b divided by this coefficient 3 over there, so it's going to be one third. Okay? And that's a small number compared to what would happen if I make x1 enter the basis using the third constraint, in which case x1 is going to take the value 3. Now, why is this important? Because if x1 takes the value 3 here, it's going to be multiplied by this 3 there, and you're going to get a minus 9 there. Okay? And therefore, 1 minus 9 is going to be the minus 8 that you've seen before.

So if I take a very large value here and substitute it into the other constraints, you're going to get these negative values. So what I need to do is find the smallest ratio between the b's, the b's that you see there, and the coefficient of the variable that has to enter the basis. Okay, and these are the ratios that I have shown you: this is basically bi divided by minus the coefficient of the entering variable. And you see that the smallest one is at the top, and if you choose the first constraint as the constraint from which you select the basic variable that you will remove from the basis, then you are guaranteed, in a sense, to move to a basic feasible solution. Because you are making sure that the other constraints, when you do the Gaussian elimination, are going to stay nonnegative, okay? So essentially, what we do is use this ratio to select the leaving variable, okay?

Using this formula here, we are guaranteed to maintain feasibility, okay? So, with the local move, you have to be a little bit careful. You take any kind of entering variable, and then you use this rule to find the variable that you have to kick out of the basis, okay? You compute this ratio; it's very easy to do, but you have to be careful in doing it. Okay? So, in this particular case, you get this beautiful basis where, as I alluded to, x1 is going to be 1/3, and then you see that x4 and x5 also get fractional values, 4/3 and 8/3. But this is a basic feasible solution: these guys are now nonnegative, okay? And so, essentially, when you move from basis to basis, you have to make sure that you get a basic feasible solution there, and so you have to make sure that when you perform the Gaussian elimination, the right values emerge. And that's what you do by selecting the leaving variable using the rule that I've shown you. Okay?
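This ratio rule is short enough to write down directly; a sketch, using the same row convention as before (x_basic[i] = b[i] + sum_k A[i][k] * x_nonbasic[k]; the function name is mine):

```python
def leaving_row(b, A, j):
    """Min-ratio rule for entering column j: among rows where the
    entering variable's coefficient is negative, return the index of
    the row minimizing b[i] / -A[i][j]; None means no row qualifies."""
    ratios = [(b[i] / -A[i][j], i) for i in range(len(b)) if A[i][j] < 0]
    return min(ratios)[1] if ratios else None

# The x1 column from the lecture: coefficients -3, -2, -1 against
# b = 1, 2, 3 give ratios 1/3, 1, 3, so the first row is chosen.
print(leaving_row([1.0, 2.0, 3.0], [[-3.0], [-2.0], [-1.0]], 0))   # 0
```

Pivoting on the returned row keeps every right-hand side nonnegative, which is exactly the feasibility guarantee described above.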

So, to summarize: we move from basic feasible solution to basic feasible solution by selecting an entering variable, that is, a non-basic variable which has a negative coefficient in some of these constraints. Then you choose a leaving variable using the formula that I've shown you: you compute these ratios, you select the constraint with the smallest ratio, and that gives you the basic variable that will leave the basis. And then you perform Gaussian elimination.

Now, this entire set of operations is called pivoting, okay, and pivoting is basically the name of the local move in linear programming: choosing this entering variable, choosing this leaving variable, and performing Gaussian elimination. So the local move is called pivoting, and it basically swaps one basic variable and one non-basic variable, okay?

So at this point, we have seen a lot of things already. If we want to solve a linear program, we know that the solution is at a vertex, and we know that a vertex is a basic feasible solution. And now we know how to move from a basic feasible solution to another basic feasible solution, so we can look at the fourth point: when do we need to stop? And we need to stop when the basic feasible solution is optimal. I'm going to give you a very, very simple criterion to actually detect that.

And this is the criterion, right? So we have this basic feasible solution, and we have these equations, which express the basic variables in terms of the non-basic variables. Okay, now take the objective and remove all the basic variables, which basically means replacing them with their expressions in terms of the non-basic variables. You get an expression of the form c0, which is a constant, plus c1 x1, where c1 is the coefficient of the variable x1, plus c2 x2, and so on, once all the basic variables have been replaced by their right-hand sides. It's an expression purely in terms of the non-basic variables, okay.

Now, if I have an expression like this, and all the ci's are greater than or equal to 0, then I know that I'm at the optimum. So I have a very, very simple way of detecting optimality: rewrite the objective function, remove all the basic variables, and look at the resulting expression, which is only in terms of the non-basic variables. If all the coefficients there are greater than or equal to zero, I'm done. Okay? I'm at the top of the world, okay?
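Once the objective has been rewritten over the non-basic variables as z = c0 + sum_j c[j] * x_N[j], this test is a one-liner (a sketch; the function names are my own):

```python
def is_optimal(c):
    """The BFS is optimal when every coefficient of a non-basic
    variable in the rewritten objective is nonnegative."""
    return all(cj >= 0 for cj in c)

def entering_index(c):
    """First non-basic variable that can still improve the objective
    (negative coefficient); None means we are at the optimum."""
    return next((j for j, cj in enumerate(c) if cj < 0), None)

print(is_optimal([1.5, 1.5]), entering_index([2.0, -3.0, 1.0]))   # True 1
```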

Now, I'm going to give you a little intuition for why this is true, or rather for its converse. We're going to go through some examples so that I can convey that intuition, right? So we had this beautiful problem that we were trying to solve: we have variables x1 to x5, we have the equalities here, and the first thing to do is find a basic feasible solution. That's what we find here. Okay. Now look: I take the basic variables, x3 to x5, and remove them from the objective function there.

This is the objective function that I get. Okay? As you see, the constant is six. Why? Because x3, x4 and x5 are assigned the values one through three, and they have coefficients of one at the top. Okay? So I know that I will have the value six. But then I get a minus 3 x1 and a minus 3 x2, okay? Now, this is not an optimal solution, right? Because these coefficients of x1 and x2 are negative; they are not nonnegative. So what is the intuition here?

Well, you know that these guys, x1 and x2, have to take nonnegative values, okay? Currently they are not in the basis, right, so what is their value? Zero, right. OK, since they are zero, you may wonder: if I put them in the basis, they are going to get a nonnegative value, and therefore I can drive this objective function down, because a negative coefficient times a positive value is going to decrease this objective function. And that's the intuition here. If the coefficients were positive and the variables had to take a positive value, the objective function would go up. Okay, so essentially the coefficients are negative, and that means that I can drive this objective function down, okay. So this is what you have there.

We have this objective function, we select x2, which has a negative coefficient, and we want this x2 to enter the basis, okay. So how do we do that? We have to select a variable to leave the basis, so once again we look at the ratios: 1 divided by 2, and then 3 divided by 3, which is one. So we're going to use the first constraint to bring x2 inside the basis, removing x3 from the basis, and this is what you obtain. Okay?

At this point, this is the new basic feasible solution. And this basic feasible solution has the value nine halves, which is four point five: a decrease from six in the value of the objective function. But now the two coefficients that you see are positive, which basically means that at this point this is the optimal solution. There is no way I can drive the objective function down; I have an optimal solution to my problem. Okay, and I can detect that by just looking at this line: all the coefficients are greater than or equal to zero, great, I'm at the optimum. So this is basically the idea: you move from basis to basis, selecting the entering variable, selecting the leaving variable, performing the pivoting operation, and then looking at your objective function. Are all the coefficients nonnegative? If they are, you're done, okay.

So that's the basic idea. Okay, so in a sense, what we have seen at this point are basically the first four points: an optimal solution is located at a vertex; a vertex is a basic feasible solution; I can move from BFS to BFS; and I can detect that a BFS is optimal. I haven't shown you the proof of this last fact yet; I will show it to you in the next lecture, because it's much easier to prove if I can use matrix notation. It's a beautiful, simple proof, like five lines, with matrix notation, which I'm not using here to keep the presentation elementary. Okay?

So now, the next thing I have to tell you is that from any kind of BFS, I can move to another BFS which has a better cost, okay? And that's the case, provided that I impose some conditions, okay? Once again, I won't prove this here; there is a simple algebraic proof, but the basic idea is the following. If all the bi's are strictly greater than 0, and I can find an entering variable, so there is a non-basic variable with a negative coefficient somewhere in the constraints, and I can find the leaving variable, then the pivoting operation is going to give me a new BFS which is improving. Great, I am improving the solution.

Okay, and so once I have that, I'm ready, in a sense, to tell you what the simplex algorithm is, and it's basically the few lines that you see here. As long as there is a coefficient in the objective function which is negative, I need to perform a pivot operation. So I select an entering variable; that's a variable which has a negative coefficient in the objective function, because that's how I can drive the objective function down. Then I select a leaving variable such that I preserve feasibility, so I have to use the rule that I've shown you before. And then I basically pivot. Okay?

So once again, in choosing the entering variable, you have to be a little bit careful, right? I want to take one which has a negative coefficient in the objective function. Why? Because I want this objective function to end up with only nonnegative coefficients. Right? So I select the entering variable, I select the leaving variable, I pivot, and I keep doing that until the objective function has only nonnegative coefficients. Okay?

So that's the Simplex Algorithm. And if I can guarantee that during the execution these bi's stay strictly positive, okay? I don't want them to fall to zero, and we'll discuss that in a moment. And if the objective function is bounded from below, so you cannot drive it down infinitely low, then you are guaranteed that the simplex algorithm terminates, and it will give you an optimal solution. It's going to converge to a BFS where all these ci's are greater than or equal to zero.

So that's the simplex algorithm. Okay?
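The whole loop fits in a few lines once the pieces above are combined. This is a toy sketch in the dictionary form used throughout (x_B[i] = b[i] + sum_k A[i][k] * x_N[k], z = c0 + sum_k c[k] * x_N[k]); the sample data is my reconstruction of the running example, not the exact slides.

```python
def simplex(b, A, c0, c):
    """Repeat: entering variable (negative reduced cost), leaving row
    (min-ratio rule), pivot. Returns the optimal value, or None if the
    objective is unbounded from below."""
    while True:
        # Entering variable: first negative coefficient in the objective.
        j = next((k for k, ck in enumerate(c) if ck < 0), None)
        if j is None:
            return c0                  # all coefficients >= 0: optimal
        # Leaving row: smallest b[i] / -A[i][j] among rows with A[i][j] < 0.
        ratios = [(b[i] / -A[i][j], i) for i in range(len(b)) if A[i][j] < 0]
        if not ratios:
            return None                # whole column >= 0: unbounded
        r = min(ratios)[1]
        # Pivot: solve row r for the entering variable, eliminate it elsewhere.
        arj = A[r][j]
        row = [-A[r][k] / arj for k in range(len(c))]
        row[j] = 1.0 / arj
        br = -b[r] / arj
        for i in range(len(b)):
            if i != r:
                aij = A[i][j]
                b[i] += aij * br
                A[i] = [aij * row[k] if k == j else A[i][k] + aij * row[k]
                        for k in range(len(c))]
        cj = c[j]
        c0 += cj * br
        c = [cj * row[k] if k == j else c[k] + cj * row[k]
             for k in range(len(c))]
        b[r], A[r] = br, row

# Reconstructed running example: minimize z = 6 - 3*x1 - 3*x2 subject to
# x3 = 1 - 3*x1 - 2*x2, x4 = 2 - 2*x1, x5 = 3 - x1 (all variables >= 0).
z = simplex([1.0, 2.0, 3.0],
            [[-3.0, -2.0], [-2.0, 0.0], [-1.0, 0.0]],
            6.0, [-3.0, -3.0])
print(z)   # 4.5, matching the optimum reached in the worked example
```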

Now, there are a couple of nasty cases that I need to discuss with you, and that's what I'm going to do in the next couple of slides. Okay? Now, the first one is this one, right? I told you that we choose the entering variable using a very specific criterion: the objective function has a negative coefficient for that variable. Okay, so that's what I told you; we do that to converge to an optimal solution. But at the beginning of the lecture, I was also telling you that the entering variable must have a negative coefficient inside some of the constraints. So it may happen that the variable I'm choosing has a column where essentially all the coefficients are greater than or equal to 0. What does that mean? None of them is negative, so really the two things that I've told you in this lecture are in conflict: I want to introduce the variable into the basis to drive the objective function down, but at the same time I also need a negative coefficient in the constraints, and I don't have one. What is happening?

Okay? So think about it. What is the basic feasible solution here? It takes the basic variables and assigns them values, in this particular case one, two, three, okay? And this is the objective function that I have. What is the value of x1 in the basic feasible solution? It's zero. Alright? But you see these coefficients in the column of x1: they are all greater than zero. Okay? What does that mean? Can you think about what it means? So in the basic feasible solution, x1 is basically assigned zero. What if I assign it 10,000? What am I going to get, okay? Well, this equation is going to be fine, right? If I give 10,000 to this guy, multiplied by the coefficient 3 that's 30,000, so the basic variable simply takes a value around 30,000, and voilà. That's fine; that's still feasible. What about the second one? Same thing, right? The coefficient is positive, so I'm going to add a positive number to x4, and that's fine. And same thing for x5. So if I set x1 to 10,000, I have another solution, okay. It's not a basic feasible solution, but it's a solution, right?

What if I give a million to x1? That's fine as well, right? If I give 10 million, it's going to be fine. And what do these values do to the objective function? They drive the objective function down. Why? Because this guy has a negative coefficient there. So what this really means is that I can give x1 arbitrarily large positive values, and that makes the objective function arbitrarily low. Which basically means that my problem here is not bounded from below: I can drive the objective function as low as I want, okay?
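A tiny numeric sketch of that situation, with made-up coefficients in the spirit of the example (entering column all nonnegative, negative coefficient in the objective):

```python
# Hypothetical dictionary row and objective:
#   x3 = 1 + 3*x1   (coefficient of x1 is >= 0, so x3 never goes negative)
#   z  = 6 - 3*x1   (negative coefficient, so z drops as x1 grows)
for x1 in [0, 10_000, 1_000_000]:
    x3 = 1 + 3 * x1          # stays feasible no matter how big x1 gets
    z = 6 - 3 * x1           # goes down without limit
    print(f"x1={x1}: x3={x3}, z={z}")
```

Every step stays feasible while the objective plunges, which is exactly the unboundedness the ratio test detects when no row has a negative coefficient.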

And boundedness was one of the hypotheses that I told you I needed for termination. So in a sense, what I'm telling you is that this linear program is not bounded from below. You will come to a situation where you see this variable that you want to put inside the basis, but you cannot. And that's the case where you can detect that your problem is actually unbounded. You can stop executing at that point and report to the user that you can drive this objective function arbitrarily low. And typically, that means that there is a mistake in the modeling. Right? Think of a factory which is trying to decrease its costs. If the simplex algorithm is basically telling the manager, you can drive your cost arbitrarily low, something must be fundamentally wrong. If you maximize, the same reasoning applies when you can drive the objective arbitrarily high: it's like telling the CEO of Apple, you can make your profit arbitrarily high. That's probably not right, okay? So essentially, you can detect that the problem is unbounded and make sure that the user actually knows that. They have to look at their model and try to do better.

Now, the other thing that I mentioned is the hypothesis that I don't want the bi's to become zero. What happens if a bi happens to be zero, right? So look at these constraints here: I'm basically letting x3 be equal to 0. That's fine; this is still a basic feasible solution. It's just that there is now a basic variable whose value is zero, the same value the non-basic variables get. But what happens when I select a variable to enter the basis? Remember, the pivoting rule computes ratios, and a ratio of zero divided by something is always going to be zero. That value is always going to be the smallest one, so you always select that constraint for the pivot. Which basically means that when you do the pivot, you stay at the same value of the objective function: x3 is going to be kicked out of the basis, and x2 is going to enter the basis but obviously get the value zero. And therefore, the objective function is basically not changing. The objective value is not changing, okay?
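In the min-ratio rule from earlier, a zero right-hand side shows up like this (a sketch with made-up numbers):

```python
# Row 0 is degenerate: b[0] == 0, so its ratio 0 / -A[0][j] == 0 always
# wins the min-ratio test, and the entering variable enters at value 0.
b, col = [0.0, 2.0], [-1.0, -2.0]   # col: the entering column's coefficients
ratios = [bi / -aij for bi, aij in zip(b, col) if aij < 0]
print(min(ratios))   # the pivot changes the basis but not the objective
```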

So what does that mean? It means that the hypothesis I stated before is not satisfied, and I have to find essentially another way to guarantee termination. So one of the five facts that I told you about is wrong: it is not always possible to move from a basic feasible solution to another basic feasible solution with a better cost. I can actually move to another top of the mountain which is exactly the same height.

OK, so what do we do? We have to be a little bit careful in how we implement the algorithm, and there are very useful ways to do that; I will mention two of them in a little bit of detail. First, you can select the variable that you want to enter the basis in a very specific way, and that will guarantee termination. The key point is that you look at the objective function, and if you always select the first variable which has a negative coefficient, then you are going to terminate. But sometimes you really don't want to do that. For instance, a lot of simplex implementations will take the variable whose coefficient is the most negative, because if you give a positive value to that particular variable, it's going to drive the objective down very quickly, okay.
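Sketches of these two entering-variable rules (the function names are mine; the first-index choice is the entering half of what is usually called Bland's rule):

```python
def first_negative(c):
    """Anti-cycling choice: the lowest-index variable with a negative
    coefficient in the objective; this choice guarantees termination."""
    return next((j for j, cj in enumerate(c) if cj < 0), None)

def most_negative(c):
    """Common heuristic: the most negative coefficient, hoping for the
    fastest decrease; by itself it does not rule out cycling."""
    j = min(range(len(c)), key=lambda k: c[k])
    return j if c[j] < 0 else None

print(first_negative([2.0, -1.0, -5.0]), most_negative([2.0, -1.0, -5.0]))
```

On the sample coefficients the two rules disagree: index 1 versus index 2.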

So always taking the first negative coefficient is somewhat restrictive; it puts a limitation on what you can do. The other thing that you can do is use a pivoting rule which is a little bit more complex. Instead of just the ratio that I've shown you, you can generalize it to a lexicographic order: if there are ties when you compute the ratios, you move to the next column, compare the coefficients there, and so on. If you select the leaving variable in that particular way, you will also guarantee termination. And this is interesting, right? There is somewhat of a duality here, and it will be interesting when we talk about duality for linear programming: you can have a rule that selects the entering variable, or a rule that selects the leaving variable, and either of them will guarantee termination. Then, typical implementations also have methods for perturbing the basis: you change the problem a little bit and then come back to the original later on, and you basically make sure that you don't stay in the same place when you are doing these pivoting operations. Okay. So I won't go into the details, but there are ways to actually address this, and actual implementations will combine them with better pivoting rules to drive the search quickly. So we can make this algorithm terminate even if the bi's are actually getting to zero.

Okay, so the two things that I've shown you are these: we can detect that the problem is unbounded, which violates one of the hypotheses that I made, and we can also ensure termination when the bi's get to zero, by using these more clever pivoting selections.

So there is one more thing I need to do before we know exactly how to implement the algorithm, and this one is actually really beautiful, really clever. So essentially, for the entire lecture I've assumed that we had a basic feasible solution to start with, right? But in practice, you won't necessarily have that. The user is basically giving you this set of equations and this objective function, and you have actually no clue whether these equations are feasible, that is, whether there is a feasible solution to these equations at all. So the problem is: how do I find my first BFS? I may not have it, and it may not be obvious how to actually get it, okay?

So what I'm going to do, is I'm going to do a very simple trace, right?

I'm going to add a new set of variables, okay?

Just step out here so that you can see them, okay?

But I'm adding one. We call artificial variables, so y1, yi,

you know, ym, are artificial variables. They are new variables that are just

created out of nowhere, right? And I put them into the equations of the

simplex algorithm, okay? Now, I put them in a very, you know,

simple way, right? So, y1 for the first constraint, y2 for

the second one, yi for the i n. For constraint i, and ym for the last

one. So now, essentially, when you see this

thing, you are like, smiling, right? Because now I have an easy BFS, right?

So this is essentially a BFS. This is essentially a, a, this, I can use

these variables to put inside a basis, and I have a BFS, okay?

But obviously, you know it's a BFS for another problem.

A problem where, you know the y's are actually the variables.

So now I have an obvious solution which is assigning these y's to the other ones,

which basically means that all of the other guys are zero.

But that's not a feasible solution to my initial problem, right?

Because I wanted these guys to be equal to the BI's okay so what am I going to

do? Okay, and this is where the beauty is

Right? So the beauty here is

that we are going to use the simplex algorithm to actually find the real BFS

from this fake BFS. Okay?

Okay? So this is what we are going to do,

and this is called the two-phase method. Okay.

And the first phase is going to be: I want to find a BFS if there is one, okay?

And then the second step is, once I have one of these BFS, I'm great, right?

So I can apply the simplex algorithm for optimizing.

But the first one, I want, I will use essentially the simplex algorithm to find

a basic feasible solution. How do I do that?

Well, it's very simple, right? So, if I have a feasible solution to this

set of equations, that means that the y variables should be assigned to 0, okay?

So what I'm going to do is basically minimize the sum of these y's.

And remember, these y's have to be, you know, they are greater than or equal to

0, so if essentially this function goes to 0, it means that every one of them

is equal to 0, and therefore these values here have the value 0, and that

means that I have a feasible solution to this particular set of constraints.
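The phase-one objective described here is easy to write down explicitly. A minimal sketch (my own, assuming the augmented system from before with n original and m artificial variables):

```python
def phase1_cost(n, m):
    """Phase-1 objective: minimize y_1 + ... + y_m.
    Zero cost on the n original variables, cost 1 on each artificial."""
    return [0] * n + [1] * m

def is_feasible(phase1_optimum, tol=1e-9):
    # Since every y_i >= 0, their sum can only be 0 if each y_i is 0,
    # i.e. the original constraints A x = b admit a solution.
    return phase1_optimum <= tol

print(phase1_cost(2, 2))      # [0, 0, 1, 1]
print(is_feasible(0.0))       # True
print(is_feasible(2.5))       # False -> original constraints infeasible
```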

Okay, so what do I do? No, but this is beautiful, right.

Why is it beautiful? Because I have this basis here.

I love this basis, right. I have this basis, I have all these

variables that I can put inside, you know, as basic variables, and then I have

an objective. Then I can optimize this thing [SOUND],

and then I look at what the objective function is, if it's greater than zero I

know that I don't have a feasible solution to these constraints.

So the initial set of constraints is wrong, okay?

It's basically not feasible. So you go back to the user and say, you

know, please give me a problem whose constraints can be satisfied, because

otherwise, you know, there's little purpose in actually optimizing. Okay?

But if, you see, the objective function is 0, that means that this set

of constraints is feasible, and now I'm ready to do the optimization.

Right? Because at this point I have a basic

feasible solution. Okay?

And this basic feasible solution is expressed in terms of the original

variables.

So, in a sense, you can take the basic feasible solution,

swap back, you know, the

original objective function, remove the basic

variables from that objective function, and start again the simplex algorithm using

that new objective function, which is the one you wanted to optimize in the first

place. So, all the things that I told you are

almost right, okay? The only thing is, I put it as if, as I told you,

when we complete this optimization, we have a basic feasible solution where the

basic variables are all, you know, original

variables of the problem, and the artificial variables are not basic.

That's not entirely true. It can be the case that one of these

artificial variables is still in the basis.

But now you just need to think about it. If one of these variables is inside the

basis, what does it mean? What is the value of that artificial

variable, okay? Well, we know that the objective function

is zero, okay? So that variable has to be zero, okay?

So, at that point, what you can do is transform this, you know, remove it from

the basis, introduce someone else inside the basis.
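That pivot-out step can be sketched as follows (my own illustration; `T` is assumed to be a reduced tableau `[A | b]` and `basis` the list of basic column indices, one per row):

```python
def pivot_out_artificial(T, basis, row, n_original, tol=1e-9):
    """If basis[row] is an artificial variable sitting at value 0, pivot
    on any nonzero tableau entry in that row among the ORIGINAL columns,
    bringing an original variable into the basis at value 0 instead.
    (My own sketch, not the lecture's code.)"""
    for j in range(n_original):
        if abs(T[row][j]) > tol:
            piv = T[row][j]
            T[row] = [v / piv for v in T[row]]
            for k in range(len(T)):
                if k != row:
                    f = T[k][j]
                    T[k] = [a - f * p for a, p in zip(T[k], T[row])]
            basis[row] = j
            return True
    return False   # whole row is zero: the constraint was redundant

# Artificial variable (column 3) is basic in row 1 at value 0;
# pivot it out in favor of original column 1.
T = [[1, 0.5, 1, 0, 2],
     [0, 1,   0, 1, 0]]
basis = [0, 3]
pivot_out_artificial(T, basis, 1, 2)
print(basis)   # [0, 1]  -> only original variables remain basic
```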

And now you will really have a basic feasible solution expressed only in terms

of the original variables, and then you can start the optimization phase, okay?
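Putting the two phases together, here is a minimal, self-contained sketch (my own teaching implementation in plain Python, not the lecture's code). It uses Bland's smallest-index rule for pivoting and simply gives up in the degenerate corner case where an artificial variable is stuck in the basis:

```python
# Minimal two-phase simplex (my own sketch): min c.x  s.t.  A x = b, x >= 0.

def simplex(A, b, c, basis):
    """Tableau simplex with Bland's rule, starting from the BFS whose
    basic column indices are listed in `basis` (one per row)."""
    m, n = len(A), len(A[0])
    T = [list(map(float, A[i])) + [float(b[i])] for i in range(m)]
    # Reduce each basis column to a unit column (Gaussian elimination).
    for i in range(m):
        j = basis[i]
        r = next(r for r in range(i, m) if abs(T[r][j]) > 1e-9)
        T[i], T[r] = T[r], T[i]
        piv = T[i][j]
        T[i] = [v / piv for v in T[i]]
        for k in range(m):
            if k != i:
                f = T[k][j]
                T[k] = [a - f * p for a, p in zip(T[k], T[i])]
    while True:
        cB = [c[basis[i]] for i in range(m)]
        reduced = [c[j] - sum(cB[i] * T[i][j] for i in range(m))
                   for j in range(n)]
        # Bland's rule: entering column = smallest index with negative cost.
        enter = next((j for j in range(n) if reduced[j] < -1e-9), None)
        if enter is None:                    # optimal: read off the solution
            x = [0.0] * n
            for i in range(m):
                x[basis[i]] = T[i][n]
            return x, sum(c[j] * x[j] for j in range(n)), basis
        # Ratio test; ties broken by smallest basic-variable index.
        rows = [(T[i][n] / T[i][enter], basis[i], i)
                for i in range(m) if T[i][enter] > 1e-9]
        if not rows:
            raise ValueError("unbounded objective")
        _, _, leave = min(rows)
        piv = T[leave][enter]
        T[leave] = [v / piv for v in T[leave]]
        for k in range(m):
            if k != leave:
                f = T[k][enter]
                T[k] = [a - f * p for a, p in zip(T[k], T[leave])]
        basis[leave] = enter

def two_phase(A, b, c):
    m, n = len(A), len(A[0])
    # Phase 1: add one artificial y_i per row (flipping rows with b_i < 0)
    # and minimize y_1 + ... + y_m from the trivial BFS y = b.
    A1, b1 = [], []
    for i in range(m):
        row, rhs = list(A[i]), b[i]
        if rhs < 0:
            row, rhs = [-a for a in row], -rhs
        A1.append(row + [1.0 if j == i else 0.0 for j in range(m)])
        b1.append(rhs)
    c1 = [0.0] * n + [1.0] * m
    _, val1, basis = simplex(A1, b1, c1, list(range(n, n + m)))
    if val1 > 1e-7:
        raise ValueError("constraints are infeasible")
    if any(j >= n for j in basis):
        raise ValueError("artificial variable stuck in basis; "
                         "pivot it out first (omitted in this sketch)")
    # Phase 2: same rows, original objective, starting from the phase-1 BFS.
    return simplex([row[:n] for row in A1], b1, list(c), basis)

# Example: max x1 + x2  s.t.  x1 + 2 x2 <= 4, 3 x1 + x2 <= 6 (slacks added).
x, val, _ = two_phase([[1, 2, 1, 0], [3, 1, 0, 1]], [4, 6], [-1, -1, 0, 0])
print(x[:2], -val)   # optimum at x1 = 1.6, x2 = 1.2 with value 2.8
```

Phase 2 can drop the artificial columns entirely because, at that point, the basis contains only original variables.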

So this is it, you know, you have seen the entire simplex algorithm now, okay?

You have a two-phase method. First, find the BFS and then optimize.

How do you optimize, okay? Basically, what you do is you move from

basic solution to basic solution, okay? You can detect that you're already at the

end when all the coefficients of the variables, okay, in the objective

function are nonnegative, okay, and every time the bi's are strictly

positive, essentially what you do is you decrease the value of this objective

function. If you use one of the rules that I told

you about, you are guaranteed to terminate, okay.
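One such anti-cycling rule is Bland's rule: among all improving candidates, always pick the one with the smallest index. A tiny sketch of the entering-variable choice (my own example; the rule the lecturer presented earlier may differ):

```python
def bland_entering(reduced_costs, tol=1e-9):
    """Bland's anti-cycling rule for a minimization tableau: among all
    columns with negative reduced cost, enter the one with the SMALLEST
    index, which provably prevents the simplex algorithm from cycling."""
    for j, r in enumerate(reduced_costs):
        if r < -tol:
            return j
    return None   # no improving column: the current BFS is optimal

print(bland_entering([0.0, -2.0, 3.0, -5.0]))  # 1, not 3: smallest index wins
print(bland_entering([0.0, 0.5]))              # None -> optimal
```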

Beautiful algorithm, local search, guaranteed to find the optimal

solution... Thank you. [BLANK_AUDIO]