0:00

Okay guys, welcome back, discrete optimization.

But still the knapsack problem, but today we're going to talk about modeling.

Okay, so whatever I'm going to say today is going to

seem very, very simple, but there are some deep

truths and deep knowledge in what I'm going to talk

about, but they will seem to be completely natural.

But we'll come back to them later on, and you'll see why

some of the things that I'm going to say today are actually important, okay?

So the first thing we're going to do is talk

about how to formalize an optimization task as a mathematical model.

This is the key, okay, so this is the key for actually solving these problems.

You have to be able to model them mathematically.

Why? Well, let me give you an example.

You talk with industry, you talk to people, they start

describing your problem, and you think you understand it, okay?

And then you come back with a beautiful solution, and

they tell you, you know, you can't do this, there are these constraints.

But they didn't express them the right way, or they forgot

to tell you.

And essentially you come up with this

beautiful algorithm that really doesn't apply in practice.

So what you have to do first is always find

a description of the problem that everybody can agree upon.

Okay? This is the first step.

What is it that we are trying to solve?

And that's what I'm going to talk about today.

Okay, so let's do that for the knapsack, which is very simple, but that's

going to give you an idea of how we do that for very complex problems.

So, we start with a set of items, capital I; that's all the items

that we can actually put in the knapsack, and

then for every one of these items, i, okay.

We will have two pieces of information, okay?

That's what we've seen in the greedy algorithm so far, right.

The value of the item, okay, and the weight of the item;

those are the two things that we know for the items.

And then the only other piece of information that

you need is the capacity of your knapsack.

This is the input of your problem.

That's what you need to actually start formalizing it.

And now the problem is really finding a

subset of these items, okay, which has maximum value.

You want to maximize the value of the

items that you are picking up, but you don't

want the weight of the items that you are

picking up to exceed the capacity of the knapsack.

Okay?

This is still informal, and we're going to start

formalizing this in the next couple of slides.

But this is the start of formalizing the problem.

Okay?

We know the input, okay? So, how do we model this?

The first thing you have to do every time you are trying to

model an optimization problem is to find out what the decision variables are.

And you're going to say, oh, but what is this thing?

Essentially this is something that's going to capture

the real decisions you are interested in, okay?

In practice, for instance, in this particular case,

for a particular item, what do you want

to know? You want to know if you select that item or if you don't select it.

If you put it in the knapsack or if you don't put it in the knapsack.

That's what you want to decide, and

the decision variables will correspond to that.

Okay, so once you have these decision

variables, you can start doing things with them.

And one of the really important things that you

have to do is basically model the problem constraints.

And these problem constraints are going to tell you

what you can do and what you cannot do.

What is a feasible solution?

A solution that people will tell you, yes, yes, that's a solution,

not something where they say, no, you forgot something. It's basically capturing

what you can do, what a feasible solution is.

Okay?

It basically defines the set of things that people will accept as

a solution. And then the last thing, of course, is that you have to define

your objective function: what are you trying to maximize or minimize? In

this particular case, the knapsack is about maximizing the value of your items.

Okay, the objective function is defining the quality of your solution.

The constraints

are defining what is a solution and the decision variables are

telling you what to decide, what you will decide upon, okay?

And so essentially, the result of these

three things together is an optimization model.

It doesn't tell you how to solve the problem.

It tells you what the problem is.

Okay, now a very important point here, is that there are many,

many ways of actually modeling a

particular optimization problem, and this is

part of the beauty, okay?

So in a sense, I'm basically telling you it's specifying what we want to solve, but

implicitly you are already making some choices, and

they can restrict how you will solve the problem.

So you have to be very careful when you do this.

There may be many formulations, they may be translated

from one to the other, but essentially they will

capture the same problem in a different fashion, and

they may influence the technique that you will use.

So you have to keep an open mind when you model the problem.

Okay, so, we'll come back to that many times,

present different models of different problems

for, you know, for practical applications.

Now, in the knapsack problem, here are the decision variables.

For every one of the items, you will have a variable xi.

For item i, you know, variable xi will denote whether you select the

item or not, whether you put the item inside the knapsack or not, okay?

So xi will be equal to one if you select the item.

It will be equal to zero otherwise.

Okay?

So, so the goal of the optimization model will be to find the values of all

these decision variables, whether you assign a one

or a zero to every one of them.

That's going to be the goal, but with these decision variables, when you

see their values, you know what to do, okay?

Every item whose decision variable is a

one, you know that you want to put it in the knapsack.

The ones which are zero, you don't. Okay?
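As a tiny sketch with made-up data, the decision variables can be represented as a list of 0/1 values, one per item:

```python
# Hypothetical instance with four items, indexed 0..3.
# x[i] == 1 means item i goes into the knapsack, x[i] == 0 means it stays out.
x = [1, 0, 1, 0]

# Reading the decisions back: pack exactly the items whose variable is one.
selected = [i for i, xi in enumerate(x) if xi == 1]
print(selected)  # → [0, 2]
```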

So the problem constraints have to be expressed now in terms of

these decision variables.

And this is the only, you

know, feasibility constraint that you have in this particular problem.

What does it say?

It basically makes sure that the items that you select, which will

have an xi equal to one, don't exceed the capacity of the knapsack.

So basically what you do is you sum these products, okay, each of which is a constant,

the weight of the item, times

whether you select the item or not.

And that summation

here has to be smaller than the capacity of the knapsack.

And you know, when you don't select the item,

this product doesn't contribute anything.

When you select it, it will contribute the weight. So essentially this makes sure

that you never exceed the capacity of the knapsack with the items that you select.

Okay.

Now essentially this constraint defines all the feasibility constraints.

It basically makes sure that the items that you

select will not exceed the capacity of the knapsack.
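With the same kind of made-up data, the capacity constraint is just this weighted sum compared against the capacity:

```python
# Hypothetical weights, capacity, and candidate selection.
w = [5, 8, 3, 6]   # weight of each item
K = 10             # capacity of the knapsack
x = [1, 0, 1, 0]   # decision variables: items 0 and 2 are selected

# Each product w[i] * x[i] contributes the item's weight if selected,
# and nothing otherwise; the sum must not exceed the capacity.
total_weight = sum(wi * xi for wi, xi in zip(w, x))
print(total_weight, total_weight <= K)  # → 8 True
```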

And then the last thing that you have to do

is express your objective function, and we use a similar,

you know, expression: we multiply every one of these decision

variables by the value of the corresponding item.

We sum them, and we have the full value of the knapsack, okay?

At this point, we have the decision

variables, the feasibility constraints, the objective function.

We have a complete model.

This is the complete model of the knapsack.

You see the value

here that you are trying to maximize.

And you see the capacity constraint over here, and

then you know that every one of the decision variables

takes a value of zero or one.

You take the item or you don't. Okay?
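Putting the three pieces together, the complete model we've been describing can be written out (using v_i for the value of item i, w_i for its weight, and K for the capacity) as:

```latex
\begin{align*}
\text{maximize}   \quad & \sum_{i \in I} v_i x_i \\
\text{subject to} \quad & \sum_{i \in I} w_i x_i \le K \\
                        & x_i \in \{0, 1\} \quad \text{for all } i \in I
\end{align*}
```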

And so this constitutes an optimization model.

Everything here is formalizing what you want to do.

You know exactly what's going to be a solution and

you also know the value of every one of these solutions.

What remains to be done is what? Finding

the values for these decision variables.

6:48

Now, you can look at this problem and

you can start thinking, wow, how many solutions are there?

And in this particular case what you can do is enumerate

all the possible configurations for the values of the decision variables.

Okay?

So you can decide, for instance, that you select no items;

you know, that's not going to give you very much value.

But this is one of the things that you can consider, or some configuration where you

select one item, or the

combinations with two items, three items, and so on, okay?

This is essentially your search space.

When you look at the decision variables

and the values that they can take, the Cartesian

product of these guys is going to give you

all the configurations that you have to consider.

Okay? Not all of them are going to be feasible.

Some of them are going to violate the capacity constraint, and

the others are going to be the feasible part of your search space.
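As a sketch on a hypothetical three-item instance, the search space is the Cartesian product of the variable domains, and the feasible region is the subset that respects the capacity constraint:

```python
from itertools import product

# Hypothetical instance: three items with these weights, capacity 10.
w = [5, 8, 3]
K = 10

# The search space: every 0/1 assignment to the three decision variables,
# i.e. the Cartesian product {0, 1} x {0, 1} x {0, 1}.
search_space = list(product([0, 1], repeat=len(w)))
print(len(search_space))  # → 8, i.e. 2 ** 3

# The feasible region: configurations whose total weight fits in the knapsack.
feasible = [x for x in search_space
            if sum(wi * xi for wi, xi in zip(w, x)) <= K]
print(len(feasible))  # → 5
```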

Okay?

So this is going to give you the search space.

This is the feasible

region, in a sense, okay. And how many are there?

Well, essentially, in this particular case the number of configurations,

if you ignore the ones that are violating the capacity constraint,

is going to be 2 to the number of items, and that's a huge

number if there is a large number of items, okay?

So, let me give you an idea of

how fast this value is increasing.

Assume that it takes one millisecond

to test a configuration, to test if a configuration is feasible.

And we try to enumerate all of them, test, and then we want to select the best one.

We just, you know apply a brute force algorithm to do this.

If it takes one millisecond to test any

configuration, and you have 50 items, okay, it

will take tens of thousands of years, hundreds of centuries,

to actually solve this; wow, that's a huge number, right?
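As a quick sanity check on that back-of-the-envelope estimate (assuming one millisecond per feasibility test and 50 items):

```python
# 2**50 configurations, tested at one millisecond each.
configs = 2 ** 50
seconds = configs / 1000             # 1 ms per configuration
years = seconds / (365 * 24 * 3600)  # seconds in a (non-leap) year
print(round(years))  # → 35702 years, i.e. hundreds of centuries
```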

So, this is huge, okay, so you can't solve the problem this way,

and what this class is about is exploring these configurations in a smarter way.

Okay?

And still finding optimal solutions. Okay?

Or, in a sense, finding a very high-quality solution that

you can say is very, very close to the optimal solution.

Okay?

So next time we're going to start looking at how you can actually do that.

How you can find optimal solutions to the knapsack problem

in reasonable time. Okay, see you next time, thank you.
