But the other thing we did was look at three

different ways of escaping local minima. >> Wow, my understanding

is that every successful local search algorithm must have an effective

way of escaping from local minima. >> At least one.

>> At least one so-

>> So the simplest one, of course,

is just restart: we start the search again from the beginning, and

hope we don't end up in the same spot. >> So this is stochastic in nature?

>> Exactly,

almost all local searches are random in nature; they ought to be.
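As an illustration (my own sketch, not code from the course), random restarts of a greedy descent over a toy landscape might look like this; `LANDSCAPE`, `hill_climb`, and `restart_search` are all invented names for the example:

```python
import random

LANDSCAPE = [3, 2, 3, 4, 2, 0, 2, 4]              # objective value at each point

def neighbours(i):
    return [j for j in (i - 1, i + 1) if 0 <= j < len(LANDSCAPE)]

def hill_climb(i):
    """Greedy descent to a local minimum of LANDSCAPE."""
    while True:
        best = min(neighbours(i), key=LANDSCAPE.__getitem__)
        if LANDSCAPE[best] >= LANDSCAPE[i]:
            return i                              # no improving neighbour: stuck
        i = best

def restart_search(restarts=20, rng=random.Random(0)):
    """Escape local minima by descending from many random starts."""
    runs = [hill_climb(rng.randrange(len(LANDSCAPE))) for _ in range(restarts)]
    return min(runs, key=LANDSCAPE.__getitem__)

best = restart_search()
print(best, LANDSCAPE[best])                      # almost surely 5 0
```

A single descent from point 0 gets stuck at the local minimum at point 1; with random restarts, some run almost surely starts in the basin of the global minimum at point 5.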

Okay, the second possibility is simulated annealing which is basically a way of saying,

sometimes we're allowed to jump up.

And so that means that if we're stuck in a local minimum,

when we're allowed to jump up, we can move to a different place and then we can find

hopefully a better solution. >> But the probability of this so-called

'sometimes' would decrease gradually. >> Absolutely, as we go down through

the search, we want to have less and less chance of jumping up.
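The jump-up-sometimes idea, with the chance of an uphill jump shrinking as a temperature cools, can be sketched as follows (an illustrative toy, not the course's implementation; the landscape and all parameter values are made up):

```python
import math
import random

def simulated_annealing(start, neighbour, score, t0=20.0, cooling=0.99,
                        steps=1000, rng=random.Random(1)):
    """Sometimes accept an uphill move; make that ever less likely as T falls."""
    current, best, temp = start, start, t0
    for _ in range(steps):
        cand = neighbour(current, rng)
        delta = score(cand) - score(current)      # delta > 0 means "jumping up"
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = cand                        # downhill always; uphill sometimes
        if score(current) < score(best):
            best = current
        temp *= cooling                           # cool down: jumps get rarer
    return best

# Toy landscape: shallow local minima at points 1 and 3, global minimum at 5.
landscape = [2, 1, 2, 1, 2, 0, 2, 3]
neighbour = lambda i, rng: max(0, min(len(landscape) - 1, i + rng.choice((-1, 1))))
best = simulated_annealing(0, neighbour, landscape.__getitem__)
```

Early on, high temperature lets the search climb out of the shallow minima; late in the run it behaves like pure descent, settling into whatever basin it has reached.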

Because we want to be more and more sure that we end up in a good local minimum, as

close as possible to the global minimum. >> And the last way of escaping from a local

minimum is actually to avoid going into the local minimum in the first place, right,

by using a tabu list. >> Yes, or

you can think about it as filling up the local minimum.

So because I'm in a local minimum, it becomes tabu, and

I won't be able to go back into it.

So I'll keep moving around, and hopefully that gives me enough freedom to push me

out of the local minimum and find somewhere else.

And it's particularly useful if we have these sort of plateaus,

where we have lots and lots of solutions of equal value.

We need to be able to move around them without sort of just going back and

forth and back and forth. >> And

tabu lists can really prevent cycling. >> Exactly.
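A minimal tabu-search sketch (again my own toy, with invented names) shows both effects at once: the search always moves, even uphill, and the tabu list stops it stepping straight back:

```python
from collections import deque

def tabu_search(start, neighbours, score, tenure=3, steps=30):
    """Always move to the best non-tabu neighbour, even if it is worse:
    recently visited points are tabu, so we cannot cycle straight back."""
    current, best = start, start
    tabu = deque([start], maxlen=tenure)          # short memory of recent points
    for _ in range(steps):
        allowed = [n for n in neighbours(current) if n not in tabu]
        if not allowed:
            break                                 # everywhere nearby is tabu
        current = min(allowed, key=score)         # may climb out of a minimum
        tabu.append(current)
        if score(current) < score(best):
            best = current
    return best

# Start in the local minimum at point 1; the tabu list forces us out of it.
landscape = [3, 1, 2, 4, 2, 0, 2, 4]
neighbours = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
best = tabu_search(1, neighbours, landscape.__getitem__)
print(best, landscape[best])                      # 5 0
```

On a plateau of equal-valued points, the same memory keeps the search sweeping forward across the plateau instead of oscillating between two neighbours.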

>> So we can go back to talking

about constraints. >> Exactly, so we said already that it was

difficult to find the right penalty values to use for those constraints.

So Lagrange multipliers will give a kind of principled way of looking at what's

the right penalty for each constraint. >> So

the idea of this method is when we have an objective function to optimize,

we do not just optimize this objective function.

We combine it with the penalties associated with each of the constraints in

the problem as well.

So we have a new landscape, a new objective function to optimize.

>> Exactly, and

the key thing is that whenever we find a constraint is difficult to solve,

its multiplier, or its penalty, will get higher and higher.

Which will change the landscape to make it more and

more unattractive not to satisfy that constraint.

And so we'll end up being forced to satisfy that constraint.
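A rough sketch of that multiplier-update loop (a toy of my own, not the course's algorithm; `penalty_search` and the example problem are invented): optimise the penalised objective, then raise the multiplier wherever the constraint is still violated:

```python
def penalty_search(domain, objective, violation, rounds=20, step=1.0):
    """Alternate two moves: optimise objective + lam * violation, then raise
    the multiplier lam while the constraint is still being violated."""
    lam = 0.0                                     # start with no penalty at all
    for _ in range(rounds):
        x = min(domain, key=lambda v: objective(v) + lam * violation(v))
        if violation(x) == 0:
            return x, lam                         # constraint satisfied: done
        lam += step * violation(x)                # violated: make it costlier
    return x, lam

# Toy problem: maximise x (minimise -x) over 0..10, subject to x <= 6.
objective = lambda x: -x
violation = lambda x: max(0, x - 6)               # how badly x <= 6 is broken
x, lam = penalty_search(range(11), objective, violation)
print(x, lam)                                     # 6 4.0
```

No penalty was chosen in advance: the first round ignores the constraint and picks x = 10, the violation pushes the multiplier up, and the reshaped landscape then makes the feasible point x = 6 the optimum.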

>> So in some sense,

we do not have to set the penalties in advance.

But we learn the importance of each of the constraints

gradually during search. >> Absolutely right, then the very last

thing is sort of a complete jump back to things we've seen before.

So now we look at large neighbourhood search, where,

in a local search,

we're going to look at a huge number of possibilities in our neighbourhood.

And that means exploring the neighbourhood is itself a problem that we've already looked at before.

>> Wow, previously,

we were looking at small neighbourhoods.

So that we could afford to examine each of the neighbouring points, one by one,

look at the objective value, and then select the best one, right?

But when we have a large neighbourhood, there are too many search points.

We can't afford to do that, so

how can we find the best neighbour? >> Well,

this is a discrete optimization problem, which this course is all about.

So we can use all of those complete methods we've already looked at for

solving this large neighbourhood search problem.
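The destroy-and-repair shape of this can be sketched in miniature (my own toy, not the course's code; here the "complete method" is a plain exhaustive search over the freed variables):

```python
import itertools
import random

def lns(cost, n, k=3, rounds=30, rng=random.Random(2)):
    """Large neighbourhood search: repeatedly 'destroy' k variables, then use
    a complete (exhaustive) search to find the best way to repair them."""
    sol = [0] * n                                 # any starting assignment
    for _ in range(rounds):
        free = rng.sample(range(n), k)            # the large neighbourhood is
        best = sol                                # all 2**k repairs of `free`
        for values in itertools.product((0, 1), repeat=k):
            cand = sol[:]
            for var, val in zip(free, values):
                cand[var] = val
            if cost(cand) < cost(best):
                best = cand
        sol = best                                # keep the best repair found
    return sol

# Toy cost: we want the bit pattern 1,0,1,0,...; each mismatched bit costs 1.
target = [i % 2 == 0 for i in range(8)]
cost = lambda a: sum(int(bool(b) != t) for b, t in zip(a, target))
sol = lns(cost, 8)
print(cost(sol))                                  # 0
```

In a real solver the exhaustive inner loop would be replaced by a complete method such as constraint programming or MIP over the freed variables, which is exactly what keeps it tractable when the neighbourhood is huge.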

And in fact, that combination of using a complete method on a large

neighbourhood is a very, very powerful approach to using complete methods for

problems where they don't scale. >> Exactly, and

we can also see the power of such a method when it's combined with restarts.

>> Absolutely, so

restarts in large neighbourhood search give us a very, very powerful way of tackling

all kinds of problems. >> How about our final workshop?

>> So our final workshop, and well,

you can tell us about the story, Jimmy. >> Yeah, it's about Wu kong as well,

he has to fan off the flames from the Flaming Mountains.

But then, there is a village that's nearby, so he has to ask for

help from Ne Zha to send his magical birds up to go and inform the villagers.

So that they could get prepared to fight off possible fires that fly over

to the villages.

But it turns out that this has become a very interesting, complex,

and difficult routing problem. >> Absolutely, and so

routing problems are one of the most common uses of large neighbourhood search,

one of the most powerful uses of large neighbourhood search.

So you're going to be building a routing problem,

a kind of interesting routing problem.

And then we want you to explore how to best search that, making use of restarts,

large neighbourhood search, search strategies, all of the tricks that

we've seen through this course. >> For a change, our learners will have to

build a model again. >> Exactly, well, it's the last workshop,

we have to make you work.

And that brings us to the end of- >> Peter,

not the end yet. >> No?

>> I realize that today,

you're wearing a different t-shirt with a character that has never

appeared in one of our modules. >> So what can we

tell our viewers about this character? >> Well, they will see.

>> [LAUGH]