Princess Iron Fan took out her palm leaf fan to blow away Wu kong and Zhuge Liang.
After being defeated, Wu kong returned and tried to trick the princess,
in order to obtain the fan.
With a magical spell, he shrank himself and Zhuge Liang,
so that they were tiny enough to hide in the Princess' tea.
He planned to get into the Princess' intestines
to force her to give in and agree to lend them the fan.
Later on, they entered the Princess' esophagus as she drank her tea.
They needed to get further down into her stomach
by finding the deepest spot in her esophagus.
But they found there were two detection lines within.
If they stepped on these lines,
the Princess would be alerted and they would be ejected by her reflexive power.
In order to proceed carefully,
Zhuge Liang summoned the Dragon of Dimensions for assistance from the professors.
After his defeat by Princess Iron Fan,
Wu kong thought, "Well, let's try a different approach."
So, Princess Iron Fan had dinner,
and she was relaxing and having tea.
And he miniaturized himself and Zhuge Liang,
and hid the two of them in the tea.
And they're inside her body now.
So, first, he is trying
to find his way to her stomach.
All right, so he's currently in the esophagus and of course,
Princess Iron Fan has some magical defenses.
There are actually parts of her esophagus which are lined with alarms,
and if he steps on those untouchable lines, then there will be all kinds of problems.
So, he has to avoid those untouchable lines.
So, this is the stomach problem.
Wu kong and Zhuge Liang are inside the esophagus,
and they are searching for the entrance to the stomach,
which is at the global minimum.
And they have to avoid these untouchable lines,
which are the points where |x| = |y|, that is, where the absolute value of x equals the absolute value of y.
So, we're finding a global minimum in a complex, non-convex landscape.
So, we have a look at the landscape of the esophagus.
It's this complex thing, non-convex,
lots of local minima, and in fact the global minimum is here.
All right, so if we look at it from above, so of course,
the darker means deeper and this is the map from above.
And that is where the global minimum is that he has to get to.
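To make the setup concrete, here is a small Python sketch of a problem like this one. The bumpy landscape function is a made-up stand-in, since the actual surface from the slides is not given, and both names, objective and feasible, are my own:

```python
import math

# A hypothetical stand-in for the esophagus landscape: non-convex,
# with many local minima. Any bumpy function will do for illustration.
def objective(x: int, y: int) -> float:
    return (10.0 * math.sin(x / 2.0) * math.cos(y / 2.0)
            + 0.05 * ((x - 3) ** 2 + (y + 4) ** 2))

# The untouchable lines: stepping where |x| == |y| alerts the Princess,
# so those points are infeasible.
def feasible(x: int, y: int) -> bool:
    return abs(x) != abs(y)
```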
So, in any local search problem,
there are lots of ways to be trapped.
Basically, we can be trapped in a local minimum, where all the moves are increasing.
So, there is no way of moving which won't make things worse.
Or we can be trapped in a weak local minimum, where no move is decreasing.
So, this is where some moves are equal, but all the other moves go up.
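One way to state these two trap conditions in code, as a small sketch; the function name classify and the string labels are just for illustration:

```python
def classify(value, neighbour_values):
    """Classify a point from its own objective value and the values
    of all its neighbours, following the two trap conditions above."""
    if all(v > value for v in neighbour_values):
        return "local minimum"        # every move is increasing
    if all(v >= value for v in neighbour_values):
        return "weak local minimum"   # no move decreases, some stay equal
    return "not a minimum"            # some downhill move still exists
```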
All right, so we need to be able to escape
from these, and there are a number of ways we can do that.
We'll look at three: restarts,
simulated annealing, and tabu lists.
So, let us look at restarts.
So, this is a very simple approach.
Whenever you get stuck in a local minimum,
you just restart the search from the beginning,
from a new random point.
So, the pseudo code looks a bit like this.
We'll just start with a random initial valuation and we'll keep going,
basically doing a greedy local search inside here.
So, we will start with the initial random valuation,
we'll do a greedy local search.
If we find a better solution than we saw before,
we'll keep it, and then we will just restart and do another.
So, basically we're just doing greedy local search, which finds a local minimum,
and hoping that one of the local minima we find is actually the global minimum.
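A minimal Python sketch of that pseudocode, assuming a user-supplied objective, a neighbours function, and a random_point generator. All the names here, and the toy landscape at the bottom, are mine, not the course's:

```python
import random

def greedy_descent(point, objective, neighbours):
    """Steepest descent: repeatedly take the best downhill move
    until no neighbour improves the objective (a local minimum)."""
    value = objective(point)
    while True:
        best_point, best_value = point, value
        for n in neighbours(point):
            v = objective(n)
            if v < best_value:
                best_point, best_value = n, v
        if best_point == point:          # no downhill move: a local minimum
            return point, value
        point, value = best_point, best_value

def restart_search(objective, neighbours, random_point, restarts=100):
    """Restart local search: run greedy descent from random starting
    points and keep the best local minimum seen so far."""
    best_point, best_value = None, float("inf")
    for _ in range(restarts):
        point, value = greedy_descent(random_point(), objective, neighbours)
        if value < best_value:           # a better local minimum: keep it
            best_point, best_value = point, value
    return best_point, best_value

# Tiny usage example on a toy 1-D landscape over the integers 0..99:
objective = lambda x: (x * x) % 23
neighbours = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 99]
print(restart_search(objective, neighbours, lambda: random.randint(0, 99)))
```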
All right, so let us do a restart with a steepest descent search.
So, first of all we pick a random point and we just
calculate the objective of that point so we just have a solution.
So, that's our current best solution.
So, then we're going to start a greedy local search.
So, we're going to choose a position to start.
We're going to look at moves where we
change one or both variables by plus or minus one,
which is basically looking at the square of points around the current point.
And we're going to choose the best downhill move because we are doing steepest descent.
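That move set could be sketched like this in Python. The untouchable-line filter follows the |x| = |y| rule from the story, and this function could serve as the neighbours argument in the restart sketch above:

```python
from itertools import product

def neighbours(point):
    """Moves that change one or both variables by plus or minus one:
    the square of eight points around the current point.
    Moves that land on an untouchable line (|x| == |y|) are skipped."""
    x, y = point
    return [(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0) and abs(x + dx) != abs(y + dy)]
```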
So, let us say we move here.
We're going to evaluate that, we're going to look at its neighbors,
we're going to find that this neighbor is smaller, so we will move there.
Then, we will get here,
we'll evaluate that. We'll look at its neighbors.
None of them are better, so this is actually a local minimum.
We're going to compare that with the best objective we had so far,
which was this, from the point here,
and we're going to find it's better,
and so we're going to replace.
That is our new best solution that we have found so far, the latest local minimum.
All right, then we're going to restart from a new point.
So, we will go over to here, evaluate that,
look at its neighbors, find the best place to move.
Evaluate that, look at its neighbors,
find the best place to move, look at that, evaluate its neighbors.
Again, there is no better neighbor,
so this is a local minimum.
We're going to compare that with
the previous local minimum we had and in this case, it is no better.
So, we're just going to keep the best one that we had already.
All right, we're going to restart again.
We'll jump over here, a new starting point.
We'll look at its neighbors, move there.
Look at its neighbors, move there.
Look at its neighbors, move there.
And again, we found another local minima.
We'll compare that. All right,
it is actually better than what we've seen before.
And in this case, it actually is the global minimum.
All right, in practice,
this time we found the optimal solution,
but in practice we're not guaranteed of doing that.
We're basically going to keep on trying until our resources are exhausted,
and we are just going to keep
the best solution we have found so far. And that is what we'll do.
Remember, with a local search, we can basically never guarantee
that it always finds the optimal solution.
All right, in stochastic search,
we're basically doing this kind of thing.
We're basically moving to a point,
going downhill until we find a local minimum,
and then jumping to another point,
and then going downhill from there to find another local minimum, and keep going.
So, basically we're combining the steepest descent with random jumps.
All right, so restart local search
relies on the hope that we'll find
different local minima by starting from different starting points.
It is a very robust approach, but it actually spends
a great deal of time descending to local minima.
And many times, if we have a large area which is convex,
we can spend a lot of time finding exactly the same local minimum again.
So, we expect, of course,
that the good solutions are near local minima.
We know that, actually, the best solution is a local minimum,
or at least a weak local minimum.
Now, there are other variations on the restart local search.
We can do iterated local search, where we don't just take
an arbitrary random point to restart,
but instead try to start somewhere close to our previous local minimum.
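A rough sketch of that idea, reusing the greedy_descent function from the restart sketch above and the 2-D points from the neighbours sketch. The perturbation radius of 3 and all the names here are arbitrary choices of mine:

```python
import random

def perturb(point, radius=3):
    """Jump to a nearby point rather than a completely random one."""
    x, y = point
    return (x + random.randint(-radius, radius),
            y + random.randint(-radius, radius))

def iterated_local_search(start, objective, neighbours, descend, iterations=100):
    """Iterated local search: after each descent, restart from a small
    perturbation of the best local minimum found so far."""
    best_point, best_value = descend(start, objective, neighbours)
    for _ in range(iterations):
        point, value = descend(perturb(best_point), objective, neighbours)
        if value < best_value:
            best_point, best_value = point, value
    return best_point, best_value
```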
There are lots of variations, but restart local search is a basic one which is very useful.