One night, the curious Zhuge Liang decided to

study the magical tablet to figure out what made it so powerful.

In doing so, he accidentally broke the tablet and this triggered

the emergence of a powerful time and space tunnel, causing chaos.

In the tunnel, Zhuge Liang met

the celestial old man who had originally given him the tablet.

The celestial old man encouraged Zhuge Liang to fix the chaos he had created and

gifted him a magical dragon that he could call on when confronting difficult problems.

At another point in the space and time tunnel,

Zhuge Liang found himself back in ancient times,

only to discover that the sky had partially

broken, causing chaos such as a curse spreading illness,

an extreme heat wave, and an invasion of monsters.

An ancient goddess, the merciful Nuwa,

decided to save people from this disaster by

using magic stones to patch and stabilize the sky.

The original sky was composed of a grid like structure

of 81 magic stones of nine different types.

There were nine distinct stones in every row,

and nine in every column.

This rule also applied to smaller three by three grids.

Nuwa wanted to know where to put the stones in order to satisfy these rules.

But she found it too difficult to work out and thus sought help from Zhuge Liang.

Zhuge Liang summoned the dragon of dimensions to seek assistance from the professors.

So the Nine Heavens are broken and Nuwa

needs to fix them and she's gone to Zhuge Liang for help.

So she can use her magical powers to patch

the heavens by moving stones that have fallen back up into their place.

And there are nine different kinds of magical stone,

and we'll use the numbers one to nine to represent those.

Now, there's a partially broken sky.

Some of the stones are still in place,

but she's going to need to put stones in

all the places which are blank where the stones have fallen out.

But there are some rules about balancing the heavens,

so we need to make sure that the heavens are balanced horizontally -

that is, in each horizontal line there is one of each stone one to nine.

Similarly, they need to be balanced vertically so in each column there needs to be

one of each stone one to nine and

each of the nine individual heavens needs to be balanced.

And in each of the three by three subsquares,

we need the numbers one to nine.

So in the end, we have this problem: there are 81 variables,

one for each of the cells, and each cell

can take the values one to nine, representing which stone we're talking about,

and we have these constraints.

Each row, column and each three by three subsquare contains

exactly the numbers one to nine, no repeats, each number is used.

And that's an assignment problem.

So we can build a MiniZinc model to do that,

of course. Here are our decision variables: for each cell,

which stone goes in there.

Then there'll be some fixed stones, which are the constraints here,

and then we've got our constraints saying, well, for each row,

the cells are all different, so they'll take the values one to nine, and the same for each column

and for each three by three subsquare.
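The full MiniZinc model isn't reproduced in this transcript, but the constraints it expresses are easy to state in code. As an illustration, here is a small Python checker (a hypothetical helper, not part of the course materials) that tests the same three rules on a completed nine by nine grid:

```python
# A Python sketch of the constraints the MiniZinc model expresses:
# every row, column, and 3x3 subsquare of a 9x9 grid must contain
# exactly the values 1..9.

def is_balanced(grid):
    """grid: 9x9 list of lists holding values 1..9."""
    target = set(range(1, 10))
    for i in range(9):
        if set(grid[i]) != target:                    # row i all-different
            return False
        if {grid[r][i] for r in range(9)} != target:  # column i all-different
            return False
    for br in range(0, 9, 3):                         # each 3x3 subsquare
        for bc in range(0, 9, 3):
            block = {grid[br + dr][bc + dc]
                     for dr in range(3) for dc in range(3)}
            if block != target:
                return False
    return True
```

In the MiniZinc model, each of these three checks corresponds to an `all_different` constraint over a row, a column, or a subsquare.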

So let's step back from that for

a moment, because now we're trying to see how we could solve this problem,

how a solver will actually tackle it.

And first of all, we need to understand why constraint solving is hard.

So we'll look at some simple problems here,

and I want you, before I give you the

answer, to think about what a solution to this problem would be.

So this system of constraints, X equals five, Y equals six,

you should have recognised as satisfiable,

well, because it's really just a solution written down:

X takes the value five and Y takes the value six.

What about this system of constraints here: X equals three,

Y equals four, X equals five?

Well, it should be clear that this has no solution because

we can't have X being equal to three and five at the same time.

So that's an unsatisfiable problem.

And here's another set of equations.

This one turns out to have a very simple form.

Basically, we can define Y in terms of X, then

Z in terms of Y and X, and U in terms of Y and Z.

So if we just guess the value for X like zero,

then we can compute a value for Y which is two,

which will solve the first equation.

Then we can compute a value of Z which is four, which will solve

the second equation, and compute a value for U which is eight,

that will solve the last equation.
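The transcript doesn't show the actual equations, so as an illustration assume the hypothetical triangular system Y = X + 2, Z = X + Y + 2, U = Y + Z + 2, which reproduces the values quoted above. Guessing X and substituting forward solves each equation in turn:

```python
# Hypothetical triangular system matching the values in the text:
#   Y = X + 2,  Z = X + Y + 2,  U = Y + Z + 2
# Guess X, then compute each later variable by forward substitution.
X = 0          # a guess for X
Y = X + 2      # solves the first equation:  Y = 2
Z = X + Y + 2  # solves the second equation: Z = 4
U = Y + Z + 2  # solves the third equation:  U = 8
print(X, Y, Z, U)  # prints: 0 2 4 8
```

Because each variable depends only on earlier ones, no backtracking is ever needed: any guess for X extends to a full solution.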

Here's another system, which no longer has that nice behaviour,

because we can define Y in terms of X, and Z in terms of Y and X,

but then X is also constrained in terms of Z.

So now, we need to do simultaneous equations solving,

if you remember that from distant high school days,

and in fact, there is a solution here,

you can see basically if we substitute for Y

in here, we get that Z is equal to four, so Z must be equal to four.

Once we have that, we can compute that X has to be five from here,

and then we compute that Y is seven from here.

Alright, we can get a bit harder, so here now we don't have just simultaneous equations,

we've got inequalities as well,

and this is getting to be trickier to find a solution.

And in fact, there's none.

If we go through the same sort of stages as before,

if we substitute for Y in this equation here,

Z is four, and then we look at this,

and this tells us that X must be greater than or equal to five, obviously,

and this tells us that Y has to be less than or equal to three. But then, because

Y is X plus two,

we know that Y is greater than or equal to seven, so in fact, there's no solution.

Okay, so why is constraint solving hard?

The problem is conjunction.

The problem is that I need to solve all of these constraints at the same time.

So solving any individual one of them is easy,

but when I put them all together as a conjunction, it becomes hard.

So constraint programming is an approach to solving discrete optimization problems,

and it's made up of two components: propagation and search.

And basically, we're going to overcome the difficulty of

conjunction using propagation. So what is propagation?

It's a very weak inference method, but the idea is we're going to

reason about each constraint on its own, separately.

And then we're going to communicate the inferences between constraints via the variables.

And the nice thing about this is each constraint

sort of lives by itself, it's an inference engine.

We only have to understand that constraint by itself, nothing else,

and then each constraint can talk to the others via the variables.

We're going to have to add to that search,

which is we're going to have to guess bits of the solution, and that's true of

basically any discrete optimization

solving approach: we're going to have to guess at some point.

And in order to make this into a viable technology,

we're going to add a lot of engineering to make the inference very fast.

And then we might add other things like learning to

remember what we already did and not do it again.

So let's have a think about finite domain propagation and see how it works.

And in fact, our problem of fixing the heavens,

which some of you may have recognised as Sudoku as well,

exactly exposes what happens in a finite domain propagation engine.

So if we look at this partially filled in sky,

we can reason what goes in the green cell here?

So we can look at that column and we know the rule about the column

says that every number has to appear at least once,

and so the only possibility missing is three.
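This single-cell deduction is easy to mechanise: the candidates for a blank cell are whatever values from one to nine don't already appear in its row, column, or three by three subsquare. A minimal Python sketch (a hypothetical helper, with 0 marking a blank cell):

```python
# Candidate values for cell (r, c): everything in 1..9 not already
# used in that cell's row, column, or 3x3 subsquare. 0 marks a blank.

def candidates(grid, r, c):
    used = set(grid[r])                            # values in the row
    used |= {grid[i][c] for i in range(9)}         # values in the column
    br, bc = 3 * (r // 3), 3 * (c // 3)            # top-left of subsquare
    used |= {grid[br + dr][bc + dc]
             for dr in range(3) for dc in range(3)}
    return set(range(1, 10)) - used
```

When a column already holds eight of the nine values, `candidates` returns a single value, which is exactly the "only possibility missing is three" deduction above.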

So we can do other kinds of reasoning.

So if we look at this three by three heaven over here,

we can reason the other way around: we can say, okay, we

know that three has to occur in that heaven at some point.

And let's look at which places it can occur and we can see that this three here,

wipes out all the top row,

and this three here wipes out

the third row which means that the only place that three can occur is there.

So that three has to be there.

So we are basically reasoning about

each of the different possible values that could occur.

So we can do more complicated reasoning,

so let's look at this row here and intersect it with a column.

So if we think about that green cell,

what could possibly go there?

Well, if we look at the row down the bottom,

the only possible values left over are one, two, and three.

And if we look at the column, well, there's a three in there.

So that leaves only one and two leftover as the only possible values there.

We can do similar reasoning here, again the only possible values here are one, two,

and three because of the row,

and because of the column, that only leaves one and two.

And this allows us to do a more complicated kind of reasoning,

so we look at this row here, where everything is

almost filled in, and ask what values could go here.

Well, we know that there can only be one, two, and three, but in fact if we look at

these two entries over here we see that one and two must occur in these two positions.

So we know that one and two are taking up those two positions,

so the only thing that can occur in that green cell is three.
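This "two values must occupy two cells" step is often called a naked pair. A minimal sketch of that elimination over the cells of one row (hypothetical representation: a dict mapping cell index to its candidate set):

```python
# Naked-pair elimination: if two cells in a unit share the same
# two-value candidate set, those two values can be removed from
# every other cell in that unit.

def eliminate_naked_pairs(domains):
    cells = list(domains)
    for a in cells:
        for b in cells:
            if a < b and len(domains[a]) == 2 and domains[a] == domains[b]:
                for c in cells:
                    if c not in (a, b):
                        domains[c] -= domains[a]   # strip the pair's values
    return domains
```

For the row above: two cells with candidates {1, 2} force the third cell, with candidates {1, 2, 3}, down to just {3}.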

So you can think about whether there is any further reasoning we can do from this.

And there's actually quite a few,

and we can think about representing all the possible values left over in every cell.

In fact, that's exactly what the constraint solver is going to do:

it's going to keep track of, for every one of the decisions,

what the possible values it can take are.

But we haven't solved our problem.

So we now need to make a choice.

Now, one way to do this is to

choose one of the cells with the smallest number of possible values.

So that would be that cell there,

right, it only has two possible values,

that's one of the two cells which only has two possible values.

So this means that we're making the least amount of choice.

So now we're going to guess one of those values,

let's say we guess one.
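The variable choice just described, pick a cell with the fewest remaining candidates and guess one of its values, is often called the first-fail heuristic. A sketch, again with domains as a dict from cell to candidate set (hypothetical helper):

```python
# First-fail search step: pick an unfixed cell with the fewest
# remaining candidates, then guess its smallest value.

def choose_and_guess(domains):
    """Return (cell, guessed_value), or None if everything is fixed."""
    unfixed = [c for c, d in domains.items() if len(d) > 1]
    if not unfixed:
        return None                              # nothing left to guess
    cell = min(unfixed, key=lambda c: len(domains[c]))
    return cell, min(domains[cell])              # guess the least value
```

Choosing the smallest domain means the fewest alternatives to try if the guess turns out to be wrong.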

Once we do that, we're going to get propagation, which is

going to change these other cells.

So fixing one of course fixes this to be two,

and then you can see we can remove one from these entries up here,

we can remove two from these entries here, and so on.

So you can see that fixing one removes some values here, it also

fixes this two which causes more removal of values.

So we have this propagation of information.

So that leads us on. Now we have to search again, so we

pick another variable with the fewest possible values, let's say this one here,

and we'll set it just to its least possible value for a moment

and we can see we get more propagation here; just the direct propagation

removes the value four from these cells which were affected by it.

And we can keep going,

we can pick the next cell here,

pick a value we get propagation here,

and we get other kinds of propagation, further propagation.

So here, once we know it's one and four here, we know they can't be here, for example.

And so we can see that this reasoning keeps on going.

Okay, so we can also reason up here once we pick that five, we know that two,

six, and nine must be in these three cells, so they

can't be in any of these cells around here, for example.

And if we continue that after

23 guesses and reasoning steps we actually reach a solution.

Actually, we never make a mistake in this example.

So that's not bad really,

for 81 possible cells to fill in,

although some of them are filled in at the beginning,

we make 23 guesses,

none of them ever wrong, and we get to a solution. That's very, very quick.

So in summary, what's happening in

a constraint programming solver when it solves this kind of problem?

We examine each constraint in turn, and

the constraints, remember, were these all-different constraints on a row,

a column, or a three by three subsquare.

We reason about the possible values of

the variables in that constraint, and we try to reduce those possible values.

And we just keep doing this until no further reduction is possible.

So this is the propagation loop.

That's why it's called propagation: it's propagating values from one constraint to another.

Then we'll get to the point where there's no further reduction we can do,

and we have to search: we make a decision, which is

really just a guess, and then we propagate again.

Now, three things can happen really.

It can fail: we find that there is no solution after we make that guess,

in which case we have to backtrack, undo that last guess, and try a different value.

It can propagate until nothing more changes while some variables are still unfixed,

which means we have to keep going with the search; or it can propagate so that

all the variables are fixed.

In this case, we've actually solved the problem and we can print out the result.
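The whole loop, propagate to a fixpoint, then guess, then backtrack on failure, can be sketched for Sudoku in a few dozen lines of Python. This is an illustrative sketch, not the course's solver: propagation here is only the simplest rule (a fixed cell removes its value from the domains of its row, column, and subsquare peers), rather than the full all-different propagation a real solver uses.

```python
# Propagate-and-search for Sudoku. Cells are numbered 0..80;
# domains maps each cell to its set of candidate values.

def peers(cell):
    """Cells sharing a row, column, or 3x3 subsquare with `cell`."""
    r, c = divmod(cell, 9)
    br, bc = 3 * (r // 3), 3 * (c // 3)
    ps = {r * 9 + j for j in range(9)} | {i * 9 + c for i in range(9)}
    ps |= {(br + dr) * 9 + (bc + dc) for dr in range(3) for dc in range(3)}
    return ps - {cell}

def propagate(domains):
    """Remove fixed values from peer domains until nothing changes.
    Returns None on failure (some domain became empty)."""
    changed = True
    while changed:
        changed = False
        for cell, dom in domains.items():
            if len(dom) == 1:
                v = next(iter(dom))
                for p in peers(cell):
                    if v in domains[p]:
                        domains[p] = domains[p] - {v}
                        if not domains[p]:
                            return None          # failure: backtrack
                        changed = True
    return domains

def search(domains):
    """Propagate; if unsolved, guess a value for a smallest-domain
    cell and recurse, undoing the guess on failure."""
    domains = propagate(dict(domains))
    if domains is None:
        return None                              # this branch failed
    if all(len(d) == 1 for d in domains.values()):
        return domains                           # all variables fixed
    cell = min((c for c in domains if len(domains[c]) > 1),
               key=lambda c: len(domains[c]))
    for v in sorted(domains[cell]):              # try each guess in turn
        trial = dict(domains)
        trial[cell] = {v}
        result = search(trial)
        if result is not None:
            return result
    return None
```

Note how the three outcomes above appear directly in the code: `propagate` returning None is failure, unfixed variables after propagation trigger another guess, and all-singleton domains means the problem is solved.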

So we're going to look into constraint programming in detail in this module,

and we're going to start by looking at domains and valuations,

then constraints and propagators, propagation engines,

then how we do optimisation using this technology, advanced search strategies,

and then we will look inside how we build some of the complex global constraints.

And one of the powers of constraint programming is that we

can have very expressive global constraints,

whose power we've already seen in modelling, and this is how they work internally.