
The next problem that we're going to attack using the dynamic programming

technique is the so-called chain matrix multiplication problem.

The input to this problem consists of n matrices,

which we denote here by A0, A1, and so on up to An-1.

And what we need to find is the minimum cost of multiplying these n matrices.

We will now clarify what we mean by saying the cost.

Okay, first of all, recall that in order to be able to multiply

two matrices A and B, we need to ensure that the number of columns

of the matrix A is equal to the number of rows of the matrix B.

This basically means that the number of columns of the matrix Ai

should be the same as the number of rows of the matrix Ai + 1, right?

So this allows us to denote the sizes of our n

matrices by m0 times m1, m1 times m2, and so on.

So basically, the size of the matrix Ai is

equal to mi times mi + 1, okay?

Then recall that even for square matrices,

matrix multiplication in general is not commutative.

Namely, there are matrices A and B such that A times B is not equal to B times A.

This means actually that even if our matrices were square,

we would not be able to actually swap the order of matrices.

The same holds for four matrices.

So if we need to multiply A by B by C by D,

then there are at least two possible orders.

We may first want to multiply A by B, then C by D, then multiply the two results,

or we may first multiply B by C then multiply A by the result.

And finally multiply the result by D.

Okay, so the cost of multiplying two matrices of dimensions p by q and

q by r, we define to be equal to pqr.

This actually reflects the following fact.

So the resulting matrix has dimensions p by r, right?

So it has p by r cells and to compute the content of each cell, we need to multiply

a row of the first matrix of size q by a column of the second matrix of size q.

So it definitely can be done in time big O of q.

So definitely we can compute the product in time big O of pqr.

In fact faster algorithms are known for computing the product of two matrices,

but for the time being let's just assume that the cost of multiplying two matrices

p by q and q by r is just equal to pqr even without any constant.
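To see where the pqr cost comes from, here is a minimal sketch (my own illustration, not from the lecture) of the schoolbook algorithm: three nested loops, one scalar multiplication per innermost iteration, so p times q times r multiplications in total.

```python
def multiply(A, B):
    """Multiply a p-by-q matrix A by a q-by-r matrix B, lists of lists."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q  # columns of A must equal rows of B
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            # each cell is a dot product of a length-q row and a length-q column
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```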

And before designing an algorithm, let me illustrate that even for

small data sets, for example for four matrices,

the order of multiplication actually matters.

So consider four matrices, A, B, C, and D of the following dimension.

So 50 by 20, 20 by 1, 1 by 10, and 10 by 100, and in the first

case assume that we are going to multiply them as follows.

So we first multiply B by C then multiply by D and

finally we multiply A by the result.

So after the first multiplication, namely B by C,

we get a matrix BC of dimensions 20 by 10, and

we pay for it 20 times 1 times 10, right?

Our next operation is multiplying this result by D.

Then what we get is a matrix of dimensions 20 by 100.

And we pay for it 20 times 10 times 100, okay?

Then finally, we multiply A by the resulting matrix.

And we pay for it 50 times 20 times 100.

As a result, the total cost of this multiplication is roughly 120,000, okay.
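The arithmetic for this order can be checked in a few lines (a sketch using the dimension numbers from the example; each term is p times q times r for one multiplication):

```python
# Order A * ((B * C) * D), costing p*q*r per multiplication:
bc = 20 * 1 * 10           # B (20x1) times C (1x10) -> BC is 20x10, cost 200
bcd = 20 * 10 * 100        # BC (20x10) times D (10x100) -> 20x100, cost 20,000
a_bcd = 50 * 20 * 100      # A (50x20) times BCD (20x100), cost 100,000
total = bc + bcd + a_bcd
print(total)               # prints 120200
```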

Now let me show you that with some other order of multiplication of the same

four matrices, the cost is much lower.

Okay, so assume that we first multiply A by B, then C by D, and then multiply the two results, okay?

So for the first multiplication, A by B, we pay actually 50 times 20 times 1.

And you see that we have this very thin matrix.

Then we multiply C by D, and we pay for this 1 times 10 times 100.

And again we see that here we get a matrix whose height is very low.

And finally we multiply these two resulting matrices,

and we pay 50 times 1 times 100, so

the resulting cost in this case is just 7,000,

which is much lower than in the previous case.
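In fact, every possible order can be explored by a brute-force recursion over the position of the last split; here is a minimal sketch (the function name and the dims list are mine, not from the lecture), where dims[i] and dims[i+1] give the size of matrix Ai as in the notation above:

```python
def min_cost(dims, i, j):
    """Minimum cost of multiplying the chain A_i * ... * A_{j-1},
    where matrix A_k has dimensions dims[k] by dims[k+1]."""
    if j - i == 1:
        return 0  # a single matrix needs no multiplication
    # try every split point k: (A_i..A_{k-1}) times (A_k..A_{j-1})
    return min(min_cost(dims, i, k) + min_cost(dims, k, j)
               + dims[i] * dims[k] * dims[j]
               for k in range(i + 1, j))

dims = [50, 20, 1, 10, 100]   # A: 50x20, B: 20x1, C: 1x10, D: 10x100
print(min_cost(dims, 0, 4))   # prints 7000
```

This exhaustive recursion is exponential in the number of matrices; the point of the dynamic programming technique, designed next, is to avoid recomputing the same subchains.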