How does one estimate a function like that?

What one does is postulate that there are three factors in the regression function: this is factor number one, volatility is factor number two, and this is basically the intercept.

And then one runs a regression. One records, over a history of trades, how much extra cost you ended up paying for each particular trade. So Q_t is a particular trade that was executed at time t, or trade t. P_t Q_t is the price that you should have gotten, and c(Q_t) is the extra price that you had to pay, or the extra revenue that you lost. You divide that, and that gives you one observation of this regression function. And then you regress to compute what a_1 is going to be, what a_2 is going to be, and what a_3 is going to be.
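As a rough sketch of that regression step (the function and variable names, and the linear-in-factors form, are my assumptions about the setup the lecture describes), the coefficients can be fit by ordinary least squares:

```python
import numpy as np

def fit_cost_coefficients(cost_pct, size_frac_adv, sigma, beta=1.0):
    """Fit cost_pct ~ a1 * (Q/ADV)**beta + a2 * sigma + a3 by least squares.

    cost_pct      : observed percentage cost, one entry per historical trade
    size_frac_adv : trade size as a fraction of average daily volume
    sigma         : volatility of the traded asset at the time of the trade
    beta          : exponent on the size factor (taken as known here)
    """
    X = np.column_stack([
        size_frac_adv ** beta,       # factor one: normalized trade size
        sigma,                       # factor two: volatility
        np.ones_like(cost_pct),      # intercept
    ])
    coeffs, *_ = np.linalg.lstsq(X, cost_pct, rcond=None)
    return coeffs  # [a1, a2, a3]
```

On noiseless synthetic data this recovers the coefficients exactly; on real trade records there would of course be residual noise around the fitted function.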

And this is what was proposed by Kissell and Glantz in the mid-2000s, and it has sort of become the standard function that people use for trading costs. There was another function, introduced by Loeb before, which was slightly different.

In his model, the cost versus volume initially grew linearly, and then it grew with a power. So the cost was some alpha_1 times Q up to some Q_max, and after Q_max it was some alpha_2 times Q to the power 1 + beta. And beta was estimated to be approximately 0.65. So this is the cost as a function of Q, and this was the Loeb function. The Loeb function was suggested in 1983, and it was relatively simple. It did not take into consideration the volatility, and it did not take into consideration the average daily volume. But it was the inspiration that led to other liquidity functions later on, in particular the Kissell-Glantz function. And we will focus on the Kissell-Glantz function in this module.
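A minimal sketch of that piecewise shape (the function and parameter names are my own; choosing alpha_2 = alpha_1 * Q_max**(-beta) makes the two pieces meet at Q_max, which is one convenient convention, not something the lecture specifies):

```python
def loeb_cost(q, alpha1, alpha2, q_max, beta=0.65):
    """Loeb-style cost: linear up to q_max, power growth beyond it.

    cost(q) = alpha1 * q              for q <= q_max
            = alpha2 * q**(1 + beta)  for q >  q_max
    """
    if q <= q_max:
        return alpha1 * q
    return alpha2 * q ** (1.0 + beta)
```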

Alright, so once we have a price impact function, we can include it in our portfolio selection problem. There are two approaches to introducing liquidity into portfolio selection. One approach is to do the usual portfolio selection, and then account for liquidity in executing trades; in the second module of the series we're going to talk about how to include liquidity in executing trades. The other approach is to incorporate liquidity concerns directly into the portfolio selection problem, so that you're choosing positions, choosing portfolios, that will have a low cost of execution. The best practice is to do both: account for it in the portfolio selection, and then, when you do trades, account for it in the trade execution as well. So the generic problem that one solves for the second approach, which is to incorporate liquidity concerns directly into the portfolio selection, is as follows.

You take your usual mean-variance optimization problem. I have a current position y; 1-transpose y tells me the total wealth that I have, and x is the new set of positions. In this particular problem, the x's do not add up to one. They are dollar amounts, or any other units, which add up to the initial amount of money that I have. Mu-transpose x minus lambda x-transpose V x, this quantity, is our usual mean-variance objective: mu is the mean return, V is the covariance matrix, and lambda is the risk-tolerance or risk-aversion parameter. Now, instead of just stopping there, what we are going to do is subtract the trading cost from it. This is an extra cost that I have to pay, and it actually reduces my mean return. And I'm going to add an eta to capture the fact that I can control how much of the liquidity cost I fold into the portfolio selection problem. Some part of it I might include here, and some part I might handle during execution. Or I might just want to use eta as a way to trade off between the mean-variance objective and the trading cost.

And what is C(x, y)? C(x, y) is the cost of moving from the current position y to the new position x, and using the Kissell-Glantz function we can write it as the expression down here. x_i minus y_i is a dollar amount, and that's why I've now divided by p_i: v_i is adjusted so that the size factor is the percentage of dollar volume transacted, raised to the power beta. The sigma term remains the same, a_3 remains the same, and x_i minus y_i is the dollar amount transacted. Now, if you expand it, you end up with the following. You can take the constant term over here; it just multiplies the absolute value of x_i minus y_i. And you can take the first term and pull out the extra part, so it becomes a_1 over (p_i v_i) to the power beta, times the absolute value of x_i minus y_i to the power 1 plus beta. Now I have a function, I can incorporate it into my portfolio selection problem, and then I can solve that problem to compute what my new positions x are going to be.
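As a sketch of what solving this looks like in code (the coefficient values, and the use of scipy's generic solver rather than the spreadsheet setup used later in the course, are my own illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

def trading_cost(x, y, p, v, sigma, a1, a2, a3, beta):
    """Expanded Kissell-Glantz-style cost of moving from positions y to x.

    x, y  : new and current dollar positions
    p, v  : price and average daily volume per asset
    sigma : per-asset volatility
    """
    dq = np.abs(x - y)  # dollar amount transacted in each asset
    return np.sum(a1 / (p * v) ** beta * dq ** (1 + beta)
                  + (a2 * sigma + a3) * dq)

def select_portfolio(y, mu, V, p, v, sigma, lam, eta,
                     a1=0.1, a2=0.1, a3=0.01, beta=0.5):
    """Maximize mu @ x - lam * x @ V @ x - eta * C(x, y),
    subject to the new positions adding up to the initial wealth."""
    wealth = y.sum()

    def neg_objective(x):
        return -(mu @ x - lam * x @ V @ x
                 - eta * trading_cost(x, y, p, v, sigma, a1, a2, a3, beta))

    budget = {"type": "eq", "fun": lambda x: x.sum() - wealth}
    return minimize(neg_objective, y, constraints=budget).x
```

With eta = 0 this reduces to plain mean-variance selection; raising eta pulls the solution back toward the current position y, since trading away from it is penalized.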

In the next module, which is going to be an Excel module, I'm going to show you how to set up and solve this optimization problem, and we're going to play around a little bit with what happens when eta changes value, and so on. In the rest of this module I'm going to talk about a very simple model that has become popular, introduced by Andy Lo in one of his papers; it's an easy model that incorporates some aspects of liquidity. This approach is taken in a paper by Lo, Petrov, and Wierzbicki. The title of the paper is very interesting: "It's 11 PM, Do You Know Where Your Liquidity Is? The Mean-Variance-Liquidity Frontier." What they do is ascribe to each security a certain normalized liquidity measure.

So let L_it denote the measure of liquidity, where high values imply more liquidity. If you're talking about turnover, high turnover is a good measure; if you're talking about volume, high volume is a good measure. When you're talking about trading costs, or bid-ask spreads, you take the reciprocal of those numbers. A high percentage bid-ask spread is bad and a low percentage bid-ask spread is good, and therefore, when you define this measure of liquidity in the model introduced by Andy Lo, you take one over the percentage bid-ask spread to define your L_it. And then you normalize this over a certain period. What you do is look at L_it over a particular window of time. You take the minimum value that this particular measure could take over all assets, with i' ranging over all assets and all times, subtract it, and divide by the maximum value that can be achieved over all assets and all times minus the minimum. So this number, whatever it is, now becomes a number between zero and one; it lies between zero and one.
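That min-max normalization is straightforward to sketch (the array layout, assets by time periods, is my own assumption):

```python
import numpy as np

def normalize_liquidity(L):
    """Min-max normalize raw liquidity measures L[i, t] (assets by times),
    using the minimum and maximum over all assets and all times, so that
    every normalized entry lies between zero and one."""
    lo, hi = L.min(), L.max()
    return (L - lo) / (hi - lo)
```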

They assumed in their model that all the wealth is in cash, and formulated three different optimization problems that get at this notion of how to incorporate liquidity into portfolio selection. The first method they call liquidity-filtered portfolio selection: you do the usual mean-variance portfolio selection, so 1-transpose x equals one, and you maximize mu-transpose x minus lambda x-transpose V x. But now you insist that x_i is equal to zero for all i's that do not meet a particular liquidity threshold. So L-bar is your liquidity threshold, and if an asset doesn't meet that liquidity threshold, you cannot hold it.