In this set of three modules, we're going to talk about liquidity: how the liquidity of assets affects trading costs, and how these trading costs have implications for portfolio selection. In the first module we're going to focus mainly on measures of liquidity and on how to set up a portfolio selection problem that takes liquidity into account. What liquidity is, and what a liquid security is, is very hard to define in practice. What we do know is that a liquid security is one that can be traded very quickly; has very little price impact, meaning that when I put in an order to buy or sell, prices move very little; and can be bought and sold in large quantities, which means it has a very deep order book. There are different measures of liquidity. Some of the measures are based on volume, and these include the trading volume and the turnover, which is defined as the trading volume divided by the shares outstanding. Both of these measures try to get at the idea that liquidity refers to the fact that one can execute trades quickly: if something has a very high trading volume or high turnover, then my order will get executed very quickly. There are also cost-based measures, which look at the impact of liquidity on the cost of trading a particular amount of securities. One measure that gets at this is the percentage bid-ask spread. The ask price A is the price at which people are willing to sell a particular security, and the bid price B is the price at which people are willing to buy it. The ask price is always greater than the bid price, and the percentage bid-ask spread is defined as the difference A minus B, divided by the mid-price (A + B)/2, times 100. The volume-weighted average price is another quantity that gets at this notion of liquidity. Suppose you want to sell a total amount of shares, V.
This total amount gets split up into smaller trades, v1 through vm, and each of these trades gets executed at a different price, p1 through pm. The volume-weighted price that you got for that particular order is then simply the sum of pi times vi, for i going from 1 through m, divided by the total volume V. This quantity is known as the volume-weighted average price, and it tells you the price that you actually ended up getting. If you're trying to sell a particular commodity, what happens is that the price starts to slip downwards: p1 could be the opening price, pm could be the price at which your last slice got sold, and typically p1 is greater than p2, which is greater than p3, and so on. So the average price turns out to be less than the price prevailing in the market just before you put in your trade. Other authors have tried to get at the price impact function directly. The first of these functions was introduced by Loeb in 1983, and is called the Loeb price impact function. More recently, Kissell and Glantz have introduced a price impact function which measures how expensive it is to put in a particular trade. In the next slide I'm going to talk about the Kissell-Glantz price impact function, and in that context I'll tell you what the Loeb function is as well. In the rest of this module, I am going to assume that we are looking at cost-based measures of liquidity, and we are going to incorporate those cost-based measures into the portfolio selection procedure to get at the optimal choice for my portfolio. Now, the trading cost function. The cost of trading a particular block of shares is typically assumed to be separable across assets, which means that it ignores cross-asset price impact. Strictly speaking, this assumption is not valid: there is cross-asset price impact, and in the last module of the series we're going to start discussing some ideas about why this cross-price impact occurs,
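The two measures described so far can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation, and the example quotes and fill prices are made up:

```python
def pct_bid_ask_spread(ask, bid):
    """Percentage bid-ask spread: (A - B) / ((A + B) / 2) * 100."""
    mid = (ask + bid) / 2.0
    return (ask - bid) / mid * 100.0

def vwap(prices, volumes):
    """Volume-weighted average price: sum(p_i * v_i) / sum(v_i)."""
    total = sum(volumes)
    return sum(p * v for p, v in zip(prices, volumes)) / total

# A sell order of 4000 shares, filled in three slices at slipping prices:
prices = [50.00, 49.90, 49.75]
volumes = [1000, 2000, 1000]
print(vwap(prices, volumes))             # just below the 50.00 pre-trade price
print(pct_bid_ask_spread(50.05, 49.95))  # roughly 0.2 percent on a mid of 50.00
```

Note how the VWAP of the filled order (about 49.89 here) sits below the pre-trade price of 50.00, which is exactly the slippage the lecture describes.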
and what can be done to take it into account in your asset allocation decisions. The Kissell-Glantz function says that if I want to trade q shares of a particular asset, the cost of trading these shares, meaning the slippage, the extra price that I have to pay if I'm buying or the loss in price that I'm going to see if I'm selling, is given by a function c(q) with three components. The first component is just a constant, a3, so the cost depends on the total dollar amount that I'm going to trade. The second component, a2 times sigma, depends on the volatility: if a particular stock is very volatile, you expect to pay more, because prices move around between the time you put in your order and the time it actually gets executed. The third component depends on the percentage of the daily volume V that your trade q represents: q over V times 100 gives you that percentage, which is then raised to the power beta and multiplied by the constant a1. How does one estimate a function like this? One postulates that there are three factors in a regression: the participation term (100 q/V)^beta is factor number one, the volatility is factor number two, and the constant is the intercept. Then one runs a regression. One records, over a history of trades, how much extra cost you ended up paying for each trade: if Qt is a particular trade that was executed at time t, then Pt Qt is the revenue you should have gotten, and c(Qt) is the extra price you had to pay, or the extra revenue you lost. Dividing the one by the other gives you one observation for the regression, and then you regress to compute what a1, a2 and a3 are going to be.
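The estimation step described above can be sketched as an ordinary least-squares regression in Python with NumPy. For simplicity this sketch holds beta fixed at an assumed value rather than estimating it jointly; the function names and the choice beta = 0.5 are illustrative assumptions, not part of the lecture:

```python
import numpy as np

def kg_pct_cost(q, V, sigma, a1, a2, a3, beta):
    """Kissell-Glantz-style percentage cost of trading q shares when the
    average daily volume is V and the stock's volatility is sigma."""
    return a1 * (100.0 * q / V) ** beta + a2 * sigma + a3

def fit_kg(q, V, sigma, realized_pct_cost, beta=0.5):
    """Estimate (a1, a2, a3) by least squares, holding beta fixed.
    Each historical trade contributes one observation: its realized
    percentage cost c(Q_t) / (P_t Q_t) and the two factors."""
    X = np.column_stack([
        (100.0 * q / V) ** beta,  # factor 1: percent of daily volume ^ beta
        sigma,                    # factor 2: volatility
        np.ones(len(q)),          # intercept a3
    ])
    coef, *_ = np.linalg.lstsq(X, realized_pct_cost, rcond=None)
    return coef  # array [a1, a2, a3]
```

On synthetic, noise-free data generated from known coefficients, the regression recovers them exactly, which is a useful sanity check before running it on real trade records.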
This is what was proposed by Kissell and Glantz in the mid 2000s, and it has become the standard function that people use for trading costs. There was another function that was introduced earlier by Loeb, which was slightly different. In his model, the cost versus volume initially grew linearly, and then grew with a power: the cost was alpha1 times q up to some q_max, and then alpha2 times q to the power 1 plus beta beyond q_max, with beta estimated to be approximately 0.65. So the Loeb function, suggested in 1983, was relatively simple: it did not take into consideration the volatility, and it did not take into consideration the average daily volume. But it was the inspiration that led to other liquidity functions later on, in particular the Kissell-Glantz function, and we will work with the Kissell-Glantz function in this module. All right, so once we have a price impact function, we can include it in our portfolio selection problem. There are two approaches to introducing liquidity into portfolio selection. One approach is to do the usual portfolio selection, and then account for liquidity when executing trades; in the second module of the series we're going to talk about how to include liquidity in trade execution. The other approach is to incorporate liquidity concerns directly into the portfolio selection problem, so that you're choosing portfolios that will have low cost of execution. The best practice is to do both: account for it within the portfolio selection, and then account for it again in trade execution. So the generic problem that one solves for the second approach, which is to incorporate liquidity concerns directly into portfolio selection, is as follows.
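The Loeb-style piecewise cost curve can be sketched as follows. The lecture does not specify how the two pieces are joined, so the choice of alpha2 below, which makes the curve continuous at q_max, is my own illustrative assumption:

```python
def loeb_cost(q, q_max, alpha1, beta=0.65):
    """Loeb-style cost: linear in q up to q_max, then growing like
    q^(1 + beta).  alpha2 is chosen so the two pieces join
    continuously at q_max (an illustrative assumption)."""
    if q <= q_max:
        return alpha1 * q
    alpha2 = alpha1 * q_max ** (-beta)
    return alpha2 * q ** (1.0 + beta)
```

Beyond q_max the cost grows superlinearly: doubling the trade size more than doubles the cost, which is the qualitative behavior the power term is meant to capture.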
You take your usual mean-variance optimization problem. I have a current position y, and 1-transpose y tells me the total wealth that I have; x is the new set of positions. In this particular problem the x's do not add up to one: they are dollar amounts, or any other units, which add up to the initial amount of money that I have. The quantity mu-transpose x minus lambda times x-transpose V x is our usual mean-variance objective: mu is the mean return vector, V is the covariance matrix, and lambda is the risk-aversion parameter. Now, instead of just stopping there, what we are going to do is subtract from it the trading cost: this is an extra cost that I have to pay, and it actually reduces my mean return. And I'm going to multiply it by a parameter eta, so that I can control how much of the liquidity cost I incorporate into the portfolio selection problem. Some part of it I might include here, and some part I might handle during execution; or I might just want to use eta as a way to trade off between the mean-variance objective and the trading cost. What is C(x, y)? It is the cost of moving from the current position y to the new position x, and using the Kissell-Glantz function we can write it as the expression down here. The differences xi minus yi are dollar amounts, and that's why I've now divided by pi, so as to adjust Vi and make the participation term the percentage of daily dollar volume transacted, raised to the power beta. The sigma term remains the same, a3 remains the same, and xi minus yi is the dollar amount transacted. Now if you expand it out, the constant term just becomes a3 times the absolute value of xi minus yi, the volatility term becomes a2 sigma_i times that absolute value, and in the participation term you can take out the constant part, so it becomes a1 times (100 over pi Vi) to the power beta, times the absolute value of xi minus yi to the power 1 plus beta.
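The net objective described above can be written down directly in NumPy. This is a hedged sketch: the variable names are mine, the covariance matrix is called cov to avoid a clash with the daily volume V, and the budget constraint 1'x = 1'y would be imposed separately by whatever solver maximizes this function:

```python
import numpy as np

def trading_cost(x, y, price, daily_vol, sigma, a1, a2, a3, beta):
    """Kissell-Glantz-style cost of rebalancing from dollar positions y
    to dollar positions x; price * daily_vol is each asset's daily
    dollar volume."""
    d = np.abs(x - y)                                 # dollars traded per asset
    part = (100.0 * d / (price * daily_vol)) ** beta  # percent of daily $ volume
    return np.sum((a1 * part + a2 * sigma + a3) * d)

def net_objective(x, y, mu, cov, lam, eta,
                  price, daily_vol, sigma, a1, a2, a3, beta):
    """Mean-variance utility minus eta times the trading cost."""
    mv = mu @ x - lam * x @ cov @ x
    return mv - eta * trading_cost(x, y, price, daily_vol, sigma,
                                   a1, a2, a3, beta)
```

Two properties worth checking: the cost is zero when x equals y (no trading), and increasing eta can only lower the objective for any rebalancing trade, which is how eta trades off mean-variance return against execution cost.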
Now I have a function; I can incorporate it into my portfolio selection problem and then solve that problem to compute what my new positions x are going to be. In the next module, which is going to be an Excel module, I'm going to show you how to set up and solve this optimization problem, and we're going to play around a little bit with what happens when eta changes value, and so on. In the rest of this module I'm going to talk about a very simple model that has become popular, introduced by Andrew Lo in one of his papers; it's an easy model that incorporates some aspects of liquidity. This approach is taken in a paper by Lo, Petrov, and Wierzbicki, whose title is very interesting: "It's 11 PM, Do You Know Where Your Liquidity Is? The Mean-Variance-Liquidity Frontier." What they do is ascribe to each security a certain normalized liquidity measure. Let l_it denote the measure of liquidity of asset i at time t, where high values imply more liquidity. If you're talking about turnover, high turnover is good; if you're talking about volume, high volume is good. When you're talking about trading costs or bid-ask spreads, you take the reciprocal of those numbers: a high percentage bid-ask spread is bad, a low percentage bid-ask spread is good, and therefore, in the model introduced by Andy Lo, you take one over the percentage bid-ask spread to define l_it. Then you normalize this over a certain period. You look at l_it over a particular window of time, take the minimum value that this measure attains over all assets i-prime and all times, subtract it, and divide by the maximum value over all assets and all times minus that minimum. This number, whatever it is, now becomes a number between zero and one.
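The normalization step can be sketched in a few lines, assuming the raw measures are stored in a NumPy array indexed by asset and time (the array layout is my assumption):

```python
import numpy as np

def normalized_liquidity(raw):
    """Rescale raw liquidity measures raw[i, t] (higher = more liquid)
    to [0, 1] using the min and max over all assets and all times."""
    lo, hi = raw.min(), raw.max()
    return (raw - lo) / (hi - lo)

# For percentage bid-ask spreads, invert first so that high = liquid:
# raw = 1.0 / spreads
```

After this rescaling, the most liquid asset-time observation maps to 1 and the least liquid to 0, so liquidity measures of different assets become directly comparable.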
So this normalized measure lies between zero and one. They assumed in their model that all the wealth starts in cash, and formulated three different optimization problems that get at this notion of how to incorporate liquidity into portfolio selection. The first method they call liquidity-filtered portfolio selection: you do the usual mean-variance portfolio selection, maximizing mu-transpose x minus lambda times x-transpose V x subject to 1-transpose x equals 1, but now you insist that xi equals zero for all assets i that do not meet a particular liquidity threshold l-bar. If an asset doesn't meet the liquidity threshold, you cannot hold it. The second method is to use a mean-variance-liquidity objective: this time I just add to the objective a term involving li xi. This is different from the formulation on the previous page, because this is no longer a cost but a quality measure: high li is good, so I want to add the liquidity measure instead of subtracting it as if it were a cost. The third version suggested is liquidity-constrained portfolios: you do not add a cost term, but you require that the sum of li times the absolute value of xi, divided by the sum of the absolute values of xi, is greater than or equal to l-bar. The difference between the first formulation and this last one is that this one measures liquidity at the portfolio level rather than asset by asset. In the first formulation, if a particular asset does not meet the liquidity threshold, you throw it away; in this case, you just want to make sure that the threshold is met at the overall portfolio level. The expression for overall portfolio liquidity is slightly different here: to prevent short positions in illiquid stocks from cancelling long positions in other, liquid stocks, instead of taking just xi we take the absolute value of xi.
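The three formulations can be summarized in a short sketch. These are illustrative fragments under my own naming conventions, not the paper's code; phi is an assumed trade-off weight for the second formulation:

```python
import numpy as np

def liquidity_filter(ell, ell_bar):
    """Formulation 1: only assets whose liquidity meets the threshold
    may be held; every other x_i is forced to zero."""
    return np.where(ell >= ell_bar)[0]

def mvl_objective(x, mu, cov, lam, phi, ell):
    """Formulation 2: mean-variance plus (not minus) a liquidity term,
    since high ell_i is good."""
    return mu @ x - lam * x @ cov @ x + phi * ell @ x

def portfolio_liquidity(x, ell):
    """Formulation 3's constrained quantity: weighted-average liquidity
    using |x_i|, so shorts in illiquid names cannot cancel liquid longs."""
    w = np.abs(x)
    return ell @ w / w.sum()
```

In formulation 3 one would then impose portfolio_liquidity(x, ell) >= ell_bar as a constraint in the optimization, rather than filtering assets one by one.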
All of these portfolio selection problems can also be solved in Excel, in much the same way as the fourth formulation is going to be solved.