Hello. We are going to go over options pricing via transform techniques, lecture four.

Let's do a quick recap of the previous lecture. We said that if the conditional probability density function, the so-called f of S T given S naught, is known and available in closed form, then we can employ numerical integration to approximate the value of the option. That's exactly what we did: we used the trapezoidal rule, and we noticed that, under appropriate choices of N and eta, we can do extremely well.

It turns out, however, that in most cases this conditional probability density function is either not available in closed form or is very expensive to calculate. Now, what I mean by "in most cases" depends on the process. The process we had in mind was the log-normal distribution, which is the distribution for geometric Brownian motion, without me getting into the details. But most processes that we may assume for the stock price process, for the evolution of the stock price, either have no closed form for this density, or it is very expensive to calculate. One then wonders: can we do better? Instead of utilizing that conditional density, can we utilize something else?

The answer is yes, and that's exactly our goal for this lecture series. It turns out that, in many cases, the characteristic function of the log of the stock price process is known and available. Our goal is to show how we can link it, meaning how we can link the characteristic function of the log of the stock price process to options pricing. In the case where we had the density, this was straightforward, because we wrote the entire option price as an integral of the payoff against the conditional probability density function, and we discounted back. Now the goal is to show how we can link the characteristic function.
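To make the recap concrete, here is a minimal sketch of the previous lecture's approach: pricing a European call by integrating the payoff against the known log-normal density of S T given S naught under geometric Brownian motion, using the trapezoidal rule, and discounting back. The parameter values, grid size, and truncation point are illustrative choices, not taken from the lecture.

```python
import numpy as np

def lognormal_density(s_T, s0, r, q, sigma, T):
    """Risk-neutral density f(S_T | S_0) under geometric Brownian motion."""
    mu = np.log(s0) + (r - q - 0.5 * sigma**2) * T
    return np.exp(-(np.log(s_T) - mu)**2 / (2 * sigma**2 * T)) / (
        s_T * sigma * np.sqrt(2 * np.pi * T))

def call_price_trapezoid(s0, K, r, q, sigma, T, n=10_000, s_max=1_000.0):
    """Discounted trapezoidal-rule integral of (S_T - K)^+ * f(S_T | S_0)."""
    s_T = np.linspace(K, s_max, n)  # payoff is zero below the strike K
    integrand = (s_T - K) * lognormal_density(s_T, s0, r, q, sigma, T)
    ds = s_T[1] - s_T[0]
    # Trapezoidal rule: full weights inside, half weights at the endpoints.
    trap = ds * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return np.exp(-r * T) * trap

price = call_price_trapezoid(s0=100.0, K=100.0, r=0.05, q=0.0,
                             sigma=0.2, T=1.0)
```

With a fine enough grid and a large enough truncation point, this agrees with the Black-Scholes closed-form price to several decimal places, which is the "we can do extremely well" observation from the recap.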
That link to options pricing is not as explicit. In order to establish it, I need to go through some definitions, and these definitions we simply take for granted; we are not going to go through any proofs.

The first definition is that of the Fourier transform. For any function f of x, its Fourier transform is given by this integral: the integral of e to the i nu x times f of x, dx. Here i is the imaginary unit, the square root of minus one, and this integral is what we call the Fourier transform of f.

Now assume we have the Fourier transform of a function. Can we get the function back? The answer is yes, via its inverse Fourier transform, which is again given by this integral. Where the one over two pi comes from, I'm not going to prove. Just notice that over here we have e to the i nu x, and here we have e to the minus i nu x. One more thing to notice: in the forward transform we integrate with respect to x, which means we are integrating x out; in the inverse transform we integrate with respect to nu, and we get a function of x back.

Now, what is the definition of the characteristic function? If it turns out that f is a PDF, and you already know what we mean by a PDF, namely that it is non-negative and integrates to one, so if f is the probability density function of a random variable X, then its Fourier transform is called the characteristic function. It is exactly the transform you've seen before, but because f is a PDF, the integral becomes an expectation, which we write as the expectation of e to the i nu X. Nothing more.

One property the characteristic function has is that it is bounded; it never blows up, unlike the moment-generating function. That's something I'm not going to prove here; it's just one of its properties. One more time: having the characteristic function, we can definitely get f of x back.
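A small numerical check of this definition may help. The characteristic function of a PDF is its Fourier transform, phi of nu equals the expectation of e to the i nu X; for the standard normal this is known in closed form as e to the minus nu squared over two. The sketch below approximates the expectation with the trapezoidal rule and compares; the grid and the choice of nu are just illustrative.

```python
import numpy as np

def char_fn_numeric(nu, x, pdf_vals):
    """Trapezoidal approximation of E[e^{i nu X}] = integral of e^{i nu x} f(x) dx."""
    integrand = np.exp(1j * nu * x) * pdf_vals
    dx = x[1] - x[0]
    return dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# Standard normal PDF on a wide grid (the density is negligible beyond +/- 10).
x = np.linspace(-10.0, 10.0, 20_001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

nu = 1.5
phi_num = char_fn_numeric(nu, x, pdf)
phi_exact = np.exp(-nu**2 / 2)  # closed-form characteristic function of N(0, 1)
```

The numerical and closed-form values agree to high precision, and the modulus of phi never exceeds one, which is the boundedness property mentioned above.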
That's what I'm saying: f of x can be recovered from its characteristic function via the inverse Fourier transform, exactly as we had on the previous slide.

Now let's look at some of these characteristic functions. For example, take geometric Brownian motion, the Black-Merton-Scholes model, which follows the stochastic differential equation shown here; as I said, I'm not going to get into any of that, I just want to show you the characteristic function. Assuming the stock price at a future time given the spot is log-normally distributed, which follows from this SDE, the characteristic function of the log of the stock price is given by this formula. I'm not going to go through the proof; you can actually derive it analytically. But in most cases, we assume that the characteristic function of the log of the stock price, under whatever process we assume, is given to us, and then the whole task is: given the characteristic function, do the options pricing. What I will do is, for various stock price evolutions, provide you with these characteristic functions. Then, after I develop a generic way of pricing options given a characteristic function, I'm going to give you a new process with its characteristic function and ask you to price the option under that price process.

So, for the case of Black-Merton-Scholes, the characteristic function of the log of the stock price is given according to this formula. One more time: this is the imaginary unit; that's the log of the spot price; then the interest rate, the dividend rate, the volatility, and the maturity; and this is just a dummy variable, whatever you call it, the argument of the function. The rest, again, involves the volatility, the square of it, and the maturity.
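The Black-Merton-Scholes formula being described can be sketched as follows. Since the log of the stock price is normally distributed with mean log S naught plus (r minus q minus sigma squared over two) times T and variance sigma squared times T, its characteristic function is e to the (i nu times that mean, minus one half sigma squared nu squared T). The cross-check below against a direct numerical expectation is illustrative; the parameter values are my own assumptions, not the lecture's.

```python
import numpy as np

def bms_char_fn(nu, s0, r, q, sigma, T):
    """Characteristic function of X = ln(S_T) under Black-Merton-Scholes:
    phi(nu) = exp(i*nu*(ln S0 + (r - q - sigma^2/2)T) - sigma^2 nu^2 T / 2)."""
    mean = np.log(s0) + (r - q - 0.5 * sigma**2) * T
    var = sigma**2 * T
    return np.exp(1j * nu * mean - 0.5 * var * nu**2)

# Cross-check: compute E[e^{i nu X}] directly over the normal density of X.
s0, r, q, sigma, T = 100.0, 0.05, 0.02, 0.25, 0.5   # illustrative parameters
mean = np.log(s0) + (r - q - 0.5 * sigma**2) * T
sd = sigma * np.sqrt(T)

x = np.linspace(mean - 10 * sd, mean + 10 * sd, 20_001)
pdf = np.exp(-(x - mean)**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))

nu = 2.0
dx = x[1] - x[0]
integrand = np.exp(1j * nu * x) * pdf
phi_num = dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
phi_closed = bms_char_fn(nu, s0, r, q, sigma, T)
```

The two values agree to high precision, which is what lets us hand over just the characteristic function, rather than the density, for each new price process.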