The following theorem plays a very important role in the theory of Gaussian processes. Let m be a function from R+ to R, and let K be a function from R+ × R+ to R. These can be any functions, but we assume that K is symmetric and positive semi-definite. Then there exists a Gaussian process X_t such that the mathematical expectation of X_t is equal exactly to m(t) and the covariance function of X_t is equal to K, that is, Cov(X_t, X_s) = K(t, s).

This is a very important theorem, and let me first explain why. First of all, from the theorem we conclude that the two functions m and K determine the Gaussian process. One can keep the following picture in mind. In the case of a Gaussian random variable, one has to fix two numbers: the first number is mu, which can be any real number, and the second number is sigma. These two numbers completely determine the normal distribution. In the case of Gaussian vectors, one has to fix a vector mu from R^n and a covariance matrix C, a matrix of size n × n which is symmetric and positive semi-definite. According to the theorem which I have proven previously, these two objects determine the distribution of the vector. Just recall that these two elements appear in the formula for the characteristic function; therefore, by fixing these two elements, you fix the characteristic function, and there is a one-to-one correspondence between the set of characteristic functions and the set of distributions. And finally, in the case of Gaussian processes, according to this theorem, one has to fix two functions: the first function m from R+ to R, and the second function K from R+ × R+ to R, which must also be symmetric and positive semi-definite.

Therefore, there is something similar between Gaussian random variables, Gaussian vectors and Gaussian processes. In any case, one fixes two elements: the first can be a number, a vector, or a function.
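To make the theorem concrete, here is a minimal numerical sketch (the function names, the grid, and the sampling routine are my own illustrative choices, not part of the lecture): on any finite grid of time points, the mean function m and the covariance function K reduce to a mean vector and a covariance matrix, so the process can be sampled there as a Gaussian vector.

```python
import numpy as np

def sample_gaussian_process(m, K, ts, seed=None):
    """Sample X on the grid ts, given mean function m and covariance function K."""
    rng = np.random.default_rng(seed)
    mean = np.array([m(t) for t in ts])                  # E X_t = m(t)
    cov = np.array([[K(t, s) for s in ts] for t in ts])  # Cov(X_t, X_s) = K(t, s)
    return rng.multivariate_normal(mean, cov)

# Example: zero mean and covariance K(t, s) = min(t, s),
# the function discussed later in this lecture
ts = np.linspace(0.1, 1.0, 50)
path = sample_gaussian_process(lambda t: 0.0, min, ts, seed=0)
print(path.shape)  # (50,)
```

Of course, this only realizes the process on a finite grid; the theorem itself asserts the existence of the whole process.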
This first element determines the mathematical expectation. The second element is again either a number, a matrix, or a function, and it determines the variance and covariance structure.

Before we proceed with further properties of Gaussian processes, let me show how this theorem can be applied in various situations. For instance, this theorem helps us to show that some functions are positive semi-definite and others are not. Let me first provide an example of the latter. Let K(t, s) be the absolute value of the difference, |t - s|. Let me show that this function is not positive semi-definite. Assume the opposite, that it is positive semi-definite. Then, since it is also symmetric, there exists a Gaussian process X_t such that the covariance between X_t and X_s is equal exactly to |t - s|. Let me substitute s = t into this formula. On the left-hand side we get the variance of X_t, and on the right-hand side we get 0. So the variance of X_t is 0 for every t. This means that X_t is in effect deterministic: for any t, X_t is almost surely equal to its expectation m(t). But from this it follows that the covariance between X_t and X_s, which equals the mathematical expectation of X_t X_s minus the product of the mathematical expectations of X_t and X_s, is m(t) m(s) - m(t) m(s) = 0. And 0, of course, is not equal to |t - s| when t ≠ s. So we conclude that our assumption was not correct, and in fact K(t, s) = |t - s| is not positive semi-definite.

Let me now provide an example of a function which is positive semi-definite, and you will see that this function plays an essential role in the theory of Brownian motion. Consider the following example: the function K(t, s) equal to the minimum of t and s. Let me show that this function is positive semi-definite. This is not a really simple task, because if you consider the sum of U_j U_k min(t_j, t_k) over j and k, for any time moments t_1, ..., t_n
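The contradiction just derived can also be seen numerically. A small sketch (an illustration with arbitrarily chosen time points, not a replacement for the proof): if |t - s| were positive semi-definite, the symmetric matrix with entries |t_j - t_k| would have only non-negative eigenvalues, but already for three time points it has a negative one.

```python
import numpy as np

# Matrix of |t_j - t_k| for three time points; positive semi-definiteness
# would require all eigenvalues of this symmetric matrix to be >= 0
ts = np.array([0.0, 1.0, 2.0])
M = np.abs(ts[:, None] - ts[None, :])
min_eig = np.linalg.eigvalsh(M).min()
print(min_eig < 0)  # True: |t - s| is not positive semi-definite
```

Note that the matrix has zero trace but is not the zero matrix, so its eigenvalues cannot all be non-negative, which is another way to see the same conclusion.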
and for any real numbers U_1, ..., U_n, it is not clear why this sum should be non-negative. But let me do the following. I will introduce the function f_t(x) equal to the indicator that x is from the interval [0, t]. If we now take two such functions, f_t(x) and f_s(x), then the integral of their product from 0 to infinity is equal to the minimum of t and s. In fact, the product f_t(x) f_s(x) is equal to 1 if and only if x belongs to [0, t] and simultaneously belongs to [0, s], therefore if and only if x is from [0, min(t, s)], and therefore this integral is equal to this minimum.

If I now substitute this expression for the minimum into the sum, we get that the sum over j and k from 1 to n of U_j U_k min(t_j, t_k) is equal to the sum over j and k of U_j U_k times the integral over R+ of f_{t_j}(x) f_{t_k}(x) dx. This expression can be transformed as follows. We can simply put all the U's inside the integrals and also take the sum inside the integral, and then what we have inside the integral is a double sum of U_j U_k f_{t_j}(x) f_{t_k}(x). This double sum can be decomposed as a product of two sums, the sum of U_j f_{t_j}(x) and the sum of U_k f_{t_k}(x). Basically this means that the whole expression is equal to the integral over R+ of the square of the sum over k from 1 to n of U_k f_{t_k}(x), dx. Here we have an integral of a non-negative function, and such an integral is non-negative. Therefore we conclude that the function min(t, s) is positive semi-definite and hence a covariance function, and looking at the theorem above, we arrive at the conclusion that there exists a Gaussian process which has exactly this covariance function.
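The identity used in this argument can be checked numerically. A sketch with arbitrarily chosen times and weights (my own illustrative values): the quadratic form with the matrix min(t_j, t_k) coincides, up to discretization error, with the integral over [0, infinity) of the square of the sum of U_k f_{t_k}(x), and in particular it is non-negative.

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.uniform(0.0, 5.0, size=6)  # time moments t_1, ..., t_n
us = rng.normal(size=6)             # real numbers U_1, ..., U_n

# Left-hand side: the quadratic form sum_{j,k} U_j U_k min(t_j, t_k)
M = np.minimum(ts[:, None], ts[None, :])
quad_form = us @ M @ us

# Right-hand side: integral of (sum_k U_k f_{t_k}(x))^2 over [0, max t],
# where f_t(x) = 1{x in [0, t]}; approximated by a Riemann sum
xs = np.linspace(0.0, ts.max(), 200_000, endpoint=False)
dx = xs[1] - xs[0]
g = ((xs[:, None] <= ts[None, :]) * us[None, :]).sum(axis=1)
integral = (g ** 2).sum() * dx

print(quad_form >= 0)  # True: the quadratic form is non-negative
```

The two quantities agree up to the error of the Riemann sum, which confirms the decomposition of the double sum into a square under the integral.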