In this lecture, we'll talk about autocovariance coefficients. The objectives are the following: we'll recall the covariance coefficient for a bivariate data set, we'll define autocovariance coefficients for a time series, and we'll estimate the autocovariance coefficients of a time series at different lags.

Remember, if you have two random variables X and Y, the covariance is basically measuring the linear dependence between those two random variables. We saw this definition of covariance in the previous video lecture: Cov(X, Y) = E[XY] − E[X]E[Y]. Now, usually we do not get the random variables themselves in real life; we get data sets. We have a paired data set (x1, y1), (x2, y2), ..., (xN, yN), and we would like to estimate the covariance between these two sets — this is the sample covariance. We look at the set x1, x2, ..., xN and the set y1, y2, ..., yN, and somehow measure the linear dependence between them. The estimation formula is s_xy = (1/(N − 1)) Σ (x_t − x̄)(y_t − ȳ), where x̄ and ȳ are the sample averages of each data set. Note that we divide not by N, but by N − 1. Now, in R, we don't have to calculate this by hand or with any loop. We can just use the covariance routine in R, cov: we pass in data set one and data set two, and it calculates the covariance for us.

Now we're going to talk about autocovariance coefficients. The autocovariance coefficient at lag k is defined to be gamma_k — this is what we defined in the last lecture. Gamma_k is the covariance between x_t and x_{t+k}. And since we assume weak stationarity, it doesn't matter what t is: we get the same gamma_k as long as the distance between those two random variables is k. c_k is going to be an estimator for gamma_k, and this is how we're going to define our estimate. It's very much like the sample covariance we just defined; in this case, we don't have x's and y's, we just have x's.
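As a quick sketch of the sample covariance formula against R's built-in cov routine (the seed, sample size, and variable names here are my own choices, not from the lecture):

```r
# Check that the sample covariance formula, with its N - 1 divisor,
# matches what R's cov() computes.
set.seed(1)                      # for reproducibility (my addition)
x <- rnorm(50)
y <- rnorm(50)
n <- length(x)

# s_xy = (1 / (N - 1)) * sum((x_t - x_bar) * (y_t - y_bar))
s_xy <- sum((x - mean(x)) * (y - mean(y))) / (n - 1)

all.equal(s_xy, cov(x, y))       # TRUE: cov() uses the same N - 1 divisor
```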
So we look at the x_t values for t from 1 to N − k, paired with the x values from x_{k+1} up to x_N. We compute each value's difference from x̄, x̄ being the sample average, sum the products, and divide by N: c_k = (1/N) Σ_{t=1}^{N−k} (x_t − x̄)(x_{t+k} − x̄). This is going to be our estimate of the autocovariance coefficient at lag k.

Now, again, in R we will not do it by hand. Although we could write just one simple loop to calculate these autocovariance coefficients, we will use what's called the acf routine. Now, acf stands for autocorrelation function, which I'm going to talk about in my next lecture, but for now we'll use the acf routine in the following way: we call acf on the time series, and for the type we type in "covariance". If we pass type = "covariance", it will give us all the autocovariance coefficients.

Now we're going to simulate a purely random process. A purely random process is a time series with no special pattern, and we're going to use the rnorm routine. We will call our time series purely_random_process, and we will use the ts routine, which takes the data that we generate and puts a time series structure on it. Inside that ts routine, I have the rnorm routine — r stands for random, norm stands for normal — so we will generate, let's say, 100 data points from the normal distribution. In fact, it's going to generate 100 data points from the standard normal distribution, with mean 0 and standard deviation 1. When I do that, we have our purely random process. Let's just print purely_random_process. If I print it, I see that it's a time series object that starts at time 1, ends at time 100, the frequency is 1, and we have our 100 data points. So we will be using the acf routine. The acf routine usually gives us a plot. We're going to change the type to "covariance" because we would like to get the autocovariance, and I'm going to put parentheses around the whole call so that it also prints out the data it produces, and we obtain the plot.
Along with the plot, it produces the autocovariance coefficient estimates for every single lag. So when we type in this command, it gives us a plot, which I will talk about in the next lecture, but what we are concentrating on here are these numbers. Basically, the first number is the estimate of the autocovariance coefficient at lag 0, the next is at lag 1, then lag 2, then lag 3, and so forth. And this is how we're going to calculate our autocovariance coefficients in R. So, what have we learned in this lecture? You have learned the definition of the autocovariance coefficients at different lags, and you have learned how to estimate the autocovariance coefficients of a time series using the acf routine.
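As a sanity check tying the estimation formula back to the acf output (lag choice and seed are my own, for illustration), we can compute c_k by hand and compare it with the corresponding acf value — remembering that the first entry of the acf output is lag 0, so lag k sits at index k + 1:

```r
# Compute c_k directly from the formula and compare with acf's value.
set.seed(3)                                     # my addition, for reproducibility
x <- ts(rnorm(100))
N <- length(x)
k <- 2                                          # an arbitrary lag for the check

# c_k = (1/N) * sum_{t=1}^{N-k} (x_t - x_bar)(x_{t+k} - x_bar); note the N divisor
c_k <- sum((x[1:(N - k)] - mean(x)) * (x[(k + 1):N] - mean(x))) / N

acv <- acf(x, type = "covariance", plot = FALSE)
all.equal(c_k, as.numeric(acv$acf[k + 1]))      # TRUE: index 1 is lag 0, so lag k is k + 1
```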