Perhaps the oldest known example of a discrete-time sequence is shown in this photograph. This is the Palermo Stone, and it records in hieroglyphics the level of the river Nile for a number of years around 2500 BC; each box here is a year, with the level of the river for that year. In ancient Egypt, the fertility of the riverbanks regulated the wealth of the state, and so it was very important for the pharaohs to have a historical record of previous flood levels, so that they could try to anticipate what the next year would bring. So here we have a discrete-time sequence that was also used for some kind of processing, namely a prediction of the fertility of the following agricultural season.

Today, the same data would be represented like so: here we have a plot where, on the horizontal axis, we have the years and, on the vertical axis, the flow of the river in cubic meters per second. Each data point is represented using the so-called lollipop notation: a stem goes from the horizontal axis up to the actual data value. We use this notation when we want to make clear that the data is a discrete-time sequence, a countable set of values, and not a continuous-time function (a small plotting sketch of this notation follows at the end of this overview).

Sometimes, however, the number of data points is so large that the lollipop notation would be too cumbersome. So we forgo the stems and just plot the data points as dots, and they end up being so close together that they give the impression of a continuous-time function. Here, for instance, you have a plot of the daily temperature over 3,000 points: 3,000 days, which is a little over eight years. You can see that it looks like a continuous-time function, but it is actually a set of discrete measurements taken every day. Even visually, you can see that there is a periodic pattern, and it is very easy to superimpose a sinusoid over this data set. The sinusoid happens to have a period of 365 days, which is of course consistent with the fact that the Earth repeats its seasonal pattern at each passage around the sun.

Astronomy uses a lot of discrete-time sequences. This, for instance, is the sunspot series, which measures the sun's activity over time. The interesting thing about this series is that it has been kept for a very long time: it is a monthly time series, with one data point for each month, and it has been recorded since 1749, so there are a lot of data points to work with.

History and sociology are interested in discrete-time sequences as well; the world population, for instance, can be measured annually. Here we have an estimate that starts from year 1 AD and runs up to a projection for the year 2030, and you can see that we will probably have a little problem here, since the trend appears to be exponential.

Other time series are completely man-made, such as in economics. Here you have the Dow Jones index, which, in certain circles, measures some sort of health state of the economy. You can see once again that there are some trends: the Dow Jones stayed rather low for the better part of the last century and then grew, again, roughly exponentially. What is interesting is the crash of 1929, which you are probably familiar with: it is this little dip here, and while it went down in history as a major disaster, compare it with the kind of swings we have today, when the Dow is at such high levels.
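As an aside, here is a minimal sketch of the lollipop notation in Python with NumPy and matplotlib. The data is synthetic, a 365-day sinusoid plus noise standing in for the temperature series described above; the actual data sets are not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the daily temperature series: a sinusoid with a
# 365-day period plus noise. Not the actual data set, just an illustration.
rng = np.random.default_rng(0)
n = np.arange(3000)                                   # 3000 daily samples
temp = 15 + 10 * np.sin(2 * np.pi * n / 365) + rng.standard_normal(3000)

# Lollipop (stem) plot of a short excerpt, where the individual samples are
# clearly visible as a discrete-time sequence.
plt.stem(n[:50], temp[:50])
plt.xlabel("n (days)")
plt.ylabel("temperature")
plt.show()
```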
Okay, so we have seen some examples; now let's try to formalize the concept of a discrete-time signal. For us, this is a sequence of complex numbers; so it is a one-dimensional sequence, at least for now. The notation is x[n], where n is in square brackets to indicate that n is an integer. It is a two-sided sequence, so n goes from minus infinity to plus infinity, and it is therefore a mapping from Z, the set of integers, to C, the set of complex numbers. n is what we call a dimensionless time: we can think of it as time if we want, but we must be careful not to associate a physical unit with n. n is dimensionless; it just sets an order on the sequence of samples. Discrete-time signals can be created by an analysis process, where we take periodic measurements of a physical phenomenon (think of the floods of the Nile if you want), or by a synthesis process, where we use, say, a computer program to generate data points that simulate a physical phenomenon we want to reproduce; we will see an example very soon.

Let's now look at some prototypical signals that will appear again and again in this class; a short code sketch of the first three follows below. The simplest non-trivial signal you can think of is a signal where every sample is equal to 0 except for n = 0, where the sample is equal to 1. This is called the delta signal, and it exemplifies a physical phenomenon with a very, very short duration in time. To help your memory, you can associate the delta signal with a clapper, the device used in the movie industry, although perhaps not in the mechanical form you see in this picture, to synchronize the audio and the video tracks. When you shoot a movie, the video and the audio are recorded on separate devices, and then you have to synchronize the two tracks. The way this is done is by filming the clapper and having its top part slam down on the bottom part. This generates a very short, almost instantaneous sound that on the audio track will look like a delta signal, or a combination of positive and negative delta signals. When you need to synchronize audio and video, you look for this pattern in the audio track: you look for the delta and associate it with the frame where the top part of the clapper hits the bottom part.

Another useful signal is the unit step. This is a signal that is 0 for all negative values of the index, so x[n] = 0 for n < 0, and equal to 1 for n >= 0. It depicts a very simple phenomenon: the flipping of a switch. Think of a Frankenstein switch: when it is pulled up, the contact is made, and the signal goes from zero to one and stays at one forever.

Another common signal is the exponential decay. We take a number a, less than 1 in magnitude, and we take successive powers of a. Because |a| < 1, the magnitude of successive powers decays exponentially towards 0, although of course it never reaches 0 unless we go to infinity. In order to prevent the signal from exploding for negative n, we multiply it by the unit step, so we force to 0 all values of the sequence for negative values of the index. The exponential decay captures the behavior of a lot of physical systems; for instance, it describes how your coffee cup gets cold. Newton's law of cooling says that the rate of change of the temperature of a body is proportional to the difference in temperature between the environment and the body itself. If you solve this differential equation, you find that the evolution of the temperature indeed follows an exponentially decaying trend. Of course, this is an idealized version of how a coffee gets cold, because it assumes heat exchange by convection only and high conductivity; but in general, this is a common behavior for a lot of physical systems. We have seen, for instance, that the discharge of a capacitor in an RC circuit also follows an exponentially decaying curve. In discrete time, the exponential decay a^n u[n] models this kind of behavior.
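Here is a minimal NumPy sketch of these three prototypical signals over a finite window of the index; the window bounds, the value a = 0.8, and the variable names are arbitrary choices for display.

```python
import numpy as np

n = np.arange(-10, 21)              # a finite window of the index, for display

delta = (n == 0).astype(float)      # delta signal: 1 at n = 0, 0 elsewhere
u = (n >= 0).astype(float)          # unit step: 0 for n < 0, 1 for n >= 0

a = 0.8                             # any a with |a| < 1 decays towards 0
exp_decay = (a ** n) * u            # a^n u[n]: forced to 0 for n < 0
```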
And finally, we have sinusoidal signals. Here we have, for instance, an example using the sine function: the discrete-time sequence is simply x[n] = sin(omega0 n + theta), the sine of an angular frequency omega0 times the index n, plus an initial phase theta. omega0 is measured in radians, and theta is measured in radians as well; because n is dimensionless, the whole argument omega0 n + theta is measured in radians. There is certainly no need to stress the importance of oscillatory behavior in nature: your heartbeat, engines, the motion of the waves, the vibration of strings in musical instruments. But in signal processing, oscillations are particularly important because they are at the heart of Fourier analysis, as we will see very soon.

It is useful to divide discrete-time signals into four classes: finite-length signals, infinite-length signals, periodic signals, and finite-support signals. We will now look at them in turn; a short code sketch of these classes follows below.

Finite-length signals are signals that contain only N samples. We indicate them with the notation x[n], as for standard sequences, but we always specify the range of the index: n goes from 0 to N - 1. Sometimes we will also use vector notation; in this case, the signal is a column vector, like so. The connection between finite-length signals and vectors will become clear in one of the future lectures. Finite-length signals are very practical entities, and they are good for numerical packages: you will always deal with arrays of data where the size of the array is finite. However, it is not practical to develop the entire signal processing theory concentrating only on finite-length signals, because the length gets in the way.

Infinite-length signals are standard sequences where the index n ranges over the entire set of integers, from minus infinity to plus infinity. These are, of course, abstract entities, because they contain a potentially infinite amount of data; but they are very good for theorems and results that do not depend on the length of the data.

Periodic sequences are sequences where the data repeats every N samples. We indicate this with the notation x-tilde, where the tilde indicates periodicity explicitly, and the relationship for a periodic sequence of period N is that x̃[n] = x̃[n + kN] for all integer values of n and k. The amount of information contained in a periodic sequence is exactly equivalent to the amount of information contained in a finite-length signal of length N. So somehow, periodic sequences are a natural bridge between finite and infinite lengths: they carry a finite amount of information, but they have an infinite length.

Finally, finite-support signals are infinite-length sequences with only a finite number of nonzero samples. We will indicate these with the notation x-bar, which means that the support of the signal is compact. The amount of information in a finite-support sequence is the same as in a finite-length sequence of length N, and finite-support signals constitute another bridge between finite and infinite-length sequences. In a way, we can always embed a finite-length sequence into an infinite-length sequence, either by periodizing it, so turning it into a periodic signal, or by turning it into a finite-support signal by appending 0s before and after the interval [0, N - 1].
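Here is a minimal NumPy sketch of the discrete-time sinusoid and of the two embeddings of a finite-length signal; the signal values, the frequency, and the number of displayed periods are arbitrary choices.

```python
import numpy as np

# A discrete-time sinusoid x[n] = sin(w0 * n + theta); w0, theta in radians
w0, theta = 2 * np.pi / 20, np.pi / 4     # example values
n = np.arange(100)
x_sin = np.sin(w0 * n + theta)

# A finite-length signal of N = 4 samples...
x = np.array([1.0, 2.0, 3.0, 4.0])

# ...embedded as a periodic sequence (truncated here to 3 periods)
x_tilde = np.tile(x, 3)                   # [1 2 3 4 1 2 3 4 1 2 3 4]

# ...or embedded as a finite-support sequence: zeros outside [0, N-1]
x_bar = np.concatenate([np.zeros(4), x, np.zeros(4)])
```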
Elementary operators for signals include scaling, where we take a sequence and multiply each element by a factor alpha that belongs to the field of complex numbers. We can sum two signals, where we add to each element of the first sequence the corresponding element of the second sequence. The product is like the sum, except that we multiply each element of the first sequence by the corresponding element of the second. And finally, we have the shift by k, where we anticipate or delay a signal by shifting the sequence by an integer number of samples k, with k in Z.

The definitions of the first three operators are valid for all classes of signals. In the case of the shift, however, we have to be careful when we apply it to a finite-length signal. Remember that for a finite-length signal, the index n in x[n] can only range between 0 and N - 1. Now, if we choose k too large or too small, we can easily send the argument outside of these prescribed bounds. So in order to apply a shift to a finite-length signal, we have to decide how to embed that signal into an infinite-length sequence, and we have two types of shift according to the embedding that we choose; both are sketched in code below.

Imagine we embed the finite-length signal into a finite-support sequence. In that case, it is as if we were appending and prepending 0s outside of the range of the signal. So when we shift the signal, say towards the right, 0s will be pulled into the valid range of the finite-length signal, and we will lose the last points of the signal. Here, graphically, we see what happens: here is the original signal, imagined as embedded into a finite-support signal, and here is the result of the shift by 1, 2, 3, and so on. As we shift, we pull in 0s and we lose data.

Conversely, if we imagine a periodic extension, a periodization of the original sequence, the shift becomes a circular shift: if we shift, say, towards the right, what goes out on one side comes back in on the other. The result, as you can see here graphically, is that we are circulating the data around the support of the signal. We will see later that the periodic extension, and therefore the circular shift, is actually the natural way to interpret the shift for a finite-length signal.
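Here is a minimal NumPy sketch of the two shifts, assuming |k| is at most the length of the signal; the function names are our own.

```python
import numpy as np

def shift_finite_support(x, k):
    """Shift with the finite-support embedding: zeros are pulled in and
    the samples that fall outside the range are lost. Assumes |k| <= len(x)."""
    y = np.zeros_like(x)
    if k >= 0:                        # delay: shift towards the right
        y[k:] = x[:len(x) - k]
    else:                             # anticipate: shift towards the left
        y[:len(x) + k] = x[-k:]
    return y

def shift_circular(x, k):
    """Shift with the periodic embedding: a circular shift, where what goes
    out on one side comes back in on the other."""
    return np.roll(x, k)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(shift_finite_support(x, 2))     # [0. 0. 1. 2. 3.]
print(shift_circular(x, 2))           # [4. 5. 1. 2. 3.]
```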
We also have a definition of energy for a discrete-time signal: it is the sum, over all elements of the sequence, of the squared magnitudes of the elements. If you think of the signal's values as voltages across a 1-ohm resistor, you can see that this definition is consistent with the physical interpretation of energy. Many sequences have an infinite amount of energy, the unit step for instance: if you do the sum, you will see that Ex goes to infinity. So to describe the energetic properties of such sequences, we use the concept of power. The power is the rate of production of energy for a sequence, and it is defined as the limit, as N goes to infinity, of the local energy computed over a window of size 2N + 1, divided by the size of the window.

Take, for instance, periodic sequences. Periodic sequences have infinite energy, because we are summing the values in one period an infinite number of times. But their power, if you work it out, is equal to the energy over one period divided by the length of the period.
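As a numerical check of this last claim, here is a minimal sketch; the window-based power is approximated over a long but finite stretch of the periodic signal, and the sample values are arbitrary.

```python
import numpy as np

def energy(x):
    """Energy: sum of the squared magnitudes of the samples."""
    return np.sum(np.abs(x) ** 2)

def power_estimate(x):
    """Power over a finite window: local energy divided by the window size,
    a finite-window stand-in for the limit in the definition."""
    return energy(x) / len(x)

period = np.array([1.0, -1.0, 2.0, 0.5])   # one period, N = 4
x_tilde = np.tile(period, 1000)            # a long stretch of the periodic signal

# The power of the periodic sequence matches the energy of one period
# divided by the period length.
print(power_estimate(x_tilde))             # 1.5625
print(energy(period) / len(period))        # 1.5625
```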