Learn the fundamentals of digital signal processing theory and discover the myriad ways DSP makes everyday life more productive and fun.


From the course by École Polytechnique Fédérale de Lausanne

Digital Signal Processing



In this lesson

Module 1: Basics of Digital Signal Processing

- Paolo Prandoni, Lecturer, School of Computer and Communication Sciences
- Martin Vetterli, Professor, School of Computer and Communication Sciences

Hi and welcome to the new edition of our digital signal processing class.

In this introduction, we would like to talk about what signal processing is and

what makes it so interesting and relevant today.

And we will start the usual way by picking apart the name of the class.

So we have three words, digital, signal, processing.

Let's start with the centerpiece, signal, and try to define what a signal is.

The best definition that I can come up with is that a signal is a description of the evolution of a physical phenomenon.

And that is best explained by example, so take the weather for instance.

If you measure the temperature over time,

you have one possible description of the phenomenon, weather.

Take a sound, a sound is generated by, say,

my vocal tract, by creating a pressure wave.

If you measure this pressure at a point in space with a microphone for

instance, you have a description of the phenomenon sound.

But of course that is just one possible description.

If the sound had been recorded on a tape recorder, for

instance, the description of that phenomenon would take the form of

the magnetic deviation that is recorded on the tape.

In more dimensions, if you take the light intensity on a surface and encode it as a gray level on paper, you have a two-dimensional black-and-white photograph.

That is also a signal that varies in space rather than in time.

So, we have a signal; then we have the processing part.

And the processing part is where we make sense of this information

that has been described by the signal.

We can process the signal in two ways.

We can analyze it, namely we want to understand the information carried by

the signal and perhaps extract some higher-level description.

Or we can synthesize a signal, that's also signal processing.

And that's when we create a physical phenomenon that contains a certain amount

of information, that we want to put out in the world.

And this is really what we do when we transmit information, like when we use our

cell phone or the radio, or when we generate sounds with a music synthesizer.

So we said that the signal is a description of the evolution of a physical

phenomenon, but

signals like these are not exclusive to digital signal processing.

In physics, for instance, perhaps you remember the equation that

describes the motion of a projectile, which was discovered by Galileo in 1638.

And you have a simple equation that relates

the vertical position to the initial velocity and to time.

And you have this beautiful curve that describes the instantaneous position.

In electronics, for instance,

you have a lot of equations that describe the inner workings of electronic circuits.

Here you have an RC network, and

the voltage at the output is described by this law of charge or

discharge of the capacitor that would be something like this.

So what makes digital signal processing interesting?

In these two examples you can see that the mathematical model that we used to describe the signals is that of a function of a real variable, time.

This is a standard model for what we call analog signals.

But if we go back to the motion of a projectile, we see that this equation

has only one degree of freedom.

The only thing you can choose is the initial velocity, but

then the shape of the signal will always be the same.

And similarly in an RC network, you can choose the value for the resistor and for

the capacitor, but the signal will always have the same exponential decaying shape.

Now if I show you a waveform like this, you probably have seen this before and

you can guess that it is some sort of representation of a sound or speech.

And if we were to apply this mathematical model to this kind of information, the question is: what is the function that describes the sound?

Well, there's really no easy answer to that.

What I can try to do is record this information, and

people have invented extremely sophisticated devices to do so.

So for sound, for instance, I could come up with a record player, or I could come up with a tape recorder. And then, if I want to measure a temperature signal, I would have to come up with a mechanical system that drags a pencil on a piece of paper to record the evolution of temperature.

And to capture photographs, I would have to invent a camera.

But you see every device is specific to a certain signal.

It will record any information but

it will not let me manipulate this information easily and in a generic way.

In other words, the recording device will give me something like a picture,

something like this.

But it will not answer the question,

what is the function that describes the phenomenon?

This is the problem with analog signals.

The big paradigm shift and the power inherent to digital signal processing,

is that we're moving away from an analog model for signals like this.

So we're not asking, what is the function that represents the signal anymore,

we're just moving to a recording of the values of this function, representing the phenomenon just as a series of numbers.

And in particular, for digital signals, these numbers are integers.

The so called digital paradigm is composed of two fundamental ingredients,

discrete time and discrete amplitude.

And let's start with discrete time because this is really the paradigm shift in

the way we perceive the world.

When you were in elementary school,

you were probably asked to run a little experiment.

You recorded the temperature every morning, and

then plotted it on a piece of graph paper.

And you obtained a plot that probably looked like this: you look at the little point on the mercury scale and plot it in the right position on the graph paper.

And when you do this for a number of days, the resulting representation,

it seems pretty reasonable.

It makes sense: measuring the temperature once, or at most twice, a day gives a pretty descriptive representation of the weather.

But can we really do that? Can we really slice up time into a series of discrete instants, and not lose information?

The question is really tied to the very nature of time.

And this is something that philosophers have been grappling with

since the beginning of intellectual speculation.

One of the most famous analyses of the problem of time was conducted by

Augustine of Hippo in the 5th century.

And his analysis led to, fundamentally, a negation of the existence of time.

Because the past has already happened, the future is unknown, and the present instant cannot be pinpointed: the moment I mention it, it has already become part of the past.

And so Augustine said time does not exist.

Centuries later, Immanuel Kant, one of the last philosophers to try and systematize the world in philosophical terms, solved the problem of time by saying that time and space are fundamental categories of the spirit.

Fundamentally we are hardwired to project a notion of time and

space on everything that we observe.

But perhaps the best known philosophical reflection on the concept of time,

was performed by Zeno of Elea in the 5th century BC.

Zeno came up with his famous paradoxes, and the dichotomy paradox is one of the most celebrated.

It states that motion cannot really take place, and the reasoning goes like so.

If you want to move from point A to point B, you have to cover some distance.

Well in order to reach point B, you will have to go through a midpoint,

let's call this midpoint C.

And even if you go through that midpoint then you will have another midpoint

between C and B to go through, and then another midpoint, and then another, and

then another, and so on and so forth.

Now, Zeno said, no matter how small the subsegments you divide the distance between A and B into, you will need a finite amount of time to cross each one of them.

So now you have an infinite number of segments.

And the sum of an infinite number of times,

however small, cannot be finite and therefore you will never reach point B.

Of course, today any student with a little bit of calculus will tell you no, because if you formalize this, you can see that you're actually taking the infinite sum of a geometric series with ratio one-half.

We know that this sum is equal to 1.
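This can be checked numerically. Below is a small Python sketch (not part of the lecture) computing the partial sums of Zeno's series; the cutoff of 20 terms is an arbitrary choice:

```python
# Partial sums of the geometric series with ratio one-half:
# 1/2 + 1/4 + 1/8 + ...; the k-th term is (1/2)**k.
partial_sums = []
total = 0.0
for k in range(1, 21):          # 20 terms, an arbitrary cutoff
    total += 0.5 ** k
    partial_sums.append(total)

print(partial_sums[0])   # 0.5
print(partial_sums[-1])  # 0.9999990463256836, i.e. 1 - 2**-20
```

The partial sums approach 1 but never exceed it: an infinite number of ever-smaller travel times adds up to a finite total.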

But this is not the solution to Zeno's paradox.

The solution to Zeno's paradox is represented

by the last four centuries of mathematical research that have produced

a mathematical model where infinite sums do not lead to contradictions.

Zeno's intuition was very profound.

If we apply a simplistic mathematical model to reality, like for instance,

the possibility of dividing a segment into an infinite number of subsegments,

then we run into contradictions.

So we have to be careful to avoid these pitfalls.

Back to our signal processing problem.

If we start with a model of reality like the one we have for, say, projectile motion, an ideal trajectory modeled as a function of a real variable t, we are in a situation similar to the infinite divisibility of the segment: we have a real-valued time that can be split into an infinite number of smaller intervals.

So how do we go from this to a model in which we have a finite

number of, we can now call them by their name, samples of the original trajectory?

A set of measurements which is finite in nature.

Incidentally, having a discrete model of reality would be extremely practical for

a lot of applications.

So for instance, consider this temperature signal between a and b. If we use a continuous model of reality, an analog model, and we want to compute the average temperature, then we need to know the function f(t) that represents the temperature.

And then, we need calculus to compute the integral of this function over the interval of interest.

And then, divide by the length of the interval.

Conversely, if we only have temperature measurements on a discrete set of times, like so for instance, well, the average is very simple.

We just sum these values and divide by the number of samples.

This we know how to do intuitively.

And this is valid in general.

Discrete models are extremely easy to use computationally speaking.
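As a sketch, here is the discrete computation in Python; the temperature values are made up for illustration:

```python
# Hypothetical temperature samples taken between times a and b
temps = [12.0, 14.5, 15.0, 13.5, 12.5]

# The discrete average: sum the samples and divide by their number.
# No function f(t), no integral, no calculus required.
average = sum(temps) / len(temps)
print(average)  # 13.5
```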

So we want to move away from these Platonic, ideal functions of a real variable time and adopt a model of reality where signals are described as sequences.

Mathematically these are mappings from a set of integers to a set of values,

V, which could be the set of real numbers.

Or, as in the case of fully digital signals, a set of integers as well.

The notation is very simple.

We will indicate a discrete-time signal as x[n], where x is our signal and n is, quote unquote, time.

We will see pretty soon that n does not have a physical dimension.

It's just an ordinal number that orders the samples one after the other.

Are we burning bridges?

Are we losing some power of representation when we move to this discrete model

of reality?

So what happens inside an RC circuit?

Can we still describe this physical reality using a discrete-time sequence?

The question is of fundamental importance, and the answer was given to us at the beginning of the last century by Harry Nyquist and Claude Shannon.

The answer is positive and it states that under very mild conditions, the continuous

time representation and the discrete time representation are equivalent.

Mathematically, the result is known as the sampling theorem, and

it has a very simple statement.

The relationship between the continuous-time representation of a signal and its discrete-time counterpart is given by this formula.

And you can see that we can build the continuous-time representation as a linear combination of copies of a typical function, or building block, called the sinc, shifted and scaled by the values of the discrete-time sequence.

The sinc looks like so; you probably have seen it in countless signal processing logos. It's actually an infinite-support function that keeps oscillating from minus infinity to plus infinity, and the sampling theorem graphically looks like so.

You start with a continuous-time signal and then you take measurements, so you convert it to a discrete-time sequence. You take regular measurements of this function, then you throw away all the rest, and you're left with just the discrete-time sequence.

To go back to continuous time,

all you need to do is take copies of the sinc function and place them at each sample location, scaled by the amplitude of the sample. And when you do that, and then you sum all these copies of the sinc together,

you get back exactly the original function.
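The reconstruction just described is the sinc interpolation formula, x(t) = Σₙ x[n]·sinc((t − nTs)/Ts). Here is a minimal Python sketch of it; the 1 Hz test signal and the 8 Hz sampling rate are arbitrary choices, and the infinite sum is truncated to a finite window, so the result is only approximate:

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

Fs = 8.0       # sampling rate, comfortably above twice the signal frequency
Ts = 1.0 / Fs

def x(t):
    # A band-limited test signal: a 1 Hz sine
    return math.sin(2 * math.pi * t)

# Take regular measurements (the true formula sums over all integers n;
# here the sum is truncated to a finite window)
samples = {n: x(n * Ts) for n in range(-2000, 2001)}

def reconstruct(t):
    # A copy of the sinc at each sample location, scaled by the sample value
    return sum(v * sinc((t - n * Ts) / Ts) for n, v in samples.items())

print(reconstruct(0.3), x(0.3))  # the two values agree very closely
```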

The conditions under which you can do this are given by the statement of the sampling theorem, and they will require a tool called Fourier analysis.

The Fourier transform will give us a quantitative measure

of how fast a signal moves.

And once we know this speed, we will always be able to choose a sampling

interval, namely a spacing between measurements when we convert a function to a sequence, that will satisfy the hypothesis of the sampling theorem.

So we have said that digital signals are composed of two ingredients.

And we have talked, at length, about the discretization of time.

Now, let's look at the other aspect: the discretization of amplitude.

Take for instance this sine wave.

The first discretization happens in time,

and we get a discrete set of samples.

And then, the second discretization happens in amplitude, where each sample can take values only among a predetermined set of possible levels.

And a very important consequence of discretization is that, independently of the number of levels, the set of levels is countable.

So we can always map the level of the sample onto an integer.
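A uniform quantizer is one simple way to realize this mapping. The Python sketch below is illustrative only; the step size is an arbitrary assumption, not something fixed by the lecture:

```python
def quantize(sample, step=0.25):
    # Map a real-valued sample to the index of the nearest allowed level.
    # The index is an integer, so the quantized signal is just integers.
    return round(sample / step)

def dequantize(index, step=0.25):
    # Map the integer level index back to an amplitude
    return index * step

print(quantize(0.37))               # 1   (the nearest level is 0.25)
print(dequantize(quantize(0.37)))   # 0.25
```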

Now, if our data is just a set of integers, it means that its representation is completely abstract and completely general.

And this has very important consequences in three domains: storage, processing, and transmission. Storage becomes very easy because any memory support that can store integers can now store signals.

And computer memory comes to mind as the first candidate for a storage medium.

Processing now becomes completely independent of the nature of the signal

because all we need is a processor that can deal with integers.

And again, CPUs, the heart of computers,

are general purpose processors that can deal with integers very, very well.

And finally, transmission.

With digital signals, we can deploy extremely effective

ways to combat noise and maximize the capacity of the communication channel, and

as we'll see in an example in a second.

As far as storage is concerned, just consider the difference between attempting

to store an analog signal, which required a medium

that was dependent on each kind of signal and on each kind of application.

And compare that to what we do today,

which is using general purpose computer memory for all kinds of signals.

So here in this picture, for instance, you can see what we used to do for sound.

We started with phonographs, perhaps wax cylinders and records, and then moved on to tape recorders, and each medium was incompatible with the previous one.

The same goes for temperature: there was no way of transmitting the temperature record other than physically delivering the cylinder of paper to the weather station, and so on and so forth.

Today, all we do is store 0s and 1s.

And this is a consequence of the fact that the possible values of a discrete-time sequence are mapped onto a countable set of integers and

therefore onto binary digits.

There is a famous picture going around the Internet that shows how much

information you can store today on a MicroSD card

compared to the storage abilities of just a few years back.

And this fast evolution of technology is what you get when you can pool resources.

You don't have to specialize into different devices for

different purposes anymore.

And all the research goes into perfecting

a single technology that is shared across a variety of applications.

Processing was also a problem in the past.

With analog signals, you had to devise specific devices that would be able to react to the physical phenomena they had to interact with.

So for instance, here you see a thermostat that had a special coil that expanded and triggered switches in order to regulate the temperature.

Mechanical systems required the design of very complex gears, and sound equalization systems required discrete electronics.

Today with digital signals, all you need to do is write a piece of computer code

that will run on a general purpose architecture and

perform the same tasks that required hardware in the past.

Finally, let's consider the problem of data transmission

which is probably the domain where digital signal processing has made the most

difference in our day to day life.

So if you have a communication channel, and you try to send information from

a transmitter to a receiver, you're faced with the fundamental problem of noise.

So let's see what happens inside the channel.

You have a signal that will be put into the channel.

The channel will introduce an attenuation.

It will lower the volume of the signal so to speak, but

it will also introduce some noise, indicated here as sigma t.

And what you will receive at the end is an attenuated copy of your original symbol,

plus noise.

These are just facts of nature that you cannot escape.

So if this is your original signal, what you will get at the end

is an attenuated copy scaled by a factor of g, plus noise.

So how do you recover the original information?

Well, you try to undo the effects introduced by the channel but

the only thing you can undo is the attenuation.

So you can try and multiply the received signal by a gain factor that is

the reciprocal of the attenuation introduced by the channel.

So if you do that, you introduce a gain here at the receiver, and what you get is, starting again with your original signal: an attenuated copy, some noise added, and then let's undo the attenuation.

What happens, unsurprisingly, is that the gain factor

has also amplified the noise that was introduced by the channel.

So you get a copy of the signal that is, yes, of a comparable amplitude

to the original signal, but in which the noise is much larger as well.

This is the typical situation that you get with a second- or third-generation copy of, say, a tape, or if you're trying to make a photocopy of a photocopy.

Just to give you an idea of what happens with this noise amplification problem.

Now why is this very important?

This is important because if you have a very long cable, so for

instance if you have a cable that goes from Europe to the United States, and

you try to send a telephone conversation over it.

What happens is that you have to split the channel into several chunks and try to undo the attenuation of each chunk in sequence.

So you actually put what are called repeaters along the line, which regenerate the signal to the original level every, say, 10 kilometers of cable or so.

But unfortunately, the cumulative effect of this chain of repeaters is that some noise gets introduced at each stage and gets amplified over and over again.

So for instance, if this is our original signal which again,

gets attenuated and gets corrupted by noise in the first segment of the cable.

After amplification, you would get this; we just did that before.

And this signal is injected into the second section of the cable.

It gets attenuated, new noise gets added to it and

when you amplify it you get double the amplified noise.

And after N sections of the cable you have N times the amplified noise.

This can lead very quickly to complete loss of intelligibility

in a phone conversation.
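This accumulation is easy to simulate. The Python sketch below uses made-up values for the attenuation, the noise level, and the number of repeaters:

```python
import random
random.seed(0)  # fixed seed so the simulation is reproducible

def analog_repeater(signal, g=0.5, sigma=0.05):
    # Each cable segment attenuates by g and adds Gaussian noise;
    # the repeater multiplies by 1/g, which amplifies the noise too.
    return [(g * s + random.gauss(0.0, sigma)) / g for s in signal]

def rms_error(received, sent):
    return (sum((r - s) ** 2 for r, s in zip(received, sent)) / len(sent)) ** 0.5

sent = [0.0] * 1000             # a silent signal: any output is pure noise
after_1 = analog_repeater(sent)
after_10 = after_1
for _ in range(9):
    after_10 = analog_repeater(after_10)

# The accumulated noise grows with the number of stages
# (as sqrt(N) in RMS terms for independent noise at each stage)
print(rms_error(after_1, sent) < rms_error(after_10, sent))  # True
```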

Let's now consider the problem of transmitting a digital signal

over the same transoceanic cable.

Now a digital signal, as we've said before,

is composed of samples whose values belong to a countable, finite set of levels.

And so their values can be mapped to a set of integers.

Now, transmitting a set of integers means that we can encode these integers in binary format.

And therefore we end up transmitting basically just sequences of zeros and ones, binary digits.

We can build an analog signal by associating, say, the level plus 5 volts with the digit 0 and minus 5 volts with the digit 1.

And we will have a signal that alternates between these two levels as the digits are transmitted.

What happens on the channel is the same as before.

We will have an attenuation, we will have the addition of noise, and we will have an amplifier at each repeater that will try to undo the attenuation.

But on top of it all, we will have what's called a threshold operator that will try to reconstitute the original signal as best as possible.

Let's see how that works.

If this is what we transmit, say an alternation of zeros and ones mapped to these two voltage levels, the attenuation and

the noise will reduce the signal to this state,

the amplification will regenerate the levels and will amplify the noise.

So the noise is much larger than before.

But now we can just threshold and say: if the signal value is above zero, we just output plus 5 volts, and vice versa.

If it's below zero, we will output minus 5 volts.

So the thresholding operator will reconstruct a signal like so.

So you can see that at the end of the first repeater, we actually have an exact

copy of the transmitted signal and not a noise corrupted copy.
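The whole digital link can be sketched in a few lines of Python; the attenuation, the noise level, and the number of stages are made-up values:

```python
import random
random.seed(1)  # fixed seed so the simulation is reproducible

G = 0.5        # channel attenuation per segment
SIGMA = 0.3    # standard deviation of the channel noise

def digital_repeater(levels):
    out = []
    for v in levels:
        # attenuate, add noise, re-amplify (the noise is amplified too)
        received = (G * v + random.gauss(0.0, SIGMA)) / G
        # threshold: snap back to the nearest of the two legal levels
        out.append(5.0 if received >= 0.0 else -5.0)
    return out

bits = [0, 1, 0, 0, 1, 1, 0, 1]
sent = [5.0 if b == 0 else -5.0 for b in bits]   # 0 -> +5 V, 1 -> -5 V

received = sent
for _ in range(20):            # a chain of 20 repeater stages
    received = digital_repeater(received)

print(received == sent)  # True: each stage regenerates the signal exactly
```

As long as the noise stays well below the decision threshold, each repeater outputs an exact copy of its input, so the noise never accumulates across the chain.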

The effectiveness of the digital transmission schemes can be appreciated by

looking at the evolution of the throughput: the amount of information that can be put on a transatlantic cable.

In 1866, the first cable was laid down, and it had a capacity of eight words

per minute, which corresponded to approximately 5 bits per second.

In 1956, when the digital cable was laid down on the ocean floor, the capacity all of a sudden skyrocketed to 3 megabits per second, so 10 to the power of 6: six orders of magnitude larger than the analog cable.

In 2005, when the fiber cable was laid down, another six orders of magnitude were added, for a capacity of 8.4 terabits per second.

Similarly, and literally closer to home, we can look at the evolution of the throughput for in-home data transmission.

In the 50s, the first voiceband modems came out of Bell Labs.

Voiceband, meaning that these were devices designed to operate over a standard telephone channel.

Their capacity was very low, 1,200 bits per second, and they were analog devices.

With the digital revolution in the '90s, digital modems started to appear and very quickly reached, basically, the ultimate limit of data transmission over the voiceband channel, which was 56 kilobits per second at the end of the '90s.

The transition to ADSL pushed that limit up to over 24 megabits per second in 2008.

Now this evolution is, of course, partly due to improvements in electronics and

to better phone lines.

But fundamentally, its success and

its affordability are due to the use of digital signal processing.

We can use small, yet very powerful and

cheap general purpose processors to bring the power of error correcting codes and data recovery even into small home consumer devices.

In the next few weeks we will study signal processing starting from the ground up.

And by the end of the class,

we will have enough tricks in our bag to fully understand how an ADSL modem works.

For now, let's summarize the key ideas that we have seen in this, perhaps, slightly rambling introduction.

The discretization of time allows us to replace the idealized models of physics

and electronics with sequences of measurements, or samples.

And in so doing,

we will be able to replace calculus with much simpler math in our processing.

The discretization of values, on the other hand, is of extreme practical importance

because it will allow us to use general-purpose storage for

our signals and general-purpose processing units for processing.

Discrete values also allow us to control the noise very effectively and

build very efficient communication systems.

So let's go on with the digital revolution.
