Learn the fundamentals of digital signal processing theory and discover the myriad ways DSP makes everyday life more productive and fun.


From the course by École Polytechnique Fédérale de Lausanne

Digital Signal Processing



In this lesson

Module 5: Sampling and Quantization

- Paolo Prandoni, Lecturer, School of Computer and Communication Sciences
- Martin Vetterli, Professor, School of Computer and Communication Sciences

Quantization is really the second half of this story in digital signal processing.

The first half being the discretization of time.

We soon realize that digital devices can only deal with integers no matter how many

bits we use inside each memory cell.

And so we need to map the numeric range that the discrete-time samples live in

onto a finite set of values.

In so doing there is an irreversible loss of information because we're

chopping these amplitudes according to the resolution that our system allows for.

If we were to represent the situation graphically, we have a sequence of discrete-time

samples here that belong, say, to the set of complex numbers.

These samples go through a quantizer, and a sequence of quantized samples comes out,

where each quantized sample now belongs to the set of integers.

We model input as a stochastic process.

And to study the effects of the system, we have to consider several factors.

How many bits per sample will this quantizer allocate?

What is the storage scheme used to represent the quantized samples?

For instance is it fixed point or floating point, and

what are the properties of the input as a stochastic process?

What is its range and what is its probability distribution?

The simplest quantizer is the scalar quantizer.

In this quantizer, each sample is encoded individually.

So we don't take into account the relationship between neighboring samples.

Each sample is quantized independently, so there is no memory of

previous quantization operations, and each sample is encoded using R bits.

So the rate here is R bits per sample.

Let's see what happens when we Scalar quantize an input.

Assume we know that each input sample is strictly between A and B.

Each sample is quantized over 2 to the R possible values, because we are using R

bits per sample and this defines 2 to the R intervals over the range A to B.

Each interval will be associated to a quantization value.

Which means that whenever the sample, say, falls into this interval here,

it will be replaced by this representative value for

the interval and similarly for the other intervals.

So let's look at an example for R = 2.

The range A to B, would be divided into four intervals, and

these are the boundaries of each interval.

We would associate a representative point to each interval,

and we would encode each interval using two bits, so the sequence zero,

zero would be associated to the first interval, and so on and so forth.

In other words, the quantized values would be one of these four possible values.

And internally,

the quantizer would know how to associate this binary value to this real value.
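The mapping just described can be sketched in code. This is a minimal illustration, not the course's own implementation: the range [0, 1), the rate R = 2, and the use of the interval midpoint as the representative value (which the lecture justifies further on) are all assumptions here.

```python
# Sketch of a 2-bit uniform scalar quantizer over an assumed range [A, B).
A, B, R = 0.0, 1.0, 2          # hypothetical range and rate (bits per sample)
levels = 2 ** R                # 2^R = 4 quantization intervals
delta = (B - A) / levels       # width of each interval

def encode(x):
    """Map a sample in [A, B) to its R-bit interval index."""
    k = int((x - A) / delta)
    return min(max(k, 0), levels - 1)  # clamp to a valid index

def decode(k):
    """Map an index back to the interval's representative (midpoint) value."""
    return A + (k + 0.5) * delta

x = 0.3
k = encode(x)  # falls in the second interval, stored as the bit pattern '01'
print(f"{x} -> code {k:0{R}b} -> {decode(k)}")  # 0.3 -> code 01 -> 0.375
```

Internally, only the 2-bit index would be stored; the decoder's table of representative values turns it back into a real number.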

The two natural questions at this point are: what are the optimal

interval boundaries i_k, and what are the optimal quantization values for each

interval? To find an answer, let's consider the quantization error.

So this is defined as the difference between the quantized value,

namely the representative value for each interval, and the real value.

We model the input as a stochastic process, as we said in the beginning, and

we model the error as a white noise sequence.

In other words, we assume the samples are uncorrelated.

And we assume that all error samples have the same distribution.

These are rather drastic assumptions, but as a first approximation,

they will give us a good feeling for the effects of a quantizer.

In order to proceed further,

we need the statistical description of the input samples.

Let's also make some assumptions on the general structure of the quantizer.

And let's consider the simple, but very common case of uniform quantization.

The range in this case is split into 2 to the R equal intervals of width delta,

which is equal to B − A, the range of the input samples,

divided by 2 to the R,

the number of levels afforded by a rate of R bits per sample.

So in the case of R equal to 2, as before,

our range would be split into four equal width intervals.

The Mean Square Quantization Error is the variance of the error signal,

namely the expectation of the difference between the quantized samples and

the original samples.

If we know the probability distribution function for the input,

we can write that as an integral from A to B of the PDF of the input

times the error function, which is the quantized value

of the integration variable minus the integration variable squared.

This is the standard application of the expectation theorem.

Then we can finally split the integral

over the independent quantization intervals.

And we get this last formulation, here.
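Written out, the chain of expressions just described is the following (here f_x is the input PDF, Q the quantizer, I_m the m-th quantization interval, and x̂_m its representative value; the notation is mine, not necessarily the lecture's):

```latex
\sigma_e^2 = E\big[(\hat{x} - x)^2\big]
           = \int_A^B f_x(\tau)\,\big(Q(\tau) - \tau\big)^2\,d\tau
           = \sum_{m=0}^{2^R - 1} \int_{I_m} f_x(\tau)\,\big(\hat{x}_m - \tau\big)^2\,d\tau
```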

Now, in order to proceed, we need to know

the probability distribution function of the input to compute these integrals.

So now we make a further hypothesis on the input,

we assume that it is uniformly distributed.

That means that the probability distribution function

is just a constant from A to B, with value 1 over B − A.

The mean squared error becomes a sum, for m from 0 to 2 to the R minus 1, of independent integrals,

each of which is the integral over the quantization interval

of the representative value for the interval, which we haven't determined yet,

minus tau, squared, divided by B − A.

In order to find the optimal quantization points we minimize

the mean square error with respect to the quantization points themselves.

We take the partial derivative of the error with respect to x hat of m.

When we take the partial derivative of the sum, the partial derivative will

kill all terms of the sum except the one that depends on the differentiation variable, so

we're left with an integral over the interval I_m of 2 times

the quantization point minus tau, divided by B − A, d tau.

We have to compute this integral over quantization interval number m, and

you remember the range is from A to B; we divide this into 2 to the R

equal intervals of size delta, so the lower and upper boundaries of

interval number m are A + m delta and

A + m delta + delta.

In order to minimize the error we set the partial derivatives to 0 for

all quantization intervals, and we find that this happens when the quantization

point is the interval's midpoint.
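As a worked step, using the interval boundaries A + mΔ and A + (m+1)Δ, setting the derivative to zero yields the midpoint:

```latex
\frac{\partial \sigma_e^2}{\partial \hat{x}_m}
  = \frac{1}{B-A}\int_{A+m\Delta}^{A+(m+1)\Delta} 2\,(\hat{x}_m - \tau)\,d\tau
  = \frac{2\Delta}{B-A}\left(\hat{x}_m - A - m\Delta - \tfrac{\Delta}{2}\right) = 0
\;\Longrightarrow\;
\hat{x}_m = A + \left(m + \tfrac{1}{2}\right)\Delta
```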

With this we can plot the quantizer's characteristic.

Here we show it for R equal to 3, three bits per sample.

And you can see that the quantizer associates each quantization interval

with its midpoint.

And you have the typical staircase characteristic of the uniform quantizer.

Back to the mean square error, we now replace into the expression for

the mean square error the values that we found in the previous analysis.

Namely, the boundaries for each quantization interval, the value for

the midpoint, and the expression for the probability distribution of the input.

And if we compute this integral,

we obtain the fundamental result of uniform quantization.

The mean square error for the uniform quantizer is equal to delta

squared over 12, where delta is (B−A)/(2^R).
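The Δ²/12 result lends itself to a quick numerical sanity check. This is just a sketch; the range [−1, 1) and the rate R = 3 are arbitrary choices, not values from the lecture.

```python
import numpy as np

# Quantize uniformly distributed samples with a uniform midpoint quantizer
# and compare the empirical mean square error against delta^2 / 12.
rng = np.random.default_rng(0)
A, B, R = -1.0, 1.0, 3                 # arbitrary range and rate
delta = (B - A) / 2 ** R

x = rng.uniform(A, B, size=1_000_000)  # uniformly distributed input samples
k = np.floor((x - A) / delta)          # interval index for each sample
x_hat = A + (k + 0.5) * delta          # replace each sample by its midpoint

mse = np.mean((x_hat - x) ** 2)
print(mse, delta ** 2 / 12)            # the two values should nearly agree
```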

If we analyze this result a little further,

we can relate the expression for the error to the expression for the signal's energy.

Since we assume that the input is uniformly distributed,

we can compute its variance, i.e.

its energy, as (B−A)^2 over 12.

This is the variance of a uniformly distributed variable, and so

we can compute the signal-to-noise ratio as

the power of the signal divided by the power of the error, and

the signal-to-noise ratio happens to be 2 to the 2R.
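Spelled out, the ratio of the two powers is:

```latex
\mathrm{SNR} = \frac{\sigma_x^2}{\sigma_e^2}
             = \frac{(B-A)^2/12}{\Delta^2/12}
             = \left(\frac{B-A}{\Delta}\right)^2
             = 2^{2R}
```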

So if the input is uniformly distributed and the quantizer is a uniform quantizer,

which means it's matched to the input, the signal to noise ratio

is only a function of the number of bits per sample that we allocate.

We can express this result in decibels by taking 10 times the log in base 10

of 2 to the power of 2R, and

we get the famous and handy formula of 6 dB per bit.

In other words, every bit we add to the internal representation

of a quantized signal adds 6 dB of signal-to-noise ratio.

So for instance, a compact disk has 16 bits per sample, so

the maximum signal-to-noise ratio that you can achieve on a CD is 96 dB.

A DVD, on the other hand, has 24 bits per sample, so

your signal-to-noise ratio grows to 144 dB.
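These decibel figures can be checked directly. The helper `snr_db` below is just an illustration of the formula, not a function from the course; note that the exact figure is about 6.02 dB per bit, which the lecture rounds to 6.

```python
import math

# SNR of a uniform quantizer matched to a uniform input: SNR = 2^(2R),
# so in decibels 10 * log10(2^(2R)) = 20 * R * log10(2), about 6.02 dB per bit.
def snr_db(R: int) -> float:
    return 10 * math.log10(2 ** (2 * R))

print(round(snr_db(1), 2))   # 6.02  -- the "6 dB per bit" rule
print(round(snr_db(16), 2))  # 96.33 -- a CD's 16 bits per sample
print(round(snr_db(24), 2))  # 144.49 -- a DVD's 24 bits per sample
```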
