Welcome back. Now, let's take a look at video processing. This is the block diagram that we were looking at. The main workhorse of video codecs for bit rate reduction is motion estimation and motion prediction at the sender side, and those bits are sent along with the texture information. At the receiver, we do motion compensation. I do not show that in this block diagram. We build on top of the still-image compression, the JPEG texture-coding aspects that we just talked about, and on top of that we add motion processing.

So this is how motion estimation and compensation works. When we are working on the current frame, we use one or more of the neighboring frames. It can be a previous frame or it can be a future frame, which of course introduces some delay. These are called the reference frames. We estimate which pixels, or groups of pixels, have moved with respect to the reference frame, and once we know which groups of pixels have moved, we get a difference in the current frame that we are working on. And we actually work only on the differences. That's why this is called DPCM, differential pulse code modulation. We use the DCT as the transform, and then the quantization, the run-length encoding, and the Huffman coding are exactly the same as what we used in JPEG.

This leads to the concept of what we call a group of pictures, GOP. When the content is completely new, when we get a brand new frame which has no correlation with either past or future frames, we call it an intra-frame, an I-frame, which can be decoded completely on its own. This is like JPEG. The predicted frames, the P-frames, use only the previous frames to compute the current frame, and only the differences are encoded and sent. This is like the ADPCM that we saw in speech and audio. You can also use bidirectional prediction, prediction from both future and past frames. These are called B-frames. So until you get another I-frame, you're basically building off the previous I-frame and the predicted deltas. There is some additional latency involved between I-frames, and if you get an error in the transmission, you will not recover from it very easily. I'll get back to that in a minute.

The other important way video coding gets very high compression ratios, both for the texture and for the motion aspects, is that instead of just looking at 8x8 blocks, we look at 16x16, 4x4, and combinations of these. All this adds a lot of complexity at the encoder and slightly more complexity at the decoder. It's really the encoder that gets hit very badly, but you do get much bigger data rate reductions.

So, as I was saying, one of the weak links of getting all this compression and sucking away all the redundancy is that even with a slight error between two I-frames, the prediction loop for the P-frames and the B-frames between the sender and the receiver is completely broken. They're not in sync anymore, so the error keeps building up until you correct it with an I-frame or some other mechanism, and the visual quality can degrade very, very fast until you manage to synchronize the prediction loop at the receiver with that of the sender. Unfortunately, the MPEG and ITU standards that define MPEG-4, H.264, and H.265 specify only the bitstream and the operation of the receiver, so the encoder is free to do whatever it wants as long as it is generating compliant bitstreams.
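To make the motion-estimation step concrete, here is a minimal sketch of block matching with NumPy. The 16x16 block size, the ±8 pixel search range, and the sum-of-absolute-differences (SAD) cost are assumptions chosen for illustration; real encoders use much smarter searches and the variable block sizes mentioned above.

```python
# Minimal block-matching motion estimation sketch (assumed parameters:
# 16x16 macroblocks, +/-8 pixel exhaustive search, SAD cost).
import numpy as np

BLOCK = 16    # macroblock size (assumed)
SEARCH = 8    # search range around the co-located block (assumed)

def motion_estimate(current, reference):
    """Return per-block motion vectors and the motion-compensated residual."""
    h, w = current.shape
    residual = np.zeros_like(current, dtype=np.int16)
    vectors = {}
    for by in range(0, h - BLOCK + 1, BLOCK):
        for bx in range(0, w - BLOCK + 1, BLOCK):
            cur = current[by:by + BLOCK, bx:bx + BLOCK].astype(np.int16)
            best, best_sad = (0, 0), np.inf
            # Exhaustive search over candidate displacements in the reference
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + BLOCK > h or x + BLOCK > w:
                        continue
                    ref = reference[y:y + BLOCK, x:x + BLOCK].astype(np.int16)
                    sad = np.abs(cur - ref).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best
            ref = reference[by + dy:by + dy + BLOCK,
                            bx + dx:bx + dx + BLOCK].astype(np.int16)
            # Only the difference (the DPCM residual) goes on to the DCT,
            # quantization, and entropy coding, along with the motion vector.
            residual[by:by + BLOCK, bx:bx + BLOCK] = cur - ref
            vectors[(by, bx)] = best
    return vectors, residual
```

At the receiver, motion compensation reverses this: it fetches the reference block indicated by each motion vector and adds back the decoded residual, which is what keeps the sender and receiver prediction loops in sync.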
This is slightly different from real-time communication standards such as those in 3GPP and 3GPP2, which define the encoder operation, the decoder operation, and the bits completely. So what happens if you do not have other mechanisms? This is a simulation in a cellular network with a pedestrian video model. The error occurs at frame number 88, and from that time onwards, at frames 98 and 123, you can see significant degradation in the video quality. I'm sure you have seen this on IPTV and video streaming as well when the channel is not good. Other than that, you do get significant reductions in the data rate compared to what you could do before.
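To illustrate this drift, here is a minimal sketch of a decoder prediction loop where one P-frame residual is lost; this is not the cellular simulation shown in the lecture. The 64x64 synthetic frames, the GOP length of 60, and the loss at frame 88 are all assumptions for illustration; the point is only that the mismatch persists until the next I-frame resynchronizes the loop.

```python
# Prediction-loop drift sketch: one lost P-frame residual desynchronizes the
# decoder until the next I-frame. All parameters below are assumed.
import numpy as np

rng = np.random.default_rng(0)
GOP = 60            # assumed I-frame period
N_FRAMES = 150
LOST_FRAME = 88     # P-frame whose residual the channel drops (assumed)

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 99.0 if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Synthetic "encoder reconstruction": each frame is the previous one plus a
# small random change, standing in for real motion and texture updates.
enc = [rng.integers(0, 256, (64, 64), dtype=np.uint8)]
for _ in range(1, N_FRAMES):
    nxt = np.clip(enc[-1].astype(int) + rng.integers(-16, 17, (64, 64)), 0, 255)
    enc.append(nxt.astype(np.uint8))

dec_ref = None
for n in range(N_FRAMES):
    if n % GOP == 0:
        dec_ref = enc[n].copy()                  # I-frame: decoder resyncs fully
    else:
        residual = enc[n].astype(int) - enc[n - 1].astype(int)  # what the encoder sent
        if n == LOST_FRAME:
            residual = np.zeros_like(residual)   # residual lost in the channel
        # Decoder adds the residual to its OWN reference, which may have drifted
        dec_ref = np.clip(dec_ref.astype(int) + residual, 0, 255).astype(np.uint8)
    if n in (LOST_FRAME, LOST_FRAME + 10, (LOST_FRAME // GOP + 1) * GOP):
        print(f"frame {n:3d}: PSNR vs encoder = {psnr(enc[n], dec_ref):5.1f} dB")
```

Running this, the PSNR against the encoder's own reconstruction drops at the lost frame, stays degraded for the following P-frames, and only recovers at the next I-frame, which is exactly the behavior described above.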