So, we know how to calculate the periodic schedule, right? And we know how to calculate a latency bound for it. And I have claimed that that bound is going to be a bound on the eager schedule as well. That is not so surprising, because an eager schedule fires as soon as possible, which means that a periodic schedule can only fire later than that, and so a bound on the periodic schedule is also going to be a bound on the eager schedule. But can we actually prove that? I would like to have a proof, because I am not so sure that this still holds after a number of firings; maybe in an eager schedule something happens along the way. So I would like a more rigorous proof.

Luckily, all the ingredients that we need for that proof are already available. You may remember from the proof about the throughput that it was bounded by one over the maximum cycle mean of the graph, and that this inequality actually captures a bound on the production times on any arrow; the consumption times on any arrow have a similar bound. You will also remember that we could iterate that bound over the graph. So if you were looking for a bound on, say, the third token coming out of the graph, then you could trace paths back through the graph to find that bound, looking at the production times needed along a path, but only considering paths that in total have more tokens than n, so in this case more tokens than three.

What I actually discovered when I wanted to explain how the latency bound works is that you can also look at slightly different paths. The principle is still the same, but it is not necessary to go over the total number of tokens. If you go back through the graph and you first end up here, then you are still looking for q(1), and if you go this way, you end up at the input. So we still have paths that lead to the input. But if you go this other way, you do not, because this buffer already has enough tokens in it to allow Q to fire immediately. So, from the point of view of this buffer, the worst-case response time of Q is zero. The only thing is that Q is still being delayed by this path going up here. And from this point of view, so if you are here in the graph, then if you go back, you still have to look for R2(1). But each of the incoming buffers has at least one token in it, so the worst-case response time of R2(1), or of R2 really, is going to be zero, because that first token can be consumed immediately. So the consumption time is zero, and the production time of that firing is just the execution time of that actor.

Now, if we put that in a formula, at first we get this. We get that the output is bounded by paths. Which paths? Well, the paths from the input to P: along those paths we add up all the execution times of the actors, and we take the worst-case response time of the input at n minus the number of tokens that were on the path. And for the other term, we take only the paths from P back to Q that are longest, in the sense that if you took any longer path, you would violate the condition that the path may carry at most n tokens. The fact that you start that path from an actor Q that has enough tokens on its input, actually on all of its inputs, means that this index n − ι(x) is smaller than the number of tokens that is already available. So q(n − ι(x)) is going to be zero, because that firing can happen immediately. Except that you still have to add the execution time of the actor itself, but that is already in τ_x.
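As a sketch, and with notation that is assumed here rather than taken from the slides, that first bound might be written as follows: p̂_A(k) is an upper bound on the production time of the k-th token produced by A, I is the input, P is the output actor, x ranges over paths, τ_x is the summed execution time of the actors on x, ι(x) is the number of initial tokens on x, and q_Q(k) is the worst-case response time of the k-th firing of Q.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Hedged reconstruction of the two-path bound described above; notation assumed.
% x ranges over paths, \tau_x is the summed execution time on x, \iota(x) the
% number of initial tokens on x, and \hat{p}_A(k) bounds the production time
% of the k-th token produced by actor A.
\[
  \hat{p}_P(n) \;\le\; \max\!\left(
    \max_{x \,:\, I \rightsquigarrow P}
      \Bigl( \hat{p}_I\bigl(n - \iota(x)\bigr) + \tau_x \Bigr),\;\;
    \max_{\substack{x \,:\, Q \rightsquigarrow P \\ \iota(x) \le n}}
      \Bigl( \underbrace{q_Q\bigl(n - \iota(x)\bigr)}_{=\;0\text{, enough tokens buffered}} + \tau_x \Bigr)
  \right)
\]
\end{document}
```

The second maximum is where the new observation enters: because Q already has enough tokens buffered on all of its inputs, its response-time term drops to zero and only the execution times along the path remain.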
And then zero is actually going to be smaller than the offset that was already there in the periodic schedule, so we can increase that term to the offset. Then we can start looking at the input, because for the input we also had a bound, the worst-case input bound, which we can also fill in; that one contains the offset of the first actor that is involved in the periodic schedule. After that, you get this: the input bound, which says that you will get at least a certain number of tokens per period.

Now, if we look at what we have, then we have paths with a certain length. What we can do is eliminate all the cycles from those paths: we split a path up into all the cycles in the path and the straight part that remains, which looks like this. For the cycles in the path we know something, namely that the total execution time in a cycle is at most the number of tokens in the cycle times the maximum cycle mean. So we can replace that part by a formula containing the maximum cycle mean. And the maximum cycle mean is, by construction, no larger than the period of the schedule, so we can increase the bound a little bit further by filling in the period of the schedule. Now, the number of tokens in the cycles is less than the total number of tokens in the path; the rest of the path, the straight part, we called x, so the number of tokens in the cycles has to be less than n − ι(x). That means we can replace the number of tokens here by n − ι(x), and then we have still only increased the bound.

And then we can merge the two formulas, because they have now become the same. If we merge them, the result is actually very familiar, because it is the formula that we used for one step in the construction of the periodic schedule. In the periodic schedule, we said that the offset of P had to be larger than the offset of any actor earlier in the graph, plus the execution times along the path that you need to get to that actor, minus the number of tokens on that path times the period. If you iterate that over a path, you get this formula, which means that if you fill it in here, then here we actually get t_P, the offset of P; this part crosses out against that part, and this part crosses out against that part, so we are only left with plus n times the period.

In short, the output time is going to be less than the offset of the output actor plus n times the period, and that is exactly the output time of the periodic schedule. So now you know that the output of the periodic schedule is indeed always going to be later than the output of the eager schedule, and since the input has not changed, that means that the latency of the periodic schedule is also going to be larger than the latency of the eager schedule, which means that the latency bound still works for eager schedules.
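To make that conclusion concrete, here is a minimal sketch of the comparison the proof establishes. Everything in it (the two-actor graph, the numbers, and the names tau, inputs, mu, sigma_Q, eager_outputs) is an assumed example, not the lecture's graph: two actors P and Q in series, a feedback edge from Q back to P with one initial token, and a periodic input. We simulate the eager (self-timed) schedule and check every output against the periodic-schedule output time sigma_Q + n * mu.

```python
# Minimal sketch (assumed example, not the lecture's graph): actors P -> Q,
# a feedback edge Q -> P carrying one initial token, fed by a periodic input.
# Each eager output token n is compared against sigma_Q + n * mu.

tau = {"P": 3.0, "Q": 2.0}              # assumed execution times
inputs = [0.0, 5.0, 10.0, 15.0, 20.0]   # assumed input arrival times, one per period

def eager_outputs(tau, inputs):
    """Self-timed execution: each actor fires as soon as all of its input
    tokens are available and produces its output tau time units later."""
    outputs = []
    feedback = 0.0                      # the initial token on Q -> P is there at t = 0
    for t_in in inputs:
        p_start = max(t_in, feedback)   # P needs the input token and the feedback token
        p_end = p_start + tau["P"]      # P produces on the edge P -> Q
        q_end = p_end + tau["Q"]        # Q fires as soon as P's token arrives
        feedback = q_end                # Q returns the feedback token to P
        outputs.append(q_end)
    return outputs

# Periodic schedule for this toy graph (assumed values): the only cycle is
# P -> Q -> P with one token, so the maximum cycle mean is tau_P + tau_Q = 5.
# Take the period mu equal to it, and sigma_Q as Q's first completion time.
mu = 5.0
sigma_Q = tau["P"] + tau["Q"]

for n, t in enumerate(eager_outputs(tau, inputs)):
    bound = sigma_Q + n * mu
    print(f"token {n}: eager output {t:.1f} <= periodic output {bound:.1f}: {t <= bound}")
```

In this toy example the eager outputs happen to hit the periodic times exactly, because the graph has no slack; a periodic schedule with looser offsets would only move its own outputs later, never the eager ones, which is exactly the ordering the proof guarantees.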