0:27

So, for the first column: you can write the routing matrix either column-wise or row-wise. The first column corresponds to session a, which uses links 1, 3, and 4, so it is 1011. Okay? Because the first, third, and fourth links are used; the second link is not used by session a. The second column corresponds to session b, which uses links 1 and 2, so the first two positions are ones and then zeros: 1100. The third column is session c, which only uses link 4, so it is 0001. You can also see that link 1, the first row of the routing matrix, is used by two sessions, a and b.

So now we can write down the constraint, which is the routing matrix

    R = [ 1 1 0
          0 1 0
          1 0 0
          1 0 1 ]

(its columns, read top to bottom, are 1011, 1100, and 0001) multiplying the rates of these three sessions. Since we label them a, b, c, we'll call the rates xa, xb, xc. The product must be less than or equal to the 4-by-1 vector of link capacities. As mentioned before, we assume unit capacity on each link: one megabit per second. That's the constraint, and of course we need all the xa, xb, xc to be non-negative, to be physically meaningful.

Now, the objective function is to maximize the sum of utilities. Let's say they all have the same utility shape, and they are logarithmic functions, to enforce proportional fairness. So: maximize log of xa, plus log of xb, plus log of xc, okay? Now, this is an optimization problem, maybe the fourth or fifth one we've encountered throughout this course, and we have a couple more like this coming up in the future. Okay? Maximize a decomposable, logarithmic, and therefore concave objective function, subject to linear capacity constraints and non-negativity.
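The problem just described can be written out concretely. This is a sketch of mine, not from the lecture itself; the session and link numbering follow the example above:

```python
import numpy as np

# Routing matrix: rows are links 1..4, columns are sessions a, b, c.
# Session a uses links 1, 3, 4; session b uses links 1, 2; session c uses link 4.
R = np.array([
    [1, 1, 0],
    [0, 1, 0],
    [1, 0, 0],
    [1, 0, 1],
])
c = np.ones(4)  # unit capacity (1 Mbps) on every link

def feasible(x):
    """Capacity constraint R x <= c plus non-negativity."""
    return bool(np.all(R @ x <= c + 1e-9) and np.all(x >= 0))

def objective(x):
    """Sum of log utilities (the proportional-fairness objective)."""
    return float(np.sum(np.log(x)))

print(feasible(np.array([1/3, 2/3, 2/3])))  # True: this allocation fits
print(feasible(np.array([1.0, 1.0, 1.0])))  # False: link 1 would carry 2 Mbps
```

Any candidate rate vector can be screened this way before comparing objective values.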

Now, this matrix induces coupling, but, as in the advanced material, we can decouple that. And we're going to run the following.

Â 3:44

Let's just say we initialize the p's: p1 = p2 = p3 = p4 at initialization time zero, all to be, say, one unit. And the step size beta we pick to be one unit. In a homework problem you will experiment with different step sizes, and you'll see that when the step size is too big, it will lead to divergence, but when the step size is small enough, it guarantees convergence, although if it is too small the convergence is very slow. So there is a trade-off between the guarantee of convergence versus the speed of convergence, if you actually can get it, all right? But for now we've picked beta to be one to illustrate the algorithm. Now the iteration starts like this, okay?

At time t = 1, we have the following, okay? First look at the q's, okay? For session a, the path price qa equals p1 plus p3 plus p4, taken from the last iteration, so that is 1 + 1 + 1 = 3 units. And qb, the path price for session b, is p1 + p2 = 2. And qc, the path price for session c, is p4 = 1 unit. Well, this is easy to see, and it's basically just telling us that session a uses three links, session b uses two, and session c uses one. Isn't this the flawed intuition we mentioned early on? Right? Shouldn't we be counting the bottleneck links, not just the number of links? Well, we'll come to that, actually, right around the corner in the second iteration. But at this point we have not captured that fact yet. As soon as the sources start responding, we will see. Now the source rates, therefore, work like this. Okay? Each rate is U' inverse of q, right? Well, for a log utility function, the demand function, U' inverse, is just the reciprocal: xa = 1/qa, which means 1/3, okay? And xb is 1/2, and xc is just 1. These are measured in megabits per second.
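These first-iteration numbers can be reproduced in a few lines. This is a sketch of mine, using the routing matrix from the example:

```python
import numpy as np

# Rows are links 1..4, columns are sessions a, b, c (same matrix as before).
R = np.array([[1, 1, 0],
              [0, 1, 0],
              [1, 0, 0],
              [1, 0, 1]])

p = np.ones(4)   # initial link prices, one unit each
q = R.T @ p      # path prices: qa = p1+p3+p4, qb = p1+p2, qc = p4
x = 1.0 / q      # demand function for log utility: x = U'^{-1}(q) = 1/q

print(q)  # qa = 3, qb = 2, qc = 1
print(x)  # xa = 1/3, xb = 1/2, xc = 1
```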

Â 7:18

What we have is y1, the load on link 1, which is 1/3 + 1/2, okay, and the price update is p1 = 1 + beta times (y1 minus c1), where c1 is one. Since y1 minus c1 is a negative number, the price drops: to three decimal places, p1 is 0.833. p2 follows the same pattern, so we have to look at basically 1 plus (one half minus one), that is 0.5. Similarly, you can write out p3, which is 0.333, and p4, which is 1 plus the load, which is 1.333, minus the capacity, one. Again, this capacity is one megabit per second, whereas the 1 out front was the last iteration's price: even though they are both one, they have totally different physical meanings and units, okay? And with beta, however, the units basically get converted; beta's numerical value is one, okay? So this means p4 is 1.333, just as we expect intuitively. Now, the first three links' prices drop, because we are not fully using their capacities, but the fourth link's price actually increases from one in the last iteration. And now we can write down the q's. qa is now 0.833 plus 0.333 plus 1.333, which adds up to 2.5. qb is 0.833 + 0.5 = 1.333. qc is 1.333. This implies the next source rates: xa is one over 2.5, which is 0.4. xb is one over 1.333, which is 0.75, and xc is also one over 1.333, which is 0.75.

And now you already see that sessions b and c are being treated the same way, as driven basically by the topology: they each use one bottleneck link. At iteration two, that fact is implicitly captured, because by this time we see both at the same number, okay? And both are bigger than the 0.4 for session a, which uses two bottleneck links. And now from x you can update y, from y you can update p, from p you can update q, and back to update x. And this iteration continues.
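The full loop can be sketched as follows (my own code, not the lecture's; it follows the update order x, y, p, q just described, with beta = 1):

```python
import numpy as np

R = np.array([[1, 1, 0],
              [0, 1, 0],
              [1, 0, 0],
              [1, 0, 1]])
c = np.ones(4)   # 1 Mbps per link
beta = 1.0       # step size
p = np.ones(4)   # initial link prices

for t in range(100):
    q = R.T @ p                              # path price seen by each session
    x = 1.0 / q                              # source rates: demand for log utility
    y = R @ x                                # load offered to each link
    p = np.maximum(p + beta * (y - c), 0.0)  # price update, projected to stay >= 0

print(np.round(x, 3))  # xa -> 1/3, xb = xc -> 2/3
print(np.round(p, 3))  # p1 = p4 -> 1.5, p2 = p3 -> 0
```

One hundred iterations is far more than needed here; the rates are already near their limits after about fourteen, as the lecture's plots show.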

And this is the iteration that you see, over about fourteen iterations. This graph shows the source rates; that graph shows the link prices. So you've got three curves, xa, xb, xc, here, and you can see them converging to what? To the optimal solution: xa* is one-third, and xb* = xc* = two-thirds. Exactly the intuitive answer we mentioned as the proportionally fair, efficient, and of course feasible, answer to the NUM problem. The corresponding link prices at convergence are p1* = p4* = 1.5, whereas p2* = p3* = 0. And we can see they actually get to zero within four iterations.
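The step-size trade-off mentioned earlier can also be checked numerically. This sketch of mine (with a hypothetical tolerance of 1e-3 on the rates) reruns the same price update with two values of beta and counts iterations until the rates are close to the optimum:

```python
import numpy as np

R = np.array([[1, 1, 0],
              [0, 1, 0],
              [1, 0, 0],
              [1, 0, 1]])
c = np.ones(4)
x_star = np.array([1/3, 2/3, 2/3])  # known optimal rates for this example

def iterations_to_converge(beta, tol=1e-3, max_iter=100_000):
    """Run the dual algorithm; return iterations until max |x - x*| < tol."""
    p = np.ones(4)
    for t in range(1, max_iter + 1):
        x = 1.0 / (R.T @ p)                      # source rates from path prices
        p = np.maximum(p + beta * (R @ x - c), 0.0)
        if np.max(np.abs(x - x_star)) < tol:
            return t
    return None  # no convergence within the budget

print(iterations_to_converge(1.0))    # a handful of iterations
print(iterations_to_converge(0.01))   # same limit, but many hundreds of iterations
```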

And just to have a sanity check: does this make sense? Indeed, you can see the routing matrix multiplying this answer of (one-third, two-thirds, two-thirds) equals (1, 2/3, 1/3, 1), which is indeed less than or equal to (1, 1, 1, 1), the capacity of one megabit per second for every link. So it is feasible, and you can check efficiency, and you can check proportional fairness, but you can also check that, hey, links one and four are kind of different. Right? Because they are completely used up in capacity. They form the bottleneck links. And we can detect bottleneck links because the corresponding prices are positive.

By the so-called complementary slackness property in the Lagrange duality theory of optimization, which we will briefly mention in the advanced material part, we can say that when the optimal price is actually non-zero, strictly positive, then the corresponding links must be bottleneck links. Conversely, if some links are not bottleneck links, for example links two and three, which are slack, you are not even fully utilizing the given capacities, okay? They are not the reason for you to stop adding rate at the sources, so the corresponding optimal prices must be zero. This is called the complementary slackness property. All right, so that was a numerical example.
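Complementary slackness at this optimum can be verified in a couple of lines (a sketch using the numbers from the example):

```python
import numpy as np

R = np.array([[1, 1, 0],
              [0, 1, 0],
              [1, 0, 0],
              [1, 0, 1]])
c = np.ones(4)
x_star = np.array([1/3, 2/3, 2/3])    # optimal rates
p_star = np.array([1.5, 0, 0, 1.5])   # optimal link prices

slack = c - R @ x_star  # unused capacity per link: [0, 1/3, 2/3, 0]
print(slack)
print(p_star * slack)   # elementwise product: p_l* > 0 only where slack is 0
```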

You may feel a little disconnect between the first two modules and the last two modules, okay? Between the engineering artifact of TCP congestion control protocols on the one hand, and the mathematical models of capacity allocation, formulated as an optimization problem with a distributed solution, on the other. But actually, there is a tight coupling. About ten years after the invention of TCP Tahoe, applied mathematicians and engineers collectively figured out a good way to understand the engineering artifact of TCP protocols. In particular, they have reverse engineered TCP Reno as implicitly maximizing a specific utility function, an arctangent one, with packet loss used as the congestion price, or link price, mathematically the Lagrange dual variables, the p's. Whereas TCP Vegas has been reverse engineered as implicitly maximizing a logarithmic utility, with delay used as the link congestion price. In other words, if you give me an engineering protocol, I can give you, in return, the underlying optimization or game problem implicitly being solved by these protocols. Now, this is a weird angle. It's like: you give me the solution, I'll tell you the problem. You may say, I've already got the solution, why do I care about the problem? Well, knowing the problem is a good way to do forward engineering, for you to be able to understand why sometimes it works, sometimes it doesn't, and how you can improve it. And indeed, reverse engineering has led to several interesting implications.

For example, one implication is that for TCP Reno, if you increase the buffer size at the routers, it does not help with equilibrium packet loss. Mathematically, this is easy to see: in TCP Reno, the NUM problem is maximizing arctangent utility, okay? Subject to link capacity constraints. The buffer size does not even come into the problem definition. So if you look at the solution to a problem that is not even defined in terms of buffer size, the solution cannot possibly depend on buffer size. And packet loss is the solution to the dual problem, so packet loss cannot depend on the buffer size at equilibrium. Physically, what happens is that you can make the buffer bigger and bigger, but it will just delay the onset of congestion. We just have to wait a bit longer until all the sources pump so much that, for any finite-sized buffer, you will overflow it. So you make congestion start later, but you do not avoid congestion at equilibrium. Another implication, this time for Vegas, is that because it has been reverse engineered and shown to be maximizing logarithmic utility, it actually achieves proportional fairness in the allocation, if it can converge. Back to that question: can it converge? Well, mathematically, we'll say we need a small enough step size beta, which is not always the case for Vegas. But you can change the parameters at the sources, as inspired by this reverse engineering, and carry out a forward engineering. This is actually what has led to the development and deployment of FAST TCP, which has been commercialized in certain long-distance, high delay-bandwidth-product kinds of networks. Okay,

so that's an example of going from reverse engineering to forward engineering. You know that a delay-based congestion control protocol, like TCP Vegas, is going to give you some good result, like proportional fairness, provided you can make it converge, and then you can work hard to make it converge. Now, we're going to go through a little bit of reverse engineering in the advanced material part of the video, and then a little bit further in one of the homework problems, if you're interested in going through this long advanced material section. As you can tell, it is the longest advanced material, I guess, across all the lectures in this course. Now, this spirit of reverse engineering is not new to us. We were trying to reverse engineer topological properties, such as small worlds and scale-free networks, and we saw the potential pitfalls of reverse engineering without predictive power. Now we are reverse engineering functionality, especially protocols such as congestion control. And we'll see that this needs to be validated with empirical evidence, and indeed it has been, and it has been used for forward engineering. Okay.

So, in summary, we have touched on quite a few different corners today. We talked about the five principles of distributed congestion control. It is a network version of the economics principle of demand-supply regulation. We've also looked at the mechanics of some of the popular congestion control protocols in TCP. Then we looked at the mathematics of capacity allocation, formulated as an optimization problem and solved distributively. But equally important are the messages in the big picture. One is that feedback signals can be generated and then used for distributed coordination in a network, in this case the Internet. So we can view the Internet running TCP as the largest man-made machine solving,

Â